Lusting for fullframe (cont'd)

In particular Bobn2 in response to a post by Jim Sterling noted a benefit to applying noise reduction to the higher resolution fullframe image before downresolving it to the mFT image size. I responded that it's, of course, true that applying extra noise reduction at any point in the post-processing chain will yield a more noise-free image BUT at the expense of some loss of resolution as well.
The principle is quite simple. A low pass filter will reduce noise. Since downsampling, properly done, includes a low pass filter, downsampling reduces noise (if you do it without the low pass filter, you replace the noise by aliasing). Noise reduction is supposed to reduce noise more than an equivalent amount of low-pass filtering. If it didn't, there wouldn't be much point having sophisticated NR, you'd just low pass. So, if instead of low-pass filtering you use NR to a level that reduces the detail to that equivalent to the low-pass filter, then you should have a lower proportion of noise. Simple as that, really.
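To make the low-pass point concrete, here is a minimal sketch (my own illustration in Python with numpy/scipy, not anything from the workflows discussed in the thread; the field size, noise level and 4x factor are arbitrary) showing that decimating a noisy flat field without a low-pass filter leaves the noise essentially untouched, while low-pass filtering first removes most of it:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Flat grey field with Gaussian noise standing in for photon/read noise.
flat = 0.5 + rng.normal(0.0, 0.10, size=(512, 512))
factor = 4  # decimation factor; 4 just makes the effect easy to see

# (a) Naive decimation: keep every 4th pixel. The noise in the surviving pixels is unchanged.
naive = flat[::factor, ::factor]

# (b) Low-pass first (sigma roughly matched to the new Nyquist limit), then decimate.
proper = gaussian_filter(flat, sigma=factor / 2.0)[::factor, ::factor]

print("original noise std:           ", round(float(flat.std()), 4))
print("naive decimation noise std:   ", round(float(naive.std()), 4))    # essentially unchanged
print("low-pass + decimate noise std:", round(float(proper.std()), 4))   # much lower
```

With real detail instead of a flat field, the version without the low-pass filter would keep its noise and also alias any fine detail, which is exactly the trade-off described above.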
Unfortunately, it's not that simple, really. You haven't factored in the impact of combining the NR with the chosen downsampling algorithm, which as you note will also include a low pass filter. The question I'm interested in is a practical one based on real tools such as ACR and Photoshop and how much of a visible IQ difference (if any) it makes when your workflow is NR > downsample vs. downsample > NR.
Always good to put theories to the test with examples.
I've now done quite a few comparisons over the past couple of days and, at least with two common NR options in PS/ACR and using various downsampling options, I can't say that it's always more effective to apply NR before downsampling. Below is my attempt to clearly illustrate the principles at play. The left-most cross is from the D850 ISO6400 shot before any downsampling or NR has been applied. The second-from-left cross has been bicubic downsampled only (no NR applied). The visibly decreased noise with minimal aliasing indicates that the bicubic algorithm includes a low-pass filter, as you've described. The third and fourth crosses have both been bicubic downsampled: one had NR applied before downsampling and one had the same amount of NR applied after. Would you say that they show the same S/N, or would you say that one is more successful?

f051cc97aefa4d648170c2085ba3ac57.jpg.png
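For anyone who wants to poke at the same order-of-operations question outside ACR/Photoshop, here is a rough, hypothetical sketch in Python (numpy/scipy); the median filter and Gaussian pre-blur are crude stand-ins for real NR and a real resampling filter, and the target, noise level and sizes are made up rather than taken from the D850 test above:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

rng = np.random.default_rng(1)

# Toy "cross on grey" target, loosely in the spirit of the test crops above.
clean = np.full((400, 400), 0.4)
clean[190:210, :] = 0.9   # horizontal bar
clean[:, 190:210] = 0.9   # vertical bar
noisy = clean + rng.normal(0.0, 0.15, clean.shape)

def downsample(a, factor=4):
    # Low-pass then decimate, standing in for a resize with a decent anti-alias filter.
    return gaussian_filter(a, sigma=factor / 2.0)[::factor, ::factor]

def denoise(a):
    # 3x3 median filter as a crude stand-in for "NR" (real NR is far more sophisticated).
    return median_filter(a, size=3)

nr_then_down = downsample(denoise(noisy))
down_then_nr = denoise(downsample(noisy))

# Compare residual noise in a patch that should be flat grey in the 100x100 result.
patch = (slice(5, 40), slice(5, 40))
print("NR -> downsample, flat-patch std:", round(float(nr_then_down[patch].std()), 4))
print("downsample -> NR, flat-patch std:", round(float(down_then_nr[patch].std()), 4))
```

Swapping in different denoisers or resamplers changes which order wins, which is rather the point being argued here.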
To my eyes, the SNR of the last two is all but identical. However, the last is significantly sharper with regard to the vertical bar of the +. That said, depending on the actual widths of the bars, the second-to-last may be the more accurate representation.
Not sure about that. It appears to have an abrupt transition from 'light' to 'dark' on the right-hand side of the vertical bar, which, as John Sheehy will tell you, is something that should never happen in a properly sampled image. It looks like a downsampling artefact to me. Knickerhawk is using bicubic, which, whilst it has a better inherent low-pass than nearest neighbour, still doesn't have enough to prevent aliasing. Lanczos is a bit worse. The issue is really that these interpolation algorithms are a little bit anachronistic: they are designed to optimise upsampling and produce convincing interpolation, not to optimise downsampling.
If it's that debatable, then it's immaterial to the practice of photography.
 
Just buy an FF camera and have done with it, if that floats your boat. It will do things an M43 camera cannot do. No amount of bizarre defensive techtalk will ever change that. I would not be in the least bit surprised if FF was the dominant format in five years’ time, with a couple of APS-C and 1” specials for the long-reach crowd and everyone else using very capable smartphones. I like M43 but Olympus versus rest of the world is a bit of a foregone conclusion, isn’t it.
Buying FF makes sense, but only once FF has four times the pixel count of m43 and lenses that are as good as those for m43 in terms of absolute resolution. Otherwise the two systems each have their advantages and disadvantages relative to the other, depending on the task.

For example, the higher pixel density and excellent tele lenses (like the Oly 300 f/4) make m43 significantly better for wildlife photography than any FF system I know of. The excellent pro lenses like the Oly 17mm f/1.2 also allow m43 to compete with FF in low light. I do not know of an f/1.2 lens for FF that is perfectly sharp across the frame (the Oly 17 f/1.2 is indeed incredibly sharp at f/1.2). Even at f/2.8, FF lenses are so-so according to my criteria.

The higher dynamic range of an FF sensor is only significant if we are looking at an image at 1:1 or are able to see all the resolved detail. But in the majority of cases, if you are interested only in the whole-frame view, you can downsample the m43 image and increase the dynamic range. For example, an m43 image downsampled from 16 Mpx to 4 Mpx will have the same dynamic range as a 16 Mpx FF image. In my experience, 4 Mpx images printed on A5-size paper are hardly distinguishable from 16 Mpx ones in terms of visible detail. A properly downsampled photo looks better when printed at the proper size.

Proper downsampling may also become important in the future, when FF sensors with huge pixel counts (~80 Mpx) are typical.

I am sure that the m43 system has a future.
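For what it's worth, the arithmetic behind the 16 Mpx to 4 Mpx claim above can be checked with a toy simulation (a sketch in Python/numpy with made-up numbers; it only shows the idealised averaging of uncorrelated per-pixel noise, and deliberately ignores the per-pixel and correlated-noise objections raised later in the thread):

```python
import numpy as np

rng = np.random.default_rng(2)

signal = 1000.0     # arbitrary mean signal, in electrons
read_noise = 10.0   # arbitrary per-pixel noise, in electrons
pix = rng.normal(signal, read_noise, size=(4000, 4000))    # a "16 Mpx" frame

# 2x2 block averaging, i.e. 16 Mpx -> 4 Mpx done as a pure pixel average.
binned = pix.reshape(2000, 2, 2000, 2).mean(axis=(1, 3))

print("per-pixel noise std:    ", round(float(pix.std()), 2))     # ~10
print("after 2x2 averaging std:", round(float(binned.std()), 2))  # ~5, about one stop lower noise floor
```

Halving the noise floor while the saturation level is unchanged is where the claimed extra stop of dynamic range in the downsampled file comes from, under those idealised assumptions.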
 
In particular Bobn2 in response to a post by Jim Sterling noted a benefit to applying noise reduction to the higher resolution fullframe image before downresolving it to the mFT image size. I responded that it's, of course, true that applying extra noise reduction at any point in the post-processing chain will yield a more noise-free image BUT at the expense of some loss of resolution as well.
The principle is quite simple. A low pass filter will reduce noise. Since downsampling, properly done, includes a low pass filter, downsampling reduces noise (if you do it without the low pass filter, you replace the noise by aliasing). Noise reduction is supposed to reduce noise more than an equivalent amount of low-pass filtering. If it didn't, there wouldn't be much point having sophisticated NR, you'd just low pass. So, if instead of low-pass filtering you use NR to a level that reduces the detail to that equivalent to the low-pass filter, then you should have a lower proportion of noise. Simple as that, really.
Unfortunately, it's not that simple, really. You haven't factored in the impact of combining the NR with the chosen downsampling algorithm, which as you note will also include a low pass filter. The question I'm interested in is a practical one based on real tools such as ACR and Photoshop and how much of a visible IQ difference (if any) it makes when your workflow is NR > downsample vs. downsample > NR.
Always good to put theories to the test with examples.
I've now done quite a few comparisons over the past couple of days and, at least with two common NR options in PS/ACR and using various downsampling options, I can't say that it's always more effective to apply NR before downsampling. Below is my attempt to clearly illustrate the principles at play. The left-most cross is from the D850 ISO6400 shot before any downsampling or NR has been applied. The second-from-left cross has been bicubic downsampled only (no NR applied). The visibly decreased noise with minimal aliasing indicates that the bicubic algorithm includes a low-pass filter, as you've described. The third and fourth crosses have both been bicubic downsampled: one had NR applied before downsampling and one had the same amount of NR applied after. Would you say that they show the same S/N, or would you say that one is more successful?

f051cc97aefa4d648170c2085ba3ac57.jpg.png
To my eyes, the SNR of the last two is all but identical. However, the last is significantly sharper with regard to the vertical bar of the +. That said, depending on the actual widths of the bars, the second-to-last may be the more accurate representation.
Not sure about that. It appears to have an abrupt transition from 'light' to 'dark' on the right-hand side of the vertical bar, which, as John Sheehy will tell you, is something that should never happen in a properly sampled image. It looks like a downsampling artefact to me. Knickerhawk is using bicubic, which, whilst it has a better inherent low-pass than nearest neighbour, still doesn't have enough to prevent aliasing. Lanczos is a bit worse. The issue is really that these interpolation algorithms are a little bit anachronistic: they are designed to optimise upsampling and produce convincing interpolation, not to optimise downsampling.
If it's that debatable, then it's immaterial to the practice of photography.
Almost anything is debatable. In my own experience, denoising before downsampling generally produces better results than doing it in the reverse order. This is particularly true of the blotchy, smoothed colour noise. For instance, same shot, same NR, same resampling, same post-resampling sharpening.



4d26703b6e5f4ff982944b545b5f2beb.jpg



ab416d03f7514cf19e5ee7678c9b1485.jpg

Knickerhawk's views clearly differ, and we are discussing, as we do.

--
Ride easy, William.
Bob
 
Buying FF makes sense, but only once FF has four times the pixel count of m43 and lenses that are as good as those for m43 in terms of absolute resolution. Otherwise the two systems each have their advantages and disadvantages relative to the other, depending on the task.

For example, the higher pixel density and excellent tele lenses (like the Oly 300 f/4) make m43 significantly better for wildlife photography than any FF system I know of. The excellent pro lenses like the Oly 17mm f/1.2 also allow m43 to compete with FF in low light. I do not know of an f/1.2 lens for FF that is perfectly sharp across the frame (the Oly 17 f/1.2 is indeed incredibly sharp at f/1.2). Even at f/2.8, FF lenses are so-so according to my criteria.
Then it is your criteria that are the problem. Try the new Nikkor S 35 or S 50: both are sharp across the frame at f/1.8.
But in the majority of cases, if you are interested only in the whole-frame view, you can downsample the m43 image and increase the dynamic range.
Umm . . no. Downsampling may reduce apparent noise but dynamic range is a property of the sensor independent of resolution.
For example, an m43 image downsampled from 16 Mpx to 4 Mpx will have the same dynamic range as a 16 Mpx FF image.
?

Honestly, these threads are somewhat bizarre altogether. When is the last time you saw a thread on the Nikon FX forum arguing that FX was just as good as M43? Just sayin . . .
 
There is still viability in the M43 platform, but the reason full frame gets the most exposure nowadays is that people are buying full frame, and all those who bought full frame during the Christmas and Black Friday sales had been lusting to get in. They all had smaller formats in the past, since those were the only formats that were affordable.

I'm in Canada, and on Boxing Day and before Christmas the Sony A7II kit was sold out. Completely sold out; cleaned out! There were a lot of Rebel packaged boxes that went unsold! This was a different scene, as usually the Rebel boxes sold out, as had been the case for several years! I had never seen that happen. Can't say the same for the E-M5 II or the E-M1 II; there is still plenty of stock, which goes to show that full frame is hot. Technically, MFT is still a capable platform and so is APS-C, but the buying public has finally voted to embrace full frame, just like when the buying public embraced VHS video rather than Beta.

It really does not matter how you slice and dice it and try to spin it. People buy what they feel they want, not what logic dictates.
 
Just buy an FF camera and have done with it, if that floats your boat. It will do things an M43 camera cannot do. No amount of bizarre defensive techtalk will ever change that. I would not be in the least bit surprised if FF was the dominant format in five years’ time, with a couple of APS-C and 1” specials for the long-reach crowd and everyone else using very capable smartphones. I like M43 but Olympus versus rest of the world is a bit of a foregone conclusion, isn’t it.
Buying FF makes sense, but only once FF has four times the pixel count of m43 and lenses that are as good as those for m43 in terms of absolute resolution. Otherwise the two systems each have their advantages and disadvantages relative to the other, depending on the task.

For example, the higher pixel density and excellent tele lenses (like the Oly 300 f/4) make m43 significantly better for wildlife photography than any FF system I know of. The excellent pro lenses like the Oly 17mm f/1.2 also allow m43 to compete with FF in low light. I do not know of an f/1.2 lens for FF that is perfectly sharp across the frame (the Oly 17 f/1.2 is indeed incredibly sharp at f/1.2). Even at f/2.8, FF lenses are so-so according to my criteria.

The higher dynamic range of an FF sensor is only significant if we are looking at an image at 1:1 or are able to see all the resolved detail. But in the majority of cases, if you are interested only in the whole-frame view, you can downsample the m43 image and increase the dynamic range. For example, an m43 image downsampled from 16 Mpx to 4 Mpx will have the same dynamic range as a 16 Mpx FF image. In my experience, 4 Mpx images printed on A5-size paper are hardly distinguishable from 16 Mpx ones in terms of visible detail. A properly downsampled photo looks better when printed at the proper size.

Proper downsampling may also become important in the future, when FF sensors with huge pixel counts (~80 Mpx) are typical.

I am sure that the m43 system has a future.
Sorry, but what is this nonsense?!? Downsampling my 16MP to 4MP so I can print like full frame? Show me a great 30x40 or 40x60 print from a 4MP downsampled MFT file that will rival full frame. You cannot increase dynamic range and tonal range with downsampling. All you are doing is creating an illusion of extended dynamic range, because the image is small. When you print big, and I have done that many times, I can tell you no one is printing 4MP to get gorgeous 30x40 prints. Besides, why do people need a D850 or EOS 5DSR?!? For that 45MP and 50MP, to print beyond 40x60"!

The higher dynamic range from full frame is only significant when printing really, really big. The difference begins to show from 30x40 and up, and is clearly noticeable at 40x60". Which is why professional galleries only accept 45-50MP full-frame and medium format files! They also sell prints starting at $10,000. I used to have a client who runs a gallery at Granville Island selling prints at 40x60" and up, and you could tell the difference between my 16MP E-P5 and her Nikon D810 if I were forced to print that big. But I don't.

Isn't an A5 print only about 6x8"? You don't need full frame for that. In fact, I once printed a 16x20" shot taken with my E-P5 and showed it at a store where a couple of professional photogs were working. All thought I shot it with a Nikon D800!

But sorry, your analysis is just your personal opinion.
 
Just buy an FF camera and have done with it, if that floats your boat. It will do things an M43 camera cannot do. No amount of bizarre defensive techtalk will ever change that. I would not be in the least bit surprised if FF was the dominant format in five years’ time, with a couple of APS-C and 1” specials for the long-reach crowd and everyone else using very capable smartphones. I like M43 but Olympus versus rest of the world is a bit of a foregone conclusion, isn’t it.
Buying FF makes sense, but only once FF has four times the pixel count of m43 and lenses that are as good as those for m43 in terms of absolute resolution. Otherwise the two systems each have their advantages and disadvantages relative to the other, depending on the task.

For example, the higher pixel density and excellent tele lenses (like the Oly 300 f/4) make m43 significantly better for wildlife photography than any FF system I know of. The excellent pro lenses like the Oly 17mm f/1.2 also allow m43 to compete with FF in low light. I do not know of an f/1.2 lens for FF that is perfectly sharp across the frame (the Oly 17 f/1.2 is indeed incredibly sharp at f/1.2). Even at f/2.8, FF lenses are so-so according to my criteria.

The higher dynamic range of an FF sensor is only significant if we are looking at an image at 1:1 or are able to see all the resolved detail. But in the majority of cases, if you are interested only in the whole-frame view, you can downsample the m43 image and increase the dynamic range. For example, an m43 image downsampled from 16 Mpx to 4 Mpx will have the same dynamic range as a 16 Mpx FF image. In my experience, 4 Mpx images printed on A5-size paper are hardly distinguishable from 16 Mpx ones in terms of visible detail. A properly downsampled photo looks better when printed at the proper size.

Proper downsampling may also become important in the future, when FF sensors with huge pixel counts (~80 Mpx) are typical.

I am sure that the m43 system has a future.
Sorry, but what is this nonsense?!? Downsampling my 16MP to 4MP so I can print like full frame? Show me a great 30x40 or 40x60 print from a 4MP downsampled MFT file that will rival full frame. You cannot increase dynamic range and tonal range with downsampling. All you are doing is creating an illusion of extended dynamic range, because the image is small. When you print big, and I have done that many times, I can tell you no one is printing 4MP to get gorgeous 30x40 prints. Besides, why do people need a D850 or EOS 5DSR?!? For that 45MP and 50MP, to print beyond 40x60"!

The higher dynamic range from full frame is only significant when printing really, really big. The difference begins to show from 30x40 and up, and is clearly noticeable at 40x60". Which is why professional galleries only accept 45-50MP full-frame and medium format files! They also sell prints starting at $10,000. I used to have a client who runs a gallery at Granville Island selling prints at 40x60" and up, and you could tell the difference between my 16MP E-P5 and her Nikon D810 if I were forced to print that big. But I don't.

Isn't an A5 print only about 6x8"? You don't need full frame for that. In fact, I once printed a 16x20" shot taken with my E-P5 and showed it at a store where a couple of professional photogs were working. All thought I shot it with a Nikon D800!

But sorry, your analysis is just your personal opinion.
I was confused by that too. If downsampling was the key to IQ, why are all manufacturers pushing for more resolution?
 
Just buy an FF camera and have done with it, if that floats your boat. It will do things an M43 camera cannot do. No amount of bizarre defensive techtalk will ever change that. I would not be in the least bit surprised if FF was the dominant format in five years’ time, with a couple of APS-C and 1” specials for the long-reach crowd and everyone else using very capable smartphones. I like M43 but Olympus versus rest of the world is a bit of a foregone conclusion, isn’t it.
Buying FF makes sense, but only once FF has four times the pixel count of m43 and lenses that are as good as those for m43 in terms of absolute resolution. Otherwise the two systems each have their advantages and disadvantages relative to the other, depending on the task.

For example, the higher pixel density and excellent tele lenses (like the Oly 300 f/4) make m43 significantly better for wildlife photography than any FF system I know of. The excellent pro lenses like the Oly 17mm f/1.2 also allow m43 to compete with FF in low light. I do not know of an f/1.2 lens for FF that is perfectly sharp across the frame (the Oly 17 f/1.2 is indeed incredibly sharp at f/1.2). Even at f/2.8, FF lenses are so-so according to my criteria.

The higher dynamic range of an FF sensor is only significant if we are looking at an image at 1:1 or are able to see all the resolved detail. But in the majority of cases, if you are interested only in the whole-frame view, you can downsample the m43 image and increase the dynamic range. For example, an m43 image downsampled from 16 Mpx to 4 Mpx will have the same dynamic range as a 16 Mpx FF image. In my experience, 4 Mpx images printed on A5-size paper are hardly distinguishable from 16 Mpx ones in terms of visible detail. A properly downsampled photo looks better when printed at the proper size.

Proper downsampling may also become important in the future, when FF sensors with huge pixel counts (~80 Mpx) are typical.

I am sure that the m43 system has a future.
Sorry, but what is this nonsense?!? Downsampling my 16MP to 4MP so I can print like full frame? Show me a great 30x40 or 40x60 print from a 4MP downsampled MFT file that will rival full frame. You cannot increase dynamic range and tonal range with downsampling. All you are doing is creating an illusion of extended dynamic range, because the image is small. When you print big, and I have done that many times, I can tell you no one is printing 4MP to get gorgeous 30x40 prints. Besides, why do people need a D850 or EOS 5DSR?!? For that 45MP and 50MP, to print beyond 40x60"!

The higher dynamic range from full frame is only significant when printing really, really big. The difference begins to show from 30x40 and up, and is clearly noticeable at 40x60". Which is why professional galleries only accept 45-50MP full-frame and medium format files! They also sell prints starting at $10,000. I used to have a client who runs a gallery at Granville Island selling prints at 40x60" and up, and you could tell the difference between my 16MP E-P5 and her Nikon D810 if I were forced to print that big. But I don't.

Isn't an A5 print only about 6x8"? You don't need full frame for that. In fact, I once printed a 16x20" shot taken with my E-P5 and showed it at a store where a couple of professional photogs were working. All thought I shot it with a Nikon D800!

But sorry, your analysis is just your personal opinion.
I was confused by that too. If downsampling was the key to IQ, why are all manufacturers pushing for more resolution?
I think it's because the poster has not printed big prints from higher resolution files; I mean larger than 30x40". Manufacturers are pushing for more resolution because it can record more detail, provided there is detail on the subject to be captured by the higher-res sensors. IQ is not only lower noise, but also smoother tonal range, wider dynamic range and more accurate color separation. These matter more as prints get larger, because you can then see the flaws and limitations of the sensor. Which is why some serious landscape photographers who make a living selling big prints at $10,000 a pop can afford to spend on a Hasselblad, a Fuji GFX 50S or those Nikons, Canons and Sonys with higher MP counts. They could easily make back their investment with just one print. For us mere mortals who don't print super big, why bother with all the pixels, unless you have to crop a lot? Which is why MFT, APS-C and 1" sensors are still very relevant today.
 
I think it's because the poster has not printed big prints from higher resolution files; I mean larger than 30x40". Manufacturers are pushing for more resolution because it can record more detail, provided there is detail on the subject to be captured by the higher-res sensors. IQ is not only lower noise, but also smoother tonal range, wider dynamic range and more accurate color separation. These matter more as prints get larger, because you can then see the flaws and limitations of the sensor. Which is why some serious landscape photographers who make a living selling big prints at $10,000 a pop can afford to spend on a Hasselblad, a Fuji GFX 50S or those Nikons, Canons and Sonys with higher MP counts. They could easily make back their investment with just one print. For us mere mortals who don't print super big, why bother with all the pixels, unless you have to crop a lot? Which is why MFT, APS-C and 1" sensors are still very relevant today.
It's not just print... I have a printer but I don't print most of my photos. I view them on a 40" 4K screen. When 6K and/or 8K become affordable I will jump to that. Even at 4K the IQ demands are high... it's basically a ~110 DPI A2 print. Interestingly photos from my 6MP D40 and 16MP NEX-C3 don't look too bad, but the FF stuff is on another level.

So the viewing medium definitely matters. Like you said, for big prints the demands are higher. On the flip side, however, I can't tell the difference between FF and APS-C on an A3 print. But I'd rather have the latitude to see/print more and take advantage of whatever viewing media are available in the future. M43 may be good for today but what about 10, 20 years from now?
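As a quick aside on the numbers in this exchange, the pixel densities involved are easy to work out (a back-of-the-envelope sketch, assuming a 16:9 screen and the usual sensor aspect ratios; none of it comes from the posters):

```python
import math

def screen_ppi(diag_in, px_w, px_h):
    # Pixel density of a screen from its diagonal size and resolution.
    return math.hypot(px_w, px_h) / diag_in

def print_ppi(megapixels, long_side_in, aspect):
    # Pixels per inch when a sensor's long side is spread over a print's long side.
    long_side_px = math.sqrt(megapixels * 1e6 * aspect)
    return long_side_px / long_side_in

print("40-inch 4K screen:              ", round(screen_ppi(40, 3840, 2160)), "ppi")  # ~110, as stated above
print("20 MP (4:3) on a 30x40 in print:", round(print_ppi(20, 40, 4 / 3)), "ppi")
print("45 MP (3:2) on a 30x40 in print:", round(print_ppi(45, 40, 3 / 2)), "ppi")
```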
 
M43 may be good for today but what about 10, 20 years from now?
Are you going to develop superhuman vision in 10-20 years, which allows you to see some kind of "pixelation" in 20MP pictures from a viewing distance that doesn't involve you being almost glued to the picture / print?
 
I think it's because the poster has not printed big prints from higher resolution files; I mean larger than 30x40". Manufacturers are pushing for more resolution because it can record more detail, provided there is detail on the subject to be captured by the higher-res sensors. IQ is not only lower noise, but also smoother tonal range, wider dynamic range and more accurate color separation. These matter more as prints get larger, because you can then see the flaws and limitations of the sensor. Which is why some serious landscape photographers who make a living selling big prints at $10,000 a pop can afford to spend on a Hasselblad, a Fuji GFX 50S or those Nikons, Canons and Sonys with higher MP counts. They could easily make back their investment with just one print. For us mere mortals who don't print super big, why bother with all the pixels, unless you have to crop a lot? Which is why MFT, APS-C and 1" sensors are still very relevant today.
It's not just print... I have a printer but I don't print most of my photos. I view them on a 40" 4K screen. When 6K and/or 8K become affordable I will jump to that. Even at 4K the IQ demands are high... it's basically a ~110 DPI A2 print. Interestingly photos from my 6MP D40 and 16MP NEX-C3 don't look too bad, but the FF stuff is on another level.

So the viewing medium definitely matters. Like you said, for big prints the demands are higher. On the flip side, however, I can't tell the difference between FF and APS-C on an A3 print. But I'd rather have the latitude to see/print more and take advantage of whatever viewing media are available in the future. M43 may be good for today but what about 10, 20 years from now?
The viewing medium definitely matters, as computer monitors, especially 4K, 5K and 6K, have both higher resolution and a wider dynamic range (roughly 10 stops or better) than the best prints (lower resolution and a narrower dynamic range of roughly 6-8 stops). So yes, you will definitely see a difference on a computer screen, as noise shows up more prominently on screen at 100% than on a print that's 20x30 or smaller.

As you go to much bigger prints, however, the full-frame stuff will start to shine. Most people don't print any larger than 13x19, which is a little larger than A3 size. I've printed up to A3; that's my maximum print size now for personal use, and I too can't tell the difference between my E-P5 and my Nikon FF work cameras of the past. So I kept my E-P5 as a result. Even my new 1" ZS-100 pocket camera is pretty good at that size as well.

Eventually, though, and I don't know when, if full frame starts selling at MFT prices brand new, like sub-$1000 not on sale or $600 on sale, we will see most people migrate from smaller formats to full frame. Possibly 10 to 20 years from now, but I think APS-C and MFT will still be around. I think 1" will die off past the 10-year mark.
 
Just buy an FF camera and have done with it, if that floats your boat. It will do things an M43 camera cannot do. No amount of bizarre defensive techtalk will ever change that. I would not be in the least bit surprised if FF was the dominant format in five years’ time, with a couple of APS-C and 1” specials for the long-reach crowd and everyone else using very capable smartphones. I like M43 but Olympus versus rest of the world is a bit of a foregone conclusion, isn’t it.
In your mind it is. I presume you are a multi-billionaire, because you know how markets are going to move. Apparently it's all foregone conclusions. LOL
Such an erudite comment. Canon, Nikon and Sony have around 87 per cent of the total camera market. Do you really think that Olympus, the second smallest camera outfit with only a fraction of the remaining 13 per cent, is going to clean the clocks of the big three now that all of them are going big on mirrorless? The best Olympus can hope for is holding on about where they are and that will be harder if Panasonic ever row back on the M43 format. This isn’t about technology. It never has been. It’s about money, marketing and brand recognition, just like the market for anything else.
 
Just buy an FF camera and have done with it, if that floats your boat. It will do things an M43 camera cannot do. No amount of bizarre defensive techtalk will ever change that. I would not be in the least bit surprised if FF was the dominant format in five years’ time, with a couple of APS-C and 1” specials for the long-reach crowd and everyone else using very capable smartphones. I like M43 but Olympus versus rest of the world is a bit of a foregone conclusion, isn’t it.
In your mind it is. I presume you are a multi-billionaire, because you know how markets are going to move. Apparently it's all foregone conclusions. LOL
Such an erudite comment. Canon, Nikon and Sony have around 87 per cent of the total camera market. Do you really think that Olympus, the second smallest camera outfit with only a fraction of the remaining 13 per cent, is going to clean the clocks of the big three now that all of them are going big on mirrorless? The best Olympus can hope for is holding on about where they are and that will be harder if Panasonic ever row back on the M43 format. This isn’t about technology. It never has been. It’s about money, marketing and brand recognition, just like the market for anything else.
It is actually about price point and lust. How many MFT, APS-C and 1"-sensor cameras does one person need to have? Here on DPReview we are an exception: one person can own 4 or 5 bodies, or even the entire MFT line, and maybe add a few APS-C and one or two 1"-sensor bodies. We are gear heads. We are not the typical consumer, who actually owns just one camera. If they have an APS-C camera, they are not buying a second or third APS-C body just for the collection. These people actually own just one camera and maybe two lenses, period. They can jump from one format to the next with little loyalty.

Full frame is popular now because most of the people who are buying had been lusting after it for a long time. The recent Sony sale proved that people do want full frame, not because it is a better format, but because they all believe the grass is greener on the other side. Eventually, though, most of these people will find that the grass is not really any greener, stop using the camera, and leave it in the closet, just like the APS-C, MFT and smaller-sensor cameras they owned in the past and abandoned because those didn't make them any better photographers.

Full frame does not make you a better photographer. The lies promoted by camera companies are just that: lies that make you think you will become a better photographer once you have full frame. You will not. It takes skill to get quality out of full frame; the same skill you would need with MFT, APS-C and 1" sensors.
 
Just buy an FF camera and have done with it, if that floats your boat. It will do things an M43 camera cannot do. No amount of bizarre defensive techtalk will ever change that. I would not be in the least bit surprised if FF was the dominant format in five years’ time, with a couple of APS-C and 1” specials for the long-reach crowd and everyone else using very capable smartphones. I like M43 but Olympus versus rest of the world is a bit of a foregone conclusion, isn’t it.
In your mind it is. I presume you are a multi-billionaire, because you know how markets are going to move. Apparently it's all foregone conclusions. LOL
Such an erudite comment. Canon, Nikon and Sony have around 87 per cent of the total camera market. Do you really think that Olympus, the second smallest camera outfit with only a fraction of the remaining 13 per cent, is going to clean the clocks of the big three now that all of them are going big on mirrorless? The best Olympus can hope for is holding on about where they are and that will be harder if Panasonic ever row back on the M43 format. This isn’t about technology. It never has been. It’s about money, marketing and brand recognition, just like the market for anything else.
It is actually about price point and lust. How many MFT, APS-C and 1"-sensor cameras does one person need to have? Here on DPReview we are an exception: one person can own 4 or 5 bodies, or even the entire MFT line, and maybe add a few APS-C and one or two 1"-sensor bodies. We are gear heads. We are not the typical consumer, who actually owns just one camera. If they have an APS-C camera, they are not buying a second or third APS-C body just for the collection. These people actually own just one camera and maybe two lenses, period. They can jump from one format to the next with little loyalty.

Full frame is popular now because most of the people who are buying had been lusting after it for a long time. The recent Sony sale proved that people do want full frame, not because it is a better format, but because they all believe the grass is greener on the other side. Eventually, though, most of these people will find that the grass is not really any greener, stop using the camera, and leave it in the closet, just like the APS-C, MFT and smaller-sensor cameras they owned in the past and abandoned because those didn't make them any better photographers.

Full frame does not make you a better photographer. The lies promoted by camera companies are just that: lies that make you think you will become a better photographer once you have full frame. You will not. It takes skill to get quality out of full frame; the same skill you would need with MFT, APS-C and 1" sensors.
Great Post. Though I know there are a few things my camera could be better at, the images affected are actually very few....
 
In particular Bobn2 in response to a post by Jim Sterling noted a benefit to applying noise reduction to the higher resolution fullframe image before downresolving it to the mFT image size. I responded that it's, of course, true that applying extra noise reduction at any point in the post-processing chain will yield a more noise-free image BUT at the expense of some loss of resolution as well.
The principle is quite simple. A low pass filter will reduce noise. Since downsampling, properly done, includes a low pass filter, downsampling reduces noise (if you do it without the low pass filter, you replace the noise by aliasing). Noise reduction is supposed to reduce noise more than an equivalent amount of low-pass filtering. If it didn't, there wouldn't be much point having sophisticated NR, you'd just low pass. So, if instead of low-pass filtering you use NR to a level that reduces the detail to that equivalent to the low-pass filter, then you should have a lower proportion of noise. Simple as that, really.
Unfortunately, it's not that simple, really. You haven't factored in the impact of combining the NR with the chosen downsampling algorithm, which as you note will also include a low pass filter. The question I'm interested in is a practical one based on real tools such as ACR and Photoshop and how much of a visible IQ difference (if any) it makes when your workflow is NR > downsample vs. downsample > NR.
Always good to put theories to the test with examples.
I've now done quite a few comparisons over the past couple of days and, at least with two common NR options in PS/ACR and using various downsampling options, I can't say that it's always more effective to apply NR before downsampling. Below is my attempt to clearly illustrate the principles at play. The left-most cross is from the D850 ISO6400 shot before any downsampling or NR has been applied. The second-from-left cross has been bicubic downsampled only (no NR applied). The visibly decreased noise with minimal aliasing indicates that the bicubic algorithm includes a low-pass filter, as you've described. The third and fourth crosses have both been bicubic downsampled: one had NR applied before downsampling and one had the same amount of NR applied after. Would you say that they show the same S/N, or would you say that one is more successful?

f051cc97aefa4d648170c2085ba3ac57.jpg.png
To my eyes, the SNR of the last two is all but identical. However, the last is significantly sharper with regard to the vertical bar of the +. That said, depending on the actual widths of the bars, the second-to-last may be the more accurate representation.
Not sure about that. It appears to have an abrupt transition from 'light' to 'dark' on the right-hand side of the vertical bar, which, as John Sheehy will tell you, is something that should never happen in a properly sampled image. It looks like a downsampling artefact to me. Knickerhawk is using bicubic, which, whilst it has a better inherent low-pass than nearest neighbour, still doesn't have enough to prevent aliasing. Lanczos is a bit worse. The issue is really that these interpolation algorithms are a little bit anachronistic: they are designed to optimise upsampling and produce convincing interpolation, not to optimise downsampling.
If it's that debatable, then it's immaterial to the practice of photography.
Almost anything is debatable. In my own experience, denoising before downsampling generally produces better results than doing it in the reverse order. This is particularly true of the blotchy, smoothed colour noise. For instance, same shot, same NR, same resampling, same post-resampling sharpening.

4d26703b6e5f4ff982944b545b5f2beb.jpg

ab416d03f7514cf19e5ee7678c9b1485.jpg

Knickerhawk's views clearly differ, and we are discussing, as we do.
The difference in the stone columns is startling.

Is the only difference the denoising and downsampling order?

I’m sold.

--
Paul
Just an old dos guy
 
In particular Bobn2 in response to a post by Jim Sterling noted a benefit to applying noise reduction to the higher resolution fullframe image before downresolving it to the mFT image size. I responded that it's, of course, true that applying extra noise reduction at any point in the post-processing chain will yield a more noise-free image BUT at the expense of some loss of resolution as well.
The principle is quite simple. A low pass filter will reduce noise. Since downsampling, properly done, includes a low pass filter, downsampling reduces noise (if you do it without the low pass filter, you replace the noise by aliasing). Noise reduction is supposed to reduce noise more than an equivalent amount of low-pass filtering. If it didn't, there wouldn't be much point having sophisticated NR, you'd just low pass. So, if instead of low-pass filtering you use NR to a level that reduces the detail to that equivalent to the low-pass filter, then you should have a lower proportion of noise. Simple as that, really.
Unfortunately, it's not that simple, really. You haven't factored in the impact of combining the NR with the chosen downsampling algorithm, which as you note will also include a low pass filter. The question I'm interested in is a practical one based on real tools such as ACR and Photoshop and how much of a visible IQ difference (if any) it makes when your workflow is NR > downsample vs. downsample > NR.
Always good to put theories to the test with examples.
I've now done quite a few comparisons over the past couple of days and, at least with two common NR options in PS/ACR and using various downsampling options, I can't say that it's always more effective to apply NR before downsampling. Below is my attempt to clearly illustrate the principles at play. The left-most cross is from the D850 ISO6400 shot before any downsampling or NR has been applied. The second-from-left cross has been bicubic downsampled only (no NR applied). The visibly decreased noise with minimal aliasing indicates that the bicubic algorithm includes a low-pass filter, as you've described. The third and fourth crosses have both been bicubic downsampled: one had NR applied before downsampling and one had the same amount of NR applied after. Would you say that they show the same S/N, or would you say that one is more successful?

f051cc97aefa4d648170c2085ba3ac57.jpg.png
To my eyes, the SNR of the last two is all but identical. However, the last is significantly sharper with regard to the vertical bar of the +. That said, depending on the actual widths of the bars, the second-to-last may be the more accurate representation.
Not sure about that. It appears to have an abrupt transition from 'light' to 'dark' on the right-hand side of the vertical bar, which, as John Sheehy will tell you, is something that should never happen in a properly sampled image. It looks like a downsampling artefact to me. Knickerhawk is using bicubic, which, whilst it has a better inherent low-pass than nearest neighbour, still doesn't have enough to prevent aliasing. Lanczos is a bit worse. The issue is really that these interpolation algorithms are a little bit anachronistic: they are designed to optimise upsampling and produce convincing interpolation, not to optimise downsampling.
If it's that debatable, then it's immaterial to the practice of photography.
Almost anything is debatable. In my own experience, denoising before downsampling generally produces better results than doing it in the reverse order. This is particularly true of the blotchy, smoothed colour noise. For instance, same shot, same NR, same resampling, same post-resampling sharpening.

4d26703b6e5f4ff982944b545b5f2beb.jpg

ab416d03f7514cf19e5ee7678c9b1485.jpg

Knickerhawk's views clearly differ, and we are discussing, as we do.
Our views probably don't differ that much. Just by degree. You say above that denoising first "generally produces better results..." I'm actually ok with that statement, but I would qualify it by saying that the issue is highly dependent on several factors that can easily swing the situation the other way. Those factors include the specific image, the denoising and resampling tools used, the amounts applied and the photographer/editor's preferences. I would also note that in my experience the advantage of denoising first can often be minimized to the point of undetectability at realistic output viewing sizes.

Your church interior example shows well the tradeoffs involved here and why it's difficult to generalize about "better results". No doubt, the problems introduced by the aggressive denoising and sharpening have been exacerbated by the JPEG artifacts, but I suspect that the problems are already there in the original pre-JPEG conversion renderings as well. The denoised-first rendering shows more detail but also accentuates the artifacts.

Top=Denoised first; Bottom=denoised after downsampling

As for my cross comparison above, you indicated in your previous response to me that an experiment like this one doesn't control well for variables. On the contrary, the variables were quite well controlled, but what wasn't done properly was a repeat to confirm my initial results. I apparently messed up and mislabeled one of the layers. Below is a corrected version of the comparison with one added variation. I've also revealed which is which:

Far left=Original, not downsampled or denoised; mid-left=downsampled only; center=denoised first then downsampled; mid-right=downsampled first then denoised with same settings; far right=downsampled first then denoised with very slight adjustments in settings. (Note: standard deviation checks of uniform areas of the image show virtually no differences between the three denoised variants.)
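For reference, the "standard deviation checks of uniform areas" mentioned in the caption can be scripted along these lines (a sketch in Python with numpy and Pillow; the file names and patch coordinates are placeholders, not the actual crops from this thread):

```python
import numpy as np
from PIL import Image

def patch_std(path, box=(20, 20, 120, 120)):
    # Per-channel standard deviation of a nominally uniform patch (box = left, upper, right, lower).
    arr = np.asarray(Image.open(path).convert("RGB").crop(box), dtype=np.float64)
    return arr.reshape(-1, 3).std(axis=0)

# Hypothetical file names; substitute the actual exported variants being compared.
for name in ("nr_then_downsample.png", "downsample_then_nr.png"):
    print(name, np.round(patch_std(name), 2))
```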
 
Just buy an FF camera and have done with it, if that floats your boat. It will do things an M43 camera cannot do. No amount of bizarre defensive techtalk will ever change that. I would not be in the least bit surprised if FF was the dominant format in five years’ time, with a couple of APS-C and 1” specials for the long-reach crowd and everyone else using very capable smartphones. I like M43 but Olympus versus rest of the world is a bit of a foregone conclusion, isn’t it.
Buying FF does has a sense, but only after the FF will have the number of pixels 4 times more than m43, and the lenses will be as good as those for m43 in terms of the absolute resolution. Otherwise the two systems have their advantages and disadvantages with respect to each other depending on a task.

For example, the larger pixels density and exellent tele-lenses (like Oly 300 f/4) in case of the m43 make this system to be significantly better for the world-life photography than FF systems I know. Also the excellent pro lenses like OLY 17 mm f/1.2 allow for m43 to compeet with FF at low light. I do not know f/1.2 lens for FF that is perfectly sharp across the frame (Oly 17 f/1.2 is indeed increadebly sharp at f/1.2). Even at f/2.8 FF lenses are so-so according to my criteria.

The higher dynamic range of FF sensor is only significant if we are looking at an image at the1:1 size or capable to see all the resolved details. But in majority cases, if you are interested only in the full-frame view you can downsample the m43 image and increase the dynamic range. For example, the down-sampled m43 image from 16 Mpx to 4 Mpx will have the same dynamic range as 16 Mpx FF-image. From my experience 4Mpx images printed on the A5-size paper are hardly distinguished from the 16Mpx ones in terms of the visible detailes. The properly downsampled photo looks better if printed at proper size.

The proper downsampling can also be important in future, when FF-sensors with huge amount of pixels (~80 Mpx) will be typical.

I am sure that m43 system has the Future.
Sorry, but what is this nonsense?!? Downsampling my 16MP to 4MP so I can print like full frame? Show me a great 30x40 or 40x60 print from a 4MP downsampled MFT file that will rival full frame. You cannot increase dynamic range and tonal range with downsampling. All you are doing is creating an illusion of extended dynamic range, because the image is small. When you print big, and I have done that many times, I can tell you no one is printing 4MP to get gorgeous 30x40 prints. Besides, why do people need a D850 or EOS 5DSR?!? For that 45MP and 50MP, to print beyond 40x60"!

The higher dynamic range from full frame is only significant when printing really, really big. The difference begins to show from 30x40 and up, and is clearly noticeable at 40x60". Which is why professional galleries only accept 45-50MP full-frame and medium format files! They also sell prints starting at $10,000. I used to have a client who runs a gallery at Granville Island selling prints at 40x60" and up, and you could tell the difference between my 16MP E-P5 and her Nikon D810 if I were forced to print that big. But I don't.

I'm not going to question the tech knowledge that you're writing here, because that isn't something I know (or care) too much about. The gallery, though, that doesn't take anything but full-frame and medium format files? Really? What is probably one of the greatest photography galleries is in my town, and I've seen just about every kind of camera and process represented in shows there. One of the shows I know they were most proud of was of Robert Frank's work, which was all pretty large prints and was printed largely from old, grainy 35mm film. Even if I were using a large-mpx camera, I'd probably skip showing my work at such a gallery, because if they have to ask me what I shot the work with, they aren't doing a very good job of actually looking at the work.

I feel like a lot of this mpx stuff is just academic. Sure, if you're printing giant and you still expect people to walk up to the work so that they can see tiny details, then I suppose you do need to make the images with a camera that makes a giant file. So much of the time, though, images aren't experienced in this way, and having fine detail is really just an aesthetic decision, not any kind of guarantee of the overall quality of the work. So much of painting isn't in a photo-realist style, so why does photography only get judged by measures of realism?
Isn't an A5 print only about 6x8"? You don't need full frame for that. In fact, I once printed a 16x20" shot taken with my E-P5 and showed it at a store where a couple of professional photogs were working. All thought I shot it with a Nikon D800!

But sorry, your analysis is just your personal opinion.
 
In particular Bobn2 in response to a post by Jim Sterling noted a benefit to applying noise reduction to the higher resolution fullframe image before downresolving it to the mFT image size. I responded that it's, of course, true that applying extra noise reduction at any point in the post-processing chain will yield a more noise-free image BUT at the expense of some loss of resolution as well.
The principle is quite simple. A low pass filter will reduce noise. Since downsampling, properly done, includes a low pass filter, downsampling reduces noise (if you do it without the low pass filter, you replace the noise by aliasing). Noise reduction is supposed to reduce noise more than an equivalent amount of low-pass filtering. If it didn't, there wouldn't be much point having sophisticated NR, you'd just low pass. So, if instead of low-pass filtering you use NR to a level that reduces the detail to that equivalent to the low-pass filter, then you should have a lower proportion of noise. Simple as that, really.
Unfortunately, it's not that simple, really. You haven't factored in the impact of combining the NR with the chosen downsampling algorithm, which as you note will also include a low pass filter. The question I'm interested in is a practical one based on real tools such as ACR and Photoshop and how much of a visible IQ difference (if any) it makes when your workflow is NR > downsample vs. downsample > NR.
Always good to put theories to the test with examples.
I've now done quite a few comparisons over the past couple of days and, at least with two common NR options in PS/ACR and using various downsampling options, I can't say that it's always more effective to apply NR before downsampling. Below is my attempt to clearly illustrate the principles at play. The left-most cross is from the D850 ISO6400 shot before any downsampling or NR has been applied. The second-from-left cross has been bicubic downsampled only (no NR applied). The visibly decreased noise with minimal aliasing indicates that the bicubic algorithm includes a low-pass filter, as you've described. The third and fourth crosses have both been bicubic downsampled: one had NR applied before downsampling and one had the same amount of NR applied after. Would you say that they show the same S/N, or would you say that one is more successful?

f051cc97aefa4d648170c2085ba3ac57.jpg.png
To my eyes, the SNR of the last two is all but identical. However, the last is significantly sharper with regard to the vertical bar of the +. That said, depending on the actual widths of the bars, the second-to-last may be the more accurate representation.
Not sure about that. It appears to have an abrupt transition from 'light' to 'dark' on the right-hand side of the vertical bar, which, as John Sheehy will tell you, is something that should never happen in a properly sampled image. It looks like a downsampling artefact to me. Knickerhawk is using bicubic, which, whilst it has a better inherent low-pass than nearest neighbour, still doesn't have enough to prevent aliasing. Lanczos is a bit worse. The issue is really that these interpolation algorithms are a little bit anachronistic: they are designed to optimise upsampling and produce convincing interpolation, not to optimise downsampling.
If it's that debatable, then it's immaterial to the practice of photography.
Almost anything is debatable. In my own experience, denoising before downsampling generally produces better results than doing it in the reverse order. This is particularly true of the blotchy, smoothed colour noise. For instance, same shot, same NR, same resampling, same post-resampling sharpening.

4d26703b6e5f4ff982944b545b5f2beb.jpg

ab416d03f7514cf19e5ee7678c9b1485.jpg

Knickerhawk's views clearly differ, and we are discussing, as we do.
The difference in the stone columns is startling.

Is the only difference the denoising and downsampling order?

I’m sold.
Yes, same raw file, same raw conversion to 16 bit TIFF, same settings for denoising, downsampling and sharpening. I think the clearest difference is the stained glass window.

--
Ride easy, William.
Bob
 
In particular Bobn2 in response to a post by Jim Sterling noted a benefit to applying noise reduction to the higher resolution fullframe image before downresolving it to the mFT image size. I responded that it's, of course, true that applying extra noise reduction at any point in the post-processing chain will yield a more noise-free image BUT at the expense of some loss of resolution as well.
The principle is quite simple. A low pass filter will reduce noise. Since downsampling, properly done, includes a low pass filter, downsampling reduces noise (if you do it without the low pass filter, you replace the noise by aliasing). Noise reduction is supposed to reduce noise more than an equivalent amount of low-pass filtering. If it didn't, there wouldn't be much point having sophisticated NR, you'd just low pass. So, if instead of low-pass filtering you use NR to a level that reduces the detail to that equivalent to the low-pass filter, then you should have a lower proportion of noise. Simple as that, really.
Unfortunately, it's not that simple, really. You haven't factored in the impact of combining the NR with the chosen downsampling algorithm, which as you note will also include a low pass filter. The question I'm interested in is a practical one based on real tools such as ACR and Photoshop and how much of a visible IQ difference (if any) it makes when your workflow is NR > downsample vs. downsample > NR.
Always good to put theories to the test with examples.
I've now done quite a few comparisons over the past couple of days and, at least with two common NR options in PS/ACR and using various downsampling options, I can't say that it's always more effective to apply NR before downsampling. Below is my attempt to clearly illustrate the principles at play. The left-most cross is from the D850 ISO6400 shot before any downsampling or NR has been applied. The second-from-left cross has been bicubic downsampled only (no NR applied). The visibly decreased noise with minimal aliasing indicates that the bicubic algorithm includes a low-pass filter, as you've described. The third and fourth crosses have both been bicubic downsampled: one had NR applied before downsampling and one had the same amount of NR applied after. Would you say that they show the same S/N, or would you say that one is more successful?

f051cc97aefa4d648170c2085ba3ac57.jpg.png
To my eyes, the SNR of the last two is all but identical. However, the last is significantly sharper with regard to the vertical bar of the +. That said, depending on the actual widths of the bars, the second-to-last may be the more accurate representation.
Not sure about that. It appears to have an abrupt transition from 'light' to 'dark' on the right-hand side of the vertical bar, which, as John Sheehy will tell you, is something that should never happen in a properly sampled image. It looks like a downsampling artefact to me. Knickerhawk is using bicubic, which, whilst it has a better inherent low-pass than nearest neighbour, still doesn't have enough to prevent aliasing. Lanczos is a bit worse. The issue is really that these interpolation algorithms are a little bit anachronistic: they are designed to optimise upsampling and produce convincing interpolation, not to optimise downsampling.
If it's that debatable, then it's immaterial to the practice of photography.
Almost anything is debatable. In my own experience, denoising before downsampling generally produces better results than doing it in the reverse order. This is particularly true of the blotchy, smoothed colour noise. For instance, same shot, same NR, same resampling, same post-resampling sharpening.

4d26703b6e5f4ff982944b545b5f2beb.jpg

ab416d03f7514cf19e5ee7678c9b1485.jpg

Knickerhawk's views clearly differ, and we are discussing, as we do.
Our views probably don't differ that much. Just by degree. You say above that denoising first "generally produces better results..." I'm actually ok with that statement, but I would qualify it by saying that the issue is highly dependent on several factors that can easily swing the situation the other way. Those factors include the specific image, the denoising and resampling tools used, the amounts applied and the photographer/editor's preferences. I would also note that in my experience the advantage of denoising first can often be minimized to the point of undetectability at realistic output viewing sizes.
I would disagree with none of that.
Your church interior
I'd disagree with "church interior"; it's actually a house.
example shows well the tradeoffs involved here and why it's difficult to generalize about "better results". No doubt, the problems introduced by the aggressive denoising and sharpening have been exacerbated by the JPEG artifacts, but I suspect that the problems are already there in the original pre-JPEG conversion renderings as well. The denoised-first rendering shows more detail but also accentuates the artifacts.

Top=Denoised first; Bottom=denoised after downsampling
Sure, in practice you wouldn't keep everything the same. With more detail, you wouldn't do as much post-resampling sharpening. I'm not sure how much of what you see comes from the JPEG coding; I used quite a small file size, the better to upload to DPR. Maybe a mistake.
As for my cross comparison above, you indicated in your previous response to me that an experiment like this one doesn't control well for variables. On the contrary, the variables were quite well controlled,
I would disagree. My demonstration above also didn't control the variables well; it is just very hard to do unless you know your tools and what they are doing inside out. Using the same settings doesn't guarantee that the same operations are actually being performed.
but what wasn't done properly was a repeat to confirm my initial results. I apparently messed up and mislabeled one of the layers. Below is a corrected version of the comparison with one added variation. I've also revealed which is which:

Far left=Original, not downsampled or denoised; mid-left=downsampled only; center=denoised first then downsampled; mid-right=downsampled first then denoised with same settings; far right=downsampled first then denoised with very slight adjustments in settings. (Note: standard deviation checks of uniform areas of the image show virtually no differences between the three denoised variants.)
I would say that in that particular case, there is little to choose between the two right hand ones. I'm not sure that is universally true or would be true for all parts of the image.

--
Ride easy, William.
Bob
 
M43 may be good for today but what about 10, 20 years from now?
Are you going to develop superhuman vision in 10-20 years, which allows you to see some kind of "pixelation" in 20MP pictures from a viewing distance that doesn't involve you being almost glued to the picture / print?
No need to attack me. If M43 is all you want and need then enjoy it. But 42MP FF on 4K has been a huge step up, and when 8K rolls around I want to capitalize on that improvement too.
 
