first one to crack 100mp with a FF is a rotten....

For better or worse, we're dealing with camera companies, not software companies. But I agree that a renewed and more serious approach to data management would go a long way for the companies and the end users. You are right; there is no logical reason for an ISO6400 image to have 14 bit color depth.
The Read Noise in DN charts at PhotonsToPhotos are good for getting an idea of the overkill. Any camera/ISO that scores about +0.33 on the left axis has just about the right bit depth for all intents and purposes, and down to -0.33 would not be all that problematic for most exposures, which don't flirt with near-black, or if the camera has very random read noise and you aren't going to be stacking the images. So, when you see the trendlines and dots rise above (or further above) +0.33 toward the right of the chart, you are seeing precision overkill. If a camera/ISO scores +8.33 on the left axis, it has 8 bits more than it needs. Truth be told, many RAWs actually use only a fraction of the values, with huge gaps in their histograms at high ISOs, but the way manufacturers write files, they don't wind up much smaller than if all the bits were used, or are even the same size.
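For anyone who wants to play with that threshold, here's a rough sketch of the arithmetic behind those charts. The noise figures below are made up for illustration, not measurements from any particular camera:

```python
import math

def excess_bits(read_noise_dn, margin=0.33):
    """Rough excess precision: how far log2(read noise in DN) sits above
    the ~+0.33 mark where quantization stops mattering. Positive values
    suggest the file carries more bits than the noise floor can justify."""
    return math.log2(read_noise_dn) - margin

# Hypothetical examples (not measured values):
print(round(excess_bits(1.26), 2))   # noise ~1.26 DN -> ~0, bit depth about right
print(round(excess_bits(10.0), 2))   # noise ~10 DN -> ~3 bits of overkill
```

A camera whose high-ISO read noise is many DN is, by this reckoning, shipping bits of pure noise.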
 
This is needed. Suckers, er, people are being conned into paying premiums for 36-50mp cameras when the actual resolution increase over the stalwart 24mp models is a measly 30%. Meaning, although the pixel count has just about doubled, resolution has only increased marginally. So marginal that although it's visible at 100% onscreen, it's almost invisible in print, unless the prints are very large.
Or you use a crop.
To make resolution markedly better, you must quadruple the pixel count. That doubles true resolution to where you can actually and clearly make good use of the extra. So, 100mp should be a goal for manufacturers. m4/3rds is at a FF equivalent of 80mp, so 100 is no great stretch. But if mfg's do create 100mp cameras, hopefully they won't price-gouge for the privilege of owning one. High ISO fanatics who actually (think) they need 14 steps of DR can stick with 24mp.
You could use this same argument for EVERY resolution bump. BTW, it's more like 44% for 50mp vs. 24mp. My latest upgrade was from a Canon 5D II to a 5DS. The difference is very obvious. And, of course, there are more changes than just a bump in resolution.
With the Phase announcement, FF (and APS) dramatically lag in resolution.
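The 30% vs. 44% disagreement above comes down to the same arithmetic: linear resolution scales with the square root of pixel count, so doubling the megapixels only gains about 41% in linear detail, and you need 4x the pixels to double it. A quick check:

```python
import math

def linear_gain(mp_new, mp_old):
    """Linear resolution scales with the square root of pixel count."""
    return math.sqrt(mp_new / mp_old) - 1.0

print(f"{linear_gain(36, 24):.0%}")   # 36mp over 24mp: ~22%
print(f"{linear_gain(50, 24):.0%}")   # 50mp over 24mp: ~44%
print(f"{linear_gain(96, 24):.0%}")   # quadrupled pixel count: 100%, i.e. doubled resolution
```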
 
Yea, 42Mp/50Mp is ancient news. Way behind industry standards now.
 
The Read Noise in DN charts at Photons2Photos are good for getting an idea of the overkill. Any camera/ISO that scores about +0.33 on the left axis has just about the right bit depth for all intents and purposes, and down to -0.33 would not be all that problematic for most exposures, which don't flirt with near-black, or if the camera had very random read noise and you aren't going to be stacking the images. So, when you see the trendlines and dots rise above (or further above) +0.33 to the right of the chart, you are seeing precision overkill. If a camera/ISO scores +8.33 on the left axis, it has 8 bits more than it needs. Truth be told, many RAWs actually only use a fraction of the values with huge gaps in their histograms at high ISOs, but the way manufacturers write files, they don't wind up much smaller than if all the bits were used, or are even the same size.
This presentation of Read Noise in DN versus Bit-depth at PhotonsToPhotos makes this information pretty digestible.
I really need to add a link to this on the home page !
 
This presentation of Read Noise in DN versus Bit-depth at PhotonsToPhotos makes this information pretty digestible.

At the URL in your post, in the Introduction, in the description of the “Green Band”:

you are missing a “less” or “more” in the phrase: “...to be _ than necessary...”.🙂
 
you are missing a “less” or “more” in the phrase: “...to be _ than necessary...”.🙂
Thanks, I added "more" to that sentence.
 
Yea, 42Mp/50Mp is ancient news. Way behind industry standards now.
There's a standard resolution?
 
There's a standard resolution?
It is 150 Mp. Since PhaseOne bumped up the megapixel count by 50%, Nikon/Canon/Sony need to do the same. Now, their standard is 70Mp give or take. Fuji's is 36Mp (APS-C). And Fuji/Pentax in medium format - 100Mp minimum. No excuse for a measly 50MP in medium format!
 
Whenever I hear someone opine about "not needing" more resolution, I can't help thinking back to Bill Gates and not needing any more than 640k of memory in a computer. Part of the problem here is most people are still using small, crappy HD monitors (2MP) to look at increasingly detailed images. One look at a larger monitor and 4K changes the whole perspective on resolution. All the guy sees with a 24-27" HD monitor is a soft-looking image at 100%. That's not what 4K looks like at all. Computer monitors are in the Dark Ages, compared to sensors.
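To put numbers on that monitor gap, here are the pixel counts of common display resolutions (a rough sketch, just multiplying the standard dimensions):

```python
# Pixel counts of common monitor resolutions versus typical sensor outputs.
monitors = {"HD (1920x1080)": 1920 * 1080,
            "4K UHD (3840x2160)": 3840 * 2160,
            "8K UHD (7680x4320)": 7680 * 4320}
for name, px in monitors.items():
    print(f"{name}: {px / 1e6:.1f}mp")
# Even an 8K monitor shows only ~33mp -- still below a 50mp sensor's output.
```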
 
Indeed.
 
Whenever I hear someone opine about "not needing" more resolution, I can't help thinking back to Bill Gates and not needing any more than 640k of memory in a computer.
Except he almost certainly never said that.

Anyway, the growth of computer memory capacity isn't comparable to the growth of reasonable resolution for human consumption.

The earliest personal computers I remember had 1KB of memory. It's easy for a computer or other personal device today to have 64GB of memory. That's an increase by a factor of 64,000,000 in about 40 years.

The first digital camera I owned had a resolution of a little over 1mp. Does anyone think humans will be routinely shooting and consuming photos of 64,000,000mp (64,000 gigapixels) in a few decades? That would equate to a 300dpi print that's more than half a mile across the long edge. I think a reasonable resolution limit somewhat lower than that will be reached.
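The arithmetic, for anyone checking (decimal units for the memory figures, and a 3:2 aspect ratio assumed for the hypothetical print):

```python
import math

# Memory growth: 1KB -> 64GB (decimal units, as in the post)
growth = 64e9 / 1e3
print(f"{growth:.0f}x")                 # 64000000x

# The same factor applied to a 1mp camera: 64,000,000mp total
pixels = 64_000_000 * 1e6               # total pixel count
long_edge_px = math.sqrt(pixels * 1.5)  # 3:2 aspect ratio assumed
long_edge_miles = long_edge_px / 300 / 12 / 5280   # 300dpi print: inches -> feet -> miles
print(f"{long_edge_miles:.2f} miles")   # 0.52 miles -- a bit over half a mile
```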
 
If they're going to push further with megapixels they have to get more clever about how they use them.
Surely that's the photographer's job?
I mean in terms of managing them in useful ways for the photographer. For example, I showed that downsampled images from a high-res sensor still retain more detail. Having a way to downsample in camera while retaining the full data is one less step in the processing workflow. I don't think 100MP photographs are of much use to most people, but the added detail captured by a 100MP sensor in a downsampled photograph most certainly is.
This last sentence is somewhat confusing. On the one hand, you say 100 MP photographs aren't useful for most people but on the other hand, you state, quite correctly, the number one reason for why most folks would benefit from 100 MP (whether they realize it or not).

Every time Canon released a higher resolution 5 series camera, I would compare the new to the old at the actual image size I commonly use. I almost always down sample by the way. The 5D2 was a revelation and forever cemented the idea that more MPs are better when resolution of fine detail matters. (This matters for me 100% of the time.) The 5DsR was a significant step up too.

However, if the image/print size does not increase as the MPs at capture grow (cropping being a different topic), you may find yourself with a state of diminishing returns. The year I bought a 5DsR and 645z, I also upgraded from an HD class monitor to a 32" 4k monitor. If I were still viewing my images on an HD monitor, I more than likely would be scratching my head wondering why I spent so much money.

100/150 MP is a current reality in medium format. 100 MPs more than likely will be within my budget, but, apart from cropping, I am concerned whether 100 MPs will help the final image quality of my particular application or be weakened due to so many pixels being thrown away via down sampling. 100 MP in MF will arrive sooner than higher resolution monitors will arrive.

Your thoughts?
 
This is typical of many of the posts here. Based on flawed logic and numbers rather than actual observation and understanding.

You are completely wrong, and have it exactly backwards from what really happens.

"So marginal that although it's visible at 100% onscreen, it's almost invisible in print, unless the prints are very large."

Fact: Screen resolution is fixed. The pixel density is hard-wired into the hardware; you can neither increase nor decrease it. All you can do is show the photo at 100% or less, which changes the size of the viewed image, not the resolution.

Print is precisely where it's visible. If you can't get an A3 print of a 46MP image to show better gradation in tones than one shot at 24Mp then you really don't know what you're doing.

I don't think you understand resolution and how it's perceived at all. Being able to *zoom in* and see extra detail is not resolution. On a screen it's simply showing a larger file at *the same resolution*.

Extra resolution in print shows better textures, better gradations between tones, the appearance of sharper edges. Every day we look at things in the real world that have detail beyond our visual acuity. If you can't understand that when you do the same in print things will appear more real then you need to go back to school.

I'm just getting tired of all this false logic based on nothing more than what people wish to believe. It's no substitute for observation or principles that have been widely proved and accepted as truth. You're basically just making this up as you go along. ;-)
 
Your thoughts?
I don't know the math on how much detail more native resolution adds to a downsampled image, but after seeing the detail from the D850 I'm convinced.

When I said a 100MP photograph is not of much use that's what I meant. I too view my photos in 4K, so I'm either looking at the photos at ~6MP full size or in 8.3MP 100% chunks. My biggest prints are about A2 size. Even at 42MP, going to 100% feels like overkill. Managing shake and overall data becomes a pain. However, high ISO shots look way better downsampled, and again, as that D850 studio comparison showed, I'm getting more detail.

So I think 100MP and beyond can be of use as the base capture, but I'd want the camera to be able to bin pixels for the files. 25MP is a good resolution, and I think with good on chip processing they may even be able to allow faster FPS for lower resolution settings, giving you something like a 5DS and 1Dx in one body. So I'm still not convinced on 100MP output for most people, but I think there is value in capturing 100MP+ on the sensor to generate higher quality "lower" resolution images... obviously with the option to get full resolution if one wants. Especially in the context of 6 and 8K monitors being an eventual inevitability.
 
I had noticed that there was a visible difference in 36Mp images even when viewed fairly small. There was a more continuous tone look to gradients, skies and various textures and surfaces. It wasn't hard to pick the 36Mp images out on the old "guess the format" website, but I was never sure if it was the extra DR or resolution that made it so.

If we ever go to 100Mp in 135 format, or even 75Mp, I'm curious if there will be an incremental improvement in the same way.
 
What do you mean by binning pixels? I think people have disparate meanings in mind when they use the term. Hardware binning would have a single analog read operation for multiple pixels that are binned. Is that what you meant? Or did you mean some post-A/D operation?
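To illustrate the post-A/D flavor of the term: a minimal software-binning sketch, assuming we simply average 2x2 blocks after readout. Averaging N independent samples cuts random noise by sqrt(N), but each pixel still paid its own read-noise penalty at readout, which is the key difference from hardware binning, where charge is combined before a single analog read:

```python
import numpy as np

def software_bin_2x2(img):
    """Post-A/D (software) binning: average each 2x2 block after readout."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Synthetic flat frame: mean 100, random noise sigma = 4
rng = np.random.default_rng(0)
noisy = 100 + rng.normal(0, 4, size=(1000, 1000))
binned = software_bin_2x2(noisy)
print(round(noisy.std(), 2), round(binned.std(), 2))  # noise roughly halves (sqrt(4) = 2)
```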
 
Yea, 42Mp/50Mp is ancient news. Way behind industry standards now.
Part of the issue may be the market. There seem to be a lot of photographers with disposable income who like to see maximum sharpness and SNR at 100% pixel view, not just from the lens (a natural desire, even if it results in more aliasing) but from having larger pixels, so there tend to be mixed feelings. People are only happy with increasing pixel density when SNR improvements keep up somewhat, and they get better lenses and better stability. Most people don't understand that, all else being equal, increased pixel density can be a good thing, even when 100% pixel views "suffer" more softness and noise; there is no reason that pixel noise and lens sharpness have to keep up with the density. The density alone can be valuable.

There is the issue of storage/bandwidth limits, so we can't expect all cameras to jump to very high pixel densities, but there is room for specialist cameras. I'd buy a 150MP FF camera if the noise was random and it only took 1 frame per second. There are certain situations in which nothing else will do quite as well. Stitching has imperfect seams. Both it and pixel-shift are for static subjects.

Call me a rebel, but I'd want a mild AA filter on a 150MP FF, so I could do things like rip out the RAW red channel to get a pure red-filtered B&W with subdued aliasing. Most people have no idea how extremely aliased the RAW red and blue channels can be with current pixel densities and sharp lenses and solid-enough stability, even with AA filters.
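For the curious, "ripping out" the red channel from a Bayer mosaic is just a stride-2 slice, and it shows why those channels alias so readily: red is sampled at only a quarter of the sensor's pixels (an RGGB layout with red at the top-left is assumed here; actual layouts vary by camera):

```python
import numpy as np

# Stand-in for raw sensor data; real mosaics would be the camera's RAW values.
mosaic = np.arange(16, dtype=float).reshape(4, 4)

# RGGB: red sits at every other row and column, starting at the top-left,
# so the "red image" has half the linear resolution of the full sensor.
red = mosaic[0::2, 0::2]
print(red.shape)    # (2, 2) -- one quarter of the pixels
```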
 
I concur.
 
