Why don't B&W digital cameras have 200% more resolution or less noise?

sergiotous

If I understand correctly, each color pixel is made of three photosites: one sensitive to light in the red part of the spectrum, another to green and the last one to blue.

If you make a sensor where every photosite senses light the same way (so 1 photosite equals 1 pixel), my instinct tells me that either the pixels will be 200% bigger or you can squeeze 200% more pixels into the same surface of the sensor.

However, I heard there is only a slight difference between monochrome sensors and color ones when it comes to noise or resolution. Why is that?

Thanks!
 
If I understand correctly, each color pixel is made of three photosites: one sensitive to light in the red part of the spectrum, another to green and the last one to blue.
Are you talking about a Bayer CFA? Then there are four planes, not three.
If you make a sensor where every photosite senses light the same way (so 1 photosite equals 1 pixel), my instinct tells me that either the pixels will be 200% bigger or you can squeeze 200% more pixels into the same surface of the sensor.

However, I heard there is only a slight difference between monochrome sensors and color ones when it comes to noise or resolution. Why is that?
In general, monochromatic sensors are about twice as efficient in converting photons to electrons as Bayer CFA sensors. That will mean half a stop higher photon signal to noise ratio in light-limited situations. If you can put as much light on the sensor as you want, the SNR will be about the same.

Resolution as measured by the slanted edge MTF is a function of pixel aperture, which is likely the same. The mono sensor will have less aliasing, though.
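To put rough numbers on the half-stop figure, here is a minimal, purely illustrative Python sketch (the photoelectron count and the 2x efficiency ratio are assumptions, not measurements): photon shot noise is Poisson, so SNR scales as the square root of the collected signal.

```python
import math

# Photon shot noise is Poisson, so SNR = N / sqrt(N) = sqrt(N).
# Assume a Bayer pixel collects N photoelectrons and a monochrome pixel
# collects about 2N (roughly twice the quantum efficiency) -- illustrative only.
N = 10_000

snr_bayer = math.sqrt(N)          # 100.0
snr_mono = math.sqrt(2 * N)       # ~141.4, i.e. sqrt(2) higher

advantage_stops = math.log2(snr_mono / snr_bayer)
print(f"Mono SNR advantage: {advantage_stops:.2f} stops")   # ~0.50
```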

--
https://blog.kasson.com
 
Look at the monochrome Leica in the test scene. It has very low noise at high ISO compared to the color cameras, if you mentally convert their images to B&W or do it digitally. It has huge color noise, however. 🙂

Such a comparison is not fair, however. Typical cameras are designed to capture color. On top of the loss of light in most situations, the captured raw is mosaiced, which looks incredibly noisy from a monochrome point of view.

BTW, the only spectral sensitivity diagram for the monochrome Leica I was able to google shows a strong bump in the middle of the spectrum. It is almost like a G Bayer filter, but wider. That may mean a not-so-great response under some extreme colored lighting, but I am speculating here.
 
Thank you for your very informative posts. Now I want to try a monochrome camera :) haha
 
If I understand correctly, each color pixel is made of three photosites: one sensitive to light in the red part of the spectrum, another to green and the last one to blue.

If you make a sensor where every photosite senses light the same way (so 1 photosite equals 1 pixel), my instinct tells me that either the pixels will be 200% bigger or you can squeeze 200% more pixels into the same surface of the sensor.

However, I heard there is only a slight difference between monochrome sensors and color ones when it comes to noise or resolution. Why is that?

Thanks!
OK, I think a simpler answer is that you use the same number of pixels for the B&W and the Bayer image. You don't combine the 4 RGGB photosites on the sensor into one pixel in the final image. The filter just helps to determine what the color of each pixel should be.
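As a tiny illustration of that point (a Python sketch; the sensor dimensions are made up), the demosaiced output keeps one pixel per photosite rather than one pixel per 2x2 RGGB group:

```python
import numpy as np

h, w = 4, 6                            # tiny illustrative sensor: 24 photosites

# RGGB Bayer colour filter array: each photosite records only one colour.
cfa = np.empty((h, w), dtype="<U1")
cfa[0::2, 0::2] = "R"
cfa[0::2, 1::2] = "G"
cfa[1::2, 0::2] = "G"
cfa[1::2, 1::2] = "B"
print(cfa)

# Demosaicing estimates the two missing colours at every photosite, so the
# finished image keeps the full h x w pixel count -- nothing is binned 2x2.
print("output pixels:", h * w)         # 24, same as the number of photosites
```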
 
If I understand correctly, each color pixel is made of three photosites: one sensitive to light in the red part of the spectrum, another to green and the last one to blue.
Are you talking about a Bayer CFA? Then there are four planes, not three.
If you make a sensor where every photosite senses light the same way (so 1 photosite equals 1 pixel), my instinct tells me that either the pixels will be 200% bigger or you can squeeze 200% more pixels into the same surface of the sensor.

However, I heard there is only a slight difference between monochrome sensors and color ones when it comes to noise or resolution. Why is that?
In general, monochromatic sensors are about twice as efficient in converting photons to electrons as Bayer CFA sensors. That will mean half a stop higher photon signal to noise ratio in light-limited situations.
But if the sensor is twice as efficient, doesn't that mean it is 1 stop better? This might not contradict what you say about SNR, but shouldn't we say that the sensor is 1 stop better?

For instance, a full-frame sensor is more than 1 stop better than APS-C because it collects twice as much light. Likewise, if a sensor is twice as efficient, it makes sense to say it is 1 stop better.

Am I missing something?
If you can put as much light on the sensor as you want, the SNR will be about the same.

Resolution as measured by the slanted edge MTF is a function of pixel aperture, which is likely the same. The mono sensor will have less aliasing, though.
 
If I understand correctly, each color pixel is made of three photosites: one sensitive to light in the red part of the spectrum, another to green and the last one to blue.
Are you talking about a Bayer CFA? Then there are four planes, not three.
If you make a sensor where every photosite senses light the same way (so 1 photosite equals 1 pixel), my instinct tells me that either the pixels will be 200% bigger or you can squeeze 200% more pixels into the same surface of the sensor.

However, I heard there is only a slight difference between monochrome sensors and color ones when it comes to noise or resolution. Why is that?
In general, monochromatic sensors are about twice as efficient in converting photons to electrons as Bayer CFA sensors. That will mean half a stop higher photon signal to noise ratio in light-limited situations.
But if the sensor is twice as efficient, doesn't that mean it is 1 stop better?
In light-limited situations, the photon noise SNR will be half a stop greater.
This might not contradict what you say about SNR, but shouldn't we say that the sensor is 1 stop better?
Not without qualification.
For instance, a full-frame sensor is more than 1 stop better than APS-C because it collects twice as much light.
You need to define better quantitatively to make statements like that.
Likewise, if a sensor is twice as efficient, it makes sense to say it is 1 stop better.
Define "better". Then we can talk.
Am I missing something?
I think so.
If you can put as much light on the sensor as you want, the SNR will be about the same.

Resolution as measured by the slanted edge MTF is a function of pixel aperture, which is likely the same. The mono sensor will have less aliasing, though.
--
https://blog.kasson.com
 
If I understand correctly, each color pixel is made of three photosites: one sensitive to light in the red part of the spectrum, another to green and the last one to blue.
Are you talking about a Bayer CFA? Then there are four planes, not three.
If you make a sensor where every photosite senses light the same way (so 1 photosite equals 1 pixel), my instinct tells me that either the pixels will be 200% bigger or you can squeeze 200% more pixels into the same surface of the sensor.

However, I heard there is only a slight difference between monochrome sensors and color ones when it comes to noise or resolution. Why is that?
In general, monochromatic sensors are about twice as efficient in converting photons to electrons as Bayer CFA sensors. That will mean half a stop higher photon signal to noise ratio in light-limited situations.
But if the sensor is twice as efficient, doesn't that mean it is 1 stop better?
In light-limited situations, the photon noise SNR will be half a stop better.
This might not contradict what you say about SNR, but shouldn't we say that the sensor is 1 stop better?
Not without qualification.
For instance, a full-frame sensor is more than 1 stop better than APS-C because it collects twice as much light.
You need to define better quantitatively to make statements like that.
Likewise, if a sensor is twice as efficient, it makes sense to say it is 1 stop better.
Define "better". Then we can talk.
Am I missing something?
I think so.
This was simply a question out of curiosity. In fact, I just wanted to make sure I understood. I never said you were wrong, and my guess was also that there is no contradiction with what you say.

So I think we agree.

What I am saying is that, without qualification, I would personally tend to say that the monochrome sensor is 1 stop better. This is the comparison metric I usually use.

To define it: it is how much more/less light you need to equalise the SNR of the other sensor.

So I think everything is clear, but do not hesitate to correct me if there are errors.
If you can put as much light on the sensor as you want, the SNR will be about the same.

Resolution as measured by the slanted edge MTF is a function of pixel aperture, which is likely the same. The mono sensor will have less aliasing, though.
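To make the two ways of counting discussed above concrete, here is a minimal, purely illustrative Python sketch (the 2x efficiency ratio and the photoelectron count are assumptions): at the same exposure the mono sensor's shot-noise SNR is about half a stop higher, while the Bayer sensor needs about a full stop more light to reach the same SNR.

```python
import math

qe_ratio = 2.0          # assumed mono-vs-Bayer efficiency ratio (illustrative)
N = 10_000              # photoelectrons collected by a Bayer pixel (illustrative)

# Definition 1 (same capture conditions): shot-noise SNR advantage in stops.
snr_gain = math.log2(math.sqrt(qe_ratio * N) / math.sqrt(N))
print(f"Same exposure: mono SNR is {snr_gain:.2f} stops higher")       # ~0.5

# Definition 2 (equalise the SNR): extra light the Bayer sensor needs.
# SNR^2 scales with collected charge, so matching SNR needs qe_ratio more light.
extra_light = math.log2(qe_ratio)
print(f"Equal SNR: Bayer needs {extra_light:.1f} stop(s) more light")  # 1.0
```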
 
If I understand correctly, each color pixel is made of three photosites: one sensitive to light in the red part of the spectrum, another to green and the last one to blue.

If you make a sensor where every photosite senses light the same way (so 1 photosite equals 1 pixel), my instinct tells me that either the pixels will be 200% bigger or you can squeeze 200% more pixels into the same surface of the sensor.

However, I heard there is only a slight difference between monochrome sensors and color ones when it comes to noise or resolution. Why is that?
But I consider that the difference is important.

Usually we start to see a difference when the difference is 1/3 stop, and 1 stop makes an important difference.

Is it worth it? I do not know.

Color can be useful for creating monochrome images when you edit, to add contrast for a specific color. But personally, I am not a fan of this kind of artificial effect, so I think a monochrome sensor is perfect for B&W!

If I had unlimited money, I would no doubt add a monochrome camera to my kit.
 
If I understand correctly, each color pixel is made of three photosites: one sensitive to light in the red part of the spectrum, another to green and the last one to blue.
Are you talking about a Bayer CFA? Then there are four planes, not three.
If you make a sensor where every photosite senses light the same way (so 1 photosite equals 1 pixel), my instinct tells me that either the pixels will be 200% bigger or you can squeeze 200% more pixels into the same surface of the sensor.

However, I heard there is only a slight difference between monochrome sensors and color ones when it comes to noise or resolution. Why is that?
In general, monochromatic sensors are about twice as efficient in converting photons to electrons as Bayer CFA sensors. That will mean half a stop higher photon signal to noise ratio in light-limited situations.
But if the sensor is twice as efficient, doesn't that mean it is 1 stop better?
In light-limited situations, the photon noise SNR will be half a stop better.
This might not contradict what you say about SNR, but shouldn't we say that the sensor is 1 stop better?
Not without qualification.
For instance, a full-frame sensor is more than 1 stop better than APS-C because it collects twice as much light.
You need to define better quantitatively to make statements like that.
Likewise, if a sensor is twice as efficient, it makes sense to say it is 1 stop better.
Define "better". Then we can talk.
Am I missing something?
I think so.
This was simply a question out of curiosity. In fact, I just wanted to make sure I understood. I never said you were wrong, and my guess was also that there is no contradiction with what you say.

So I think we agree.

What I am saying is that, without qualification, I would personally tend to say that the monochrome sensor is 1 stop better. This is the comparison metric I usually use.

To define it: it is how much more/less light you need to equalise the SNR of the other sensor.
OK. That's right if that is your definition of better. I would define better in terms of the results with the same capture conditions.
So I think everything is clear, but do not hesitate to correct me if there are errors.
You've got it.
If you can put as much light on the sensor as you want, the SNR will be about the same.

Resolution as measured by the slanted edge MTF is a function of pixel aperture, which is likely the same. The mono sensor will have less aliasing, though.
 
If I understand correctly, each color pixel is made of three photosites: one sensitive to light in the red part of the spectrum, another to green and the last one to blue.

If you make a sensor where every photosite senses light the same way (so 1 photosite equals 1 pixel), my instinct tells me that either the pixels will be 200% bigger or you can squeeze 200% more pixels into the same surface of the sensor.

However, I heard there is only a slight difference between monochrome sensors and color ones when it comes to noise or resolution. Why is that?
But I consider that the difference is important.

Usually we start to see a difference when the difference is 1/3 stop, and 1 stop makes an important difference.

Is it worth it? I do not know.

Color can be useful for creating monochrome images when you edit, to add contrast for a specific color. But personally, I am not a fan of this kind of artificial effect, so I think a monochrome sensor is perfect for B&W!

If I had unlimited money, I would no doubt add a monochrome camera to my kit.
You might want to take a look here:

https://blog.kasson.com/?s=q2m
 
Likewise, if a sensor is twice as efficient, it makes sense to say it is 1 stop better.
Define "better". Then we can talk.
Am I missing something?
I think so.
This was simply a question out of curiosity. In fact, I just wanted to make sure I understood. I never said you were wrong, and my guess was also that there is no contradiction with what you say.

So I think we agree.

What I am saying is that, without qualification, I would personally tend to say that the monochrome sensor is 1 stop better. This is the comparison metric I usually use.

To define it: it is how much more/less light you need to equalise the SNR of the other sensor.
OK. That's right if that is your definition of better. I would define better in terms of the results with the same capture conditions.
Nice script for an episode of Seinfeld.

It is very difficult to take color images with a sensor without a CFA. So I would say that if you want color images, a sensor with a CFA is a far better choice.
 
If I understand correctly, each color pixel is made of three photosites: one sensitive to light in the red part of the spectrum, another to green and the last one to blue.

If you make a sensor where every photosite senses light the same way (so 1 photosite equals 1 pixel), my instinct tells me that either the pixels will be 200% bigger or you can squeeze 200% more pixels into the same surface of the sensor.

However, I heard there is only a slight difference between monochrome sensors and color ones when it comes to noise or resolution. Why is that?
But I consider that the difference is important.

Usually we start to see a difference when the difference is 1/3 stop, and 1 stop makes an important difference.

Is it worth it? I do not know.

Color can be useful for creating monochrome images when you edit, to add contrast for a specific color. But personally, I am not a fan of this kind of artificial effect, so I think a monochrome sensor is perfect for B&W!

If I had unlimited money, I would no doubt add a monochrome camera to my kit.
You might want to take a look here:

https://blog.kasson.com/?s=q2m
Thank you.

Also looking at some other comparisons (Q2 vs Q2M), it looks difficult to justify such a purchase.
 
You might want to take a look here:

https://blog.kasson.com/?s=q2m
Thank you.

Also looking at some other comparisons (Q2 vs Q2M), it looks difficult to justify such a purchase.
For some people, making some kinds of images, a monochromatic camera is a good choice. The improvement in quality over a similar Bayer CFA camera for monochromatic images is often subtle. So, for some people making some kinds of images, it is indeed hard to justify the expense and loss of versatility associated with the purchase of a monochromatic camera.
 
If I understand correctly, each color pixel is made of three photosites: one sensitive to light in the red part of the spectrum, another to green and the last one to blue.

If you make a sensor where every photosite senses light the same way (so 1 photosite equals 1 pixel), my instinct tells me that either the pixels will be 200% bigger or you can squeeze 200% more pixels into the same surface of the sensor.

However, I heard there is only a slight difference between monochrome sensors and color ones when it comes to noise or resolution. Why is that?
Just because someone does a comparison does not mean that it is meaningful.

For example, setting two systems to the same manual ISO in Av-priority mode is a TOTALLY BOGUS comparison.

There are really only two things that interest me when comparing color sensor monochrome to monochrome sensor monochrome.

One is ETTR at base ISO, and seeing how far down the shadows are usable. The other is how they compare for noise way above base ISO, with exactly the same exposure (the same Av and Tv values). That is all that matters for "high ISO low-light situations"; comparing the two systems in Av mode at the same manual ISO has absolutely nothing to do with anything practical whatsoever, and the metering will nearly normalize and nearly mask any differences in QE. No environment dictates the practice of using a specific ISO setting.

Most comparisons that people do are too diluted to give any clarity as to potential differences; comparing noise at base ISO, for example, tells very little about the difference or ratio of SNRs, because the noise is mostly subliminal and lost in processing. The color of the subject matter and of the light source also makes all the difference in the world in how color and monochrome sensors compare. You will see very little difference metered at the same ISO when the light or subject matter is a pinkish magenta, because all the photowells are filling almost equally, regardless of color channel. For daylight and white highlights, the red channel will fall about a stop behind the green channel in filling the wells, and the blue channel about 1/2 stop behind; still, not a huge difference at lower ISOs. What if the light is a red LED, though? Now the difference in QE is huge, about 12:1 or more.

There are no simple answers here. You have to understand a lot of details to estimate how and why the photowells will fill with various wavelengths, and how metering works.

As to resolution, the red LED is a good example of an extreme. Almost nothing will be recorded in 3/4 of the photowells, and you get low resolution with egregious potential aliasing with a color sensor. With the monochrome sensor, they all fill nearly equal to their neighbors, and fill much faster.
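Here is a rough, purely illustrative Python sketch of the well-filling argument above (the per-channel ratios are assumptions chosen to echo the figures mentioned, not measured spectral data):

```python
import math

# Made-up per-photosite responses relative to a monochrome photosite under the
# same light. These are NOT measurements; the daylight numbers loosely follow
# the "red about a stop behind green, blue about half a stop behind" figures
# above, and the red-LED numbers just illustrate the extreme case.
responses = {
    "daylight": {"R": 0.25, "G": 0.50, "B": 0.35},
    "red LED":  {"R": 0.85, "G": 0.02, "B": 0.02},
}

for light, ch in responses.items():
    # RGGB layout: half the photosites are green, a quarter each red and blue.
    avg = (ch["R"] + 2 * ch["G"] + ch["B"]) / 4
    dark = sum(1 for v in (ch["R"], ch["G"], ch["G"], ch["B"]) if v < 0.05) / 4
    print(f"{light}:")
    for name, v in ch.items():
        print(f"  {name} fills {math.log2(1 / v):.1f} stops behind mono")
    print(f"  average Bayer charge: {math.log2(1 / avg):.1f} stops behind mono")
    print(f"  photosites with almost no signal: {dark:.0%}")
```

With the red-LED numbers, three quarters of the photosites end up with essentially no signal, which is the resolution and aliasing problem described above.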
 
What I am saying is that, without qualification, I would personally tend to say that the monochrome sensor is 1 stop better. This is the comparison metric I usually use.
QE is not monolithic. It is a per-wavelength quantity (and differs with wavelength across the color channels). If you do ETTR at base ISO, the highlight objects are white, and the light in the room is magenta/pink so that white/grey objects fill the wells in all three color channels at the same rate, then you will collect just as much total charge in the color sensor as in the monochrome sensor doing full-well, base-ISO ETTR. If the light is daylight, the blue channel will fall half a stop behind the green channel, and the red channel about a stop, so less charge will be collected in the color sensor, even with ETTR. Use a red 650nm LED as the light source, however, and now the color sensor has 3/4 of its pixels with nothing to offer but noise (unless there are extreme specular highlights which might rise well above the noise floor in the green or blue channel). This also means 1/2 the linear resolution, and very high aliasing potential.

That's base-ISO ETTR, which already shows real differences with different spectra despite pushing against the wall of full-well (or full-raw) capacity. When you set a manual exposure in limited light, the differences become even greater, but they are often overlooked because people make the mistake of thinking that "ISO 12800 at f/4 in Av-priority mode" is an actual practical comparison, when in fact it is purely academic. The real need of a photographer in low light is to be able to use the slowest shutter speed that won't cause unwanted blur, and the desired f-number or pupil size, so any low-light comparison at elevated ISOs should NOT make the ISO the same, but make the Av and Tv values the same. In some kinds of lighting scenarios, that may make differences of up to 12x or more in total photowell charge.
 
If I understand correctly, each color pixel is made of three photosites: one sensitive to light in the red part of the spectrum, another to green and the last one to blue.

If you make a sensor where every photosite senses light the same way (so 1 photosite equals 1 pixel), my instinct tells me that either the pixels will be 200% bigger or you can squeeze 200% more pixels into the same surface of the sensor.

However, I heard there is only a slight difference between monochrome sensors and color ones when it comes to noise or resolution. Why is that?
But I consider that the difference is important.

Usually we start to see a difference when the difference is 1/3 stop, and 1 stop makes an important difference.

Is it worth it? I do not know.

Color can be useful for creating monochrome images when you edit, to add contrast for a specific color. But personally, I am not a fan of this kind of artificial effect, so I think a monochrome sensor is perfect for B&W!

If I had unlimited money, I would no doubt add a monochrome camera to my kit.
You might want to take a look here:

https://blog.kasson.com/?s=q2m
Thank you.

Also looking at some other comparisons (Q2 vs Q2M), it looks difficult to justify such a purchase.
You need to consider that the difference between the two types of sensors is small in many comparisons, because the comparisons do not fully exploit potential differences. The closer the light source is to the native "white" of the CFA (usually a pinkish magenta with most Bayer CFAs), the less difference there will be; and the more you allow the exposure settings and metering to normalize captured charge, rather than normalizing actual exposure (which is agnostic of the sensor type), the more similar the results will be.
 
As to resolution, the red LED is a good example of an extreme. Almost nothing will be recorded in 3/4 of the photowells, and you get low resolution with egregious potential aliasing with a color sensor. With the monochrome sensor, they all fill nearly equal to their neighbors, and fill much faster.
Here's what happens in red light:

https://blog.kasson.com/leica-q2-monochrom/q2-monochrom-gfx-100s/
 
As to resolution, the red LED is a good example of an extreme. Almost nothing will be recorded in 3/4 of the photowells, and you get low resolution with egregious potential aliasing with a color sensor. With the monochrome sensor, they all fill nearly equal to their neighbors, and fill much faster.
Here's what happens in red light:

https://blog.kasson.com/leica-q2-monochrom/q2-monochrom-gfx-100s/
Very thoughtful of you to use the #29 (deep red) filter, as that prevents "cheating" in resolution and aliasing, which would be possible if a color-to-greyscale conversion got overall gross tonality from only the red channel but got fine detail from them all.

That's not to say that such "cheating" would be wrong if that was what was wanted, but the use of the #29 makes the comparison more truly red-only.
 
You might want to take a look here:

https://blog.kasson.com/?s=q2m
Thank you.

Also looking at some other comparisons (Q2 vs Q2M), it looks difficult to justify such a purchase.
For some people, making some kinds of images, a monochromatic camera is a good choice. The improvement in quality over a similar Bayer CFA camera for monochromatic images is often subtle. So, for some people making some kinds of images, it is indeed hard to justify the expense and loss of versatility associated with the purchase of a monochromatic camera.
It would be better to buy a camera with a larger sensor, and then convert its large images to monochrome.

Don
 
