
What would a photo of a landscape look like if done using a Monochrome Full Spectrum camera?

Started Mar 21, 2022 | Questions
MacM545 Contributing Member • Posts: 783
What would a photo of a landscape look like if done using a Monochrome Full Spectrum camera?

More to the point, what if the photograph were obtained using the same method used in advanced astrophotography with a monochrome camera? The sensor is monochrome, but three photos of the same scene can be taken, each through a different filter, and with the appropriate processing a color image can be made. A monochrome camera, particularly one without a hot mirror, can see UV, visible, and IR light. Now, one might argue that you could simply use a full spectrum camera with a CFA, but what if the CFA isn't there? Would there be any difference beyond the better resolution and light sensitivity? Presumably, if anything, the colors might differ after white balance, since the spectral response of a monochrome camera may not be the same as that of a full spectrum color camera.

MacM545's gear list:
Sony RX100 II Canon EOS 500D Fujifilm X-T2 Canon EF-S 18-55mm f/3.5-5.6 Fujifilm 50-230mm II +1 more
D Cox Forum Pro • Posts: 32,980
Re: What would a photo of a landscape look like if done using a Monochrome Full Spectrum camera?

You might ask this question on the Sigma forum. Some of the Sigma cameras can be easily converted to full spectrum by the user, so there are several people on that forum who have experimented.

Don Cox

D Cox's gear list:
Sigma fp
SterlingBjorndahl Senior Member • Posts: 2,638
Achieving focus is a challenge

Achieving focus at all wavelengths is a challenge. UV and IR refract at different angles. Without taking that into account, your B&W image will just look somewhat blurry. If you have any old lenses around from film SLRs, you can see that most of them have special marks for focusing when you're using IR film; that's how different near-IR is from visible light. As an example, the Panasonic GH2 was overly sensitive to IR and UV, so for the sharpest images (using visible light) some people added UV-IR "cut" filters to their lenses.

Most of the time when using full spectrum, the UV component doesn't contribute a lot because the atmosphere blocks most of it.

Here's a graphic showing how much of the sun's energy reaches the Earth's surface, by wavelength.

https://en.wikipedia.org/wiki/Electromagnetic_spectrum#/media/File:Atmospheric_electromagnetic_opacity.svg

You may be able to get more technical detail from DPR's "Photographic Science and Technology" forum.

Sterling
--
Lens Grit

SterlingBjorndahl's gear list:
Olympus Air Panasonic Lumix DMC-GX85 Panasonic Leica D Vario-Elmar 14-150mm F3.5-5.6 Asph Mega OIS Panasonic Lumix G X Vario PZ 14-42mm F3.5-5.6 ASPH OIS Panasonic Lumix DMC-FZ50 +18 more
Bernard Delley Senior Member • Posts: 2,041
Re: What would a photo of a landscape look like if done using a Monochrome Full Spectrum camera?
2

MacM545 wrote:

More to the point, what if the photograph were obtained using the same method used in advanced astrophotography with a monochrome camera? The sensor is monochrome, but three photos of the same scene can be taken, each through a different filter, and with the appropriate processing a color image can be made. A monochrome camera, particularly one without a hot mirror, can see UV, visible, and IR light. Now, one might argue that you could simply use a full spectrum camera with a CFA, but what if the CFA isn't there? Would there be any difference beyond the better resolution and light sensitivity? Presumably, if anything, the colors might differ after white balance, since the spectral response of a monochrome camera may not be the same as that of a full spectrum color camera.

For normal photography it is not of much interest. It is being done for science, for example the multispectral setup of the Mars rover Mastcams. It helps to distinguish rock composition at a distance.

Bernard Delley's gear list:
Olympus TG-6 Nikon D7200 Nikon D500 Nikon D850 Nikon Z7 II +17 more
ProfHankD Veteran Member • Posts: 9,146
Re: What would a photo of a landscape look like if done using a Monochrome Full Spectrum camera?
3

MacM545 wrote:

More to the point, what if the photograph were obtained using the same method used in advanced astrophotography with a monochrome camera? The sensor is monochrome, but three photos of the same scene can be taken, each through a different filter, and with the appropriate processing a color image can be made.

Not much processing: simply add the 3 filtered images as color planes of a single image. If the camera or scene has moved between exposures, there is some alignment needed... but that's a losing battle, because details in one color channel often literally don't exist in another, so there might not be common image features to align.
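
In numpy terms, that stacking step really is trivial. A toy sketch (random arrays stand in for the three filtered exposures; nothing here is from the post itself):

```python
import numpy as np

# Toy stand-ins for the three monochrome exposures, one per filter.
h, w = 4, 6
red_shot = np.random.rand(h, w)    # taken through the red filter
green_shot = np.random.rand(h, w)  # taken through the green filter
blue_shot = np.random.rand(h, w)   # taken through the blue filter

# Stack the three frames as the color planes of one H x W x 3 image.
rgb = np.stack([red_shot, green_shot, blue_shot], axis=-1)
print(rgb.shape)  # (4, 6, 3)
```

Any alignment would have to happen before the stack, which is where the trouble described above comes in.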

A monochrome camera, particularly one without a hot mirror, can see UV, visible, and IR light.

No, it really can't see UV or IR. MaxMax.com is a company that does full-spectrum mods, and they have detailed explanations and many sample images.

A typical CMOS sensor barely makes it into NIR (Near IR), with the best maybe reaching to about 1100nm (more commonly dropping sharply by about 850nm), which is several times shorter than the wavelengths people generally associate with IR. IR cameras, such as those sold by FLIR, tend to use microbolometer sensors -- a completely different sensor technology that basically measures accumulated energy from photons rather than counting a unit of charge per sufficiently-high-energy photon, which is approximately how CMOS and CCD sensors work. BTW, some high-resolution CMOS sensors in cell phones now have pixels smaller than the wavelength of deep red light, so they can't even see NIR. There also has been a trend toward stronger NIR cutoff filters in cameras with larger sensors, often clipping anything much longer than 650nm.

CMOS sensors don't see very far into UV, but the real limit is that glass -- as used in lenses -- blocks most UV. There are lenses not made of glass so that they pass lots of UV, but there are not many options and all are expensive; most people just seek-out relatively simple (few element) lenses that don't fall off too sharply above 400nm, but response is usually unusable by 330nm or so. Probably best to call this NUV....

Speaking of lenses, you also should know that lenses do NOT bring all light wavelengths to the same focus plane. Conventional lenses are often designed to have only two particular wavelengths agree on the focus plane, while true APO lenses are optimized for three or more wavelengths to agree (e.g., wavelengths centered on the usual red, green, and blue filter response curves). Catch is, not many lenses have NIR or NUV light focus anywhere near the same plane as visible. So, expect focus issues. This also is where PF comes from.

Now, one might argue that you could simply use a full spectrum camera with a CFA, but what if the CFA isn't there? Would there be any difference beyond the better resolution and light sensitivity? Presumably, if anything, the colors might differ after white balance, since the spectral response of a monochrome camera may not be the same as that of a full spectrum color camera.

As I mentioned above, the trend has been for cameras to be more aggressive about clipping NIR, and that might also apply to things like the red CFA filter by itself.

Fundamentally, each CFA filter has a spectral profile that lets different fractions of incident photons pass for different wavelengths. Thus, using a CFA without any NIR cut filter still gives you different spectral sensitivity profiles for each of the color channels -- basically, more color information than a monochrome sensor would record, but lower sensitivity (due to light blocked by the CFA filters, which is roughly 2/3 of the spectrum for each filter color) and lower resolution of some wavelengths (for example, red light would only get sensed by 1 of each 4 pixels in a typical CFA).
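
That per-channel sampling can be illustrated with a hypothetical 4x4 RGGB mosaic (a toy sketch, not any particular sensor):

```python
import numpy as np

# Hypothetical 4x4 RGGB Bayer mosaic: each pixel records one channel only.
bayer = np.array([["R", "G", "R", "G"],
                  ["G", "B", "G", "B"],
                  ["R", "G", "R", "G"],
                  ["G", "B", "G", "B"]])

# Fraction of pixels that sense each channel.
fractions = {ch: (bayer == ch).sum() / bayer.size for ch in "RGB"}
print(fractions)  # {'R': 0.25, 'G': 0.5, 'B': 0.25} -- red on 1 of 4 pixels
```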

ProfHankD's gear list:
Canon PowerShot SX530 Olympus TG-860 Sony a7R II Canon EOS 5D Mark IV Sony a6500 +32 more
OP MacM545 Contributing Member • Posts: 783
Re: What would a photo of a landscape look like if done using a Monochrome Full Spectrum camera?

ProfHankD wrote:

MacM545 wrote:

More to the point, what if the photograph were obtained using the same method used in advanced astrophotography with a monochrome camera? The sensor is monochrome, but three photos of the same scene can be taken, each through a different filter, and with the appropriate processing a color image can be made.

Not much processing: simply add the 3 filtered images as color planes of a single image. If the camera or scene has moved between exposures, there is some alignment needed... but that's a losing battle, because details in one color channel often literally don't exist in another, so there might not be common image features to align.

A monochrome camera, particularly one without a hot mirror, can see UV, visible, and IR light.

No, it really can't see UV or IR. MaxMax.com is a company that does full-spectrum mods, and they have detailed explanations and many sample images.

A typical CMOS sensor barely makes it into NIR (Near IR), with the best maybe reaching to about 1100nm (more commonly dropping sharply by about 850nm), which is several times shorter than the wavelengths people generally associate with IR. IR cameras, such as those sold by FLIR, tend to use microbolometer sensors -- a completely different sensor technology that basically measures accumulated energy from photons rather than counting a unit of charge per sufficiently-high-energy photon, which is approximately how CMOS and CCD sensors work. BTW, some high-resolution CMOS sensors in cell phones now have pixels smaller than the wavelength of deep red light, so they can't even see NIR. There also has been a trend toward stronger NIR cutoff filters in cameras with larger sensors, often clipping anything much longer than 650nm.

I knew that about the hot mirror filters. It was fascinating to read about smaller pixels being less sensitive to infrared.

CMOS sensors don't see very far into UV, but the real limit is that glass -- as used in lenses -- blocks most UV. There are lenses not made of glass so that they pass lots of UV, but there are not many options and all are expensive; most people just seek-out relatively simple (few element) lenses that don't fall off too sharply above 400nm, but response is usually unusable by 330nm or so. Probably best to call this NUV....

Speaking of lenses, you also should know that lenses do NOT bring all light wavelengths to the same focus plane. Conventional lenses are often designed to have only two particular wavelengths agree on the focus plane, while true APO lenses are optimized for three or more wavelengths to agree (e.g., wavelengths centered on the usual red, green, and blue filter response curves). Catch is, not many lenses have NIR or NUV light focus anywhere near the same plane as visible. So, expect focus issues. This also is where PF comes from.

What is PF?

Now, one might argue that you could simply use a full spectrum camera with a CFA, but what if the CFA isn't there? Would there be any difference beyond the better resolution and light sensitivity? Presumably, if anything, the colors might differ after white balance, since the spectral response of a monochrome camera may not be the same as that of a full spectrum color camera.

As I mentioned above, the trend has been for cameras to be more aggressive about clipping NIR, and that might also apply to things like the red CFA filter by itself.

Fundamentally, each CFA filter has a spectral profile that lets different fractions of incident photons pass for different wavelengths. Thus, using a CFA without any NIR cut filter still gives you different spectral sensitivity profiles for each of the color channels -- basically, more color information than a monochrome sensor would record, but lower sensitivity (due to light blocked by the CFA filters, which is roughly 2/3 of the spectrum for each filter color) and lower resolution of some wavelengths (for example, red light would only get sensed by 1 of each 4 pixels in a typical CFA).

I've tested a full spectrum camera myself. I haven't seen much UV with it, but it can definitely see near infrared to some extent. It is possible to capture longer-wavelength UV with a full spectrum camera, as can be seen in the Flickr UV groups.

A full spectrum monochrome camera can see a higher amount of infrared. There has been a post on DPReview before about a monochrome camera being able to capture about six times more UV, visible, and IR light than a regular full spectrum camera. I'm not trying to be offensive, but I thought you might know this already.

MacM545's gear list:
Sony RX100 II Canon EOS 500D Fujifilm X-T2 Canon EF-S 18-55mm f/3.5-5.6 Fujifilm 50-230mm II +1 more
ProfHankD Veteran Member • Posts: 9,146
Re: What would a photo of a landscape look like if done using a Monochrome Full Spectrum camera?
1

MacM545 wrote:

ProfHankD wrote:

Speaking of lenses, you also should know that lenses do NOT bring all light wavelengths to the same focus plane. Conventional lenses are often designed to have only two particular wavelengths agree on the focus plane, while true APO lenses are optimized for three or more wavelengths to agree (e.g., wavelengths centered on the usual red, green, and blue filter response curves). Catch is, not many lenses have NIR or NUV light focus anywhere near the same plane as visible. So, expect focus issues. This also is where PF comes from.

What is PF?

Purple Fringing (try the link). It's usually mostly out-of-focus NIR, but some OOF NUV is mixed in too.

Now, one might argue that you could simply use a full spectrum camera with a CFA, but what if the CFA isn't there? Would there be any difference beyond the better resolution and light sensitivity? Presumably, if anything, the colors might differ after white balance, since the spectral response of a monochrome camera may not be the same as that of a full spectrum color camera.

As I mentioned above, the trend has been for cameras to be more aggressive about clipping NIR, and that might also apply to things like the red CFA filter by itself.

Fundamentally, each CFA filter has a spectral profile that lets different fractions of incident photons pass for different wavelengths. Thus, using a CFA without any NIR cut filter still gives you different spectral sensitivity profiles for each of the color channels -- basically, more color information than a monochrome sensor would record, but lower sensitivity (due to light blocked by the CFA filters, which is roughly 2/3 of the spectrum for each filter color) and lower resolution of some wavelengths (for example, red light would only get sensed by 1 of each 4 pixels in a typical CFA).

I've tested a full spectrum camera myself. I haven't seen much UV with it, but it can definitely see near infrared to some extent. It is possible to capture longer-wavelength UV with a full spectrum camera, as can be seen in the Flickr UV groups.

You can get UV down to about 330nm if everything is right, but response usually dives well before that, and many lenses have terrible hot spots and other NUV issues.

A full spectrum monochrome camera can see a higher amount of infrared. There has been a post on DPReview before about a monochrome camera being able to capture about six times more UV, visible, and IR light than a regular full spectrum camera. I'm not trying to be offensive, but I thought you might know this already.

It shouldn't be 6X, especially in visible light, but sort-of can be. Basically, red, green, and blue filters each take out about 2/3 of the visible band, so the true ratio is 3/3 vs 1/3 passed, which is obviously 3X. The NIR blocking filter is also surprisingly colored for most cameras, typically rather magenta. Thus, it imposes a significant color cast as well as blocking NIR. That color cast can easily eat 1 stop of dynamic range, so there's the additional 2X to give 6X improvement overall (and correction for the NIR-filter color cast is sometimes hardwired in the camera pipeline, so removing the filter might still give a drop in dynamic range). However, it's only 3X vs. a typical CFA without an NIR-blocking filter and not all NIR-blocking filters have such a heavy color cast. The CFA filters themselves might block more UV and NIR, but most only reduce NIR a little bit for red and blue (green tends to not let as much NIR pass)... however, as I indicated, the trend has been for blue and red CFAs to pass less NIR in newer cameras.
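
The arithmetic can be checked in a couple of lines, using the rough fractions from this post (not measured values):

```python
# Rough fractions from this post: an RGB CFA filter passes about 1/3 of
# the visible band; a monochrome sensor passes all of it.
mono_pass = 3 / 3
cfa_pass = 1 / 3
ratio = mono_pass / cfa_pass
print(ratio)  # 3.0 -> the basic 3X advantage

# If the NIR-blocking filter's color cast costs about one stop (2X):
print(ratio * 2)  # 6.0 -> the claimed 6X, under these rough assumptions
```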

In sum, it really depends on the particular camera CFA and NIR blocking filter; CFAs don't all have identical spectral profiles for red, green, and blue. Some cameras don't even use a red, green, and blue CFA; for example, my old Canon PowerShot G1 uses cyan, magenta, green, and yellow, which means 3/4 of the pixels see 2/3 of the visible wavelengths rather than 1/3 -- i.e., it's one stop brighter. All these camera-model-specific variations are why there is so much fuss about color science.

BTW, just to be precise, by NUV I mean short-wavelength visible into the long end of UVA (400-315nm). Most CMOS sensors don't see UVB (315-280nm) and virtually none see UVC (<280nm). Very few lenses pass and focus much UVA, and it takes special optics for UVB or UVC, and even air blocks lots of UVC. UVA is the range most commonly used for UV photography. It's also worth mentioning that anything much past blue (i.e., violet) is out of gamut for many color encoding standards, so strange things happen to such colors recorded in image files... whereas NIR tends to be relatively well approximated by deep reds.

ProfHankD's gear list:
Canon PowerShot SX530 Olympus TG-860 Sony a7R II Canon EOS 5D Mark IV Sony a6500 +32 more
OP MacM545 Contributing Member • Posts: 783
Re: What would a photo of a landscape look like if done using a Monochrome Full Spectrum camera?

I thought more about the smaller pixels being less sensitive to IR. How is that possible, if infrared simply has a longer wavelength than visible light? The key word is length, as in wavelength, not size. Is there any website that demonstrates this?

MacM545's gear list:
Sony RX100 II Canon EOS 500D Fujifilm X-T2 Canon EF-S 18-55mm f/3.5-5.6 Fujifilm 50-230mm II +1 more
ProfHankD Veteran Member • Posts: 9,146
Re: What would a photo of a landscape look like if done using a Monochrome Full Spectrum camera?
2

MacM545 wrote:

I thought more about the smaller pixels being less sensitive to IR. How is that possible, if infrared simply has a longer wavelength than visible light? The key word is length, as in wavelength, not size. Is there any website that demonstrates this?

Traditional transmission drops to near zero when the "light doesn't fit through the hole" -- and that happens when the diameter of the hole is less than the wavelength of the light. Basically, the light diffracts every direction upon hitting a too-small aperture. The more formal wording of this is: "Generally when light of a certain wavelength falls on a subwavelength aperture, it is diffracted isotropically in all directions evenly, with minimal far-field transmission."

Classically, the center wavelengths for red, green, and blue are 600nm, 530nm, and 450nm; the typically-assumed UV limit is 375nm and NIR is 950nm. Samsung has gotten pixel pitch down to around 650nm, which clears the center wavelength for red, but everything down to about 740nm is classically still considered deep red, and Samsung's sensors can't see that, let alone longer NIR wavelengths.
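
A trivial check of which of those wavelengths "fit" a 650nm pixel opening, using the simple size rule above (the numbers are the rough values quoted in this post):

```python
# Rough center wavelengths (nm) quoted above, plus deep red and NIR.
wavelengths = {"blue": 450, "green": 530, "red": 600, "deep red": 740, "NIR": 950}
pixel_pitch_nm = 650  # about the smallest current cell-phone pixel pitch

# First-order rule: light passes only if its wavelength is no larger
# than the pixel opening.
passes = {name: wl <= pixel_pitch_nm for name, wl in wavelengths.items()}
print(passes)  # blue, green, red pass; deep red and NIR are blocked
```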

There is a recently-discovered optical phenomenon called Extraordinary optical transmission (EOT), in which subwavelength apertures can actually have orders of magnitude higher transmission than the standard theory predicts. However, that's a weird and still poorly-understood effect involving surface plasmons, and is much more likely to be useful for optical interconnection networks than for optical imaging....

ProfHankD's gear list:
Canon PowerShot SX530 Olympus TG-860 Sony a7R II Canon EOS 5D Mark IV Sony a6500 +32 more
OP MacM545 Contributing Member • Posts: 783
Re: What would a photo of a landscape look like if done using a Monochrome Full Spectrum camera?

I might get another camera to see if it's true in practice -- whether IR sensitivity depends on pixel size. There are factors that would need to be compensated for, such as equivalent aperture, focal length, etc. I'm no expert on this, but presumably, if that were true, then the sensitivity to visible light should also be affected. (It is, because the dynamic range is less, so I'm almost certain that's what it's about.) For some reason, I got fascinated by the possibility.

As far as I know though, the sensitivity to red should be less than to blue for smaller pixels, so the spectral response of a smaller sensor with the same number of pixels as a larger sensor is likely different. That's presumably not easy to test unless it's a full spectrum camera, because of the varying strengths of the hot mirrors that might be in front of the sensor. Then there's the fact that the lens itself might have an IR coating.

MacM545's gear list:
Sony RX100 II Canon EOS 500D Fujifilm X-T2 Canon EF-S 18-55mm f/3.5-5.6 Fujifilm 50-230mm II +1 more
ProfHankD Veteran Member • Posts: 9,146
Re: What would a photo of a landscape look like if done using a Monochrome Full Spectrum camera?
2

MacM545 wrote:

I might get another camera to see if it's true in practice -- whether IR sensitivity depends on pixel size.

It absolutely does. However, only when pixels are smaller than the wavelength of light. Pixels on cameras are much bigger than that until you get to the insanely high pixel count (e.g., 100MP) tiny sensors designed for cell phones. Even 20MP MFT cameras have pixels that are 3360nm (i.e., 3.36 microns) wide, as opposed to the 650nm on some state-of-the-art cell phone sensors.

The reason those 3360nm pixels don't see NIR beyond about 1100nm isn't pixel size, but the fact that longer wavelength photons each carry less energy, so they don't deliver enough energy to trigger storage of a unit charge in the pixel. There's also a limit on the minimum wavelength photon that can store a unit of charge (usually around 340nm), and that also comes from semiconductor material properties rather than pixel size.
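
That energy cutoff follows from the photon energy E = hc/λ having to exceed the sensor material's bandgap, which is about 1.1 eV for silicon. A back-of-envelope sketch:

```python
# Photon energy E = h*c/wavelength must exceed the bandgap (~1.1 eV for
# silicon) to free a unit of charge, which sets the long-wavelength limit.
h = 6.626e-34   # Planck constant, J*s
c = 2.998e8     # speed of light, m/s
eV = 1.602e-19  # joules per electron-volt

bandgap_eV = 1.1  # approximate silicon bandgap
cutoff_nm = h * c / (bandgap_eV * eV) * 1e9
print(round(cutoff_nm))  # ~1127, matching the ~1100nm limit above
```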

There are factors that would need to be compensated for, such as equivalent aperture, focal length, etc. I'm no expert on this, but presumably, if that were true, then the sensitivity to visible light should also be affected. (It is, because the dynamic range is less, so I'm almost certain that's what it's about.) For some reason, I got fascinated by the possibility.

Compensation is not important because most of the visible light spectrum has a short enough wavelength to fit in a 650nm pixel opening.

I don't know what you mean by "because the dynamic range is less" -- I can't think of any context where that claim makes sense here.

As far as I know though, the sensitivity to red should be less than to blue for smaller pixels, so the spectral response of a smaller sensor with the same number of pixels as a larger sensor is likely different.

If we ignore EOT (which we definitely should), it's a simple cut off. As a first-order approximation, think about photons being thrown at a pixel like you think of shooting balls through a basketball hoop: tennis balls (blue photons), soccer balls (green photons), and basketballs (red photons) all easily pass through if correctly aimed, but a beach ball (deep red or NIR photon) is too large to fit through, so it just bounces off and will not go through the hoop.

ProfHankD's gear list:
Canon PowerShot SX530 Olympus TG-860 Sony a7R II Canon EOS 5D Mark IV Sony a6500 +32 more
OP MacM545 Contributing Member • Posts: 783
Re: What would a photo of a landscape look like if done using a Monochrome Full Spectrum camera?

ProfHankD wrote:

MacM545 wrote:

I might get another camera to see if it's true in practice -- whether IR sensitivity depends on pixel size.

It absolutely does. However, only when pixels are smaller than the wavelength of light. Pixels on cameras are much bigger than that until you get to the insanely high pixel count (e.g., 100MP) tiny sensors designed for cell phones. Even 20MP MFT cameras have pixels that are 3360nm (i.e., 3.36 microns) wide, as opposed to the 650nm on some state-of-the-art cell phone sensors.

The reason those 3360nm pixels don't see NIR beyond about 1100nm isn't pixel size, but the fact that longer wavelength photons each carry less energy, so they don't deliver enough energy to trigger storage of a unit charge in the pixel. There's also a limit on the minimum wavelength photon that can store a unit of charge (usually around 340nm), and that also comes from semiconductor material properties rather than pixel size.

There are factors that would need to be compensated for, such as equivalent aperture, focal length, etc. I'm no expert on this, but presumably, if that were true, then the sensitivity to visible light should also be affected. (It is, because the dynamic range is less, so I'm almost certain that's what it's about.) For some reason, I got fascinated by the possibility.

Compensation is not important because most of the visible light spectrum has a short enough wavelength to fit in a 650nm pixel opening.

I don't know what you mean by "because the dynamic range is less" -- I can't think of any context where that claim makes sense here.

As far as I know though, the sensitivity to red should be less than to blue for smaller pixels, so the spectral response of a smaller sensor with the same number of pixels as a larger sensor is likely different.

If we ignore EOT (which we definitely should), it's a simple cut off. As a first-order approximation, think about photons being thrown at a pixel like you think of shooting balls through a basketball hoop: tennis balls (blue photons), soccer balls (green photons), and basketballs (red photons) all easily pass through if correctly aimed, but a beach ball (deep red or NIR photon) is too large to fit through, so it just bounces off and will not go through the hoop.

Interesting. I've heard of cell phones that can achieve over 100 MP, but that has to do with pixel binning, so I'm not sure that's the same as an actual native 100 MP. Unless there are some in existence, which is possible.

MacM545's gear list:
Sony RX100 II Canon EOS 500D Fujifilm X-T2 Canon EF-S 18-55mm f/3.5-5.6 Fujifilm 50-230mm II +1 more
SterlingBjorndahl Senior Member • Posts: 2,638
Re: What would a photo of a landscape look like if done using a Monochrome Full Spectrum camera?
1

MacM545 wrote:

Interesting. I've heard of cell phones that can achieve over 100 mp, but that has to do with pixel binning so I'm not sure that's the same as actual native 100 mp.

I think the term "pixel binning" works the other way around; e.g., you start with 100M physical pixels and combine or "bin" them in 2x2 groups of four, so you end up with a 25Mp image.
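
A toy numpy sketch of that binning (a hypothetical 4x4 readout, summed in 2x2 blocks; real cameras do this in the sensor readout, not in software like this):

```python
import numpy as np

# Toy 4x4 sensor readout; 2x2 binning sums each block of four physical
# pixels into one output pixel (16 pixels -> 4 pixels).
raw = np.arange(16, dtype=float).reshape(4, 4)
binned = raw.reshape(2, 2, 2, 2).sum(axis=(1, 3))
print(binned)  # [[10. 18.]
               #  [42. 50.]]
```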

You seem quite keen on this. If there's a university or college near you that has a class on Optics (sometimes part of the Physics program), you may be able to sign up to audit it so you can plumb these depths even further. If there's a lab with an optical bench you might have a lot of fun!

Best wishes,
Sterling
--
Lens Grit

SterlingBjorndahl's gear list:
Olympus Air Panasonic Lumix DMC-GX85 Panasonic Leica D Vario-Elmar 14-150mm F3.5-5.6 Asph Mega OIS Panasonic Lumix G X Vario PZ 14-42mm F3.5-5.6 ASPH OIS Panasonic Lumix DMC-FZ50 +18 more
ProfHankD Veteran Member • Posts: 9,146
University courses
1

SterlingBjorndahl wrote:

MacM545 wrote:

Interesting. I've heard of cell phones that can achieve over 100 mp, but that has to do with pixel binning so I'm not sure that's the same as actual native 100 mp.

I think the term "pixel binning" works the other way around; e.g., you start with 100M physical pixels and combine or "bin" them in 2x2 groups of four, so you end up with a 25Mp image.

Precisely. BTW, the "dual pixel" stuff subdivides pixels into left/right pairs, so relative to wavelengths, a 50MP dual pixel is sort-of like 100MP... but not exactly, because the dual pixel components are not square: one dimension is twice the other. Honestly, I'm not sure exactly what that does spectrally if the wavelength is between those two dimensions....

You seem quite keen on this. If there's a university or college near you that has a class on Optics (sometimes part of the Physics program),

I agree taking classes could be a good answer for MacM545, but it is a bit of a problem in this field. Classical optics really isn't a research area these days, so it isn't common to see university classes in it. Camera sensors and electronics are very specialized topics not really covered in many courses, although there are individual courses about specific pieces.

Electrical engineering tends to "own" anything having to do with electromagnetic (EM) radiation, which is what photons are, so you will see EE classes that discuss optics and photonics, but usually almost nothing about cameras and their lenses. They cover things like optics for imposing patterns on nanostructures (e.g., how to make chips) or devices using surface plasmons. There are also some courses on solar cells, which are closely related to imaging sensors. There's generally a bit of a prerequisite chain heavy on the usual EE material.

There are a few exceptions, usually in computer/electrical engineering, where there are courses specifically about cameras. For example, at the University of Kentucky, I sometimes teach a "Cameras as Computing Systems" course, which deals with how to build and program cameras (I've been busy with other things and haven't taught it for a few years). Prereqs tend to be computer programming and/or signal processing for such things -- mine requires C/C++ programming.

Most intro physics courses cover some basic geometric optics. However, if you look for physics professors researching lenses, these days it's usually stuff like gravitational lensing... which isn't even close.

you may be able to sign up to audit it so you can plumb these depths even further. If there's a lab with an optical bench you might have a lot of fun!

At many state universities, older folks can take classes for free. Here's the policy for the University of Kentucky, which basically gives anyone 65 or older who is a resident of Kentucky an opportunity to take classes, or even earn degrees, with no tuition or fees.

You're probably wondering how often we see seniors taking courses like that. It's not very common in engineering because of relatively long prerequisite chains, but my impression is that it's actually fairly common in things like fine arts. For example, if you wanted to take an intro class in photography or other forms of art, there are often literally no prerequisites.

It's not particularly weird for older folks to be taking undergraduate classes, even in engineering. Over the years, I've had quite a few older folks in my courses. In fact, there is a generic name for such folks: retreads -- people who were already very successful in one field, but felt a bit worn out doing the same old things, and so decided to make things interesting again by adding new abilities in a related field. Many were electrical engineers who late in their careers decided they wanted to learn more about computer engineering, and often their employer would pay for the courses and give them time off to take them.

 ProfHankD's gear list:ProfHankD's gear list
Canon PowerShot SX530 Olympus TG-860 Sony a7R II Canon EOS 5D Mark IV Sony a6500 +32 more
OP MacM545 Contributing Member • Posts: 783
Re: What would a photo of a landscape look like if done using a Monochrome Full Spectrum camera?

SterlingBjorndahl wrote:

MacM545 wrote:

Interesting. I've heard of cell phones that can achieve over 100 mp, but that has to do with pixel binning so I'm not sure that's the same as actual native 100 mp.

I think the term "pixel binning" works the other way around; e.g., you start with 100M physical pixels and combine or "bin" them together in groups of four (2x2), so you end up with a 25Mp image.
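As a hedged sketch of that 2x2 binning (using NumPy, with a tiny 4x4 array standing in for a real sensor readout):

```python
import numpy as np

# Tiny 4x4 monochrome readout standing in for a full sensor array.
raw = np.arange(16, dtype=np.float64).reshape(4, 4)

def bin2x2(pixels):
    """Sum each 2x2 group of photosites into one output pixel,
    quartering the pixel count (roughly what hardware binning does)."""
    h, w = pixels.shape
    return pixels.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

binned = bin2x2(raw)  # shape (2, 2): four input pixels per output pixel
```

Scaled up, the same reshape-and-sum turns a 100Mp array into a 25Mp one.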

You seem quite keen on this. If there's a university or college near you that has a class on Optics (sometimes part of the Physics program), you may be able to sign up to audit it so you can plumb these depths even further. If there's a lab with an optical bench you might have a lot of fun!

Best wishes,
Sterling
--
Lens Grit

I meant oversampling; I only said "pixel binning" because the exact term wasn't on my mind at the time, and I assumed you'd know what I meant. The idea of taking the physics program does seem quite fascinating.

 MacM545's gear list:MacM545's gear list
Sony RX100 II Canon EOS 500D Fujifilm X-T2 Canon EF-S 18-55mm f/3.5-5.6 Fujifilm 50-230mm II +1 more
OP MacM545 Contributing Member • Posts: 783
Re: University courses

ProfHankD wrote:

SterlingBjorndahl wrote:

MacM545 wrote:

Interesting. I've heard of cell phones that can achieve over 100 mp, but that has to do with pixel binning so I'm not sure that's the same as actual native 100 mp.

I think the term "pixel binning" works the other way around; e.g., you start with 100M physical pixels and combine or "bin" them together in groups of four (2x2), so you end up with a 25Mp image.

Precisely. BTW, the "dual pixel" stuff subdivides pixels into left/right pairs, so relative to wavelengths, a 50MP dual pixel is sort-of like 100MP... but not exactly, because the dual pixel components are not square: one dimension is twice the other. Honestly, I'm not sure exactly what that does spectrally if the wavelength is between those two dimensions....

You seem quite keen on this. If there's a university or college near you that has a class on Optics (sometimes part of the Physics program),

I agree taking classes could be a good answer for MacM545, but it is a bit of a problem in this field. Classical optics really are not a research issue these days, so it isn't common to see university classes in it. Camera sensors and electronics are very specialized topics not really overviewed in many courses, although there are individual courses about specific pieces.

Electrical engineering tends to "own" anything having to do with electromagnetic (EM) radiation, which is what photons are, so you will see EE classes that discuss optics and photonics, but usually almost nothing about cameras and their lenses. They discuss more of things like optics for imposing patterns on nanostructures (e.g., how to make chips) or things using surface plasmons. There are also some courses on solar cells, which are closely related to imaging sensors. There's generally a bit of a prerequisite chain heavy on the usual EE stuff.

There are a few exceptions, usually in computer/electrical engineering, where there are courses specifically about cameras. For example, at the University of Kentucky, I sometimes teach a "Cameras as Computing Systems" course that deals with how to build and program cameras (I've been busy with other things and haven't taught it for a few years). Prereqs tend to be computer programming and/or signal processing for such things -- mine requires C/C++ programming.

Most intro physics courses cover some basic geometric optics. However, if you look for physics professors researching lenses, these days it's usually stuff like gravitational lensing... which isn't even close.

you may be able to sign up to audit it so you can plumb these depths even further. If there's a lab with an optical bench you might have a lot of fun!

At many state universities, older folks can take classes for free. Here's the policy for the University of Kentucky , which basically gives anyone 65 or older who is a resident of Kentucky an opportunity to take classes, or even earn degrees, with no tuition nor fees.

You're probably wondering how often we see seniors taking courses like that. It's not very common in engineering because of relatively long prerequisite chains, but my impression is that it's actually fairly common in things like fine arts. For example, if you wanted to take an intro class in photography or other forms of art, there are often literally no prerequisites.

It's not particularly weird for older folks to be taking undergraduate classes, even in engineering. Over the years, I've had quite a few older folks in my courses. In fact, there is a generic name for such folks: retreads -- people who were already very successful in one field, but felt a bit worn out doing the same old things, and so decided to make things interesting again by adding new abilities in a related field. Many were electrical engineers who late in their careers decided they wanted to learn more about computer engineering, and often their employer would pay for the courses and give them time off to take them.

About increasingly smaller pixels in relation to infrared sensitivity: if the relationship can be illustrated, would the difference be gradual, or would there be a sudden transition? For example, at one pixel size, pixels might not be able to record IR, while anything larger would be able to record a great number of photons.

If that is true, what would it imply about red, green, blue, and even shorter wavelengths? For example, could even smaller pixels become opaque to visible wavelengths? Presumably, at some point in the future, the resolution of a given sensor might reach its limit for infrared. Based on your information, that will likely happen within this year, if not already. Therefore, the resolution of a given sensor might also quickly reach its limit for normal photography as it becomes opaque.

This reduction in light sensitivity is likely to become noticeable in some form, but I'm not sure how. Smaller pixels are less sensitive to light, so above base ISO more noise can become visible than with larger pixels, while at base ISO the difference shows up when lifting shadows. I've started to understand that dynamic range is about lifting shadows, while graininess at high ISO is about light sensitivity.

My understanding of dynamic range and light sensitivity has also come from DPReview. I can see that dynamic range and noise level aren't exactly the same thing, but it's been confusing me, especially after learning about ISO invariance.

 MacM545's gear list:MacM545's gear list
Sony RX100 II Canon EOS 500D Fujifilm X-T2 Canon EF-S 18-55mm f/3.5-5.6 Fujifilm 50-230mm II +1 more
ProfHankD
ProfHankD Veteran Member • Posts: 9,146
Re: University courses
1

MacM545 wrote:

About increasingly smaller pixels in relation to infrared sensitivity: if the relationship can be illustrated, would the difference be gradual, or would there be a sudden transition? For example, at one pixel size, pixels might not be able to record IR, while anything larger would be able to record a great number of photons.

It's a fairly hard limit (ignoring EOT). There might be some gradient having to do with effective pixel aperture off-axis, microlens effects, etc.

If that is true, what would it imply about red, green, blue, and even shorter wavelengths? For example, could even smaller pixels become opaque to visible wavelengths?

Ignoring EOT, yes. Remember visible photons are just a band of EM, so pixel apertures work a bit like the mesh of a Faraday cage . Of course, it's really more complicated than that, but 1st-order approximation....

Presumably, at some point in the future, the resolution of a given sensor might reach its limit for infrared. Based on your information, that will likely happen within this year, if not already. Therefore, the resolution of a given sensor might also quickly reach its limit for normal photography as it becomes opaque.

Yes, for high-end cell phone sensors we're basically there. Higher spatial resolution is still theoretically possible, but would probably require using EOT or some other exotic physics.
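A rough back-of-envelope comparison in the spirit of the mesh analogy above (the pitch values are illustrative round numbers I've chosen, not tied to any specific sensor model):

```python
# First-order check of when pixel pitch drops below a wavelength.
# Pitches are illustrative values, not any particular sensor.
pitches_um = {"typical full-frame": 4.3, "1-inch compact": 2.4,
              "recent phone": 0.6}
bands_um = {"NIR 850 nm": 0.85, "red 650 nm": 0.65, "blue 450 nm": 0.45}

cutoffs = {name: [band for band, wl in bands_um.items() if pitch < wl]
           for name, pitch in pitches_um.items()}
for name, blocked in cutoffs.items():
    print(f"{name}: pitch below wavelength for {blocked or 'nothing'}")
```

On these numbers, only the phone-class pitch dips below near-IR (and even red) wavelengths, which matches the "we're basically there" point.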

This reduction in light sensitivity is likely to become noticeable in some form, but I'm not sure how. Smaller pixels are less sensitive to light, so above base ISO more noise can become visible than with larger pixels, while at base ISO the difference shows up when lifting shadows. I've started to understand that dynamic range is about lifting shadows, while graininess at high ISO is about light sensitivity.

Actually, high ISO noise is now often about the light itself being grainy: photon shot noise . Many image pipelines are doing quite a bit of noise reduction, and even "raw" images are often fairly "cooked."
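A quick hedged simulation of that (photon arrival is Poisson-distributed; the seed and photon counts below are arbitrary):

```python
import numpy as np

# Photon arrivals follow Poisson statistics, so SNR grows as the square
# root of the photon count -- dim exposures are intrinsically grainy.
rng = np.random.default_rng(0)
snrs = {}
for mean_photons in (16, 400, 10000):
    samples = rng.poisson(mean_photons, size=200000)
    snrs[mean_photons] = samples.mean() / samples.std()
    # expected SNR is about sqrt(mean_photons): ~4, ~20, ~100
```

That is why bigger pixels (more photons per exposure) look cleaner even before any electronics noise enters the picture.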

My understanding of dynamic range and light sensitivity has also come from DPReview. I can see that dynamic range and noise level aren't exactly the same thing, but it's been confusing me, especially after learning about ISO invariance.

A lot of the problem with the relationship between dynamic range and SNR measurements is that the weakest recordable signal can be less or greater than the noise floor. For example, old reel-to-reel audio tape could record signals below the noise floor, so you could hear really quiet passages despite also hearing a lot of hiss louder than the signal. Many other audio recording media don't work like that; the signal is gone before you start hearing noise.
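One common engineering definition, as a sketch with invented round-number pixel values (not measurements of any real camera), puts dynamic range at log2 of full-well capacity over read noise:

```python
import math

# Engineering dynamic range in stops: log2(full-well / read noise).
# The electron counts below are invented, round-number examples.
def dr_stops(full_well_e, read_noise_e):
    return math.log2(full_well_e / read_noise_e)

big_pixel = dr_stops(64000, 4)    # generous full well, modest read noise
small_pixel = dr_stops(8000, 2)   # less charge capacity, lower noise
```

Note this says nothing about whether signal below the noise floor is recoverable, which is exactly the tape-hiss subtlety above.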

ISO invariance is really not a good term because in fact IQ varies a lot with ISO setting on most cameras. The popular term before that was "ISO-less," which really is more technically correct, but I view it as being about ISO not being an input to the exposure computations. My Electronic Imaging 2015 paper on this is ISO-less?  , and the PDF slides are here . The quick summary is that you are often able to capture more of the scene dynamic range if you under-amplify the analog signals from the sensor pixels before the ADCs and then re-map tones in digital processing.
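A toy numeric sketch of that idea (made-up electron counts and a 12-bit ADC, not numbers from the paper): amplifying less before the ADC keeps highlights that high analog gain would clip, and the tones can then be pushed back up digitally.

```python
import numpy as np

full_scale = 4095                          # 12-bit ADC ceiling
scene = np.array([100.0, 800.0, 3000.0])   # photoelectrons, shadow to highlight

high_gain = np.clip(scene * 8, 0, full_scale)  # big analog gain: clips
low_gain = np.clip(scene * 1, 0, full_scale)   # base gain: nothing clips
pushed = low_gain * 8                          # same brightness, applied in post
# high_gain loses the two brighter tones to clipping; pushed keeps them
```

Real sensors add read noise that the digital push amplifies too, which is why under-amplifying only pays off when that noise is low.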

 ProfHankD's gear list:ProfHankD's gear list
Canon PowerShot SX530 Olympus TG-860 Sony a7R II Canon EOS 5D Mark IV Sony a6500 +32 more
Bernard Delley Senior Member • Posts: 2,041
red and IR sensor response considerations

ProfHankD wrote:

Ignoring EOT, yes. Remember visible photons are just a band of EM, so pixel apertures work a bit like the mesh of a Faraday cage . Of course, it's really more complicated than that, but 1st-order approximation....

What is EOT again?

Interesting argument, the Faraday cage! I know what a (mesh) Faraday cage is, but I am skeptical. Current sensors may be of the back-side-illuminated type, so there may be no metal mesh up front at all. Also, it would be near field at the mesh, not far field as in your original argument. (I agree there is strong far-field attenuation if the mesh is fine enough.)

The small IR absorption coefficient of Si, decreasing strongly as photon energy approaches the indirect band gap for lower-energy IR photons, requires a sufficiently thick Si layer to yield appreciable IR response from the pixel. Smaller pixels may go with thinner silicon and may thus be less suited to IR response. It could well be that small-pixel sensors are deliberately designed thin for red sensitivity roll-off. That would yield good color response without the need for the usual extraneous red-rolloff and IR-blocking filter.
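A hedged Beer-Lambert sketch of that point (the absorption depths are rough ballpark literature values for crystalline Si, used here only for illustration):

```python
import math

# Fraction of photons absorbed in a silicon layer of given thickness,
# assuming simple exponential (Beer-Lambert) attenuation.
absorption_depth_um = {"blue 450 nm": 0.4, "red 650 nm": 3.3,
                       "NIR 850 nm": 18.0}

def fraction_absorbed(depth_um, thickness_um):
    return 1.0 - math.exp(-thickness_um / depth_um)

report = {band: fraction_absorbed(d, 3.0)   # a thin, small-pixel Si layer
          for band, d in absorption_depth_um.items()}
```

On these numbers, a 3 um layer catches essentially all the blue, a bit over half the red, and only a small fraction of the near-IR, which is consistent with thin small-pixel designs rolling off red and IR on their own.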

 Bernard Delley's gear list:Bernard Delley's gear list
Olympus TG-6 Nikon D7200 Nikon D500 Nikon D850 Nikon Z7 II +17 more
ProfHankD
ProfHankD Veteran Member • Posts: 9,146
Re: red and IR sensor response considerations
1

Bernard Delley wrote:

ProfHankD wrote:

Ignoring EOT, yes. Remember visible photons are just a band of EM, so pixel apertures work a bit like the mesh of a Faraday cage . Of course, it's really more complicated than that, but 1st-order approximation....

what is EOT again ?

Extraordinary Optical Transmission -- a strange surface plasmon effect that somehow lets photons get through subwavelength holes in regular grids.

Interesting argument, the Faraday cage! I know what a (mesh) Faraday cage is, but I am skeptical. Current sensors may be of the back-side-illuminated type, so there may be no metal mesh up front at all. Also, it would be near field at the mesh, not far field as in your original argument. (I agree there is strong far-field attenuation if the mesh is fine enough.)

Fair enough. It probably was more correct for FSI, but BSI still needs an insulator grid. I'm really a computer engineering professor and don't claim to know much about this in particular, although I have discussed EOT with a couple of my nanotechnology center colleagues on various occasions. Still, I'm not sure the analogy is completely valid, and that's why I put in the last-sentence disclaimer. 

The small IR absorption coefficient of Si, decreasing strongly as photon energy approaches the indirect band gap for lower-energy IR photons, requires a sufficiently thick Si layer to yield appreciable IR response from the pixel. Smaller pixels may go with thinner silicon and may thus be less suited to IR response. It could well be that small-pixel sensors are deliberately designed thin for red sensitivity roll-off. That would yield good color response without the need for the usual extraneous red-rolloff and IR-blocking filter.

Basically, I think that's the real key: nobody is really looking at decreased NIR sensitivity as a disadvantage. In fact, it's been a multi-decade trend to be more aggressive about filtering-out NIR.

 ProfHankD's gear list:ProfHankD's gear list
Canon PowerShot SX530 Olympus TG-860 Sony a7R II Canon EOS 5D Mark IV Sony a6500 +32 more
Tony D Senior Member • Posts: 1,074
Re: What would a photo of a landscape look like if done using a Monochrome Full Spectrum camera?
1

Suggest you look at LifePixel's site

 Tony D's gear list:Tony D's gear list
Sony RX100 VI Canon EOS 5D Mark III Canon EOS M3 Canon EOS R Canon EOS RP +13 more