OpticsEngineer
Veteran Member
Is there a practical limit to the size of a pixel in a camera sensor?
Three practical limitations that come to mind are:
- node size vs expected wavelength
- size of PSF vs size of pixel
- number of expected photons per pixel at expected exposures
In the limit there are jots.
Jack

I like to think of a continuum (in 2-D) silicon surface where photons are realized and converted to photoelectrons (and photoholes). Pixels are the sampling sites of those photocarriers (which could be 2-D or 3-D). In this sense, there is no real limit down to near crystal lattice dimensions. I think currently that boils down to about 0.5um pixels, though they tend to be used in binned configurations, so they are effectively 1 or 2 um pixels, with the smaller actual size being used for additional benefits like OSPDAF and HDR.
I've learned not to rely on anyone who says there are limits ;~). Amongst other examples, I remember conversations with camera designers at the turn of the century where they said that sensors would never go below 2 microns (they were worried about diffraction). What I tend to think is that the smaller the size we target, the less we know today, but we may know something different tomorrow.
I'm with Dr. Fossum on this one: the limiting factor for 2D today has to be only the underlying structure size. Is that a hard limit? See above.
Finally, I'm of the belief that "more sampling" is technically always better, even if there is only an infinitesimal gain. That's one of the reasons why I love the jot idea.

I think the cones in the human retina go down to about half a micron.
I wonder if there are specialized "near-field" camera applications where diffraction plays out differently?
I am not an expert in this field, but 0.5 µm sounds rather small. However I may be confusing cone size and cone spacing.
Smaller in birds of prey.
https://oxfordre.com/neuroscience/d...01780F8607127EF4?rskey=Zxtqid&result=19&print
Don
Neuroscience/Raptor Vision/Anatomy and Optics/Anatomical Spatial Resolution:
"... due to the wave nature of light, photoreceptors cannot function optimally when their diameters approach the wavelength of light. Indeed, the narrowest photoreceptors in the animal kingdom are about one micrometre wide (Land & Nilsson, 2012) and cones in wedge-tailed eagles (1.6 μm; Reymond, 1985), common buzzards, common kestrels (1.7 μm; Oehme, 1964), and brown falcons (Falco berigora) (1.8 μm; Reymond, 1987) are among the thinnest reported in birds to date."
These cone widths are indeed smaller than spacings in the CVRL database for human subjects, but not less than 0.5 µm.
Diffraction is a very relevant effect in this context. One needs a pixel pitch that is about 3.5x smaller (when expressed in microns) than the f-number to extract the resolution provided by diffraction-limited optics (i.e. a 2 um pixel pitch is sufficient for f/7 or smaller apertures). It also means that a pixel pitch derived from diffraction is DOF- and format-dependent. One can have a very small 1 um pixel pitch on a FF sensor, but it would make sense only when shooting at f/2.8 or larger apertures. Also, the subject has to be very flat or quite far away; otherwise, the resolution potential would be used only in a very small fraction of the image and the rest would consist of blurred out-of-focus areas anyway.
https://www.scantips.com/lights/diffraction.html
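The 3.5x rule of thumb follows from sampling the diffraction cutoff: a diffraction-limited lens at f-number N passes spatial frequencies up to 1/(lambda*N), so Nyquist sampling needs a pitch of lambda*N/2, which for green light works out to roughly N/3.6 microns. A minimal sketch of that arithmetic (my own illustration, not from the thread; the 0.55 um wavelength is an assumption):

```python
# Diffraction-limited pixel pitch sketch. Assumes a diffraction-limited
# lens and a single mid-visible wavelength (an assumption for illustration).

WAVELENGTH_UM = 0.55  # assumed green-light wavelength, in microns

def max_useful_pitch_um(f_number: float, wavelength_um: float = WAVELENGTH_UM) -> float:
    """Nyquist pixel pitch for the diffraction cutoff frequency.

    The optical cutoff of a diffraction-limited lens is 1 / (lambda * N)
    cycles per micron, so sampling it fully requires a pitch of lambda * N / 2.
    """
    return wavelength_um * f_number / 2

# At f/7 a ~2 um pitch already captures everything the lens delivers,
# in line with the "pitch ~ f-number / 3.5" rule in the post above.
print(round(max_useful_pitch_um(7), 2))  # ~1.93 um
print(round(7 / 3.5, 2))                 # rule of thumb: 2.0 um
```

The two numbers agree to within a few percent, which is why the rule is stated per f-number rather than per lens.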
With cameras, one usually has the same number of pixels on the sensor and in the resulting image (although there are exceptions). However, with jots and very small pixels, the number of pixels/jots on the sensor likely gets decoupled from the pixel count of the final image. So it might be meaningful to separate discussion about the pixel count of the output image from discussion about the pixel pitch of the sensor.
You asked about practical limits, so I think it depends on what you want to achieve with your samples. In addition to what Jack mentioned, there is the practicality of readout electronics pitch, and the off-chip data-transmission bottleneck. The latter is of course relieved by on-chip signal and image processing. For photography, who could possibly need more than a million pixels? (old joke).
At the very recent Int. Image Sensor Workshop in Crieff, Scotland, last week, it appears that sub-0.5um pixels will perhaps be coming in the next 5 years. A pixel pitch of 0.56um seems well in hand. Interestingly, the early jot patents specify a pixel pitch of less than 0.5um, and they will expire at about the same time as sub-0.5um pixels may appear. Sometimes it is better to not be too early!
I assume the OP was talking about the physical size of the pixel on the sensor - the sampling pitch - and not the number of output pixels in a final fully-processed photograph. But sure, we could talk about both, although scaling of pixel count up or down seems a matter of computation and not really about technology or device physics.
Yes, the OP asked about the sensor, but I wanted to point out this difference since I have learned that what people mean and what they write are not necessarily the same. Let's see if the OP clarifies it.
Sure, you can adjust the pixel count by resampling afterwards, but once you sample the PSF of the optics properly, there is no point in further increasing the pixel count of the output. Or is there? So this might result in a practical "limit" on pixel pitch, especially for large sensors. It seems to me that pixel pitches used in mobile phones are already beyond this practical limit for FF sensors.
Isn't diffraction independent from pixel pitch?
Yes, nonetheless there is an argument to be made that it is wasteful/unnecessary to sample an image on the sensing plane at a higher rate than some multiple of the highest spatial frequency present in it. And since today diffraction provides an upper physical limit to the highest spatial frequency acceptable for the setup, the two can become related that way.
Jack
Exactly. If 1 um pixels are used at f/7 instead of 2 um ones, one needs to read out four times more pixels and then process them and store them on a card. This makes the camera slower and more energy-hungry, and uses a lot more storage. But the resulting image will be virtually identical in sharpness/amount of detail and lack of aliasing to one captured by 2 um pixels. So you get some significant disadvantages but no real benefit (I assume here that a conventional pixel is used, not jots or similar).
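The four-times figure is just the pitch ratio squared: halving the pitch quadruples the pixels to read out over the same area, while the diffraction cutoff at f/7 caps the recoverable detail either way. A small sketch of that bookkeeping (my own illustration, not from the thread; the 0.55 um wavelength is an assumption):

```python
# Readout cost of oversampling a diffraction-limited image (illustration).

def pixel_count_ratio(coarse_pitch_um: float, fine_pitch_um: float) -> float:
    """How many times more pixels a finer pitch needs over the same sensor area."""
    return (coarse_pitch_um / fine_pitch_um) ** 2

def is_oversampled(pitch_um: float, f_number: float, wavelength_um: float = 0.55) -> bool:
    """True when the pitch is finer than the Nyquist pitch for the diffraction cutoff."""
    return pitch_um < wavelength_um * f_number / 2

# 1 um pixels instead of 2 um: four times the pixels to read, process and store...
print(pixel_count_ratio(2.0, 1.0))  # 4.0
# ...yet at f/7 the 1 um pitch is well past the diffraction cutoff, while
# the 2 um pitch is already at roughly the Nyquist pitch for that aperture.
print(is_oversampled(1.0, 7))       # True
print(is_oversampled(2.0, 7))       # False
```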
Wouldn't the 1um image have less aliasing and better post-processing capabilities (NR, transformations, cropping)?
No, because there would already be no aliasing with 2 um pixels (at f/7). That means that the sensor was able to properly sample all available information/frequencies coming from the lens at such an aperture.
At Electronic Imaging 2023 I heard of some new sensor tech that is explicitly using sub-wavelength sensel sizes that would be smaller than 0.5um, so probably not waiting long at all for that size in specialized configurations. (It was a multi-spectral sensor design.)