Sensor size, complexity, pixel size and cost?

Hi,

It would be interesting to hear from folks knowledgeable about the cost aspects of modern sensor designs.

My understanding is that increasing the sensor size is over-proportionally expensive.

But I would assume that increasing the number of pixels would not be very expensive.

Clearly, increasing the complexity will increase the probability of component failure, but I also think that bad pixels can be mapped out. The complexity of the non-pixel components should probably increase with the square root of the pixel count. Doubling the number of pixels increases the number of columns by sqrt(2).
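To put numbers on that scaling, here is a minimal back-of-envelope sketch in Python (my simplification: a 3:2 sensor whose readout column count equals its horizontal pixel count):

import math

def columns(megapixels, aspect=3/2):
    # width * height = pixels and width / height = aspect,
    # so width (the column count) = sqrt(pixels * aspect).
    return math.sqrt(megapixels * 1e6 * aspect)

print(round(columns(24)))         # 6000 columns on a 24 MP sensor
print(round(columns(48)))         # ~8485 columns on a 48 MP sensor
print(columns(48) / columns(24))  # ~1.414, i.e. sqrt(2)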

It seems that APS-C is pretty much stuck at 24 MP and 24x36 is moving slowly upwards from 36 MP.
There can be quite a few reasons for the slow development:
  • An optimum balance between noise and resolution may have been reached.
  • We may be getting into diminishing returns with regard to resolution.
  • Camera electronics may have difficulty handling larger amounts of data within a limited power budget.
  • Cost obviously is a factor that plays an important role.
On the other hand, we are getting into a situation where medium format uses a smaller pixel size than 24x36 mm, something like 3.8 microns on the 100 MP 44x33 mm sensor.

Any comments from sensor experts?

Best regards

Erik
Image sensors are not leading edge semiconductor products at a fundamental level (which is not to say that design teams are not doing innovative work). Most of the constraints under which they operate are commercial and depend very much on the market. So, here is the first thing to take into account:

The market for large-sensor still cameras (which includes all ILC cameras) is a very small part of the overall sensor market. The sensor companies will tend to prioritise their R&D expenditure where they see the biggest return. Right now, that's the automotive market. Recently cars have become plastered with image sensors, and the image sensor companies are falling over themselves to bag a part of that new market. New markets are few and far between. Whilst they do that, they aren't putting so much into large-sensor still cameras, which is a moribund market.

So, the question a sensor company is asking is whether it can make money on each new sensor product, and this in turn depends on its customers. We could take as a case study the Nikon D850. This uses a Sony IMX309 sensor with 46MP. The D800 and D810 both used an IMX094 sensor with 36MP, and that sensor has been a successful product for Sony, going into two of Sony Imaging's cameras, one of Pentax's, and becoming popular in the scientific and industrial markets. If you want to build a camera around it, you can buy one for around $200. Sony Imaging no longer uses that sensor, having commissioned a BSI alternative, the IMX193, but the old sensor was still selling in other markets, so left to itself, Sony had no reason to replace it. However, Nikon clearly needed an upgrade for the D850; anything less would be seen as not enough in market terms. Now Sony has a customer for an upgrade, and the customer will set the broad parameters of the new sensor. Panasonic also expressed interest in an FF sensor in the 46MP range. Now Sony has two customers, replacing the IMX094 becomes commercially viable, and Sony goes ahead and produces the part. There is nothing new in this sensor; it is an assembly of technology Sony already has. In fact, it's not close to the limits of what they can do.

So, in brief, the progress of the sensors that we see in our cameras has slowed because our cameras represent a shrinking market, and there are no longer strong drivers for investment in it.

As far as the relative expense of larger sensors versus more pixels goes, that is a complex question. The reason for the price drop of large sensors is reasonably simple: they can make use of fab plants which are no longer suitable for the small sensors, and thus extract value from what would otherwise be an idle asset. Given that the major cost of semiconductors is plant, not raw materials, this means that the relative cost of the large sensors for our small market has declined rapidly. In fact, that is precisely why all the action is at FF and larger: for the still camera market, they represent opportunities for new sales where there is little elsewhere, and the opportunity to extract additional value from old lines is also attractive.
Most of the investment in image sensors seems to be directed at readout speed and on-sensor PDAF (both for smart phones and regular cameras). IQ improvements at the hardware level are likely to be minimal.
The former is true. The deployment of backside illumination to large sensors has little impact on quantum efficiency and some impact on acceptance angles, but the major impact is on readout speeds, either as one part of a stacked sensor architecture, or on its own, where it allows duplication of column lines and therefore deployment of several ADCs in each column. On-sensor PDAF isn't, I think, a big receiver of development investment, simply because it doesn't require very much. By and large the silicon for PDAF sensors is identical to that for non-PDAF sensors, with the only difference being either masking on the microlenses or profiling of those microlenses. In fact, it is frequently the case that there are PDAF and non-PDAF variants of the same sensor (for instance, the D850 and Z7). There have been many patents for PDAF sensors using specialised silicon, but the only one that has come to market is 'dual pixel' from Canon and Samsung, and that is effectively just doubling up pixels.
The rest of the R&D money is being spent on image processing software - mainly by the big phone companies who are acquiring boutique software companies at a fair old rate.

And as you say, the real market for large sensors is small, but if they can be built on existing lines, is that a real issue cost-wise? AFAIK, they are built using legacy processes that are nowhere near leading edge in CMOS terms, so capital costs are not huge.
There is NRE involved for each new sensor, regardless of the lines on which they are made. That NRE might be small where the sensor is assembled from existing libraries, which is why Sony can produce so many different product lines. However, if it requires the development of new subcomponents such as new pixel designs or ADC architectures (which will need the 'R' bit of 'R and D'), it is likely to be quite large, which is why, these days, it rarely happens, and why magic pixels which double quantum efficiency are increasingly unlikely. It's a situation which favours the position of large sensor vendors. Sony can amortise its new developments over many sensors addressing many markets. For a company even as large as Canon, each new development gets amortised over far fewer designs and sensors.
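To illustrate the amortisation point with a toy calculation (every figure below is invented; only the arithmetic matters):

def unit_cost(nre, units, marginal_cost):
    # Per-sensor cost when non-recurring engineering is spread over `units` sensors.
    return nre / units + marginal_cost

# The same hypothetical $20M NRE, amortised over many markets...
print(unit_cost(nre=20e6, units=2_000_000, marginal_cost=50))  # $60 per sensor
# ...or over a single camera line's worth of units.
print(unit_cost(nre=20e6, units=100_000, marginal_cost=50))    # $250 per sensor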
 
The natural consequence of what you say is that teleconverters are obsolete. I have yet to see the evidence for this.
No, not at all...
Of course it does. If nothing substantial can be had by reducing the pixel size further, then it follows that a TC would offer nothing more than empty magnification at those pixel pitches.
I do not understand how you reach that conclusion. Let me oversimplify my argument.
  1. The average lens today is not ready for pixels smaller than about 3.5 microns if you wish to maintain the current level of aliasing and subjective sharpness
  2. These lenses are in the < ~$1,000 range.
  3. Lenses that take TCs are generally considerably better than average and correspondingly more expensive. No one would call a 400/2.8 or 200/2 an average lens. Not a 100-400 either.
Further, a TC will always give you more pixels on target, even if not "sharp" pixels. This helps with SNR and "smoothness" of the image.
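For what it's worth, the pixels-on-target claim is just linear magnification squared, whichever way you get the magnification. A quick sketch (the pixel pitches in the example are illustrative):

def pixels_on_target(base_pixels, tc_factor=1.0, pitch_ratio=1.0):
    # pitch_ratio = old_pitch / new_pitch; both a TC and a finer pitch
    # scale pixels-on-subject with the square of the linear factor.
    return base_pixels * (tc_factor * pitch_ratio) ** 2

print(pixels_on_target(100_000, tc_factor=2.0))         # 4x with a 2x TC
print(pixels_on_target(100_000, pitch_ratio=4.35/2.4))  # ~3.3x going from 4.35 to 2.4 micron pixels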
I don't wish to maintain the current level of aliasing and subjective sharpness. I want no aliasing and all of what all of my lenses can give. All lenses, from the cheapest kit zooms to expensive professional lenses, can easily generate moiré on 24MP APS-C and 50MP FF. I don't want it if I can be free from it.
Cool. The masses don't think the same way.
I'm not sure that it has very much to do with what 'the masses' think. How did you get your information, was there some market research done?

Rather, I think it has to do with what would be a commercially sensible path, as a camera company would see it.

Take, for example, the 46MP sensor in the D850 and Z7 (and soon, no doubt, the Panasonic S1R). That has 4.35 micron pixels, which really aren't pushing any boundaries with respect to sensor technology. Sony's 1" sensors, which work very well, have 2.4 micron pixels. Had the sensor been made with those, it would have boasted 150MP. No problem whatsoever with the sensor; Sony could do it easily without breaking sweat. However, so far as the camera spec goes, it produces some problems. Frame rates will go down: instead of producing a 7FPS camera, Nikon would have had a 2.3FPS camera, which they might have felt to be less marketable. Next, in terms of marketing, give the consumer something more and they will grab it with open arms; give them something so much more that they no longer understand it, and you get sales resistance. So, whatever the masses think, they are likely to be offered what makes sense to the manufacturer, and that isn't a 150MP FF camera (yet).
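The frame-rate arithmetic is easy to check if you assume, as a simplification, that readout throughput (pixels per second) is the binding constraint:

def fps_at(new_mp, base_mp=46, base_fps=7):
    # Hold megapixels-per-second constant and scale FPS inversely with MP.
    return base_mp * base_fps / new_mp

print(fps_at(150))  # ~2.1 FPS at 150 MP, the same ballpark as the figure above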
I remember Eric Fossum saying he is expecting gigapixel sensors in cameras in his lifetime. So whether the masses want smaller pixels or not they will get it.

And I am betting the masses will be pleased :)
 
I remember Eric Fossum saying he is expecting gigapixel sensors in cameras in his lifetime. So whether the masses want smaller pixels or not they will get it.
Yeah, but remember that Eric is a technical guy, not a business one (in the way that he looks at things; of course he is involved in business). Already there are commercial sensors with 0.8 micron pixels; put those on an FF sensor and you get 1.35 gigapixels, so it's already technically feasible, and with stacking you could likely even get the data off at some usable speed.
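The gigapixel figure is straightforward arithmetic:

width_px = 36e-3 / 0.8e-6    # 45,000 columns across a 36 mm frame
height_px = 24e-3 / 0.8e-6   # 30,000 rows down a 24 mm frame
print(width_px * height_px)  # 1.35e9 pixels, i.e. 1.35 gigapixels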
And I am betting the masses will be pleased :)
I'm thinking that the driver has already ceased to be IQ, but there are other reasons to increase pixel counts. As in Eric's SDL pixel work, more, single-bit pixels can have some interesting and beneficial consequences downstream.
 
But maybe he has a grasp of what the masses want (even if they don't know it).

I understand that the IMX586 will be launched in cell phones next year

I liked the idea put forward in the Nokia PureView: oversample and output according to requirements.
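A sketch of why the oversample-then-downsample approach is technically sound, assuming shot-noise-limited capture (read noise ignored, my simplification):

import math

def snr_gain(oversample_factor):
    # Averaging k input pixels per output pixel improves per-pixel SNR by sqrt(k).
    return math.sqrt(oversample_factor)

# PureView-style: a 41 MP capture downsampled to 5 MP averages roughly
# 8 input pixels per output pixel.
print(snr_gain(41 / 5))  # ~2.9x SNR per output pixel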
I'm thinking that the driver has already ceased to be IQ, but there are other reasons to increase pixel counts. As in Eric's SDL pixel work, more, single-bit pixels can have some interesting and beneficial consequences downstream.

I disagree with you and AiryDiscus on this point: take a look at the dpr studio scene and see all the horrible aliasing.

I look forward to the days where TCs are obsolete
 
But maybe he has a grasp of what the masses want (even if they don't know it).
My thought is that the masses tend to want what is presented to them in successful marketing campaigns. Generally, if you do a bit of hindcasting, what the masses said they wanted a few years ago often turns out to be very different from what they actually want when it comes to it. Fickle people, these masses.
I understand that the IMX586 will be launched in cell phones next year
We already had 42MP in a phone (I see you referenced it below).
I liked the idea put forward in the Nokia PureView: oversample and output according to requirements.
Technically, it's a very sound way of going about things.
I disagree with you and AiryDiscus on this point: take a look at the dpr studio scene and see all the horrible aliasing.
I'm not disagreeing with you. I also would like to see increased pixel counts and the end of aliasing. I was speaking of a commercial driver, not a technical one. I think the verdict is in, and those masses actually like aliasing.
I look forward to the days where TCs are obsolete
Wanting TCs to be obsolete is a very esoteric wish (but one I share). I can't see much commercial headway to be made by catering for it.
 
There is NRE involved for each new sensor, regardless of the lines on which they are made. That NRE might be small where the sensor is assembled from existing libraries, which is why Sony can produce so many different product lines. However, if it requires the development of new subcomponents such as new pixel designs or ADC architectures (which will need the 'R' bit of 'R and D'), it is likely to be quite large, which is why, these days, it rarely happens, and why magic pixels which double quantum efficiency are increasingly unlikely. It's a situation which favours the position of large sensor vendors. Sony can amortise its new developments over many sensors addressing many markets. For a company even as large as Canon, each new development gets amortised over far fewer designs and sensors.
That makes sense, but it's worrying that so much relies on a single vendor.

I also understand that there are a few boutique sensor design consultancies that license IP, such as Aptina's dual conversion gain architecture, which was adopted by Sony.

I suppose the other question is: do we really NEED a whole lot more 'image quality', or has the market really moved on? We seem to have well-defined market segmentation for action (fast readout, large pixel), general (moderate readout, midsize pixel) and landscape (slow readout, small pixel).

Does IQ really sell cameras, or just specs?
 
  • Camera electronics may have difficulty handling larger amounts of data within a limited power budget.
I'm not a sensor expert but I do know about A/D converters. The higher the resolution (pixel count), the longer it takes for the A/D to convert all those analog pixel readings into their digital values, which affects shooting rate and power dissipation.
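A rough model of the serial-readout case (the 50 MS/s sample rate is an invented figure for illustration):

def readout_ms(megapixels, samples_per_sec, n_adcs=1):
    # Milliseconds to digitise one frame through n_adcs parallel converters.
    return megapixels * 1e6 / (samples_per_sec * n_adcs) * 1e3

print(readout_ms(24, 50e6))  # 24 MP through one 50 MS/s ADC: 480 ms per frame
print(readout_ms(46, 50e6))  # 46 MP: 920 ms -- readout time scales linearly with pixel count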
 
That makes sense, but it's worrying that so much relies on a single vendor.
That's the nature of capitalism, and what a company like Sony can do if it achieves a close-to-monopoly position.
I also understand that there are a few boutique sensor design consultancies that license IP,
Sure, plenty of them. And there are many companies making very specialist image sensors via those design houses for their own purposes. They serve the niche markets, but don't expect those sensors to outperform what a Sony product can do in terms of general performance.
such as Aptina's dual conversion gain architecture, which was adopted by Sony.
That didn't come from a design consultancy; it came from a patent swap with Aptina, whereby Sony gave Aptina access to its patents and Aptina gave Sony access to its. The problem is that a whole load of sensor performance isn't anything to do with patentable IP; it's to do with relentless development and tuning.
Does IQ really sell cameras, or just specs?
Nowadays, specs, I think. Unless you're very specialist, you probably have all the IQ you need.
 
Hi Bob,

Thanks for a lot of interesting info!

Best regards

Erik
 
I'm not disagreeing with you. I also would like to see increased pixel counts and the end of aliasing. I was speaking of a commercial driver, not a technical one. I think the verdict is in, and those masses actually like aliasing.
Yes, that point was treated by Daniel Browning in his D800 review

I've also been asked why I didn't spring for the D800E. The reason is that I strongly dislike aliasing. Most people think the only downside to removing the Optical Low-Pass Filter (OLPF) is moire, but that's only the worst manifestation of aliasing. There's also jaggies, stair-stepping, sparkling, wavy lines, bands, fringing, popping, strobing, and false detail. To me the overall look is very displeasing and has an obvious "digital-ness" that resembles a TRON-like computer world, not the natural world I see with my eyes. But I'm in the minority. Many people consider aliasing to be a desirable "crunchiness".


 
The former is true. The deployment of backside illumination to large sensors has little impact on quantum efficiency, some impact on acceptance angles,
Well, does that not count towards effective QE, if we break down effective QE by f-ratio?

Certainly the effective QE for 4.35 micron pixels at f/0.9 is greatly improved for BSI (IOW, closer to what it is at f/8 than on FSI sensors). It is approaching a one-stop difference with FSI, I think; at least 1/2 stop. DR comparison charts do not mention that the FSI cameras typically lose DR to integer scaling at low f-ratios of about f/2.5 and less, as well as increasing input-referred noise, with or without scaling.

Of course, I would expect shallower DOF at f/0.9 with BSI, since more of that oblique light that lessens DOF is getting into the photosites.

This is something, which, unfortunately, is not considered in noise performance, DOF, and equivalence discussions often enough.
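The geometry behind the f/0.9 point can be sketched with the thin-lens approximation (the angular response of any particular pixel stack varies, so only the ray angle is shown here):

import math

def marginal_angle_deg(f_number):
    # Half-angle of the illumination cone reaching the pixel: arctan(1 / (2N)).
    return math.degrees(math.atan(1 / (2 * f_number)))

print(marginal_angle_deg(8.0))  # ~3.6 degrees at f/8: easy for any pixel stack
print(marginal_angle_deg(0.9))  # ~29 degrees at f/0.9: where FSI apertures start to reject light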
 
You're absolutely right. I was including that in 'acceptance angles'. I agree, it's a topic that warrants more attention.
 
I question whether performance at f/0.9 has much relevance for FF photographers. Are there _any_ lenses for FF faster than f/1.2 that are relevant for those looking for high sharpness/DR?

Even f/1.2 seems to be a compromise in terms of quality and aesthetics (not to mention focal length, price and weight) that many photographers are unwilling to make.

-h
 
The big dilemma is this: to put serious money into R&D to improve the current generation of sensors, or to put it into R&D aimed at newer technologies.

Incremental sensor improvements resulted in Nikon Z and Canon R underachieving with respect to image quality.

The hope is that as soon as new technologies are introduced, pixel count can be addressed.
Clearly, increasing the complexity will increase the probability of component failure, but I also think that bad pixels can be mapped out. The complexity of the non-pixel components should probably increase with the square root of the pixel count. Doubling the number of pixels increases the number of columns by sqrt(2).
The trend is to delegate more and more functions to the pixel level.
 
I'm not a sensor expert but I do know about A/D converters. The higher the resolution (pixel count), the longer it takes for the A/D to convert all those analog pixel readings into their digital values, which affects shooting rate and power dissipation.
Would not a BSI sensor with a higher number of embedded ADCs offset that (at a higher sensor complexity/power cost)?
 
The natural consequence of what you say is that teleconverters are obsolete. I have yet to see the evidence for this.
Well, I think the TC has the extra selling points of a larger subject in the viewfinder, and people like to brag about the multiplied focal length, which culturally has no impressive equivalent with increased pixel density.

Many people are mesmerized by the potentially hollow accomplishment of a narrower FOV, as an end in itself, as if it were a practical "power", and don't appreciate that the same number of pixels-on-subject is optically better with pixel density than with a TC, which introduces some light scatter and aberrations of its own, even if small.

The TC is a pacifier for those of us waiting for smaller pixels.
 
Yes! That is what I'm trying to say. With EVFs I don't think a larger subject in the viewfinder is a selling point; you could simply set the magnification as you wish.

I have just ordered a 2x TC in order to simulate smaller pixels :)
 
Would not a BSI sensor with a higher number of embedded ADCs offset that (at a higher sensor complexity/power cost)?
Multiple A/Ds in parallel would definitely fix the A/D throughput problem. But embedded in the sensor? I'm not aware of camera sensors with embedded A/Ds. I can think of a lot of circuit problems it would create and not many advantages.
 
Multiple A/Ds in parallel would definitely fix the A/D throughput problem. But embedded in the sensor? I'm not aware of camera sensors with embedded A/Ds. I can think of a lot of circuit problems it would create and not many advantages.
Practically every sensor in every camera you buy today has embedded column ADCs. Though they weren't the originators of the technology, Sony introduced it to the still camera market with the IMX021 in the Nikon D300 and Sony A700. Every Samsung sensor used it. Panasonic has used the architecture from the GH2 onwards, and Canon has used it from the 1D X II; most of their cameras now use it.
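With column-parallel ADCs the frame is digitised a row at a time, so readout time scales with row count rather than pixel count. A sketch with invented figures (a 6000x4000 sensor, each column ADC doing 100k conversions per second):

def frame_time_ms(rows, row_conversions_per_sec):
    # Frame digitisation time when every column has its own ADC.
    return rows / row_conversions_per_sec * 1e3

print(frame_time_ms(4000, 100e3))  # 40 ms per frame, i.e. ~25 FPS
# Compare the single-ADC model earlier in the thread: the same 24 MP frame took 480 ms.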
 
Maybe the future:

 
