Where are the sensor arrays?

Shlomo Goldwasser

A full frame camera is much more expensive than a comparable APS-C one. Maybe I am missing something. What is there to prevent a manufacturer from using an array of smaller sensors to cover the same area as a full frame? I know current sensors are not rimless, so placing them next to each other would be problematic. But then again, televisions used to all have rims, but nowadays you have 4k monitors that are made of two smaller screens seamlessly placed side-by-side. Can't the same be realized for sensors?

Obviously I am missing something as that would make even a true medium format sized sensor array dirt cheap.
 
nowadays you have 4k monitors that are made of two smaller screens seamlessly placed side-by-side. Can't the same be realized for sensors?
No, the pixel pitch of a sensor is far too small for it to be commercially feasible: it's on the order of a few microns, and any minor misalignment is quite visible. The pixel pitch of a monitor is a couple hundred microns, and minor misalignments are not nearly as visible given the typical viewing distance.
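To put rough numbers on that argument (a sketch with assumed, typical values, not measured figures), compare the same fixed mechanical placement error at a seam against the two pixel pitches:

```python
# Illustrative comparison: how large is a fixed mechanical misalignment
# relative to the pixel pitch of a camera sensor versus a monitor?
# All values below are assumed, typical orders of magnitude.

misalignment_um = 1.0      # assume a 1-micron placement error at the seam

sensor_pitch_um = 4.0      # a few microns, typical of a full-frame sensor
monitor_pitch_um = 250.0   # a couple hundred microns, typical of a monitor

sensor_error = misalignment_um / sensor_pitch_um    # fraction of one pixel
monitor_error = misalignment_um / monitor_pitch_um

print(f"sensor:  {sensor_error:.2%} of a pixel")
print(f"monitor: {monitor_error:.2%} of a pixel")
```

With these numbers the same 1 µm error is a quarter of a sensor pixel but well under 1% of a monitor pixel, which is why a butted seam that is invisible on a display would be glaring on a sensor.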
 
Thanks for your response.

I would have thought that with proper tooling a sensor array could be aligned precisely enough for this.
 
A full frame camera is much more expensive than a comparable APS-C one. Maybe I am missing something. What is there to prevent a manufacturer from using an array of smaller sensors to cover the same area as a full frame? I know current sensors are not rimless, so placing them next to each other would be problematic. But then again, televisions used to all have rims, but nowadays you have 4k monitors that are made of two smaller screens seamlessly placed side-by-side. Can't the same be realized for sensors?
I'm not aware of any 4k screens that are side-by-side composites. It would seem to be rather unnecessary.
Obviously I am missing something as that would make even a true medium format sized sensor array dirt cheap.
You are probably unaware that sensors have photosensitive sites outside the nominal image area, which the demosaicing procedure needs in order to interpolate the edge pixels. This would make a composite array unworkable.

For example, the typical 6000×4000 Nikon sensor has a full complement of photosensitive sites totalling 24.3 MP.
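As a back-of-envelope check (my own arithmetic, not a manufacturer's spec), you can estimate how wide that extra border must be: if a uniform border of width b surrounds the nominal 6000 × 4000 area, then (6000 + 2b)(4000 + 2b) should reach roughly 24.3 million sites.

```python
# Estimate the border width b of extra photosites that would bring a
# nominal 6000 x 4000 (24.0 MP) sensor up to 24.3 million total sites,
# assuming the extras form a uniform ring around the image area.

nominal_w, nominal_h = 6000, 4000
total_sites = 24_300_000

for b in range(0, 100):
    if (nominal_w + 2 * b) * (nominal_h + 2 * b) >= total_sites:
        break

print(b)  # roughly 15 rows of photosites on each side
```

A border on the order of fifteen photosites per side accounts for the difference, and those are exactly the sites a butted seam would eliminate.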
 
A full frame camera is much more expensive than a comparable APS-C one. Maybe I am missing something. What is there to prevent a manufacturer from using an array of smaller sensors to cover the same area as a full frame? I know current sensors are not rimless, so placing them next to each other would be problematic. But then again, televisions used to all have rims, but nowadays you have 4k monitors that are made of two smaller screens seamlessly placed side-by-side. Can't the same be realized for sensors?

Obviously I am missing something as that would make even a true medium format sized sensor array dirt cheap.
But why bother, if you can produce a FF/DX/APS-C sensor just as cheaply as an array of (even smaller) sensors?
 
Thanks for your response.

I would have thought that with proper tooling a sensor array could be aligned precisely enough for this.
Even if it were feasible from a mechanical alignment point of view, sensors are like any other chip: they tend to have I/O pads around their perimeter, with the active logic in the interior. So they're not inherently suited to butting.

I have no idea what the fabrication issues would be if, instead of leaving an I/O ring around the active light-sensing pixel area, you fabricated it with active pixels right up to the edge of the device on one side. I suspect the pixels at the edge would be prone to fabrication problems. In addition, I'm sure the die-cutting technology could not cut the edge precisely enough to make a clean cut so the dies could be butted together, and it would destroy all the pixels along the edge during cutting. Typically the dies are spaced on the wafer with about 1 mm between them for cutting, so if you want to cut to 0.001 mm (1 micron) accuracy, good luck with that.
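The dicing argument can be sketched numerically (assumed, order-of-magnitude values, not process data): a mechanical saw's edge accuracy is tens of microns, so at a few-micron pixel pitch, several pixel rows along each cut edge are inside the uncertainty band.

```python
# Rough sketch of the die-cutting tolerance argument, with assumed
# order-of-magnitude values: how many pixel rows fall inside the
# edge-placement uncertainty of a mechanical dicing saw?

pixel_pitch_um = 4.0         # assume a ~4 micron pixel pitch
dicing_tolerance_um = 25.0   # assume ~25 micron edge accuracy for a saw cut

rows_at_risk = dicing_tolerance_um / pixel_pitch_um
print(f"~{rows_at_risk:.1f} pixel rows at risk along each cut edge")
```

Even with a generously accurate cut, the uncertainty swallows multiple pixel rows on each side of the seam, before any alignment error during assembly is added on top.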

Then consider all the design changes required to make a three-sided I/O layout instead of the typical four-sided one that leverages legacy designs instead of starting from scratch each time. After a bit of thought, the whole idea starts to seem rather implausible, doesn't it?
 
First, I think the assembly cost would exceed the savings. But even if some small saving were possible, it would be minimal, because many things make full frame cameras more expensive:

The shutter is larger, the mirror box is larger, the image processors must all be faster. The body itself is larger. The AF mechanism should ideally be larger. And because the lenses are much more expensive, unit volume is lower, meaning scaling the price down is more difficult. So there are many things which make FF cameras cost more, not only the sensor.
 
I'm glad this question was asked. Too bad; we could have other configurations available, if only. What I had imagined was a cross-shaped sensor group that would let you choose a vertical or horizontal photo. Maybe we could have one now, if the mirror box could accommodate a sensor rotated 90 degrees. Or would it be easier to do in a mirrorless? Yes, a moving sensor.
 
Well, camera arrays already exist. I could not find the real-world write-up (there was an article about it some months ago), only this paper:


Not exactly what you asked for, but making the lens part of the array solves one of the large-sensor issues: the size of the individual pieces of glass.
 
