Which camera system will implement computational photography/AI in a major way?

But not everybody -- I for one do not process my photos from RAW data using programs on my computer. I use the jpegs that my camera provides me and have recently seen a big jump in the quality of these jpegs.
I too use the JPEGs from both my a7iv and a6700 and they are first class; I can rarely process a RAW now to show any meaningful difference.
 
As for why. Answer one simple question, can you turn these features OFF with your phone? People who take the time and effort to use a "real" camera are doing that because they want the Control a traditional camera provides. Basically you are trying to market products to a bunch of Control Freaks who want to futz with their images with an image editor instead of relying on Auto Everything using AI in the camera.
 
None of them can. Only tiny sensors are capable of fast computational processing; it's just not possible on m43, APS-C, or FF sensors.
That is strange because my OMDS OM-1 has a whole menu page for computational photography.

Computational photography has been around on cameras for quite a while now, the most widespread usage being in-camera HDR.
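Since in-camera HDR is the canonical example, here is a minimal sketch of what exposure-bracket merging can look like. The function name and the weighting scheme (a simple hat weight on linear pixel values) are my own illustration, not any manufacturer's pipeline:

```python
# Minimal sketch of exposure-bracket HDR merging. Assumes linear
# (RAW-like) pixel values in [0, 1]; the hat weight favours mid-tones
# and ignores clipped pixels. Illustrative only.

def merge_hdr(frames, exposure_times):
    """frames: list of 2D lists of linear pixel values in [0, 1]."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for frame, t in zip(frames, exposure_times):
                v = frame[y][x]
                wgt = 1.0 - abs(2.0 * v - 1.0)  # 0 at clip, 1 at mid-tone
                num += wgt * (v / t)            # radiance estimate from this frame
                den += wgt
            # fall back to the longest exposure if every frame is clipped
            out[y][x] = num / den if den else frames[-1][y][x] / exposure_times[-1]
    return out
```

Each frame contributes a scene-radiance estimate (pixel value divided by exposure time), and the merge trusts whichever frames recorded the scene in their mid-tones.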
Agreed. The poster may be referring to reports that note that due to cellphone sensors' small size they can process and offload converted image data much faster than large-format sensors of similar architecture.
While this is true, and allows cellphone imaging systems to do some impressive stuff at very low power dissipation, large-format sensors are employing different architectures to provide similar data rates. Stacked and semi-stacked sensors with multiple data channels, areal rather than edge interconnect to shorten datapaths and therefore increase data rates, and other techniques are more and more commonplace in dedicated cameras.
Unfortunately, it's a lot more expensive to build sensors and image processing systems this way, and there's no guarantee that cellphone sensors won't employ those same architectural tricks, but sales price is going to determine that.
All that aside, dedicated cameras are doing PLENTY of "computational photography" and have been for years. Firmware distortion correction is one example; digital VR is another. But IMO most of ILCs' computational power is being used for high-performance subject identification and tracking AF and high-quality postprocessing, including advanced video.
Recently we've seen "precapture" features that take advantage of high video capture rates to provide images selectable from a larger time period, and capture quality improvements through combining motion-compensated video frames to synthesize higher-exposure images.
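The frame-combination idea can be sketched in a few lines: averaging N aligned frames of the same scene cuts random noise by roughly sqrt(N), which is what lets a burst synthesize a cleaner, higher-effective-exposure image. This toy assumes motion compensation has already aligned the frames; all numbers are illustrative:

```python
# Averaging N aligned frames of a static scene reduces random noise by
# roughly sqrt(N). Alignment (motion compensation) is assumed done.

import random
import statistics

def combine_frames(frames):
    """Average a list of equally-sized pixel lists, pixel by pixel."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

random.seed(0)
true_signal = 100.0
# 16 noisy "frames" of the same 1000-pixel scene, noise sigma = 10
frames = [[true_signal + random.gauss(0, 10) for _ in range(1000)]
          for _ in range(16)]

single_sd = statistics.pstdev(frames[0])
combined_sd = statistics.pstdev(combine_frames(frames))
# combined_sd should be roughly single_sd / 4 for 16 frames
```

This is the statistical core of the "synthesize higher-exposure images" trick; the hard engineering is the motion compensation, which this sketch deliberately omits.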

Bottom line, I wouldn't go so far as to say that cellphone-style computational photography is impossible on larger format sensors...it can be done, but perhaps not at a mass-market cost. Ultimately, perhaps only a subset of the cellphone CP will be desired and implemented. Different markets.
Good post, but unless you're going to change the speed of electricity and the properties of silicon, it's not possible to match the speed of a cellphone-sized sensor.
You missed my point. For the same architecture. Yes, it is true that the smaller the sensel and the shorter the signal lines, the faster you can produce data and get it off the chip. No question there.

Large-format sensors' answer to cellphone sensors' speediness - such as it is - is to move data more in parallel. Sensels will still be bigger, with more well capacitance, but line lengths get dramatically shorter. The penalty? Expensive chip stacking to take advantage of areal contacting and those shorter datapaths. You don't really need that in cellphones, and the product price can't tolerate its cost.

On the horizon and coming closer is something that is easier to do in larger-format sensors than smaller ones - QIS, or high-frame-rate, high-resolution photon-counting sensors. They've advanced rapidly in the last 5 years. These basically create a vast field of high-speed comparators with sub-electron noise levels and take frames at a very fast rate, such that the probability of a photon arriving at a sensel in any given frame is a good deal less than 1. You then combine the large time-space block of digital data into whatever exposure you want, and because it's digital you can do whatever computational photographic processing you want on it. There are already QIS and mbQIS (multi-bit, fast-integrating QIS) sensors out there for scientific applications, and the enabling subtechnologies are already finding their way into integrating-well sensors.
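A toy model of the QIS idea just described: each "jot" records a 0 or 1 per sub-frame, with the per-frame arrival probability well below 1, and summing the binary frames over time reconstructs the exposure. The Bernoulli approximation to Poisson arrivals, and all the numbers, are illustrative, not from any real sensor:

```python
# Toy photon-counting (QIS-style) sensor: 1-bit comparator per jot per
# sub-frame; summing thousands of binary frames reconstructs exposure.

import random
random.seed(1)

def qis_exposure(photon_rate, n_frames, n_jots):
    """photon_rate: mean photons per jot per sub-frame (<< 1)."""
    counts = [0] * n_jots
    for _ in range(n_frames):
        for j in range(n_jots):
            # Bernoulli approximation to Poisson arrivals at low rates:
            if random.random() < photon_rate:
                counts[j] += 1  # the 1-bit comparator fired
    return counts

counts = qis_exposure(photon_rate=0.05, n_frames=2000, n_jots=100)
mean_count = sum(counts) / len(counts)  # expect roughly 0.05 * 2000 = 100
```

Because the per-jot record is a time-resolved bit stream, the "exposure" is chosen after the fact in software - which is exactly why QIS is so attractive for computational photography.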

QIS screams multilevel stacked imaging chain. Could cellphones employ it as well? Sure, but they have less room to shrink the sensels and far more demanding power budgets. It will be interesting to see what happens when these out-of-the-box concepts are fully commercialized.
 
I'm currently stacking 100 to 400 images for my extreme macro; it's never going to happen without a dedicated computer and a program like Zerene. The process takes 20 min at the moment - can't see it ever taking 1 sec in camera 😁🤨 I do like that my a6700 groups the burst into a single folder for reviewing the images.



 
As for why. Answer one simple question, can you turn these features OFF with your phone? People who take the time and effort to use a "real" camera are doing that because they want the Control a traditional camera provides. Basically you are trying to market products to a bunch of Control Freaks who want to futz with their images with an image editor instead of relying on Auto Everything using AI in the camera.
I don't think that's what the OP is asking. But when it comes to turning off some of the features, it depends on the phone and the app. There are a whole lot of people using the Gcam mod on Android phones, tweaking stuff and building configs to get the last bit of information from those tiny sensors.

More in line with the OP's pondering, I'd like my 'real' camera to take handheld night photos the way my phones can. In reality, the m43 sensor is close to the 1" sensors used in flagship phones and uses fewer pixels, making the job 'easier' for the CPU. I just think of how Samsung's 200MP sensor produces a lot of (not so perfect) data, thinks for a second, and comes up with a usable photo... Or the 'proper' camera sensors could move to Quad Bayer technology and gain the processing power - but without the space and control-design limitations of phones. I reckon it would be easy to switch some of this stuff off, or not use it if it's not one's cup of tea (I don't do video, for example).

I guess the answer is in R&D money pool....
 
I think you mean tech company; the camera will be the least important part of the process, and eventually not involved at all.
 
I'm currently stacking 100 to 400 images for my extreme macro; it's never going to happen without a dedicated computer and a program like Zerene. The process takes 20 min at the moment - can't see it ever taking 1 sec in camera 😁🤨 I do like that my a6700 groups the burst into a single folder for reviewing the images.

Great image. For your focus-stacking images, physical optical elements have to be moved, so the process is inherently fairly slow. But for applications where nothing physical inside the camera is moving during the capture, QIS will work marvelously.
When QIS says "fast frame rate", we are talking thousands of times a second. Each frame comprises a hundred MPX or more, with pixels as small as 0.5um (1.0um is typical for cellphones; 4um for a 60MP FF sensor). There is no time required for conversion - each pixel is either zero (no photon arrived) or one (a photon arrived). The limit on frame rate is the rate at which the output of the comparators can be read and shipped to the next processing element. Nothing of the sort we're accustomed to with today's cameras.
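Those numbers imply a striking raw data rate off the comparator array. A quick back-of-envelope calculation, pure arithmetic using the figures above:

```python
# Raw data rate of a 1-bit, 100 MPx sensor read out 1000 times a second.
# Figures are the illustrative ones from the discussion, not a real part.

def qis_data_rate_gbit_s(megapixels, bits_per_pixel, frames_per_second):
    return megapixels * 1e6 * bits_per_pixel * frames_per_second / 1e9

rate = qis_data_rate_gbit_s(megapixels=100, bits_per_pixel=1,
                            frames_per_second=1000)
# -> 100.0 Gbit/s, before any frame combination or compression
```

That torrent is why QIS pushes so hard toward stacked sensors with on-chip combination logic: the data has to be reduced before it ever leaves the package.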
You're right that a dedicated computer is absolutely critical - but that's what the GPU inside each of our cameras already is. Photon counting sensors use exactly the same type of processor, and because it's dedicated it can be optimized, far more than a general purpose GPU can, and much more so than a general purpose CPU. The limit on performance, primarily, would be the effect of heat generated by the computational layer of a stacked sensor on the noise level of the photon sensing layer and the power demands of the stack.
It's tricky to extrapolate what is possible with a radically different architecture from results obtained with a standard integrating-well sensor equipped ILC and a PC.
The next 5 years in imaging will be rather fascinating to experience.
 
I don't know. What I do know is:
  • This kind of question has been asked in this same forum over and over for a decade or longer.
  • Many dedicated cameras - some going back more than 15 years - already have computational photography/AI capability.
  • There is no clear definition of 'a major way'.
 
Would someone please explain computational photography to me? What does it do?

My biggest problems are reflexes and uncooperative subjects. I don't react quickly enough to get the picture I wanted. The baby would rather explore grandpa's beard than look at the camera. The lens I need is too heavy and/or expensive to reach the bird. I don't see how "computational" would help.
CP is a very broad term. It can range from enhancing image quality through motion-compensated combination of burst (i.e., video) frame sequences, to local area exposure setting, to enhanced optical effects, to automatic scene element editing or outright deletion and fill...the list goes on and no aspect of the photographic process is untouched.
In your case, there's "precapture", which essentially treats the still camera like a video camera, gathering an extended sequence of frames that begins before you click the shutter. Some cameras will then qualitatively select a "best" frame for you.
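Mechanically, precapture is essentially a ring buffer that frames stream into continuously; the shutter press just freezes it. A hypothetical sketch (the class and method names are mine, not any camera's API):

```python
# Precapture as a fixed-size ring buffer: frames stream in continuously,
# old ones fall off the back, and the shutter press freezes the buffer.

from collections import deque

class Precapture:
    def __init__(self, buffer_frames):
        self.buffer = deque(maxlen=buffer_frames)  # oldest frames drop out

    def on_frame(self, frame):
        self.buffer.append(frame)   # runs continuously, before any press

    def on_shutter(self):
        return list(self.buffer)    # frames captured *before* the press

cam = Precapture(buffer_frames=3)
for i in range(10):                 # frames 0..9 stream in
    cam.on_frame(i)
kept = cam.on_shutter()             # -> [7, 8, 9]
```

The buffer depth, times the sensor's burst rate, is how far back in time the camera can reach for your slow reflexes.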
Turning a 100mm lens into a 1200mm monster is a tougher trick. New AI programs can upresolve images convincingly, but they are creating data. On the other hand, a 100MP sensor behind that 100mm lens with burst-sequence frame combination and upresolution might do a surprisingly convincing emulation of that 1200mm lens in front of a 24MP sensor.
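The catch in that 100mm-to-1200mm emulation is how few pixels survive the crop. The arithmetic, using the numbers in the paragraph above:

```python
# Pixels remaining after cropping a sensor to a longer lens's field of
# view: area shrinks by the square of the linear crop factor.

def cropped_megapixels(sensor_mp, native_focal_mm, target_focal_mm):
    linear_crop = target_focal_mm / native_focal_mm  # 12x in this example
    return sensor_mp / (linear_crop ** 2)

mp_left = cropped_megapixels(sensor_mp=100,
                             native_focal_mm=100,
                             target_focal_mm=1200)
# -> about 0.69 MP left of the original 100 MP
```

So matching a native 24 MP image at 1200mm would require upresolving by a factor of roughly 35 in pixel count - which is exactly why the burst-combination and AI upresolution steps carry so much of the load.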
But you couldn't afford it. For now.
Thanks, that at least gives me a frame for following the discussion.
 
Computational features are inevitable. In-camera pano stitching, high-res multishot, pre-buffering, focus stacking, HDR stacking, long exposure stacking, they all exist already in some cameras.

AI is a different beast, however. Currently, machine learning techniques are employed for AF, denoising, upscaling, masking, generative fill. Except for AF, these are useful in post processing, but I don't see how they'd help in camera.
 
Canon and Sony both have AI processors in their top end cameras. If you're talking about in-camera AI that learns in the field, it takes too much HP and training set size to do that.
 
Great image. For your focus-stacking images, physical optical elements have to be moved, so the process is inherently fairly slow. But for applications where nothing physical inside the camera is moving during the capture, QIS will work marvelously.
When QIS says "fast frame rate", we are talking thousands of times a second. Each frame comprises a hundred MPX or more, with pixels as small as 0.5um (1.0um is typical for cellphones; 4um for a 60MP FF sensor). There is no time required for conversion - each pixel is either zero (no photon arrived) or one (a photon arrived). The limit on frame rate is the rate at which the output of the comparators can be read and shipped to the next processing element. Nothing of the sort we're accustomed to with today's cameras.
You're right that a dedicated computer is absolutely critical - but that's what the GPU inside each of our cameras already is. Photon counting sensors use exactly the same type of processor, and because it's dedicated it can be optimized, far more than a general purpose GPU can, and much more so than a general purpose CPU. The limit on performance, primarily, would be the effect of heat generated by the computational layer of a stacked sensor on the noise level of the photon sensing layer and the power demands of the stack.
It's tricky to extrapolate what is possible with a radically different architecture from results obtained with a standard integrating-well sensor equipped ILC and a PC.
The next 5 years in imaging will be rather fascinating to experience.
We will see. I'm using microscope objectives and shooting bursts at the moment. I looked at the a93 and even its stacked sensor is not capable of what I'm shooting, which is live spiders: it can only shoot 160 images before the buffer is full and takes 11 seconds to clear, while I'm shooting 11 frames per second now with instant clearing on both my a7iv and a6700. I also need large pixels - 5um is optimum - for the best result, as even shooting at ISO 100 you can clearly see the difference, not only from a noise perspective but diffraction as well; so far the a7iv has produced the cleanest images of any camera I have used. I haven't yet used dedicated NR programs, as I'm perfecting the process of actually capturing live subjects - where I'm pretty much at the forefront - shooting up to 15x magnification at good working distances of 16mm.
 
Do you think we'll ever see this sort of image processing in our cameras?
Probably not. The 3 main things dedicated cameras have over smartphones are versatility, ergonomics and control over the final results.

--
Tom
 
Good post, but unless you're going to change the speed of electricity and the properties of silicon, it's not possible to match the speed of a cellphone-sized sensor.
There comes a point where both are so fast that the difference in speed doesn't matter. As a point of reference, a car that can do 0 to 60 in 3 seconds is technically faster than one that can do it in 3.1 seconds but from the driver's point of view there is no difference.
 
We will see. I'm using microscope objectives and shooting bursts at the moment. I looked at the a93 and even its stacked sensor is not capable of what I'm shooting, which is live spiders: it can only shoot 160 images before the buffer is full and takes 11 seconds to clear, while I'm shooting 11 frames per second now with instant clearing on both my a7iv and a6700. I also need large pixels - 5um is optimum - for the best result, as even shooting at ISO 100 you can clearly see the difference, not only from a noise perspective but diffraction as well; so far the a7iv has produced the cleanest images of any camera I have used. I haven't yet used dedicated NR programs, as I'm perfecting the process of actually capturing live subjects - where I'm pretty much at the forefront - shooting up to 15x magnification at good working distances of 16mm.
Just remember: all the things that you're talking about - large pixel size, buffer sizes and burst speeds, etc. - are aspects of today's integrating-well-sensor cameras. If and when QIS and QIS-like architectures take hold, the performance characteristics of an image become, in the main, determined in software that combines the time-space block of data into a "snapshot in time" equivalent. Computing and storage demands increase dramatically, but you can stuff a lot of that into a camera body, and of course you will still offload focus-stacking combination work to a separate computer system. I'm concentrating mostly on how photon-counting sensor architectures will affect more traditional single-shot photography. You're working in a very specialized area in which photon counting will give each frame better and post-adjustable IQ, but which, because of the mechanical changes between frames, is inherently slow.
 
If such a camera were ever to be built, the chances of winning photographic competitions or being awarded prizes for your work would be greatly diminished, as the authenticity of the photographs would always be viewed with the suspicion that their content has been altered.

--
A camera is just a camera. Who is behind it, matters far more.
 
It's already happening now. I'm not prepared to show my setup as it's taken 7 years to perfect; the way I look at it, no pain no gain. 😊 Look at the image I posted - not 1 like for a world-class image.
 
Olympus has been leading the pack with computational photography (Live View Highlight/Shadow, Live Composite, Live Time, Focus Stacking, HDR, High Resolution, Live ND, Live GND, Subject Detection, etc.).

Features that require interaction with the sensor will continue to be added to the camera.

For post-processing, whichever camera system produces a seamless and fast interface to a smartphone. The smartphone is already architected for installed applications and interfaces to cloud applications. This is how to bring significant processing power to post-processing.
 
Can you post some images you have taken with all these features? The last time I posted a thread on the subject, no one could even post a Live Composite image.
 
That computational features will be more and more incorporated into dedicated cameras is certain. At the moment this happens at a glacial pace, but as advanced chips become cheaper and less power-hungry, and better batteries emerge, it will speed up. For me, the less I have to focus on the technical parts of photography and video, the more I can focus on the story I want to capture. There will of course be some detours when the implemented features work against our best interest, but with the persistent pace of capitalism, the surviving solutions tend to be what people want.

There will also remain more old-fashioned ways of capturing images that will cater to a diminishing fraction of people, albeit a vocal group ;) But it is good that different choices will be offered, for everyone to find their tools of choice. What I would have wished is for people here to focus less on attacking what they don't want, and instead focus on what they like working with.
After having been at DPReview for 20+ years, I find that people are so resistant to change, instead of feeling excitement or at least refraining from spreading negative vibes.
 
