What are the current possibilities of computational photography + FF cameras?

... a camera, say, the size of a Fuji X-T30 with its 35mm f/1.4, but with the ability to duplicate FF up to 200mm
Is there a camera that can make a 50mm (equivalent) lens produce the same magnification and same resolution as a 200mm (equivalent) lens using CP alone, without relying on more than one lens to do it ... or are you asking for an X-T30-size camera with multiple lenses and sensors used simultaneously?
"a camera that can make a 50mm (equivalent) lens produce the same magnification and same resolution as a 200mm (equivalent) lens using CP alone, without relying on more than one lens to do it"
If that's what you want ... then no, you won't be getting it anytime soon in any form factor.
But we get it in smartphone cameras. Huawei's flagships are doing this, so why can't a higher-quality, larger sensor with a higher-quality lens do the same thing, only exponentially better, using the same approach?
Ignoring the skills required in-company, the other considerations are whether the larger camera has sufficient processing power, and perhaps the most important hardware issue: whether a large sensor can read out quickly enough to auto-stack without much camera movement. The Sony A9 might be the only FF camera with a readout fast enough to take, say, 10 exposures in what appears, to the photographer, to be a single instant.
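As an illustration of why that readout speed matters, here's a minimal sketch (Python/NumPy; the 10-frame count and the 100-electron signal level are made-up numbers, not A9 specifications) of the noise benefit that auto-stacking is chasing:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical static scene: 100 electrons per pixel, with Poisson
# shot noise in each of 10 fast consecutive readouts.
scene = np.full((4, 4), 100.0)
frames = [rng.poisson(scene).astype(float) for _ in range(10)]

single = frames[0]
stacked = np.mean(frames, axis=0)  # real stacking must also align the frames

print("single-frame noise (std):", round(float((single - scene).std()), 2))
print("10-frame stack noise (std):", round(float((stacked - scene).std()), 2))
# Expect roughly a sqrt(10) ~ 3.2x reduction; the faster the readout,
# the less inter-frame motion the alignment step has to correct.
```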
 
... a camera, say, the size of a Fuji X-T30 with its 35mm f/1.4, but with the ability to duplicate FF up to 200mm
Is there a camera that can make a 50mm (equivalent) lens produce the same magnification and same resolution as a 200mm (equivalent) lens using CP alone, without relying on more than one lens to do it ... or are you asking for an X-T30-size camera with multiple lenses and sensors used simultaneously?
"a camera that can make a 50mm (equivalent) lens produce the same magnification and same resolution as a 200mm (equivalent) lens using CP alone, without relying on more than one lens to do it"
If that's what you want ... then no, you won't be getting it anytime soon in any form factor.
But we get it in smartphone cameras. Huawei's flagships are doing this
Which Huawei flagships do that? Not the P30 Pro (unless DPR's report on it is wrong):

"The P30 Pro offers a 125mm equivalent periscope-style tele lens and uses image fusion and other computational methods for seamless zooming between the 16mm equivalent focal length of the camera's super-wide-angle module, the 27mm primary module and the tele."

The phone has three camera modules plus a time-of-flight laser sensor.

Even with all that hardware, the P30 Pro's 5x zoom IQ sucks. It definitely does not maintain the same usable resolution as the normal lens. Let's not even discuss the miserable 10x zoom IQ.
so why can't a higher-quality, larger sensor with a higher-quality lens do the same thing, only exponentially better, using the same approach?
Again, which single-lens, single-sensor smartphones are you talking about that can do exactly what I described?
 
To be honest, aside from better multiple-exposure blending, I don't think computational photography can do much for larger cameras, because:
  1. Real depth of field and bokeh are already much higher quality with lenses for larger sensors
  2. Resolution on larger-format cameras, even on Micro Four Thirds, is already good enough for most people
  3. Night-sight/fake lighting (as in some portrait-mode features on phones) is rather ugly, and it seems antithetical to photography, whose whole purpose is to chase great light
There's a lot of hype around computational photography because it has enabled small-sensor phones to get shots that were not possible before, shots that still cover only a very narrow slice of what a dedicated camera can do; for larger cameras the possibilities are quite underwhelming.

Also, it is true that there is always some manipulation going on when we take pictures, but in my mind the level of manipulation that machine learning might one day make possible is outside the realm of photography.

I do think there are advancements to be made, such as more adept autofocus, especially with regard to action and wildlife, but I don't consider that computational photography.
 
To be honest, aside from better multiple-exposure blending, I don't think computational photography can do much for larger cameras, because:
  1. Real depth of field and bokeh are already much higher quality with lenses for larger sensors
But I see this being surpassed in the near future. One thing to keep in mind is that the ILC is not actually capturing a true 3D rendering of the scene, whereas the smartphone is actually 3D-mapping it, so it's just a matter of time until someone perfects the rendering side. You will then have total control of DoF from any distance or angle without the need to change lenses.
  2. Resolution on larger-format cameras, even on Micro Four Thirds, is already good enough for most people
  3. Night-sight/fake lighting (as in some portrait-mode features on phones) is rather ugly, and it seems antithetical to photography, whose whole purpose is to chase great light
The main purpose of photography is to capture a moment. In the distant past lighting was an obstacle due to a lack of technology, but we can now manipulate and create light in order to recreate the moment on the medium of choice.
There's a lot of hype around computational photography because it has enabled small-sensor phones to get shots that were not possible before, shots that still cover only a very narrow slice of what a dedicated camera can do; for larger cameras the possibilities are quite underwhelming.
To be fair, computational photography is playing a major role in the innovations we are seeing with ILCs as well; the manufacturers are just holding back because it would destroy the outdated philosophy and business model they have. It's like the internal combustion engine, an outdated technology that only survives because of a monopoly, and Tesla is trying to break through that.
Also, it is true that there is always some manipulation going on when we take pictures, but in my mind the level of manipulation that machine learning might one day make possible is outside the realm of photography.
The same was said when we went from film to digital. I think a whole other level of photography will open up, just as it has whenever a new technology emerges.
I do think there are advancements to be made, such as more adept autofocus, especially with regard to action and wildlife, but I don't consider that computational photography.
I feel like this contradicts your overall reasoning. You're saying that photography is ruined by automation, but isn't that what AF is? I mean, Sony's current AF tech has more or less taken the key skill out of wildlife photography.
 
But I see this being surpassed in the near future. One thing to keep in mind is that the ILC is not actually capturing a true 3D rendering of the scene, whereas the smartphone is actually 3D-mapping it, so it's just a matter of time until someone perfects the rendering side. You will then have total control of DoF from any distance or angle without the need to change lenses.
You mean the Lytro Light Field camera? It's dead.

https://www.bhphotovideo.com/c/product/1046808-REG/lytro_illum_light_field_digital.html

I don't even know what you mean by "3D rendering" or "3D mapping". Currently, the way to actually map an object to a 3D model is to use a 3D laser scanner or to take multiple photos from different angles. I don't know if you can do that with an open-space scene yet ...


Do you have a reference of what you're talking about?
 
But I see this being surpassed in the near future. One thing to keep in mind is that the ILC is not actually capturing a true 3D rendering of the scene, whereas the smartphone is actually 3D-mapping it, so it's just a matter of time until someone perfects the rendering side. You will then have total control of DoF from any distance or angle without the need to change lenses.
You mean the Lytro Light Field camera? It's dead.

https://www.bhphotovideo.com/c/product/1046808-REG/lytro_illum_light_field_digital.html

I don't even know what you mean by "3D rendering" or "3D mapping". Currently, the way to actually map an object to a 3D model is to use a 3D laser scanner or to take multiple photos from different angles. I don't know if you can do that with an open-space scene yet ...


Do you have a reference of what you're talking about?
The iPhone does this with the front-facing camera using infrared. It was a technology designed for AR but is also being used for portraiture.
 
I don't even know what you mean by "3D rendering" or "3D mapping". Currently, the way to actually map an object to a 3D model is to use a 3D laser scanner or to take multiple photos from different angles. I don't know if you can do that with an open-space scene yet ...


Do you have a reference of what you're talking about?
The iPhone does this with the front-facing camera using infrared. It was a technology designed for AR but is also being used for portraiture.
I think you're confused. The IR camera is short-range, only there to recognize the face/head right in front of it. I don't think it can scan a whole room, let alone open space ...
 
I don't even know what you mean by "3D rendering" or "3D mapping". Currently, the way to actually map an object to a 3D model is to use a 3D laser scanner or to take multiple photos from different angles. I don't know if you can do that with an open-space scene yet ...


Do you have a reference of what you're talking about?
The iPhone does this with the front-facing camera using infrared. It was a technology designed for AR but is also being used for portraiture.
I think you're confused. The IR camera is short-range, only there to recognize the face/head right in front of it. I don't think it can scan a whole room, let alone open space ...
It 3D-maps the person, and the main camera provides the depth of the scene around them.

The 2020 models are rumored to have a time-of-flight sensor which will send out a pulse to get actual depth measurements of the entire scene. The Huawei P30 Pro already has a ToF sensor, which is why its portrait mode is one of the most accurate to date.
 
It 3D-maps the person, and the main camera provides the depth of the scene around them.
That's just separating the close-up person from the scene. What about the chair or the table behind the person, or the vase of flowers on the table a little further back; do you just blur them all by the same amount?
The 2020 models are rumored to have a time-of-flight sensor which will send out a pulse to get actual depth measurements of the entire scene. The Huawei P30 Pro already has a ToF sensor, which is why its portrait mode is one of the most accurate to date.
I totally forgot that we've been focusing on just one use case, and most smartphones are mainly for posting selfies on social media anyway.

ToF sensors are nothing new, and the resolution of such a sensor won't be as high as the main camera's, so mapping the out-of-focus areas will not be pixel-accurate. And then, there's nothing preventing camera makers from adding a ToF sensor to a camera, especially since ToF can help AF in very low light. Perhaps the collected depth map could also be saved as a separate file or embedded in the RAW for other tasks/effects (portrait mode!). They've slowly added Bluetooth, Wifi, GPS, ... to cameras over the years, so I do expect more technology to be added if it proves useful.
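To make the embedded-depth-map idea concrete, here's a hedged sketch (Python with NumPy/SciPy; the resolutions, the 6x upsampling factor, the subject distance and the three blur bands are all invented for illustration) of how a coarse ToF map could drive a portrait-style blur, and where the pixel-accuracy problem enters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

rng = np.random.default_rng(1)
image = rng.random((1200, 1800))          # stand-in for the main camera frame
depth_lr = rng.random((200, 300)) * 5.0   # depth in metres at ToF-like resolution

# The coarse map must be upsampled 6x to match the image; subject/background
# edges get smeared right here, hence the imperfect masking.
depth = zoom(depth_lr, 6, order=1)

focus_m = 1.5  # assumed subject distance
# Blur strength grows with distance from the focal plane: a crude
# synthetic-aperture model, not any particular vendor's algorithm.
sigma = np.clip(np.abs(depth - focus_m) * 4.0, 0.0, 12.0)

# Cheap approximation common in portrait modes: pre-blur a few layers
# and composite them by depth band, not a true per-pixel kernel.
result = np.zeros_like(image)
for lo, hi in [(0.0, 2.0), (2.0, 6.0), (6.0, 12.1)]:
    layer = gaussian_filter(image, sigma=(lo + hi) / 2)
    mask = (sigma >= lo) & (sigma < hi)
    result[mask] = layer[mask]
```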
 
It 3D-maps the person, and the main camera provides the depth of the scene around them.
That's just separating the close-up person from the scene. What about the chair or the table behind the person, or the vase of flowers on the table a little further back; do you just blur them all by the same amount?
The 2020 models are rumored to have a time-of-flight sensor which will send out a pulse to get actual depth measurements of the entire scene. The Huawei P30 Pro already has a ToF sensor, which is why its portrait mode is one of the most accurate to date.
I totally forgot that we've been focusing on just one use case, and most smartphones are mainly for posting selfies on social media anyway.
"I see your true colors shining through."
ToF sensors are nothing new, and the resolution of such a sensor won't be as high as the main camera's, so mapping the out-of-focus areas will not be pixel-accurate.
And using glass and metal to make lenses is also nothing new, but oh, every year they improve. Amazing how innovative we humans are.
And then, there's nothing preventing camera makers from adding a ToF sensor to a camera, especially since ToF can help AF in very low light. Perhaps the collected depth map could also be saved as a separate file or embedded in the RAW for other tasks/effects (portrait mode!). They've slowly added Bluetooth, Wifi, GPS, ... to cameras over the years, so I do expect more technology to be added if it proves useful.
Yay, you finally decided to rejoin the actual discussion and stop making this a smartphone-vs-ILC debate. Thank you.

Someone else mentioned that the reason we don't see these innovations in ILCs is that it would basically be seen as the manufacturers throwing in the towel, like if Apple decided to create an open system. I think they were right.
 
I don't even know what you mean by "3D rendering" or "3D mapping". Currently, the way to actually map an object to a 3D model is to use a 3D laser scanner or to take multiple photos from different angles. I don't know if you can do that with an open-space scene yet ...


Do you have a reference of what you're talking about?
The iPhone does this with the front-facing camera using infrared. It was a technology designed for AR but is also being used for portraiture.
I think you're confused. The IR camera is short-range, only there to recognize the face/head right in front of it. I don't think it can scan a whole room, let alone open space ...
It 3D-maps the person, and the main camera provides the depth of the scene around them.

The 2020 models are rumored to have a time-of-flight sensor which will send out a pulse to get actual depth measurements of the entire scene. The Huawei P30 Pro already has a ToF sensor, which is why its portrait mode is one of the most accurate to date.
This can be done with two pics too; actually, it has been done for over 100 years. But it is always very inaccurate. You cannot escape the fact that in 1/250 of a second there's no way to measure some additional thing without a worse measurement of something else. To do both you need more time (or a larger sensor) ... and unfortunately, most things move, including your own arm.
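For what it's worth, the century-old two-picture method reduces to a single triangulation formula, and a quick sketch (Python; the focal length and baseline are illustrative, not from any real device) shows exactly the inaccuracy being described: a fixed matching error costs little up close and a lot at distance:

```python
# Stereo triangulation: depth = focal_length * baseline / disparity.
F_PX = 2800.0       # focal length in pixels (assumed)
BASELINE_M = 0.012  # 12 mm between the two viewpoints (assumed)

def depth_m(disparity_px: float) -> float:
    return F_PX * BASELINE_M / disparity_px

# The same +/-0.5 px matching error at a large and a small disparity:
for d in (40.0, 4.0):
    span = depth_m(d - 0.5) - depth_m(d + 0.5)
    print(f"disparity {d:>4} px -> depth {depth_m(d):.2f} m, "
          f"+/-0.5 px error spans {span:.2f} m")
```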
 
I don't even know what you mean by "3D rendering" or "3D mapping". Currently, the way to actually map an object to a 3D model is to use a 3D laser scanner or to take multiple photos from different angles. I don't know if you can do that with an open-space scene yet ...


Do you have a reference of what you're talking about?
The iPhone does this with the front-facing camera using infrared. It was a technology designed for AR but is also being used for portraiture.
I think you're confused. The IR camera is short-range, only there to recognize the face/head right in front of it. I don't think it can scan a whole room, let alone open space ...
It 3D-maps the person, and the main camera provides the depth of the scene around them.

The 2020 models are rumored to have a time-of-flight sensor which will send out a pulse to get actual depth measurements of the entire scene. The Huawei P30 Pro already has a ToF sensor, which is why its portrait mode is one of the most accurate to date.
This can be done with two pics too; actually, it has been done for over 100 years. But it is always very inaccurate. You cannot escape the fact that in 1/250 of a second there's no way to measure some additional thing without a worse measurement of something else. To do both you need more time (or a larger sensor) ... and unfortunately, most things move, including your own arm.
I can get to work by walking, or I can just use Skype and Google Suite and stay home. You know what else moves? Innovation.
 
It 3D-maps the person, and the main camera provides the depth of the scene around them.

The 2020 models are rumored to have a time-of-flight sensor which will send out a pulse to get actual depth measurements of the entire scene. The Huawei P30 Pro already has a ToF sensor, which is why its portrait mode is one of the most accurate to date.
This can be done with two pics too; actually, it has been done for over 100 years. But it is always very inaccurate. You cannot escape the fact that in 1/250 of a second there's no way to measure some additional thing without a worse measurement of something else. To do both you need more time (or a larger sensor) ... and unfortunately, most things move, including your own arm.
I can get to work by walking, or I can just use Skype and Google Suite and stay home. You know what else moves? Innovation.
This is what I am saying. ToF has existed since the time the original Star Wars was released; stereoscopy, for over 100 years. These are innovations of long ago. However, a mobile-camera ToF sensor, given the size and power constraints, has a resolution of about 300x200 if you're lucky. Can you increase it? Sure ... you will need the kind of space that a FF camera body may provide. The tiny lidar sensor on a phone will always be crappy, because it is still a tiny sensor trying to infer where light is coming from. You basically have the same problem ... the tiny sensor.

However, the real innovations are the other uses of lidar: authentication, object recognition, self-driving, gesture interfaces, and so many others.

On the other hand, ToF is a kind of camera in itself. Instead of color and light intensity at x:y positions, it cares about depth at x:y positions. It still needs a sensor, and a tiny phone will always have a tiny ToF sensor and only a tiny amount of light it can emit.

Physics is so stubborn ... can't it just reward the faith of the small-minded faketographers?
 
With what we are seeing being done with smartphone cameras and computational photography, is it possible for a camera manufacturer to create a camera, say, the size of a Fuji X-T30 with its 35mm f/1.4, but with the ability to duplicate FF up to 200mm while also giving the DoF control and low-light performance of a FF 50/1.4?
DOF control: my pictures are from this app, so you can try out other values if you like.

To compare DOF control you must use comparable pictures. Here I've assumed a head and shoulders portrait. I set the app to maintain the same field of view to ensure comparability.

Top is FF 50mm f/1.4, bottom FF 200mm f/2 (the values you are asking about). The same FOV needs different subject distances for the two sensor sizes and focal lengths. The DOF is shown at the bottom (I've ringed it in blue): 2.0cm for the 50mm version, 2.4cm for the 200mm. That's without any computational photography, just the natural results from the two lenses. The natural control of the 50/1.4 is already greater than that of the 200/2, so what would computational photography add?
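Those figures are easy to sanity-check with the standard thin-lens DOF formulas. The sketch below (Python) takes the stated focal lengths and apertures at face value on FF, with an assumed 0.030mm circle of confusion and an assumed 0.8m/3.2m distance pair for equal framing; the app clearly uses its own CoC and distances, so expect numbers in the neighbourhood of, not identical to, its 2.0/2.4cm readouts:

```python
# Thin-lens depth of field: far limit minus near limit, via the hyperfocal.
def dof_mm(f_mm: float, n: float, u_mm: float, coc_mm: float = 0.030) -> float:
    h = f_mm ** 2 / (n * coc_mm) + f_mm               # hyperfocal distance
    near = u_mm * (h - f_mm) / (h + u_mm - 2 * f_mm)  # near limit
    far = u_mm * (h - f_mm) / (h - u_mm)              # far limit
    return far - near

# Same framing on the same sensor: the 200 mm stands 4x further back.
print("50mm f/1.4 @ 0.8m:", round(dof_mm(50, 1.4, 800), 1), "mm")    # ~20 mm
print("200mm f/2  @ 3.2m:", round(dof_mm(200, 2.0, 3200), 1), "mm")  # ~29 mm
```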

[image: DOF calculator screenshot comparing the two setups]

Simulating 200mm: this is in effect just cropping, so without resorting to computation one can just crop the basic image. Going from 75mm (the effective FL of 50mm on APS-C) to 200mm is a factor of 0.375 linearly and 0.14 by area, which would yield 3.65MP from the X-T30 sensor.
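That arithmetic is easy to verify (Python; the roughly 26MP X-T30 pixel count is assumed here, the post doesn't state it):

```python
linear = 75 / 200    # 0.375: the linear crop factor, as stated
area = linear ** 2   # ~0.14 of the frame area survives the crop
mp = 26.0 * area     # assumed ~26 MP for the X-T30
print(round(linear, 3), round(area, 3), round(mp, 2))  # 0.375 0.141 3.66
```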

This is a roughly comparable crop from a much older, lower MP camera yielding 1.35MP.

What do you think computational photography would add?

[image: comparable crop from the older camera]

ISO50: the X-T30 has a minimum ISO of 80 and a two-shot multi-exposure mode, so it can already give an effective ISO 40 without any computation beyond what the camera already has. And, as I've said elsewhere, many cameras can already blend a lot more than two shots.
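The ISO arithmetic there is just light addition; a quick idealized check (ignoring alignment and read noise):

```python
# Two full ISO 80 exposures gather twice the light of one, which is what
# a single ISO 40 shot of twice the duration would have captured.
base_iso, n_shots = 80, 2
print("effective ISO:", base_iso / n_shots)                       # 40.0
print("ideal noise improvement:", round(n_shots ** 0.5, 2), "x")  # 1.41x
```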

So the answer to your basic question is that computational photography of the kind used in phones can't even match what the cameras do naturally, let alone improve on it.

--
Gerry
First camera 1953, first Pentax 1985, first DSLR 2006
 
I feel like this contradicts your overall reasoning. You're saying that photography is ruined by automation, but isn't that what AF is? I mean, Sony's current AF tech has more or less taken the key skill out of wildlife photography.
I never once said photography is ruined by automation; you said that. I just said I don't think large-sensor cameras can make significant gains with computational photography, by which I meant the subset of automation that uses post-capture processing to produce an image. AF tech, which is pre-capture, is an entirely different story in my opinion, and even there the latest AF systems on flagship cameras are already pretty good IMO.
 
I feel like this contradicts your overall reasoning. You're saying that photography is ruined by automation, but isn't that what AF is? I mean, Sony's current AF tech has more or less taken the key skill out of wildlife photography.
I never once said photography is ruined by automation; you said that. I just said I don't think large-sensor cameras can make significant gains with computational photography, by which I meant the subset of automation that uses post-capture processing to produce an image. AF tech, which is pre-capture, is an entirely different story in my opinion, and even there the latest AF systems on flagship cameras are already pretty good IMO.
I think the elephant in the room is that we humans "see" in order to "do", and computers can now "see" too. The emphasis on great reproduction is fine, and it's sufficiently well achieved by most cameras, but cameras themselves cannot "see". The closest we have is Eye AF, whose sole purpose is deciding where to focus. There are way, way more photos being taken every day than 10 years ago, but they are not all for displaying to a human. Visual information can now be processed and interpreted; if anything, this is the era where photos are produced for computers to do things on our behalf. The applications are endless, but the summary remains the same: when computers can "see" and understand, photography is not just for us.
 
I feel like this contradicts your overall reasoning. You're saying that photography is ruined by automation, but isn't that what AF is? I mean, Sony's current AF tech has more or less taken the key skill out of wildlife photography.
I never once said photography is ruined by automation; you said that. I just said I don't think large-sensor cameras can make significant gains with computational photography, by which I meant the subset of automation that uses post-capture processing to produce an image. AF tech, which is pre-capture, is an entirely different story in my opinion, and even there the latest AF systems on flagship cameras are already pretty good IMO.
I agree that CP and image enhancement have much less effect if the data quality is already good. An FF image at low ISO looks pretty amazing already, and many of the defects are below visible levels at normal viewing distances.

It's much easier to make a slow car 50% faster than to make the same improvement in a car that's already fast.

However, machine learning is already improving techniques for image upscaling, sharpening and noise reduction (check out the Topaz tool suite).

I think there are possibilities to make better high ISO images, for instance, with less obvious smearing, as well as better anti-aliasing and moire corrections.

However, why add these to the camera when they are already available on a PC or Mac if we need them? We don't have to keep paying for them each time we buy a new camera, and a PC has the number-crunching power to work with much larger images and produce much better results if we are prepared to wait. Topaz AI Gigapixel will get the fans running at full blast for several minutes on a large image; this would be impossible on a camera or phone processor.
 
Why? Smartphones have to resort to that because it would be kind of difficult to fit a zoom lens - say an 18-135mm - in them...
 
This is what I am saying. ToF has existed since the time the original Star Wars was released; stereoscopy, for over 100 years. These are innovations of long ago. However, a mobile-camera ToF sensor, given the size and power constraints, has a resolution of about 300x200 if you're lucky. Can you increase it? Sure ... you will need the kind of space that a FF camera body may provide. The tiny lidar sensor on a phone will always be crappy, because it is still a tiny sensor trying to infer where light is coming from. You basically have the same problem ... the tiny sensor.

However, the real innovations are the other uses of lidar: authentication, object recognition, self-driving, gesture interfaces, and so many others.

On the other hand, ToF is a kind of camera in itself. Instead of color and light intensity at x:y positions, it cares about depth at x:y positions. It still needs a sensor, and a tiny phone will always have a tiny ToF sensor and only a tiny amount of light it can emit.

Physics is so stubborn ... can't it just reward the faith of the small-minded faketographers?
I was at the megamall the other day; beautiful 3D scene with layers of lights hanging from the ceiling. My 18-55 f/2.8-4 failed miserably at capturing that depth and ended up with a big mess of bright dots. The 56/1.2 did a lot better, which also made me think this is one scenario where a ToF sensor would struggle, and I'm not sure how advanced an algorithm would need to be to simulate the bokeh.



[image: mall ceiling lights, 18-55mm sample]





[image: mall ceiling lights, 56mm f/1.2 sample]
 
However, a mobile-camera ToF sensor, given the size and power constraints, has a resolution of about 300x200 if you're lucky.
I was at the megamall the other day; beautiful 3D scene with layers of lights hanging from the ceiling. My 18-55 f/2.8-4 failed miserably at capturing that depth and ended up with a big mess of bright dots. The 56/1.2 did a lot better, which also made me think this is one scenario where a ToF sensor would struggle, and I'm not sure how advanced an algorithm would need to be to simulate the bokeh.
Great sample pictures of a very challenging scene. Is this a Fuji camera and zoom? If so, it is a very good zoom. This scene is the typical nightmare: notice how the lights have red halos in one layer, then develop green halos. That's longitudinal chromatic aberration. In the other photo, you see the onion-like rings in each specular light; this is probably from the multiple aspherical elements of the very, very fast lens you've got. They all have this pattern.

A mobile phone's attempt to fake bokeh would be even more fun, although the one thing it would get right is the slower f-stop. It will never be able to fake bokeh on something like this; it would be a disaster.
 
