Fujifilm & Computational Photography

I have heard that Fuji might come out with computational photography along with the X-H2. What exactly does that mean? Is it a hardware thing, a software thing, or both? I honestly haven't been paying much attention to this subject, but what does computational photography have to do with Fujifilm?

Are they the only ones that will have that technology? Does it mean I could do it with my old rinky-dink X-T1, or would it mean buying the latest and greatest Fujifilm camera? What about other camera brands? Can someone explain this in plainer language so I can get a better idea?

Thank you :-)
 
Computational photography is what all the new cell phones do these days to get the amazing photos they take. The phones are powerful computers, and with that power they can do things with images that ordinary cameras can't, like taking lovely pics in the dark by rapidly merging and color-matching 10, 20 or more frames almost instantaneously.

I think camera makers were caught off guard by the power of computational photography. They didn't see it coming, and now they are struggling because phone cameras have gotten so good.

You'll need a new camera that isn't made yet. Your camera doesn't have the horsepower; none of them do yet. Hopefully all the camera makers are working on this. It should be the next big thing for all mirrorless cameras. It could lead to dramatically better photos in certain instances.
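For anyone curious, here is a minimal sketch of the merging idea, assuming you already have a burst of aligned frames as NumPy arrays (alignment, color matching and tone mapping are the hard parts that phones do in dedicated silicon; this only shows the averaging step):

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of already-aligned frames to cut shot noise.

    Averaging N frames reduces random noise by roughly sqrt(N),
    which is the core trick behind phone "night modes".
    `frames` is a list of HxWx3 uint8 arrays of the same scene.
    """
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    merged = stack.mean(axis=0)  # temporal average across the burst
    return np.clip(merged, 0, 255).astype(np.uint8)
```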
 
Thanks miketala. Yes, I understand what computational photography is; that's not quite what I asked, though. In your last paragraph you mention "a new camera that isn't made yet," so I can see how the camera will need to be pretty "powerful." Then again, cell phones are soooooooo darn small, yet they are already so powerful.

How would computational photography work? Special in-camera software only? Would it be designed specifically for one particular camera brand? And what about our existing RAW converters, how would they work alongside it? Is all this a complete mystery, or are some of you already familiar with it?
 
You'll need a new camera that isn't made yet. Your camera doesn't have the horsepower; none of them do yet. Hopefully all the camera makers are working on this. It should be the next big thing for all mirrorless cameras. It could lead to dramatically better photos in certain instances.
I don't know about Fuji, but my wild guess is that if it happens, we would see something from Sony first.

But they do feel the heat - and a lot of it.

There are two options: either they abandon most of the non-pro market and focus on the pro market only, or they try to keep up with cellphone tech - which would be impossible.

Fuji, Sony, Pentax, Canon, etc. were fine when they competed among themselves, but competing with Apple and Samsung - that's a different story.

Let's face it: for most non-pros, a dedicated camera is hugely redundant or outright undesirable when they already carry a phone with a really good camera and far better connectivity. Snap a photo and post it on Instagram within 2 seconds, crazy filters and all.

I would not be able to make my daughter carry a normal camera even if I paid her. And she has an iPhone 6.

I bought a GoPro 9 and it produces buttery-smooth stabilization in 4K, including horizon lock, without any mechanical parts and on an SoC that is already old. I honestly like taking it out for random video much more than any Fuji.

Computation is the way to go - but adding it to normal cameras makes them even more expensive and power-hungry. And someone has to develop the firmware. In all these companies the software team is the smallest group they have - literally just a few people. There are probably more people working in their cafeteria.

And we know that some companies, like Sony, have a firmware update track record that is just abysmal for the majority of models. It is mostly "buy a new camera if you want the update."

That wouldn't work with the computational crowd... I am not burying them yet, but even in my local Best Buy they kind of hide most cameras now and display a few A7s and some Canons, and that's mostly it. I am not sure they even carry Fuji anymore.

The local Henry's photo store closed down during COVID - for good. I bet more will be closing. I don't even check what's new in Fuji cameras, or cameras in general. My T3 far surpasses my need for a big system camera. There is nothing that would make me plunk down $1,500 for something new - in the last 3-4 years all the new features have been just window dressing. Cameras have peaked. But honestly, I am looking at that iPhone 13 Max....

So there is more to it than just that. And with COVID, sales of non-pro cameras basically tanked.
 
You'll need a new camera that isn't made yet. Your camera doesn't have the horsepower; none of them do yet. Hopefully all the camera makers are working on this. It should be the next big thing for all mirrorless cameras. It could lead to dramatically better photos in certain instances.
I don't know about Fuji, but my wild guess is that if it happens, we would see something from Sony first.
So you don't think the X-H2 will be anywhere near capable of that?
Let's face it: for most non-pros, a dedicated camera is hugely redundant or outright undesirable when they already carry a phone with a really good camera and far better connectivity. Snap a photo and post it on Instagram within 2 seconds, crazy filters and all.
I completely understand that, because I have started doing the same. These days, whenever we go out on a trip, I hardly ever bother taking my camera(s), because we all have pretty darn decent cell phones, so what's the point?

Years ago, when I was still in the Nikon DX forum, I said that they HAVE to do something about cell phone/internet capabilities. Most people laughed at me and called it dumb, and I'm actually REALLY surprised that no one, especially the big camera manufacturers, has done anything about it. Yeah, there is no competition now. Oh well, that's their own fault.

The cell phone industry has created "zombies," and people are literally possessed by them. Everyone is looking down all the time; it would be really easy to rob a bank these days. I guess maybe it was a good thing those features were not put into our cameras? :-)
 
I’ll take real bokeh and real Gaussian blur over simulated blur any day.

I want to be in control and involved in the process of making a photo rather than an algorithm and computer.



Otherwise, why do photography?
 
I see you have been paying a lot of attention to your Z9 preorder 😉. I am really surprised computational photography has suddenly become an alien subject to you 🤭... Hope someone with the latest cell phone can help us understand what it is.
 
Sorry, you completely lost me there. What Z9 preorder?
 
I’ll take real bokeh and real Gaussian blur over simulated blur any day.

I want to be in control and involved in the process of making a photo rather than an algorithm and computer.

Otherwise, why do photography?
That is so true.
 
I’ll take real bokeh and real Gaussian blur over simulated blur any day.

I want to be in control and involved in the process of making a photo rather than an algorithm and computer.

Otherwise, why do photography?
Yep, I'm with you on that. I hope to see Fujifilm improve by making the cameras smarter in ways that help photographers: better AF, subject detection, etc.

I'm not into that stupid Luminar-type stuff like sky replacement, inserting fake sun rays, etc.

Smartphones largely have to make up for the shortcomings of their small sensors and limited lenses. Plus, they're marketed toward social media junkies; I don't think any camera manufacturer can win that battle.
 
So we are agreed that computational photography needs three things that Fujifilm doesn't currently have: fast sensors, lots of processing power, and software resources.

Given advances in those three areas, I'd like them to concentrate on other things before computational photography, such as autofocus. They could also save cost and space by getting rid of the mechanical shutter.
 
Computational photography has been around long enough that I would say it's the hardware on mirrorless systems, more than the firmware, that's been holding things back: the larger the sensor, the more computing power the AI in the phone/camera needs to process it.

This is also why I highly doubt we'll see any full-frame camera get it before an MFT/APS-C one does: each time you increase sensor size/resolution, you must also increase the amount of computing power to handle it, which is why for now it's only been a thing on phones with much smaller sensors. Not only that, but you also need lightning-fast readout from the sensor so the data actually reaches that processing power in a timely manner, and the more information the sensor is transferring, the more throughput you have to account for.
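Some rough back-of-the-envelope numbers (purely illustrative, not actual specs for any of these sensors) show how quickly the throughput requirement grows with resolution and frame rate:

```python
def readout_gbps(megapixels, bits_per_pixel, frames_per_second):
    """Raw sensor readout rate in gigabits per second, ignoring overhead."""
    return megapixels * 1e6 * bits_per_pixel * frames_per_second / 1e9

# Illustrative only: a 40MP sensor read out 20x per second vs a
# 12MP phone-sized sensor at the same bit depth and rate.
print(readout_gbps(40, 14, 20))  # ~11.2 Gbit/s
print(readout_gbps(12, 14, 20))  # ~3.4 Gbit/s
```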

Here's where things get interesting, though. Stacked sensors are actually better for increasing readout speed than for increasing resolution, since stacking frees up a lot of space that can be filled with RAM for much faster readout. Sony and Nikon have mostly been using that to increase AF performance, because the rest of the processing power simply wasn't there to add computational features as well with those larger sensors.

Pure speculation ahead

Now, with the X-H2 rumored to come in two versions (one at 26MP and one at 40MP), and with the X-H1's distinguishing feature of dual processors compared with the X-T series, it's quite possible that the 26MP version of the X-H2 will be able to perform some forms of computational photography while the 40MP one can't. Splitting things that way between the two models actually makes a lot of sense: one version of the X-H2 would be specialized for photography that wants maximum resolution, like weddings, portraiture, landscape, and the like, while the other would be specialized for photography that wants incredibly fast autofocus and frame-rate performance, like sports, action, and even video.

That's also the only reason I can think of for two separate versions of the X-H2 with fairly different sensor resolutions, since a "budget" X-H2 would just be the X-T5, and that wouldn't make a whole lot of sense unless Fuji were planning to kill off the X-T line, which again doesn't make sense.
 
I have heard that Fuji might come out with computational photography along with the X-H2. What exactly does that mean? Is it a hardware thing, a software thing, or both? I honestly haven't been paying much attention to this subject, but what does computational photography have to do with Fujifilm?
Let's see. My nearly 10-year-old Fuji X-S1 bridge camera can take multiple exposures at high ISO, align them, and make a low-noise composite image. It can simulate background blur, again by taking multiple exposures, some in focus and some not, and combining them in camera. Does the panorama mode count as computational photography? It has that too.

Oh, and it also has a powerful AI engine. It can detect and recognise people's faces and then tag photos with their names. It will trip the shutter only when the cat is looking at the camera. It optimises JPEG settings to match the current scene and it automatically selects white balance and exposure parameters. It is very clever.

Fuji had this technology a decade ago.

Modern phones can easily do all of the above. On top of that, they can also combine images from the different cameras on their back - something a single-lens, single-sensor camera cannot possibly do.

What radically new features can this "computational photography" possibly offer? HDR? Focus stacking? Super-resolution pixel shift? Fuji has most of that already, while Olympus seems to be running out of ideas for what feature to introduce next.

To me, this "computational photography" buzz sounds like a rehash of what has already been done many times over rather than a revolution. What am I missing?
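(Just to show how ordinary this kind of multi-frame trick has become, here is a minimal desktop sketch using OpenCV's exposure-fusion tools; the file names are hypothetical, and real cameras do this alignment and merging in hardware.)

```python
import cv2
import numpy as np

# Hypothetical input: three hand-held exposures of the same scene.
paths = ["under.jpg", "normal.jpg", "over.jpg"]
images = [cv2.imread(p) for p in paths]

# Align the frames to compensate for small hand-held shifts.
cv2.createAlignMTB().process(images, images)

# Exposure fusion (Mertens): blend the best-exposed parts of each
# frame directly, no tone mapping step required.
fused = cv2.createMergeMertens().process(images)  # float32 in [0, 1]
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```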
 
Good points!

I remember cameras 5-10 years ago having all these gimmicks, but then people started to associate those features with low-end cameras, and manufacturers slowly started removing them.

But suddenly you add fancy words like "machine learning" to exactly the same features, and somehow it is cool again.
 
It could lead to dramatically better photos in certain instances.
I've been around here for more than 20 years now, but in all honesty, the technical progress in that time hasn't led to dramatically better images :-D

(sorry for being a little cynical)
Better or not, today, with a 600mm tele with fast AF and a fast burst mode in the body, we can take sharp shots of a flying bird or a running football player that would have been impossible 20-30 years ago. With a small GoPro mounted on your arm, chest, or helmet you can record extreme activities that were impossible to capture before (due to cost or size). With a drone, you don't need to hire a helicopter. That's the technical advantage.

Of course, it doesn't mean we're taking better photos in general, but now it's much more convenient and affordable. (For me, one of the best things in digital photography is the ability to switch sensitivity at any time during a mountain trip, when you're visiting caves and open terrain on the same hike.)

Cheers,

Artur
 
I'm not into that stupid Luminar-type stuff like sky replacement, inserting fake sun rays, etc.
Em... is the iPhone 13's 'Cinematic Mode' also considered useless fake stuff?

IMO the Luminar concept is good, but they are unable to provide a smooth, easy workflow/UI (the performance is too slow and the UI isn't as easy to use as iPhone apps). Luminar needs to learn from Apple in terms of UI and user experience.
 
The lens profiles and film sims are computational photography. If you use a RAW converter that can turn off the lens profile, do so and enjoy the horror show!
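As a rough illustration of what a lens profile is doing computationally, here is a minimal sketch of radial distortion correction with OpenCV; the camera matrix and distortion coefficients below are made-up placeholders, not values from any real Fujifilm profile.

```python
import cv2
import numpy as np

img = cv2.imread("raw_frame.jpg")  # hypothetical input frame
h, w = img.shape[:2]

# Made-up pinhole camera matrix and distortion coefficients
# (k1, k2, p1, p2, k3); a real lens profile ships per-lens,
# per-focal-length values that play the same role.
K = np.array([[w, 0, w / 2],
              [0, w, h / 2],
              [0, 0, 1]], dtype=np.float64)
dist = np.array([-0.15, 0.05, 0.0, 0.0, 0.0])

undistorted = cv2.undistort(img, K, dist)  # remap pixels to undo the warp
cv2.imwrite("corrected.jpg", undistorted)
```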

Morris
 
Good points!

I remember cameras 5-10 years ago having all these gimmicks, but then people started to associate those features with low-end cameras, and manufacturers slowly started removing them.

But suddenly you add fancy words like "machine learning" to exactly the same features, and somehow it is cool again.
The difference, I believe, is that these gimmicks used to be explicitly programmed into the camera: the rules for focus stacking (for example) can be written down, and the camera simply follows them.

With machine learning, the algorithms have been trained on a large number of existing images rather than being written out explicitly by a programmer.

We have been using similar techniques in fraud detection and marketing for decades - but it used to require supercomputers.
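To make the "defined rules" side concrete, here is a minimal, hand-written focus-stacking rule of the kind that can be coded directly, with no training data involved (it assumes the frames are already aligned). A machine-learning pipeline would replace the Laplacian sharpness rule with weights learned from example images.

```python
import cv2
import numpy as np

def focus_stack(frames):
    """Naive rule-based focus stack: for each pixel, keep the value from
    whichever frame is locally sharpest (largest |Laplacian| response).
    `frames` is a list of aligned BGR uint8 images of the same size."""
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    sharpness = np.stack(
        [np.abs(cv2.Laplacian(g, cv2.CV_64F)) for g in grays], axis=0)
    best = np.argmax(sharpness, axis=0)      # index of sharpest frame per pixel
    stack = np.stack(frames, axis=0)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]           # HxWx3 composite image
```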
 
What radically new features can this "computational photography" possibly offer? HDR? Focus stacking? Super-resolution pixel shift? Fuji has most of that already, while Olympus seems to be running out of ideas for what feature to introduce next.
Yeah... is the above still considered new-generation computational photography? That's old stuff.

IMO, new-generation computational photography/videography should be an optical solution powered by AI.

Example 1: electronic variable ND filter

The Sony FX6 and the $399 Z Cam eND module provide an electronic variable ND filter with an Auto ND mode: the camera's AI adjusts the image brightness to the optimal level. That means the electronic variable ND filter (an optical ND powered by AI) takes care of exposure, and the user only needs to adjust depth of field manually via the aperture ring.

These electronic variable ND filters are based on liquid crystal (LC) technology licensed from LC Tech: light transmittance is controlled by an externally applied drive voltage on a transparent LC panel.


IMO it could advance into an electronic graduated variable ND filter, letting the user choose the graduation pattern via the Q menu, or with automatic graduation (AI) based on the highlights.
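Here is a minimal sketch of the control idea behind such an Auto ND mode: hold the metered brightness at a target by adjusting ND density instead of shutter or aperture. The gain and the 0-7 stop range are assumptions for illustration, not any vendor's actual behaviour.

```python
def auto_nd_step(current_density_stops, metered_ev_error, gain=0.5):
    """One step of a hypothetical auto-ND feedback loop.

    current_density_stops: current ND strength in stops (assumed 0-7 range).
    metered_ev_error: stops of overexposure in the last frame
        (positive = too bright, negative = too dark).
    Returns the new ND density to apply for the next frame.
    """
    new_density = current_density_stops + gain * metered_ev_error
    return max(0.0, min(7.0, new_density))

# Example: currently at 2 stops of ND, last frame metered 1 stop too bright.
print(auto_nd_step(2.0, 1.0))  # -> 2.5 stops of ND
```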

Example 2: DJI Ronin 4D's Automated Manual Focus (AMF) mode with LiDAR

The DJI Ronin 4D will feature three different focus modes: manual focus, autofocus, and a new Automated Manual Focus (AMF) mode.

It's similar to a car's autopilot, where the AI turns the wheel but the driver can still override it by turning the wheel manually. AMF mode will auto-track subjects (auto tracking focus) and turn the focus wheel during recording, with the option for the camera operator to jump in and manually pull focus when needed. To help in manual focus and AMF modes, there will be a LiDAR waveform available on the monitor to help cinematographers 'locate focus points and pull focus with extreme precision.'

 
