Is Fujifilm ever going to fix their app?

If Fuji doesn’t drastically upgrade all their software, computational photography will devour them. That’s the present and the future.
It might go a bit slower if they at least tried to move with the technology. We all remember Kodak, don't we?

Anyhow, I tend to edit my stuff on another device, but my wife likes to send photos directly to her phone etc.

I would be the first in line if they put the X100V sensor in a phone 😄
 

I don’t think Fuji’s pockets are deep enough to compete on the computational stuff.

The iPhone continually amazes me with how well it can nail an exposure.

The only things keeping me from shooting on an iPhone exclusively are fake-ish portraits and overly cooked JPGs/HEICs. If they work those out, then my days of purchasing dedicated cameras may be over.
 
Apple and Google are knee deep in robotics and computer vision. They are not developing "computational photography" for someone shooting snapshots on their cameras. They have poured truckloads of money into the technology for things like robots, self-driving cars, etc. That's where the big bucks are. Of course, if there is a spin-off that can make the small-sensor camera on the phone look better, then by all means they will use it. But no one is going to spend the kind of money going into computer vision on a camera unless that camera is part of a robot, self-driving car, self-driving tank, self-driving attack drone, etc. There is not enough return to pay for it.

The cruise control on my wife's new car will not only keep the car at a given speed, it will maintain a fixed distance from the car in front and keep the car between the lines. It will steer the car to do that. That is the use of computational photography: to support that type of automation through computer vision and by fusing multiple cameras and other sensors. The US Department of Defense is pouring tons of money into these technologies.

https://en.wikipedia.org/wiki/DARPA_Robotics_Challenge
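To make the sensor-fusion idea above a bit more concrete, here is a toy sketch in Python. Nothing in it comes from any real vehicle stack; the gains, distances and the function name are all invented for illustration.

```python
# Toy illustration only: fuse a camera-derived lane offset with a radar gap
# measurement into steering and speed corrections. All constants are made up.

def fuse_and_control(lane_offset_m, radar_gap_m, speed_mps,
                     target_gap_m=30.0, k_steer=0.8, k_gap=0.1):
    """Return (steering_correction, speed_command) from two sensor readings."""
    # Lane keeping: steer proportionally against the lateral offset the
    # camera pipeline estimated from the lane markings.
    steering = -k_steer * lane_offset_m

    # Adaptive cruise: close or open the gap to the lead car measured by radar.
    gap_error = radar_gap_m - target_gap_m
    speed_command = speed_mps + k_gap * gap_error
    return steering, speed_command


if __name__ == "__main__":
    # Drifting 0.4 m left of centre, 22 m behind the lead car, doing 27 m/s.
    steer, speed = fuse_and_control(-0.4, 22.0, 27.0)
    print(f"steering correction {steer:+.2f}, speed command {speed:.1f} m/s")
```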


Computer vision is the foundation of the robotics of the future.
 
Yeah, I have to disagree with that.

It would be a really bad idea to use AI-enhanced imagery for any military decision, especially if it produces artifacts etc. These algorithms are only useful for beautifying consumer photos.

I can't speak for Apple, but I have read the development diary for the Google Camera.

The algorithms Google uses were made with the Google Camera in mind, not as a byproduct of the stuff you are talking about. Tell me, why would a military drone need fake background blur on its images?

The Night Sight algorithm was made by a guy in his basement to improve smartphone photos, and then it was refined by Google. It was not designed military-first.

I understand your comment, but imho it doesn't apply here.

Computer vision and computational photography for smartphone cameras don't come from the same place. Sure, there are some algorithms that can be applied to both, but a lot of smartphone algorithms are useless for the military or robotics.
 
Of course it does. The first step in any algorithm to soften the background is locating and identifying the objects one wants to keep sharp and establishing their outlines and boundaries, so that they can be enhanced or the rest softened. Those are exactly the techniques developed for military applications and robotics. The basic statistical and mathematical foundations go back to long before smartphones were a gleam in Steve Jobs' eye.
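For what it's worth, that sequence, outline the subject first, then soften everything else, looks roughly like this in code. This is only a sketch: OpenCV's generic GrabCut segmenter stands in for whatever Apple and Google actually use, and the file name and subject rectangle are hypothetical.

```python
import cv2
import numpy as np

def fake_portrait(image_bgr, subject_rect, blur_ksize=31):
    """Blur everything outside a roughly located subject (sketch only)."""
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)

    # Step 1: locate/outline the subject -- the part shared with generic
    # computer-vision segmentation. GrabCut is only a stand-in segmenter here.
    cv2.grabCut(image_bgr, mask, subject_rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    subject = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)

    # Step 2: soften the rest and composite the sharp subject back on top.
    blurred = cv2.GaussianBlur(image_bgr, (blur_ksize, blur_ksize), 0)
    return np.where(subject[..., None] == 1, image_bgr, blurred)

# Hypothetical usage:
# out = fake_portrait(cv2.imread("photo.jpg"), subject_rect=(100, 50, 400, 600))
```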

The Hough transform dates back to the 1960s and the Fourier transform to the 1800s. The structure tensor is a foundational concept in image processing. The impetus for image processing actually dates back to analyzing data from the cloud chambers and bubble chambers used to detect elementary particles in physics; the discovery of the W and Z bosons in the early 1980s came from computer analysis of the images those detectors produced.

https://en.wikipedia.org/wiki/Bubble_chamber
These techniques came out of military applications such as missile guidance, target detection and identification, navigation of pilotless vehicles, and ISR (intelligence, surveillance and reconnaissance), and they have been under development since the early 1950s. The University of New Mexico established EDAC (the Earth Data Analysis Center) in 1964 under the sponsorship of NASA to transfer government-developed image processing technology to a wider audience. ERIM was established at the University of Michigan by the US Air Force with the same goal. These have been around a lot longer than tools such as TensorFlow.

What Apple and others did was take a vast array of well-established techniques and implement them in their devices as the processing technology permitted. However, those techniques have been around for a long, long time.
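As a small illustration of how accessible those old building blocks are today, the Hough transform mentioned above is a few lines of OpenCV (the input file name here is just a placeholder):

```python
import cv2
import numpy as np

img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)   # placeholder image
edges = cv2.Canny(img, 50, 150)                       # classic edge detection

# Probabilistic Hough transform: returns line segments as (x1, y1, x2, y2).
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 80,
                        minLineLength=50, maxLineGap=10)
print(f"found {0 if lines is None else len(lines)} line segments")
```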
 
It is ridiculous. How expensive are one or two Java developers? Think of all the younger customers who are used to smartphones and will try out these apps...
 
You proved me wrong.

But I'll still stand by my point that nowadays Google and Apple also have units that work strictly on smartphone image processing and don't care about selling their work to the military. There's more money in that for some applications.
 
