Fujifilm & Computational Photography

Batdude

I have heard that Fuji (might) come out with computational photography along with the X-H2. What exactly does that mean? Is that a hardware thing or software, or both? I honestly haven't really been paying too much attention to this subject, but what does computational photography have to do with Fujifilm?

Are they the only ones that will have that technology? Does it mean that I can do that with my old rinky dinky X-T1, or will it involve having to buy the latest and greatest Fujifilm camera? What about other camera brands? Can someone explain this in simpler language so I can get a better idea?

Thank you :-)
 
IMO the X-H2 will focus on videography, given its larger body (with a heatsink?) and stronger IBIS.

Based on past experience with Fujifilm's "good enough" smartphone connectivity apps and "really good" film simulations, I believe Fuji-style computational videography will focus on providing advanced film simulation to reduce the effort of manual color correction in DaVinci Resolve (or another video editor).

It may be similar tech to FiLMiC's image remapping technology via LUTs.


Based on the X100V's built-in ND filter, Fujifilm could build a film-based ND filter into the X-H2.

P.s. The DJI Ronin 4D has 9 stops of built-in high-quality ND filters (ND 2 to ND 512, or ND 0.3 to ND 2.7) that can be quickly and easily switched internally.
 
I have heard that Fuji (might) come out with computational photography along with the X-H2. What exactly does that mean?
It means that the marketing department at Fuji needs you to shell out some more moolah...
 
I have heard that Fuji (might) come out with computational photography along with the X-H2. What exactly does that mean? Is that a hardware thing or software, or both? I honestly haven't really been paying too much attention to this subject, but what does computational photography have to do with Fujifilm?
Let's see. My nearly 10yo Fuji X-S1 bridge camera can take multiple exposures at high ISO, align them and make a low noise composite image. It can simulate background blur again by taking multiple exposures, some in focus and some not, and combining them in camera. Does the panorama mode count as computational photography? It has got that too.

Oh, and it also has a powerful AI engine. It can detect and recognise people's faces and then tag photos with their names. It will trip the shutter only when the cat is looking at the camera. It optimises JPEG settings to match the current scene and it automatically selects white balance and exposure parameters. It is very clever.

Fuji had this technology a decade ago.

Modern phones can easily do all of the above. On top of that they can also combine images that come from different cameras on their back - something the single-lens-single-sensor camera cannot possibly do.
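The multi-frame trick behind most of those modes is plain averaging: shoot N aligned frames and the random noise drops by roughly √N. A toy Python sketch of the idea (illustrative only, not Fuji's actual pipeline):

```python
import random
import statistics

def average_frames(frames):
    """Average N aligned frames pixel-by-pixel (toy in-camera stacking)."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

random.seed(42)
true_scene = [100.0] * 10_000          # a flat grey "scene"
noise_sigma = 8.0                      # per-frame read/shot noise

def noisy_frame():
    return [p + random.gauss(0, noise_sigma) for p in true_scene]

# One noisy exposure vs. an average of 16 noisy exposures
single = noisy_frame()
stacked = average_frames([noisy_frame() for _ in range(16)])

err_single = statistics.pstdev(p - t for p, t in zip(single, true_scene))
err_stacked = statistics.pstdev(p - t for p, t in zip(stacked, true_scene))
print(round(err_single / err_stacked, 1))  # noise drops by roughly sqrt(16) = 4
```

The hard part in a real camera is not this averaging but aligning handheld frames first, which is exactly why the feature needs processing power.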

What radically new features can this "computational photography" possibly offer? HDR? Focus stacking? Super-resolution pixel shift? Fuji has most of that already, while Olympus seems to be running out of ideas for the next feature to introduce.

To me this "computational photography" buzz sounds like a rehash of what has already been done many times over rather than a revolution. What am I missing?
Good points!

I remember cameras 5-10 years ago having all these gimmicks, but then people started to associate these features with low-end cameras, and manufacturers slowly started removing them.

But suddenly you add fancy words like machine learning to the exact same features, and somehow it is cool again.
The difference - I believe - is that these gimmicks used to be specifically programmed into the camera - the rules for focus stacking (for example) can be defined and the camera simply follows them.

With machine learning the algorithms have been trained against a large number of existing images rather than being specifically written by the programmer.

We have been using similar techniques for fraud detection and marketing for many decades - but it used to require supercomputers to do it.
The real issue in focus stacking, HDR stacking, etc., is the precision of the stacking. The images have to be precisely coregistered prior to any processing. That is, they have to be precisely aligned to produce a final image without loss of resolution. That takes significant processing power, which is the reason most of these features have come with significant restrictions. Capture One 22 released a significant new feature that can auto-align (coregister) multiple images to produce a DNG raw for further processing, and it doesn't require a tripod. It works better with a tripod, but a tripod is not absolutely necessary.
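For intuition, coregistration at its crudest is a search for the shift that maximizes correlation between frames. Real tools use sub-pixel phase correlation or feature matching, but a brute-force 1-D toy (hypothetical, not Capture One's algorithm) shows the principle:

```python
def best_shift(ref, moving, max_shift=10):
    """Find the integer shift that best aligns `moving` to `ref`
    by brute-force search over a small window (toy coregistration)."""
    def score(shift):
        # cross-correlation: sum of products over the overlapping region
        overlap = [(ref[i], moving[i - shift])
                   for i in range(len(ref))
                   if 0 <= i - shift < len(moving)]
        return sum(a * b for a, b in overlap)
    return max(range(-max_shift, max_shift + 1), key=score)

def align(moving, shift):
    """Shift a 1-D 'image row' by `shift` samples, padding with zeros."""
    n = len(moving)
    out = [0.0] * n
    for i in range(n):
        j = i - shift
        if 0 <= j < n:
            out[i] = moving[j]
    return out

ref    = [0, 0, 1, 5, 9, 5, 1, 0, 0, 0]   # a "feature" centred at index 4
moving = [0, 0, 0, 0, 1, 5, 9, 5, 1, 0]   # same feature, 2 samples to the right
s = best_shift(ref, moving)
print(s)  # -2: moving must be shifted left by 2 to match ref
```

A 2-D version over a megapixel image with sub-pixel precision is the same idea scaled up, which is where the "significant processing power" goes.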

No "machine learning" or "AI" is necessary, because the required image processing has been around a long time. FCI, Kodak, Perkin Elmer, Ball Aerospace and many other defense contractors, along with academic research labs such as MIT Lincoln Laboratory, the Johns Hopkins Applied Physics Lab, Georgia Tech Research Institute, etc., have been developing these technologies since the late 1970s, not only in the visible spectrum but also in IR, UV and X-ray, and for imaging radar.

However, significant processing horsepower is required. It may well be that a refresh of the 26MP sensor, along with an updated processor or dual processors, will provide the horsepower. What took a supercomputer in 1980 can be accomplished on an iPhone today. We will soon see.
 
To me this "computational photography" buzz sounds like a rehash of what has already been done many times over rather than a revolution. What am I missing?
I want Remove Tourists and Remove Powerlines modes 😉
 
It could lead to dramatically better photos in certain instances.
I've been around here for more than 20 years now, but in all honesty, the technical progress in that time didn't lead to dramatically better images :-D

(sorry for being a little cynical)
 
I’ll take real bokeh and real Gaussian blur over simulated blur any day.

I want to be in control and involved in the process of making a photo rather than an algorithm and computer.

Otherwise, why do photography?
Yep, I'm with you on that. I hope to see Fujifilm improve by making the cameras smarter in ways that help photographers: better AF, subject detection, etc.
100% agree on this.
Not into that stupid Luminar-type stuff like sky replacement, inserting fake sun rays, etc.
This kind of stuff may be perceived as a creative tool used to achieve an "artistic vision", but it's not just photography anymore - at least for me.

Postprocessing is nothing new; even in the film era we corrected perspective and exposure, and sometimes even mixed frames in the final picture. Today digital postprocessing makes these things much easier and gives extra possibilities.

Anyway, we could discuss where to draw the line of pure photography, but it's pointless - I think the most important thing is the perception of the final result.

As long as it looks natural and I can perceive it as a real photo, it's fine with me. In the past I did a lot of architectural visualizations for commercial purposes, so my sensitivity in this matter is above average (at least I think so).
Smartphones largely have to make up for the shortcomings of their small sensors and limited lenses. Plus, they're marketed towards the social media junkies; I don't think any camera manufacturer can win that battle.
I'm not going to generalize, but I think most smartphone shooters are OK with the creative approach, the ease of use and the whole computational stuff.

We're buying fast lenses in order to get shallow DOF, bokeh, etc., while quick filters in a smartphone can do more or less the same - not perfectly yet, but it's a matter of time (who would want to carry heavy lenses if a smart body could do the job almost perfectly?).

Whether we like it or not, computational photography is the future; how we will utilize it - that is the question.

Cheers,

Artur
 
What radically new features can this "computational photography" possibly offer? HDR? Focus stacking? Super-resolution pixel shift? Fuji has most of that already, while Olympus seems to be running out of ideas for the next feature to introduce.
Yeah... Is the above still considered new-generation computational photography? That's old stuff.

IMO new-generation computational photography/videography should be optical solutions powered by AI.

Example 1: electronic variable ND filter

The Sony FX6 and the $399 Z Cam eND module provide an electronic variable ND filter with an Auto ND mode: the camera's AI adjusts the image brightness to the optimal level. That means the electronic variable ND filter (optical ND powered by AI) takes care of exposure, and the user only needs to adjust depth of field manually via the aperture ring.

These electronic variable ND filters are based on liquid crystal (LC) technology licensed from LC-Tec - the light transmittance of a transparent LC panel is controlled by an externally applied drive voltage.

https://www.lc-tec.se/variable-nd-filters/

IMO this could advance into an electronic graduated variable ND filter, letting the user choose a graduation pattern via the Q menu, or auto-graduating (AI) according to the highlights.
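The auto-ND arithmetic itself is simple: ND strength in stops is the base-2 log of the light reduction (ND 512 = 9 stops; an ND density of 0.3 = 1 stop). A hypothetical sketch of the clamping logic an Auto ND mode might use - the function name and parameters are illustrative, not any camera's real API:

```python
import math

def auto_nd_stops(scene_ev, target_ev, min_stops=0.0, max_stops=9.0):
    """Return the ND strength (in stops) needed to bring the metered
    scene exposure down to the target, clamped to the filter's range."""
    needed = scene_ev - target_ev          # stops of excess light
    return min(max(needed, min_stops), max_stops)

# ND 512 = 9 stops, since 2**9 == 512
print(math.log2(512))  # 9.0

# Scene meters 3 stops hotter than the shooter's chosen aperture/shutter/ISO:
print(auto_nd_stops(scene_ev=15.0, target_ev=12.0))  # 3.0
```

With a loop like this running continuously, exposure stays fixed while the shooter rides only the aperture ring for depth of field.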

Example 2: DJI Ronin 4D's Automated Manual Focus (AMF) mode with LiDAR

The DJI Ronin 4D will feature three different focus modes: manual focus, autofocus and a new Automated Manual Focus (AMF) mode.

Similar to a car's autopilot, where the AI turns the wheel but the driver can still override it by turning the wheel manually, AMF mode will auto-track subjects (auto tracking focus) and turn the focus wheel during recording, with the option for the camera operator to jump in and pull focus manually when needed. To help in manual focus and AMF modes, a LiDAR waveform will be available on the monitor to help cinematographers 'locate focus points and pull focus with extreme precision.'

https://dpreview.com/news/4463455175/dji-announces-the-ronin-4d-the-world-first-4-axis-cinema-camera
An electronically controlled ND filter would be interesting. However, the proper solution to a lack of dynamic range in the sensor is a sensor with more dynamic range. Often, even a variable ND filter does not fit the details of the scene. There is also the issue that this is an optical device, and it would have to be of the highest optical quality across all settings or it would degrade the optical quality of the lens.

Lidar might be useful for video and especially for VR applications, where precision mapping of distance is necessary. However, for photography I would settle for enhanced manual focusing tools and reliable object recognition and tracking, along with an intuitive UI to make use of them.
 
I started a reply (to one of the previous replies) this morning but discarded it. I'm incredulous that there are people writing off the current state of computational photography as "been done before" or machine learning as "marketing". There is reasonable debate to be had about the role computational photography should play in our craft, but to write it off completely is akin to sticking your head in the sand while claiming the earth is flat.

When it comes to technology, revolution is just the rapid rise along the bell curve of evolution. None of this is new. But the current level of processor technology, including low-cost processors dedicated to neural processing, allows us to use these old ideas in new and interesting ways.

For those genuinely interested in learning more, Aaron Hockley, author of The Computer Ate My Photos: Artificial Intelligence and the Future of Photography, discusses machine learning, AI and computational photography on a recent episode of the PhotoActive Podcast. His book is a free read if you're a Kindle Unlimited subscriber (at least here in the US). It's in my queue - I'll probably start reading it this afternoon.
 
I started a reply (to one of the previous replies) this morning but discarded it. I'm incredulous that there are people writing off the current state of computational photography as "been done before" or machine learning as "marketing". There is reasonable debate to be had about the role computational photography should play in our craft, but to write it off completely is akin to sticking your head in the sand while claiming the earth is flat.

When it comes to technology, revolution is just the rapid rise along the bell curve of evolution. None of this is new. But the current level of processor technology, including low-cost processors dedicated to neural processing, allows us to use these old ideas in new and interesting ways.

For those genuinely interested in learning more, Aaron Hockley, author of The Computer Ate My Photos: Artificial Intelligence and the Future of Photography, discusses machine learning, AI and computational photography on a recent episode of the PhotoActive Podcast. His book is a free read if you're a Kindle Unlimited subscriber (at least here in the US). It's in my queue - I'll probably start reading it this afternoon.
Interesting - it’s also free in the UK - will take a look
 
What radically new features can this "computational photography" possibly offer? HDR? Focus stacking? Super-resolution pixel shift? Fuji has most of that already, while Olympus seems to be running out of ideas for the next feature to introduce.
Yeah... Is the above still considered new-generation computational photography? That's old stuff.

IMO new-generation computational photography/videography should be optical solutions powered by AI.

Example 1: electronic variable ND filter

The Sony FX6 and the $399 Z Cam eND module provide an electronic variable ND filter with an Auto ND mode: the camera's AI adjusts the image brightness to the optimal level. That means the electronic variable ND filter (optical ND powered by AI) takes care of exposure, and the user only needs to adjust depth of field manually via the aperture ring.

These electronic variable ND filters are based on liquid crystal (LC) technology licensed from LC-Tec - the light transmittance of a transparent LC panel is controlled by an externally applied drive voltage.

https://www.lc-tec.se/variable-nd-filters/

IMO this could advance into an electronic graduated variable ND filter, letting the user choose a graduation pattern via the Q menu, or auto-graduating (AI) according to the highlights.
IMO if a future ND filter could auto-graduate certain areas (powered by AI) according to the highlight (bright) areas, it would provide an optical solution for controlling highlights instead of DR400.

Since it is an optical solution, the results can bring better dynamic range to the RAW file.
Example 2: DJI Ronin 4D's Automated Manual Focus (AMF) mode with LiDAR

The DJI Ronin 4D will feature three different focus modes: manual focus, autofocus and a new Automated Manual Focus (AMF) mode.

Similar to a car's autopilot, where the AI turns the wheel but the driver can still override it by turning the wheel manually, AMF mode will auto-track subjects (auto tracking focus) and turn the focus wheel during recording, with the option for the camera operator to jump in and pull focus manually when needed. To help in manual focus and AMF modes, a LiDAR waveform will be available on the monitor to help cinematographers 'locate focus points and pull focus with extreme precision.'

https://dpreview.com/news/4463455175/dji-announces-the-ronin-4d-the-world-first-4-axis-cinema-camera
An electronically controlled ND filter would be interesting. However, the proper solution to a lack of dynamic range in the sensor is a sensor with more dynamic range.
The real issue is that most Sony sensors only support dual gain / dual native ISO. There has been no new sensor innovation to bring more base ISOs, each with maximum dynamic range.

Current electronically controlled ND filters are just a solution to make the consumer's shooting process easier by providing a seamlessly integrated ND filter. The user can focus more on content creation and framing.
Often, even a variable ND filter does not fit the details of the scene. There is also the issue that this is an optical device, and it would have to be of the highest optical quality across all settings or it would degrade the optical quality of the lens.

Lidar might be useful for video and especially for VR applications, where precision mapping of distance is necessary. However, for photography I would settle for enhanced manual focusing tools and reliable object recognition and tracking, along with an intuitive UI to make use of them.
The DJI Ronin 4D's LiDAR waveform (at the right of the screen) is a visual focus-assistance tool that displays the ranging points in a simplified top-down view.
 
I have heard that Fuji (might) come out with computational photography along with the X-H2. What exactly does that mean? Is that a hardware thing or software, or both? I honestly haven't really been paying too much attention to this subject, but what does computational photography have to do with Fujifilm?
Let's see. My nearly 10yo Fuji X-S1 bridge camera can take multiple exposures at high ISO, align them and make a low noise composite image. It can simulate background blur again by taking multiple exposures, some in focus and some not, and combining them in camera. Does the panorama mode count as computational photography? It has got that too.

Oh, and it also has a powerful AI engine. It can detect and recognise people's faces and then tag photos with their names. It will trip the shutter only when the cat is looking at the camera. It optimises JPEG settings to match the current scene and it automatically selects white balance and exposure parameters. It is very clever.

Fuji had this technology a decade ago.

Modern phones can easily do all of the above. On top of that they can also combine images that come from different cameras on their back - something the single-lens-single-sensor camera cannot possibly do.

What radically new features can this "computational photography" possibly offer? HDR? Focus stacking? Super-resolution pixel shift? Fuji has most of that already, while Olympus seems to be running out of ideas for the next feature to introduce.

To me this "computational photography" buzz sounds like a rehash of what has already been done many times over rather than a revolution. What am I missing?
Good points!

I remember cameras 5-10 years ago having all these gimmicks, but then people started to associate these features with low-end cameras, and manufacturers slowly started removing them.

But suddenly you add fancy words like machine learning to the exact same features, and somehow it is cool again.
This thread has got me thinking: computational photography might be an excuse to manufacture cheaper lenses and just "correct" the defects in post-processing.



They already do this, but it might take on an even larger role in the future. Thoughts?
 
Good points!

I remember cameras 5-10 years ago having all these gimmicks, but then people started to associate these features with low-end cameras, and manufacturers slowly started removing them.

But suddenly you add fancy words like machine learning to the exact same features, and somehow it is cool again.
Marketing always likes to use fancy words:

Deep learning AF

Previously, programmers manually wrote the AF algorithm/logic.

Now the AF algorithm is too complicated for a human to write by hand, so the R&D department uses AI (feed in data from different scenarios, then analyze it with AI to find common patterns) to compose the AF algorithm/logic.

So AI is just a tool used to write the program in the R&D department, not something used inside the camera.
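The distinction being drawn here - a rule hard-coded by a programmer versus a rule derived from labelled example data - can be shown with a deliberately tiny toy (nothing like a real deep-learning AF system):

```python
def learn_threshold(in_focus_scores, out_of_focus_scores):
    """'Train' the simplest possible classifier: pick the decision
    threshold halfway between the mean contrast score of each class."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(in_focus_scores) + mean(out_of_focus_scores)) / 2

# Hand-written rule: a programmer hard-codes "contrast > 50 means in focus".
# Learned rule: the threshold falls out of labelled example data instead.
threshold = learn_threshold([80, 90, 85], [20, 30, 25])
print(threshold)  # 55.0
```

Scale the "contrast score" up to millions of image features and the threshold up to millions of network weights, and you have the marketing term "deep learning AF".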

EVF & rear LCD resolution (width x height x 3 subpixels = dots)
  • 640x480x3 = 920K dots
  • 800x600x3 = 1.44M dots
  • 1280x960x3 = 3.68M dots
  • X-T30: 720x480x3 = 1.0368M dots
  • X-T30 II: 900x600x3 = 1.62M dots
P.s. The $270 Samsung Galaxy A32 smartphone has a 6.4-inch 1080 x 2400 Super AMOLED display.
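For reference, the "dots" figures above are just width x height x 3, since spec sheets count every R, G and B subpixel as one dot:

```python
def display_dots(width, height, subpixels=3):
    """Resolution in 'dots' as camera spec sheets count them:
    every R, G and B subpixel is one dot."""
    return width * height * subpixels

assert display_dots(640, 480) == 921_600      # ~0.92M dots
assert display_dots(800, 600) == 1_440_000    # 1.44M dots
assert display_dots(1280, 960) == 3_686_400   # ~3.69M dots
assert display_dots(720, 480) == 1_036_800    # X-T30's 1.04M-dot panel
assert display_dots(900, 600) == 1_620_000    # X-T30 II's 1.62M-dot panel
print("all dot counts check out")
```

This is also why a "3.69M-dot" EVF sounds far more impressive than the 1280x960 pixel grid it actually is.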

Dual processors (used in the Nikon Z6 II / Z7 II)

No funds for R&D on a new processor, so they just put in two identical existing processors, each fixed to handle a different function. Compared with a new processor, a dual-processor design drains more battery power and produces more heat for the same performance.

It's different from the multi-processor design in servers, where the server automatically relocates tasks to whichever processor is free.

Stacked sensors

Stacked sensors have already been used in smartphones.

Samsung's stacked smartphone sensor consists of two chips: a 12MP backside-illuminated (BSI) pixel chip on top that uses a 65nm process, and a bottom chip for the analog and logic circuits that uses a 14nm process. By using the super-fine 14nm process on the processing layer, Samsung says it achieves a 29% drop in power consumption compared to conventional sensors that use a 65nm/28nm process.

https://dpreview.com/news/218354003...power-efficiency-density-mobile-image-sensors

Just as faster smartphone processor chips are built on smaller 7nm/5nm processes, stacked sensors are fast because they are built on a smaller 12nm process instead of a 28nm one.

The $270 Samsung Galaxy A52 5G uses a Qualcomm Snapdragon 750G 5G (8nm).

Rumor has it that the core tech for bonding the two chips together is licensed from another company to Sony/Nikon/Canon...

Compact camera built-in ND

A small grey film inside the lens - see 22:40 in the teardown below.



Film simulations

Customizable filters that provide better tone mappings. Similar to the LUTs used in videography.
This thread has got me thinking: computational photography might be an excuse to manufacture cheaper lenses and just "correct" the defects in post-processing.
Do you mean something like this: "Sony a7 IV adds a Breathing Compensation mode that crops and resizes the video to cancel out any change in a lens's angle-of-view (AoV) as it focuses. The mode only works with select Sony lenses (all the GM lenses and some G series glass), as the camera needs a profile of the breathing characteristics. Video is cropped to match and maintain the narrowest AoV that might occur if you focused from minimum focus distance to infinity, meaning there's no distracting change of framing as you refocus."?
 
I started a reply (to one of the previous replies) this morning but discarded it. I'm incredulous that there are people writing off the current state of computational photography as "been done before" or machine learning as "marketing". There is reasonable debate to be had about the role computational photography should play in our craft, but to write it off completely is akin to sticking your head in the sand while claiming the earth is flat.

When it comes to technology, revolution is just the rapid rise along the bell curve of evolution. None of this is new. But the current level of processor technology, including low-cost processors dedicated to neural processing, allows us to use these old ideas in new and interesting ways.

For those genuinely interested in learning more, Aaron Hockley, author of The Computer Ate My Photos: Artificial Intelligence and the Future of Photography, discusses machine learning, AI and computational photography on a recent episode of the PhotoActive Podcast. His book is a free read if you're a Kindle Unlimited subscriber (at least here in the US). It's in my queue - I'll probably start reading it this afternoon.
Interesting - it’s also free in the UK - will take a look
Discussed here: https://www.dpreview.com/forums/post/65736008
 
You control nothing, actually.
You depend entirely on a manufacturer's ability to find a trade-off between nice bokeh, speed, size, weight, price, MTF, etc. of a lens.
Computational photography has the potential to make any kind of bokeh: onion, cat's eye, Gaussian, star, ...
But you need a huge DoF and a distance map, so smartphones with their tiny sensors and multiple cameras cannot be beaten.
It is all physics, as always.
Maybe light field photography (Lytro, K|Lens) will give us a solution.
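This is why the distance map matters: simulated bokeh just blurs each pixel in proportion to its distance from the focal plane. A 1-D toy sketch (hypothetical, far simpler than any real portrait mode, which also handles occlusion edges and lens-shaped highlights):

```python
def box_blur(row, radius):
    """Simple 1-D box blur with the given radius (0 = no blur)."""
    if radius == 0:
        return row[:]
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(row[lo:hi]) / (hi - lo))
    return out

def synthetic_bokeh(row, depth, focus_dist, strength=2.0):
    """Blur each pixel in proportion to how far its depth is from the
    focal plane - the core idea behind phone 'portrait mode'."""
    out = []
    for i, px in enumerate(row):
        radius = int(abs(depth[i] - focus_dist) * strength)
        out.append(box_blur(row, radius)[i])
    return out

row   = [0, 0, 10, 0, 0, 0, 10, 0, 0]   # two bright points
depth = [1, 1, 1, 1, 1, 5, 5, 5, 5]     # left half near, right half far
out = synthetic_bokeh(row, depth, focus_dist=1.0)
print(out[2], out[6])  # in-focus spike stays sharp; far spike is smeared
```

Without an accurate per-pixel depth map the radius term is garbage, which is exactly the advantage multi-camera (or LiDAR-equipped) phones have over a single lens and sensor.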
 
With computational photography you get photos like this one:

[image]

Perfect and ready to be shared on Social Media or with friends.

With my $2000 Fujifilm camera and a $1000 lens the same photo will end up looking like this:

[image]

[image]

They need to be edited, which, over a full trip, ends up eating dozens of hours of free time => the reason people don't buy cameras anymore.
 
With computational photography you get photos like this one:

[image]

Perfect and ready to be shared on Social Media or with friends.

With my $2000 Fujifilm camera and a $1000 lens the same photo will end up looking like this:

[image]

[image]

They need to be edited, which, over a full trip, ends up eating dozens of hours of free time => the reason people don't buy cameras anymore.
I love this post. Problem is, you didn't leave enough pixels in the iPhone image - the one that looks like mush when you start digging into it. Sure, great for social media. Yep, perfect for that. I'll take my gear over my new iPhone 13 Pro Max every day for real imagery. Happy to use my iPhone for documentary purposes. Cheers and happy shooting!
 
I started a reply (to one of the previous replies) this morning but discarded it. I'm incredulous that there are people writing off the current state of computational photography as "been done before" or machine learning as "marketing". There is reasonable debate to be had about the role computational photography should play in our craft, but to write it off completely is akin to sticking your head in the sand while claiming the earth is flat.

When it comes to technology, revolution is just the rapid rise along the bell curve of evolution. None of this is new. But the current level of processor technology, including low-cost processors dedicated to neural processing, allows us to use these old ideas in new and interesting ways.

For those genuinely interested in learning more, Aaron Hockley, author of The Computer Ate My Photos: Artificial Intelligence and the Future of Photography, discusses machine learning, AI and computational photography on a recent episode of the PhotoActive Podcast. His book is a free read if you're a Kindle Unlimited subscriber (at least here in the US). It's in my queue - I'll probably start reading it this afternoon.
It is interesting how concepts fall in and out of favor: as the limitations of the methods become apparent, they fall out of favor; as better processing components become available, it becomes a new revolution. Machine learning is nothing more than building a large database to describe the prior distribution, and AI is nothing more than what engineers call a posteriori estimation and statisticians call Bayesian estimation. Thomas Bayes developed the concept in the 1700s. The approach has produced periods of excitement followed by disappointment - take the 1980s, for example. We have Norbert Wiener to thank for the rise of "AI", coming out of work he did for the Army during WWII; his book Cybernetics kicked off the AI craze. The Johns Hopkins Beast, developed by the Johns Hopkins Applied Physics Lab in 1960, was the predecessor of robotics.


The long-held hope for AI was that with a large enough database, all problems could be solved and all processes controlled. The problem was that it required computational horsepower that didn't exist at the time. DARPA's work on speech, keyword spotting and speaker recognition produced what became things like Siri and Alexa - and talking refrigerators. On the other hand, even today they are really not that good.

After the original craze of the 1950s and 1960s, it was found that data-driven adaptive estimation and decision-making outperformed "AI", and after the AI heyday of the 1980s the concept lost its luster. Of course, adaptive signal/data processing was augmented and expanded to encompass model-based approaches, and the concept of hidden Markov models nicely solved a lot of the problems AI had failed miserably to address. Lenny (Leonard) Baum introduced the approach in his seminal paper in the Annals of Mathematical Statistics in 1967. This was really the first major success of the Bayesian approach to statistics.

Up until the 1980s, most of the work applying hidden Markov models was classified and hence not widely understood outside. After its success on speech problems, it became mainstream and enjoyed rapid growth.


Unlike "AI", it also did not require an a priori database - just a model. The transition matrix of the Markov process was estimated on the fly from the data.

Today we are having a recurrence of AI, as the processing horsepower has developed to the point that it is commercially feasible. And with such a rebirth, a new hype emerges. Sure, there are benefits to photography - such techniques can make small-sensor cameras better. The question becomes: how much will they impact large-sensor cameras? Most of the best AF tracking algorithms are classical adaptive data-driven approaches.

--

"The winds of heaven is that which blows between a horse's ears," Arabic Proverb
__
Truman
www.pbase.com/tprevatt
 
To me this "computational photography" buzz sounds like a rehash of what has already been done many times over rather than a revolution. What am I missing?
I want Remove Tourists and Remove Powerlines modes 😉
LOL! I would most likely pay extra for that.

How about remove the person standing directly in front of what you want to shoot that is looking at his phone?
 
No need to fight the convenience and quality provided by an iPhone. They take great pics, and they're getting better and more versatile all the time. And, believe it or not, you can make super-good prints from iPhones. I've seen many.

But, what an iPhone cannot do is provide me with a shooting experience. I like manually figuring out exposures, shooting speeds, ISO and more. I love loading pics in Capture One and seeing what I have. Then, I can tweak various settings, recipes or more. It's fun and creative. The iPhone? Pretty automatic, and even my 90-year-old relatives can use them. Just aim and then hit the button.

So, I will use both iPhone and Fuji. But the Fuji is more fun.
 
I have heard that Fuji (might) come out with computational photography along with the X-H2. What exactly does that mean? Is that a hardware thing or software, or both? I honestly haven't really been paying too much attention to this subject, but what does computational photography have to do with Fujifilm?

Are they the only ones that will have that technology? Does it mean that I can do that with my old rinky dinky X-T1, or will it involve having to buy the latest and greatest Fujifilm camera? What about other camera brands? Can someone explain this in simpler language so I can get a better idea?

Thank you :-)
Fuji already has CP when it comes to its in-camera HDR raw.

CP lives and dies by its ability to stack images while accounting for motion and alignment artifacts, so Fuji had better do really well at that. I would totally be down for a good version of night mode, where noise is reduced through automatic exposure stacking.

The new stacked sensors finally have fast enough readout to be used for good CP, and that's what is supposed to come in one of the X-H2s.
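At its core, in-camera HDR reduces to a merge rule: prefer the longer, cleaner exposure except where it clips, and normalise everything to a common scale. A toy sketch of that rule (illustrative only, not Fuji's actual DR/HDR processing):

```python
CLIP = 255  # sensor saturation level in this toy model

def hdr_merge(short_exp, long_exp, ratio):
    """Merge a short and a long exposure (long = short * ratio extra light)
    into one radiance map: use the cleaner long exposure wherever it
    isn't clipped, else fall back to the scaled short one."""
    merged = []
    for s, l in zip(short_exp, long_exp):
        if l < CLIP:
            merged.append(l / ratio)   # normalise to short-exposure units
        else:
            merged.append(float(s))    # long frame clipped: trust the short
    return merged

# A scene with shadows (5), midtones (40) and a highlight (200):
short = [5, 40, 200]
long_ = [20, 160, 255]      # 4x exposure; the highlight clips at 255
print(hdr_merge(short, long_, ratio=4))  # [5.0, 40.0, 200.0]
```

As with night mode, the catch is that the frames must be perfectly registered first, which is where fast stacked-sensor readout helps.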

--
www.darngoodphotos.com
 
