Stitch/merge musings and questions

DavidMillier

A while back I played around with flat stitching using some old lenses and shift adaptors in an attempt to simulate medium format, with mixed success.

I'm having another go, using spin stitching instead. I'm also combining HDR and pano techniques.

Basically, the pattern is:
  • Shoot a 5-frame x 2-stop auto bracket
  • Repoint the camera vertically with 50% overlap
  • Shoot another 5-frame x 2-stop auto bracket
  • Go to Lightroom and stack the 2 sets of 5 bracketed images
  • Use HDR merge to produce 2 HDR DNGs
  • Use Pano merge to stitch the 2 HDR DNGs into a square frame with a 50% increase in pixels.
The result is a square 60MP HDR frame simulating a 36x36mm frame.
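
(A quick sanity check of those numbers; the 40MP base frame below is an assumption, chosen to match the stated 60MP result:)

    % two landscape frames, the second tilted up by half a frame height
    h = 24 + 24/2      % -> 36 (mm): a 36x24mm frame becomes 36mm tall
    mp = 40 * 1.5      % -> 60 (MP): pixel count grows by the same 1.5x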

So, for all you stitching gurus, will this produce more detail, more dynamic range and lower shadow noise (as it theoretically should) compared to a single frame, or is there some practical limitation that will undermine the project?

If it works as advertised, will this actually be better than a square crop from a GFX frame, which would be 33x33mm?

Or pie in the sky?

EDIT: This process can also support a basic focus bracket. I tried a few. You can't move the focus point much with only two-frame panos, but it worked OK when I moved the focus point about a metre closer for the foreground shot.

--
DPReview gallery: https://www.dpreview.com/galleries/0286305481
Website: http://www.whisperingcat.co.uk/ (2018 - website revived!)
Flickr: http://www.flickr.com/photos/davidmillier/ (very old!)
 
I don't like HDR software so cannot comment on how it may affect image quality. I do focus stack and shift stitch on a very regular basis.

Results will depend on the camera sensor and lenses used. I shift/stitch with a Canon 17mm TS-E on a 5DSR for 24mm x 60mm, 36mm x 48mm, and about 44mm x 44mm equivalent sizes with 85 to 100 MP. I also use a Kipon Shift adapter to shift/stitch the Mamiya 645 50mm f4 Shift and Arsat 30mm Fisheye. The largest file here is about 54mm x 72mm at about 225 MP.

The shift/stitching/stacking in these situations provides superb results. I have not compared to GFX. I regularly print at 24" x 36" and, in my opinion, the file size and quality will stand up for 48" x 72" prints at typical standing distance. Probably even larger. For pure resolution I believe these will easily compete with any GFX sensor and lens. Noise, dynamic range, colour and tonal gradation results may vary.

At 60 MP and 36mm x 36mm, compared to GFX at 33mm x 33mm you may not see much difference, but it should be a definite advantage over your typical 24mm x 24mm, 26MP file.

It would be interesting to know your camera and lenses. Seems to me you should be able to shift further and possibly stitch 3 or 4 images instead of just 2. This would start to make an appreciable difference.
 
EDIT: This process can also support a basic focus bracket. I tried a few. You can't move the focus point much with only two-frame panos, but it worked OK when I moved the focus point about a metre closer for the foreground shot.
I have successfully stitched deep-depth, wide-FOV spun shots by using AF and letting the stitching software do the focus-bracketing reconstruction. It needs lots of overlap.
 
My target is not to do vast stitches and produce gigapixel style files, just to see whether it is possible to get GFX-like quality from a smaller format using the simplest techniques.

Stitching is not something I would want to employ routinely, just for the occasional shot I think deserves it. And I want a quick, simple, easy, lazy stitching process. I'm focusing mainly on square shots at the moment, and APS-C and full frame aspect ratios lend themselves to 2-shot vertical stitches to achieve square format with a 50% pixel count increase, which is what has me interested. I also have a FlexTilt head, which makes for extremely convenient spun stitches. A single friction movement is all that is required, no messing with locking and unlocking clamps.

Likewise with software: I'm looking to stay in the raw environment and can't stand all the exporting nonsense a lot of stitching software requires. Lightroom makes it easy to do HDR and pano from raw. I'm just unsure about what expectations I should have for the quality of the resulting files.
 
My target is not to do vast stitches and produce gigapixel style files, just to see whether it is possible to get GFX-like quality from a smaller format using the simplest techniques.
I posted this pair of images a while ago but can't find them. One is a crop from a GFX 50R, while the other is a stitched image from an X-T2, cropped to match. They use the same lens (SMC Pentax-A 645 45-85/4.5), and the angle of view, focal length and aperture were roughly matched so the images would be comparable. It's not perfect, because you can see differences.

These are full resolution. Which one has magical medium format qualities? ;)

[Image 1]

[Image 2]
 
A while back I played around with flat stitching using some old lenses and shift adaptors in an attempt to simulate medium format, with mixed success.

I'm having another go, using spin stitching instead. I'm also combining HDR and pano techniques.

Basically, the pattern is:
  • Shoot a 5-frame x 2-stop auto bracket
  • Repoint the camera vertically with 50% overlap
  • Shoot another 5-frame x 2-stop auto bracket
  • Go to Lightroom and stack the 2 sets of 5 bracketed images
  • Use HDR merge to produce 2 HDR DNGs
  • Use Pano merge to stitch the 2 HDR DNGs into a square frame with a 50% increase in pixels.
The result is a square 60MP HDR frame simulating a 36x36mm frame.
It's actually not hard to get 36x36mm out of many FF lenses.

A while back, I posted two relevant 3D-printed shift adapters that both provide an M mount going to an E/FE body:
  1. APSC2 (APS-C Squared) Rotate-and-Stitch Adapter uses 4 rotated exposures to give up to about a 31.2x31.2mm square stitch from behind the lens. Most FF lenses seem able to support this quite well.
  2. Budgie Sony A7-series Shift-and-Stitch Adapter uses 2-3 shifted shots to capture up to 48x36mm from behind the lens. Relatively few FF lenses can give decent corners this way at infinity, but at closer focus many do OK.
These adapters have a number of compromises to be thin enough to allow infinity focus with these motions on an E/FE body. However, stacking adapters lets them mount nearly any lens. It would be better structurally to make one for a somewhat longer flange distance mount, because that would allow a more substantial 3D-printed structure, perhaps even with gear-driven calibrated movements -- as opposed to the uncalibrated (continuous) movement the current adapters provide.

BTW, if you want to do HDR stitching, that is well supported by Hugin, which is free software that runs on most platforms (I use it under Ubuntu Linux). Note that a lot of software assumes stitches come from moving the lens, which requires calibrated lens corrections to do well -- this method doesn't, but some software still tries to apply them (it's usually the default), which is wrong. Hugin works fine if you simply tell it the images come from an uncalibrated, fairly long focal length lens.
So, for all you stitching gurus, will this produce more detail, more dynamic range and lower shadow noise (as it theoretically should) compared to a single frame, or is there some practical limitation that will undermine the project?
If the software doesn't mess it up, yes, it can. There are fundamental limits on DR from lens contrast and light leaks, but as long as you don't print the adapter using translucent plastic, you can definitely do better than 15 EV DR. I'd say about 20 EV is an often feasible goal... and of course with no lens or subject movement between captures.
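
(To see where a figure like that comes from -- back-of-envelope only, with an assumed 12 EV single-frame DR:)

    base = 12          % assumed single-frame DR in EV (hypothetical)
    span = (5-1) * 2   % a 5-frame bracket at 2 EV spacing spans 8 EV
    base + span        % -> about 20 EV combined envelope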
If it works as advertised, will this actually be better than a square crop from a GFX frame, which would be 33x33mm?
Yes; it's definitely possible. However, shift (and rotate) adapters require a tripod, care not to move the lens, and some luck in terms of minimal scene motion. Images done this way with my A7RII and FF lenses are very different from those made with MF lenses on a GFX. I would generally describe them as "more technically flawed, but also more lively" than GFX captures. I think the biggest reason is bokeh -- there aren't a lot of f/1.2 MF lenses.

The key thing is, a new A7RII is only about $1200 (1/5 the price of a GFX100s) and FF lenses are way cheaper than MF, with a much wider selection available. I.e., this stitching is largely the "poor man's MF." Incidentally, it costs well under $1 to 3D print either of the above adapters, and the STLs for both are on Thingiverse.
Or pie in the sky?
This level of stitching is cheap & manual. My pie in the sky is Lafodis160, which is an automatic large-format scanning camera capable of up to a 160mm diameter image circle with about 2.6GP resolution. The catch is I designed Lafodis to use an ESP32-CAM for the sensor, which means only about 10 EV DR per capture, slow scanning, and very limited exposure control... but for a thing that costs about $50 to build, it's fairly impressive.

The original Lafodis was 4x5. The second prototype, Lafodis160, switched to a polar-coordinate scan mechanism with a 160mm diameter (which both Hugin and OpenCV libraries are still happy to stitch). The third version, which we developed this summer and are still completing now, simplifies the build while switching to herringbone gear drives for much better positioning accuracy -- Lafodis160 could position the 2MP sensor to fractions of a pixel in radius, but only in 0.18 degree steps rotationally.
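
(For scale -- my back-of-envelope, not from the build docs -- one rotational step sweeps a large arc at the rim of the circle compared to the sensor's pixels:)

    r = 80                 % mm, radius of the 160mm image circle
    step = 0.18 * pi/180   % one rotational step, in radians
    r * step               % -> about 0.25 mm of arc per step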
EDIT: This process can also support a basic focus bracket. I tried a few. You can't move the focus point much with only two-frame panos, but it worked OK when I moved the focus point about a metre closer for the foreground shot.
Focus bracketing is problematic because most lenses "breathe" enough to cause minor scaling problems, which are not something most stitching software expects. Basically, you end up turning on the calibrated corrections, so the advantage over stitching that moves the lens largely disappears. That said, focus stacking is very much a thing, so it wouldn't be hard to focus stack captures of the same tile and then stitch the stacked tiles.
 
Hi Prof

Thank you for such a comprehensive reply.

I'm a bit confused about what the Budgie does. Is it basically a standard shift adaptor or is it doing something else?

If it's a shift adaptor, wouldn't that require a medium format lens to get enough coverage? My understanding of both flat stitching and Rhinocam Vertex style offset-rotation stitches is that you need more coverage than your native sensor format. I'm not sure how you are achieving it with native format lenses. I have a Mir38B 6x6 lens fitted to an Arax shift adaptor and have done some 2 or 3 image flat stitching on full frame. Originally, I thought flat stitching would be a good solution, but I find the shooting process sufficiently cumbersome that it's put me off.

I'm now trying out normal spun stitching as an alternative. The FlexTilt head https://www.redsharknews.com/produc...ssories-that-will-make-your-life-a-lot-easier I have is key to the process because there is no messing about with clamping and unclamping the head, just a quick and easy friction hinge tilt. It makes 2-frame stitched square frames a quick and easy process - easy enough even for my awful field craft skills! All that is required is to shoot one frame straight ahead, then tilt the FlexTilt upwards by 50% of the frame height and shoot a second frame. Lightroom stitches the frames to create square shots perfectly as long as there is nothing too close to the camera with thin, straight lines.

For added sophistication, I'm experimenting with replacing the 2 frames with 2 sequences of 5-frame HDR blends (also done in LR). The process is: take 1 burst of 5 exposure-bracketed frames, tilt the head 50%, take a second burst. Import the 10 frames to LR and HDR merge each group of 5 frames to create a pair of HDR DNGs. Then step 2 is to Pano merge the 2 HDR DNGs to create a final 64MP HDR square frame. Everything remains raw (or near-raw DNG) and when the final frame is finished, the source frames can be deleted, keeping everything neat and tidy.
 
Hi Prof

Thank you for such a comprehensive reply.

I'm a bit confused about what the Budgie does. Is it basically a standard shift adaptor or is it doing something else?
It's a very thin shift adapter.
If it's a shift adaptor, wouldn't that require a medium format lens to get enough coverage?
Ah, you'd think so, right?

The diagonal of a 36x24mm frame is 43.3mm, so you know a lens designed for FF should have decent IQ in a circle of at least that diameter. A square inscribed in that is 30.6mm on a side, so that should work with all FF lenses that don't have extra masks built-in. However, having good IQ 21.6mm off axis typically means coverage is quite a bit more than that, with IQ dropping off as you go farther off axis until it finally vignettes into darkness. Typically, coverage also increases as you focus closer.
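
(Checking those numbers in Octave:)

    d = sqrt(36^2 + 24^2)   % FF diagonal -> 43.27 mm
    d / 2                   % corner sits 21.6 mm off axis
    d / sqrt(2)             % inscribed square side -> 30.6 mm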

Most of my FF lenses are fine covering 36x36mm at infinity. Covering 48x36mm well is asking a tad too much from most FF lenses, although at closer focus quite a few can pull it off. It seems vignetting is more often the limit than aberrations. That's a bit different from -- and happier than -- using C-mount lenses to cover MFT, where IQ, and especially field curvature, usually falls apart long before the image goes dark.

To put it bluntly, the GFX100s isn't really medium format. The 44x33mm sensor is basically what used to be called "multi-aspect" for FF: i.e., any aspect ratio that a FF lens should be able to cover fits as a crop of the 44x33mm sensor. I don't think that's a coincidence, but it markets better as "medium format." ;-) Anyway, the lenses designed for the GFX cameras cover at least the full 55mm diameter at infinity with good IQ, but lots of FF lenses can pull that off too, especially at closer focus distances, and you get the aspect-ratio choice from all of them. In sum, if I owned a GFX100s, I'd use it mostly with FF lenses that I already own.
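
(A quick check of that "multi-aspect" claim: any crop of the 43.3mm FF image circle, laid out landscape so w >= h, fits within 44x33mm:)

    d = 43.27           % FF image-circle diameter, mm
    w_max = d           % widest possible crop: 43.3 <= 44, fits
    h_max = d / sqrt(2) % tallest landscape crop is the square: 30.6 <= 33, fits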
My understanding of both flat stitching and Rhinocam Vertex style offset-rotation stitches is that you need more coverage than your native sensor format.
As I said, most designed-for-FF lenses cover significantly more than FF, and the GFX sensors are just big enough to capture every possible aspect ratio that a FF lens MUST be able to cover. Rhinocam is too thick to use FF lenses, whereas my adapters are thin enough to even use M-mount rangefinder lenses, although I'd expect better coverage performance from FF SLR lenses than from rangefinder lenses.
I'm not sure how you are achieving it with native format lenses. I have a Mir38B 6x6 lens fitted to an Arax shift adaptor and have done some 2 or 3 image flat stitching on full frame. Originally, I thought flat stitching would be a good solution, but I find the shooting process sufficiently cumbersome that it's put me off.
Yeah, it's off-putting. At least with Budgie it is fast, but you still need a tripod and a motionless scene. Rotating, even with APSC2, is more annoying -- largely because the strap on my camera gets in the way.

BTW, in 1999 I was doing 360 degree stitching from a pair of Nikon 950s with fisheye lenses mounted on an autonomous vehicle (stitching and displaying on a video wall in real time using a cluster supercomputer). Soon after, my prime digital stitching platform was a webcam or other camera on a computer-controlled telescope; it took a while, but got up to about 1GP images using a 640x480 webcam. My flat-scanning capture experiences started about 8 years ago, with a 4x5 press/view camera with my NEX-5 on its back. In sum, I've actually built dozens of stitched capture platforms. Budgie is stunningly painless in comparison to nearly all of them. ;-)
I'm now trying out normal spun stitching as an alternative. The FlexTilt head https://www.redsharknews.com/produc...ssories-that-will-make-your-life-a-lot-easier I have is key to the process because there is no messing about with clamping and unclamping the head, just a quick and easy friction hinge tilt. It makes 2-frame stitched square frames a quick and easy process - easy enough even for my awful field craft skills! All that is required is to shoot one frame straight ahead, then tilt the FlexTilt upwards by 50% of the frame height and shoot a second frame. Lightroom stitches the frames to create square shots perfectly as long as there is nothing too close to the camera with thin, straight lines.
You might get slightly better resolution that way, but the precision of rotating about the correct point and lens distortion correction both seriously impact the quality of stitches. Certainly, both APSC2 and Budgie are at least as quick to use as that.
For added sophistication, I'm experimenting with replacing the 2 frames with 2 sequences of 5-frame HDR blends (also done in LR). The process is: take 1 burst of 5 exposure-bracketed frames, tilt the head 50%, take a second burst. Import the 10 frames to LR and HDR merge each group of 5 frames to create a pair of HDR DNGs. Then step 2 is to Pano merge the 2 HDR DNGs to create a final 64MP HDR square frame. Everything remains raw (or near-raw DNG) and when the final frame is finished, the source frames can be deleted, keeping everything neat and tidy.
* Sigh. * Again, Hugin is free software that can directly make HDR stitches. I don't know why people insist on paying money for commercial software that isn't as good at what they are trying to do.
 
Hi Prof

Thank you for such a comprehensive reply.

I'm a bit confused about what the Budgie does. Is it basically a standard shift adaptor or is it doing something else?
It's a very thin shift adapter.
If it's a shift adaptor, wouldn't that require a medium format lens to get enough coverage?
Ah, you'd think so, right?

The diagonal of a 36x24mm frame is 43.3mm, so you know a lens designed for FF should have decent IQ in a circle of at least that diameter. A square inscribed in that is 30.6mm on a side, so that should work with all FF lenses that don't have extra masks built-in. However, having good IQ 21.6mm off axis typically means coverage is quite a bit more than that, with IQ dropping off as you go farther off axis until it finally vignettes into darkness. Typically, coverage also increases as you focus closer.

Most of my FF lenses are fine covering 36x36mm at infinity. Covering 48x36mm well is asking a tad too much from most FF lenses, although at closer focus quite a few can pull it off. It seems vignetting is more often the limit than aberrations. That's a bit different from -- and happier than -- using C-mount lenses to cover MFT, where IQ, and especially field curvature, usually falls apart long before the image goes dark.

To put it bluntly, the GFX100s isn't really medium format. The 44x33mm sensor is basically what used to be called "multi-aspect" for FF: i.e., any aspect ratio that a FF lens should be able to cover fits as a crop of the 44x33mm sensor. I don't think that's a coincidence, but it markets better as "medium format." ;-) Anyway, the lenses designed for the GFX cameras cover at least the full 55mm diameter at infinity with good IQ, but lots of FF lenses can pull that off too, especially at closer focus distances, and you get the aspect-ratio choice from all of them. In sum, if I owned a GFX100s, I'd use it mostly with FF lenses that I already own.
My understanding of both flat stitching and Rhinocam Vertex style offset-rotation stitches is that you need more coverage than your native sensor format.
As I said, most designed-for-FF lenses cover significantly more than FF, and the GFX sensors are just big enough to capture every possible aspect ratio that a FF lens MUST be able to cover. Rhinocam is too thick to use FF lenses, whereas my adapters are thin enough to even use M-mount rangefinder lenses, although I'd expect better coverage performance from FF SLR lenses than from rangefinder lenses.
I'm not sure how you are achieving it with native format lenses. I have a Mir38B 6x6 lens fitted to an Arax shift adaptor and have done some 2 or 3 image flat stitching on full frame. Originally, I thought flat stitching would be a good solution, but I find the shooting process sufficiently cumbersome that it's put me off.
Yeah, it's off-putting. At least with Budgie it is fast, but you still need a tripod and a motionless scene. Rotating, even with APSC2, is more annoying -- largely because the strap on my camera gets in the way.

BTW, in 1999 I was doing 360 degree stitching from a pair of Nikon 950s with fisheye lenses mounted on an autonomous vehicle (stitching and displaying on a video wall in real time using a cluster supercomputer). Soon after, my prime digital stitching platform was a webcam or other camera on a computer-controlled telescope; it took a while, but got up to about 1GP images using a 640x480 webcam. My flat-scanning capture experiences started about 8 years ago, with a 4x5 press/view camera with my NEX-5 on its back. In sum, I've actually built dozens of stitched capture platforms. Budgie is stunningly painless in comparison to nearly all of them. ;-)
I'm now trying out normal spun stitching as an alternative. The FlexTilt head https://www.redsharknews.com/produc...ssories-that-will-make-your-life-a-lot-easier I have is key to the process because there is no messing about with clamping and unclamping the head, just a quick and easy friction hinge tilt. It makes 2-frame stitched square frames a quick and easy process - easy enough even for my awful field craft skills! All that is required is to shoot one frame straight ahead, then tilt the FlexTilt upwards by 50% of the frame height and shoot a second frame. Lightroom stitches the frames to create square shots perfectly as long as there is nothing too close to the camera with thin, straight lines.
You might get slightly better resolution that way, but the precision of rotating about the correct point and lens distortion correction both seriously impact the quality of stitches. Certainly, both APSC2 and Budgie are at least as quick to use as that.
For added sophistication, I'm experimenting with replacing the 2 frames with 2 sequences of 5-frame HDR blends (also done in LR). The process is: take 1 burst of 5 exposure-bracketed frames, tilt the head 50%, take a second burst. Import the 10 frames to LR and HDR merge each group of 5 frames to create a pair of HDR DNGs. Then step 2 is to Pano merge the 2 HDR DNGs to create a final 64MP HDR square frame. Everything remains raw (or near-raw DNG) and when the final frame is finished, the source frames can be deleted, keeping everything neat and tidy.
* Sigh. * Again, Hugin is free software that can directly make HDR stitches. I don't know why people insist on paying money for commercial software that isn't as good at what they are trying to do.
Good point, and I have Hugin, PTGui and the Microsoft ICE one. The advantage of the LR solution is workflow. Everything stays in raw or DNG and everything can be done within the one program. And once stitched, all the source files can be deleted, so the file system stays neat and tidy. I say this, but I have completed exactly one image using this method! It's early days. I'm currently processing 6 more as I type these words.

The slight irony in all of this is that I no longer want to use Windows and LR as my main platform. I've transitioned to Ubuntu/darktable for all photos shot during the last 2 years. I keep LR because of the 40,000 images I already have in the catalogue. Hence, I have a dual boot PC and maintain two environments. My LR is the last perpetual licence version, so there are no running costs.

In an ideal world, I would not be using LR for these stitches. But I don't know how to recreate the workflow in Linux without having to export duplicate files and step outside darktable. I don't want to go back to my old workflow using multiple programs.

Keep it raw, keep all my workflow from import to printing in the one program -- that has been my motto for a while (except printing from Linux isn't as good as from Windows, another puzzle). If there was a way to do the HDR stacking and pano stitching within darktable, I would.

I've seen hints of clever stuff that can be done with the optional Lua scripts that can be added to darktable, but I can't understand how to set it up and the instructions don't make sense to me.

Edit: another thing I'd love to be able to do from within darktable is frame-averaging long-exposure photo stacking as an alternative to doing single very long exposures. I don't know how to do that without Photoshop (which I don't have). Actually, what I would really like is for my camera to do the frame averaging in-camera, Phase One style, and just give me the single merged raw.

 
Edit: another thing I'd love to be able to do from within darktable is frame-averaging long-exposure photo stacking as an alternative to doing single very long exposures. I don't know how to do that without Photoshop (which I don't have). Actually, what I would really like is for my camera to do the frame averaging in-camera, Phase One style, and just give me the single merged raw.
Download Octave (a free Matlab clone). Run a script something like this:

[attached: screenshot of the Octave averaging script]
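
(The script screenshot itself hasn't survived; going by the description in the replies below -- a two-image average using double/uint8 conversions and .^ gamma operations, with the averaging on line 4 -- a minimal Octave sketch, with hypothetical filenames, would be:)

    a = double(imread('frame1.tif')) / 255;        % read, convert to double, scale to [0,1]
    b = double(imread('frame2.tif')) / 255;
    % linearize (gamma 2.2 -> linear), average, re-encode (linear -> gamma 2.2)
    avg = ((a .^ 2.2 + b .^ 2.2) / 2) .^ (1/2.2);  % the "line #4" discussed below
    imwrite(uint8(255 * avg), 'average.tif');      % uint8 rounds back to 8-bit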


Modify for use with more than 2 images.

I haven't tested the above code. The code I use for averaging is more complicated.

Also, see this:

https://blog.kasson.com/the-last-word/an-mf-camera-in-your-jacket-pocket/
Jim

 
Edit: another thing I'd love to be able to do from within darktable is frame-averaging long-exposure photo stacking as an alternative to doing single very long exposures. I don't know how to do that without Photoshop (which I don't have). Actually, what I would really like is for my camera to do the frame averaging in-camera, Phase One style, and just give me the single merged raw.
Download Octave (a free Matlab clone). Run a script something like this:

[attached: screenshot of the Octave averaging script]


Modify for use with more than 2 images.

I haven't tested the above code. The code I use for averaging is more complicated.

Also, see this:

https://blog.kasson.com/the-last-word/an-mf-camera-in-your-jacket-pocket/

Jim
I don't know anything about maths software although I have an old package in a cupboard (I think it may have been given to me by an ex engineering student friend). I wouldn't know where to start, TBH.

I presume in your little program, the magic happens in line #4? What does that uint function do?





 
Edit: another thing I'd love to be able to do from within darktable is frame-averaging long-exposure photo stacking as an alternative to doing single very long exposures. I don't know how to do that without Photoshop (which I don't have). Actually, what I would really like is for my camera to do the frame averaging in-camera, Phase One style, and just give me the single merged raw.
Download Octave (a free Matlab clone). Run a script something like this:

[attached: screenshot of the Octave averaging script]


Modify for use with more than 2 images.

I haven't tested the above code. The code I use for averaging is more complicated.

Also, see this:

https://blog.kasson.com/the-last-word/an-mf-camera-in-your-jacket-pocket/

Jim
I don't know anything about maths software although I have an old package in a cupboard (I think it may have been given to me by an ex engineering student friend). I wouldn't know where to start, TBH.

I presume in your little program, the magic happens in line #4?
Yes.
What does that uint function do?
uint8 converts to 8-bit unsigned integer representation. double converts to 64-bit floating point. The .^ (exponentiation) operations are to convert from gamma corrected (assuming gamma of 2.2, like Adobe RGB) to linear before averaging, and back after averaging.
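
(A quick numeric illustration of why that round trip matters; the sample values are arbitrary:)

    v = [0.2 0.9];
    mean(v)                    % gamma-domain average -> 0.5500
    mean(v .^ 2.2) ^ (1/2.2)   % linearize, average, re-encode -> about 0.6676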

 
Edit: another thing I'd love to be able to do from within darktable is frame-averaging long-exposure photo stacking as an alternative to doing single very long exposures. I don't know how to do that without Photoshop (which I don't have). Actually, what I would really like is for my camera to do the frame averaging in-camera, Phase One style, and just give me the single merged raw.
Download Octave (a free Matlab clone). Run a script something like this:

[attached: screenshot of the Octave averaging script]


Modify for use with more than 2 images.

I haven't tested the above code. The code I use for averaging is more complicated.

Also, see this:

https://blog.kasson.com/the-last-word/an-mf-camera-in-your-jacket-pocket/

Jim
Hi, Jim. :-)

So, is averaging in the gamma 2.2 domain really right? I would argue that true linear values from multiple supposedly-identical exposures should be averaged linearly -- that's essentially what a longer exposure with a deeper charge bucket would do. Admittedly, it shouldn't make much difference here, and smarter "averaging" would do some filtering of outliers, etc.

The free & easy one-liner I'd suggest for image averaging uses ImageMagick:

convert -average input_files output_file

As for doing it in camera (directly accessing the raw buffer), that's something I've done many times using CHDK, and it is viable using Magic Lantern or OpenMemories too, but none of those is a canned solution and the newest high-end camera supported by any of those is the A7RII (supported by OpenMemories).
 
Edit: another thing I'd love to be able to do from within darktable is frame-averaging long-exposure photo stacking as an alternative to doing single very long exposures. I don't know how to do that without Photoshop (which I don't have). Actually, what I would really like is for my camera to do the frame averaging in-camera, Phase One style, and just give me the single merged raw.
Download Octave (a free Matlab clone). Run a script something like this:

[attached: screenshot of the Octave averaging script]


Modify for use with more than 2 images.

I haven't tested the above code. The code I use for averaging is more complicated.

Also, see this:

https://blog.kasson.com/the-last-word/an-mf-camera-in-your-jacket-pocket/

Jim
I don't know anything about maths software although I have an old package in a cupboard (I think it may have been given to me by an ex engineering student friend). I wouldn't know where to start, TBH.

I presume in your little program, the magic happens in line #4?
Yes.
What does that uint function do?
uint8 converts to 8-bit unsigned integer representation. double converts to 64-bit floating point. The .^ (exponentiation) operations are to convert from gamma corrected (assuming gamma of 2.2, like Adobe RGB) to linear before averaging, and back after averaging.
I suppose I could have a go. What can go wrong? Found that package I have -- it's Matlab v5 for Win95 :-)



 
Edit: another thing I'd love to be able to do from within darktable is frame-averaging long-exposure photo stacking as an alternative to doing single very long exposures. I don't know how to do that without Photoshop (which I don't have). Actually, what I would really like is for my camera to do the frame averaging in-camera, Phase One style, and just give me the single merged raw.
Download Octave (a free Matlab clone). Run a script something like this:

[attached: screenshot of the Octave averaging script]


Modify for use with more than 2 images.

I haven't tested the above code. The code I use for averaging is more complicated.

Also, see this:

https://blog.kasson.com/the-last-word/an-mf-camera-in-your-jacket-pocket/

Jim
Hi, Jim. :-)

So, is averaging in the gamma 2.2 domain really right? I would argue that true linear values from multiple supposedly-identical exposures should be averaged linearly -- that's essentially what a longer exposure with a deeper charge bucket would do. Admittedly, it shouldn't make much difference here, and smarter "averaging" would do some filtering of outliers, etc.

The free & easy one-liner I'd suggest for image averaging uses ImageMagick:

convert -average input_files output_file

As for doing it in camera (directly accessing the raw buffer), that's something I've done many times using CHDK, and it is viable using Magic Lantern or OpenMemories too, but none of those is a canned solution and the newest high-end camera supported by any of those is the A7RII (supported by OpenMemories).
Something else to research :-). It's a real rabbit hole -- the moment you start looking into something new, there are 10 new dependencies you also have to research.

 
Edit: another thing I'd love to be able to do from within darktable is frame-averaging long-exposure photo stacking as an alternative to doing single very long exposures. I don't know how to do that without Photoshop (which I don't have). Actually, what I would really like is for my camera to do the frame averaging in-camera, Phase One style, and just give me the single merged raw.
Download Octave (a free Matlab clone). Run a script something like this:

[attached: screenshot of the Octave averaging script]


Modify for use with more than 2 images.

I haven't tested the above code. The code I use for averaging is more complicated.

Also, see this:

https://blog.kasson.com/the-last-word/an-mf-camera-in-your-jacket-pocket/

Jim
Hi, Jim. :-)

So, is averaging in the gamma 2.2 domain really right?
The program converts what is assumed to be a gamma 2.2 image into linear form before averaging, and reapplies the tone curve at the end.

A gamma 2.2 image is actually encoded with a gamma of 0.45. The convention is that a gamma 2.2 image is encoded for a monitor with a gamma of 2.2, not that it is encoded with an exponential of 2.2.
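
(In other words, for scene-linear L the file stores V = L^(1/2.2), and the display's 2.2 undoes it; a tiny round trip:)

    L = 0.5;
    V = L ^ (1/2.2)   % encoded value written to the file (exponent ~0.45) -> 0.7297
    V ^ 2.2           % the monitor's 2.2 gamma recovers 0.5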
I would argue that true linear values from multiple supposedly-identical exposures should be averaged linearly
I agree.
-- that's essentially what a longer exposure with a deeper charge bucket would do. Admittedly, it shouldn't make much difference here, and smarter "averaging" would do some filtering of outliers, etc.

The free & easy one-liner I'd suggest for image averaging uses ImageMagick:

convert -average input_files output_file

As for doing it in camera (directly accessing the raw buffer), that's something I've done many times using CHDK, and it is viable using Magic Lantern or OpenMemories too, but none of those is a canned solution and the newest high-end camera supported by any of those is the A7RII (supported by OpenMemories).


 
Edit: another thing I'd love to be able to do from within darktable is frame-averaging long-exposure photo stacking as an alternative to doing single very long exposures. I don't know how to do that without Photoshop (which I don't have). Actually, what I would really like is for my camera to do the frame averaging in-camera, Phase One style, and just give me the single merged raw.
Download Octave (a free Matlab clone). Run a script something like this:

[attached: screenshot of the Octave averaging script]


Modify for use with more than 2 images.

I haven't tested the above code. The code I use for averaging is more complicated.

Also, see this:

https://blog.kasson.com/the-last-word/an-mf-camera-in-your-jacket-pocket/

Jim
Hi, Jim. :-)

So, is averaging in the gamma 2.2 domain really right?
The program converts what is assumed to be a gamma 2.2 image into linear form before averaging, and reapplies the tone curve at the end.

A gamma 2.2 image is actually encoded with a gamma of 0.45. The convention is that a gamma 2.2 image is encoded for a monitor with a gamma of 2.2, not that it is encoded with an exponential of 2.2.
So, not a fully legit "uncompressed" TIF as input? Uncompressed TIFs are supposed to be linear gamma as I understand them.... What software makes TIFs with 0.45 gamma?
I would argue that true linear values from multiple supposedly-identical exposures should be averaged linearly
I agree.
Glad to hear that. :-)
 
Edit: another thing I'd love to be able to do from within darktable is frame-averaging long-exposure photo stacking as an alternative to doing single very long exposures. I don't know how to do that without Photoshop (which I don't have). Actually, what I would really like is for my camera to do the frame averaging in-camera, Phase One style, and just give me the single merged raw.
Download Octave (a free Matlab clone). Run a script something like this:

[attached: screenshot of the Octave averaging script]


Modify for use with more than 2 images.

I haven't tested the above code. The code I use for averaging is more complicated.

Also, see this:

https://blog.kasson.com/the-last-word/an-mf-camera-in-your-jacket-pocket/

Jim
Hi, Jim. :-)

So, is averaging in the gamma 2.2 domain really right?
The program converts what is assumed to be a gamma 2.2 image into linear form before averaging, and reapplies the tone curve at the end.

A gamma 2.2 image is actually encoded with a gamma of 0.45. The convention is that a gamma 2.2 image is encoded for a monitor with a gamma of 2.2, not that it is encoded with an exponential of 2.2.
So, not a fully legit "uncompressed" TIF as input?
In Ps and Lr, TIFF compression does not refer to the tone curve, but to file size compression algorithms.
Uncompressed TIFs are supposed to be linear gamma as I understand them....
Not those written by Ps and Lr.
What software makes TIFs with 0.45 gamma?
The code above was intended to be used with files written from Lr in Adobe RGB. Lr encodes Adobe RGB files with an exponential of 0.45. Normalizing to unity and raising those images to a power of 2.2 makes them linear.

See this:


Precise exponents are 2.19921875 and 0.454706927175844.
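
(Those two are exact reciprocals -- 2.19921875 is 563/256, the gamma defined in the Adobe RGB (1998) specification:)

    g = 563 / 256    % -> 2.19921875
    1 / g            % -> 0.454706927175844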

Jim

--
https://blog.kasson.com
 
Hi Prof

Thank you for such a comprehensive reply.

I'm a bit confused about what the Budgie does. Is it basically a standard shift adaptor or is it doing something else?
It's a very thin shift adapter.
If it's a shift adaptor, wouldn't that require a medium format lens to get enough coverage?
Ah, you'd think so, right?

The diagonal of a 36x24mm frame is 43.3mm, so you know a lens designed for FF should have decent IQ in a circle of at least that diameter. A square inscribed in that is 30.6mm on a side, so that should work with all FF lenses that don't have extra masks built-in. However, having good IQ 21.6mm off axis typically means coverage is quite a bit more than that, with IQ dropping off as you go farther off axis until it finally vignettes into darkness. Typically, coverage also increases as you focus closer.
I rely on FF lenses that project oversize image circles to provide in-camera movements when they're used with my FF "FrankenKamera," which has a rear rise/fall mechanism built in and basically makes every lens mounted on it a shift lens.

Over the years, I've found there is a strong (but not absolute!) correlation between the shape of the MTF curve of a lens and the size of the image circle it projects (which is basically what ProfHankD is suggesting in his comment).

This correlation can be helpful when looking for lenses that project oversize image circles because lenses that maintain a high MTF value and/or a relatively flat curve out to 20 mm or so will often (but not always!) perform fairly well for several more millimeters beyond the 21.6 mm cut-off line.

Which means one doesn't necessarily need to test every lens to identify those projecting oversize image circles, but merely examine their MTF curves and see how well they hold up at the far side.

(As a related aside, I also suspect some manufacturers -- especially Sigma, with their Art series lenses -- intentionally design their lenses to project oversize image circles in order to increase the size of their center "sweet spot.")
Most of my FF lenses are fine covering 36x36mm at infinity. Covering 48x36mm well is asking a tad too much from most FF lenses, although at closer focus quite a few can pull it off. It seems vignetting is more often the limit than aberrations. That's a bit different from -- and happier than -- using C-mount lenses to cover MFT, where IQ, and especially field curvature, usually falls apart long before the image goes dark.
Also be aware that, owing to the nature of their design, zoom lenses, in particular, often project surprisingly large image circles across the middle of their ranges. In fact, I have a couple that will provide for as much as +/- 12 mm at the midpoints of their ranges, with a few additional millimeters available beyond that if one can accept a reduction in IQ (as is often possible with my nighttime photography, where heavy vignetting and a significant drop in IQ frequently disappear into a jet black sky or inky black shadow areas near the farthest corners.)
 
The proof of the pudding is in the eating, as they say.

This image is an attempt to simulate a 36x36mm square 60MP medium format image. It was made from 10 frames combined to produce an HDR pano in LR, saved as a DNG. The DNG was imported into Ubuntu/darktable, edited and output as an 85% JPEG.

[attached test image]

Not the greatest weather for testing (threatening rain, overcast, dull). Colour is a bit odd as well on my screen. But the question is: in terms of resolution, dynamic range, shadow noise etc., is this remotely medium-format-like, or is this all a lot of effort for no gain?



 