Eggplantt wrote:
As my interest is occasionally piqued by the prospect of 60mm-wide line scan cameras for a medium format camera, I thought I'd return, as this seems pretty exciting. Reading the thread again, I realise some of my questions were answered before my post.
ProfHankD wrote:
eggplanted wrote:
I've thought about something like this. But the question remains- why not multiple sensors? If they are that low cost, then I see little reason not to explore this.
That's always been on the list... however, it's not as big a win as you'd hope because we are trying to optimize the scan order dynamically, and that isn't as effective with multiple sensors in fixed relative orientation.
I think you'd consider our current stitching scheme an unbearable nightmare because it handles per-pixel-value certainty computations for merging and can dynamically change the order. A spiral scan is one of the simplest reasonable fixed patterns. We had built gigapixel stitching hardware more than a decade ago using a computer-controlled telescope mount -- even that used a modified Hilbert curve scan order.
I'm not familiar with scan ordering techniques, so I'm trying to get up to speed as much as possible on "angle, radius scan control".
Classical scanning was done in a pre-determined, fixed, order. For a line scanner, it's just a simple sweep. For a scanner using a point or rectangular sensor, it's a raster scan much like that used in a TV set -- a control loop like:
for (y=0; y<Y; y=y+1) for (x=0; x<X; x=x+1) sample(x,y);
The good news is that order has sample(5,8) taken just one time unit after sample(4,8) and one before sample(6,8). However, that order also means sample(5,7) and sample(5,9) are temporally separated from sample(5,8) by X units of time. With a potentially changing scene, that means there is a high probability that scene content has changed and you'll get stitch errors. In fact, this same scan pattern is what causes the distortions associated with focal plane shutters and rolling electronic shutter. Thus, when I did my first gigapixel scanners in the 2000s, I used more complex scan orders that typically have much smaller temporal differences between when neighboring locations are sampled. For example, the Hilbert curve:
A Hilbert curve, from Wikipedia CC BY-SA 4.0 by TimSauder
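To make the temporal-locality argument concrete, here's a small Python sketch (my own illustration, not code from any scanner) comparing raster order against the Hilbert curve order pictured above on a 16x16 grid. It uses the standard iterative distance-to-coordinate conversion and measures the time gap between grid-adjacent samples:

```python
import statistics

def d2xy(n, d):
    """Map distance d along a Hilbert curve to (x, y) on an n x n grid
    (n a power of 2); standard iterative algorithm."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

N = 16
hilbert_time = {d2xy(N, d): d for d in range(N * N)}
raster_time = {(x, y): y * N + x for y in range(N) for x in range(N)}

def neighbor_gaps(t):
    """Temporal gap |t(p) - t(q)| over every pair of grid-adjacent samples."""
    return [abs(t[(x, y)] - t[(nx, ny)])
            for (x, y) in t
            for (nx, ny) in ((x + 1, y), (x, y + 1)) if (nx, ny) in t]

raster_median = statistics.median(neighbor_gaps(raster_time))    # vertical neighbors are N apart
hilbert_median = statistics.median(neighbor_gaps(hilbert_time))  # most neighbors sampled back to back
```

In raster order, half of all adjacent pairs (the vertical ones) are N time units apart; in Hilbert order the majority of adjacent pairs are sampled consecutively, which is exactly the property that reduces stitch errors on a changing scene.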
For lafodis160, the actual drive system runs in polar coordinates: you have 360 degrees of rotation and 80mm of radial movement (moving 0 to 80 mm from the center). Despite that, you can still drive it in a raster scan or in a scan like the Hilbert curve above, but the sensor will be changing its angle throughout the scan -- which makes stitching harder, but helps de-correlate any sensor defects in adjacent sample positions. There are also more efficient scan orders possible using the angle, radius drive, such as spirals and Hilbert-like curves mapped to polar coordinates.
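As a rough illustration of a spiral in those (angle, radius) drive coordinates, here's a Python sketch of an Archimedean-spiral scan stepped at approximately constant path length; the parameter values are mine for illustration, not Lafodis160's actual drive settings:

```python
import math

def spiral_scan(r_max_mm=80.0, pitch_mm=2.0, step_mm=2.0):
    """Sample positions (angle_deg, radius_mm) for an Archimedean spiral
    r = a * theta, stepped at roughly constant arc length.
    Parameters are illustrative, not real drive settings."""
    a = pitch_mm / (2 * math.pi)          # radial advance per radian
    theta, positions = 0.0, []
    while a * theta <= r_max_mm:
        r = a * theta
        positions.append((math.degrees(theta) % 360.0, r))
        # Constant path speed: ds = sqrt(r^2 + a^2) * dtheta, solved for dtheta.
        theta += step_mm / math.sqrt(r * r + a * a)
    return positions
```

Note how near the center the drive must spin through large angles per millimeter of path, while near the rim one degree of rotation covers far more ground -- one reason fixed multi-sensor layouts are awkward in these coordinates.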
By having multiple sensors in a fixed relative orientation that might rotate in a sweep, it does prevent the dynamic movement this camera offers, but what are you trying to do when you are optimizing the scan order dynamically?
Adjacent samples actually overlap a bit to facilitate alignment. Thus, we can tell when the scene content for a previously-sampled portion of the scene has changed in a way that would cause a glitch in the stitching. We also know when neighboring samples were captured, hence can predict which ones are likely to be affected by the scene change. Thus, some already-sampled locations should be resampled... and that's the main reason the algorithm should dynamically change the scan order. However, it also can detect where portions of the scene are essentially content-free -- such as an evenly blue sky -- and could scan those areas faster with lower resolution and quality, just sufficient to confirm that there is "nothing to see here."
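The re-ordering idea above can be sketched with a priority queue: tiles start in a fixed base order, and when a freshly captured tile's overlap disagrees with an already-stitched neighbor, that neighbor is pushed back in ahead of the remaining base order. This is purely my hypothetical sketch of the concept, not the actual Lafodis160 software:

```python
import heapq
import itertools

class DynamicScanQueue:
    """Hypothetical sketch: a scan order that starts fixed but lets the
    stitcher promote re-samples of already-captured tiles."""
    def __init__(self, base_order):
        self._counter = itertools.count()          # tie-breaker for equal priorities
        self._heap = [(priority, next(self._counter), tile)
                      for priority, tile in enumerate(base_order)]
        heapq.heapify(self._heap)
        self._queued = set(base_order)

    def next_tile(self):
        """Return the next tile to sample, or None when the scan is done."""
        while self._heap:
            _, _, tile = heapq.heappop(self._heap)
            if tile in self._queued:
                self._queued.discard(tile)
                return tile
        return None

    def request_rescan(self, tile):
        """Called when overlap checking suggests this tile's scene content
        changed; negative priority jumps ahead of the remaining base order."""
        if tile not in self._queued:               # no-op if not yet sampled
            self._queued.add(tile)
            heapq.heappush(self._heap, (-1, next(self._counter), tile))
```

A real version would set the rescan priority from when the tile and its neighbors were captured, and could likewise deprioritize content-free regions, but the queue mechanics are the same.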
I can't quite understand this in terms of the sensor's behaviour at the time of exposure. Simply put, how does it move during exposure?
By "dynamically changing the order" of the scan during exposure based on a certain value, I personally would envision the scan order in a very slow camera like this prioritizing movement in an image-
The priority is internal consistency of scene details in the stitched resulting image.
that is, the sensor moves to scan the next part of the image based on where the most motion blur/movement in the merged image was detected.
We might have it detect motion blur, but that's not a priority. The current goal is simply making sure that adjacent samples will integrate seamlessly.
Once I understand the need to optimize the scan order dynamically, I'll have an answer to why using multiple sensors isn't effective, because from where I'm coming from, even if it's difficult, it's going to drastically speed up exposure times.
The easy way to use multiple sensors is to move them together, and the polar coordinate system means that the utility of movements will vary with radius. If multiple sensors are lined up at different radius offsets, the inner ones will be moving less than the outer ones -- so, we could use superresolution techniques to get higher resolution and improve SNR near the center of the stitched image, but that's not really a huge help. Placing multiple sensors on an arc so that they all have the same radius would work better, but would mean that all sensors would have to "go along for the ride" if any one needed to re-sample a position in the dynamic ordering... so there is a lot more complexity to defining the optimal scan motions.
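The radius dependence is easy to quantify: for a rotation step, each sensor's path length is just radius times the angle in radians, so radially offset sensors sample at badly mismatched rates. A tiny Python illustration (my numbers, chosen only as examples):

```python
import math

def sweep_travel_mm(radius_mm, rotation_deg):
    """Path length a sensor at a given radius covers for one rotation step."""
    return radius_mm * math.radians(rotation_deg)

# Sensors ganged at different radial offsets cover very different path
# lengths per degree of rotation -- 8x difference for this pair.
inner = sweep_travel_mm(10, 1.0)   # ~0.17 mm per degree
outer = sweep_travel_mm(80, 1.0)   # ~1.40 mm per degree
```

An equal-radius arc mount makes these travel distances identical, which is why it works better -- at the cost of coupling every sensor's motion to any one sensor's re-sample request.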
Also imagine using Lafodis160 as a security camera. You can use a second, low-res, camera to detect scene change and then drive Lafodis160 to sample what's interesting at full resolution... perhaps even tracking subject motion, but with no external camera motion....
In order to call this a success we have to agree that imaging at 1MP scan speed for a 500MP image (at the least) is an acceptable rate, and I struggle to agree, even if your scan method is vastly superior to the linked Pi camera. I'm sure you're right about larger sensors being more difficult, but I'd be curious for an elaboration.
Well, if budget wasn't a big concern, a Sigma fp L would make a great scan sensor.
That would reduce the maximum "native" (non-superresolution) resolution to about 1.5GP, but scan speed and IQ would undoubtedly be way higher.
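A back-of-envelope check of that ~1.5GP figure, under my assumption that it comes from tiling the fp L's 61MP full-frame sensor over the 160mm-diameter circular field swept by 360 degrees of rotation and 80mm of radial travel:

```python
import math

# Assumed inputs: Sigma fp L is 61MP on a 36x24mm sensor; the scan field is
# a circle of radius 80mm. These are my assumptions, not the author's math.
fp_l_megapixels = 61
sensor_area_mm2 = 36 * 24
field_area_mm2 = math.pi * 80 ** 2
native_gigapixels = fp_l_megapixels * field_area_mm2 / sensor_area_mm2 / 1000
```

That lands around 1.4GP, consistent with "about 1.5GP" before any superresolution.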
A line scan camera should always have the benefit of speed, and it's just a matter of waiting out the market to acquire resolution and sensor size. To do this affordably, I envision using multiples of the same model, as single high-resolution, large-sensor cameras are not coming down in price, and top out at about 12-16k anyway.
Line scan has good temporal properties, but large-format line scan sensors are not commodity products. It would actually be easier to build a full-size large-format sensor. The LargeSense folks hit a surprisingly good price point, although their cameras have HUGE pixels and hence low resolution... which isn't unreasonable if you think about how images are really used.
The GFX100's 100MP mode has an image height of 8736 pixels; you could get close with four 2k line scan cameras. The 400MP mode is 17,472 pixels high; you could get close with four 4k line scan cameras.
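The stacking arithmetic checks out to within about 6% in both cases (a quick sanity check, assuming the line sensors butt together with no gap):

```python
# Stacking line-scan sensors to approximate GFX100 image heights.
gfx100_height = 8736            # 100MP mode image height, pixels
gfx100_400mp_height = 17472     # 400MP pixel-shift mode image height
four_2k = 4 * 2048              # 8192 px: ~6% short of 8736
four_4k = 4 * 4096              # 16384 px: ~6% short of 17472
shortfall_100mp = (gfx100_height - four_2k) / gfx100_height
shortfall_400mp = (gfx100_400mp_height - four_4k) / gfx100_400mp_height
```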
No, it's not the 500MP-2.6GP that yours offers, but if you wanted to give up some speed you could maybe use them with a scan order as well.
Well, that's only 44x33mm, which is even easier to do as a simple 3D-printed manual shifter for a FF camera -- which I've built and will soon post too. The upcoming "Budgie" allows up to 48x36mm capture using a FF E (FE) body and a lens that can be adapted to Leica M mount. Most FF lenses don't quite cover 48x36mm, and those that do still tend to have lousy corners, but here's one of my first test shots using Budgie on my A7RII:
Budgie on an A7RII, 3 OOC JPEGs stitched using Hugin (25% to minimize posted file size)
The above JPEG is full resolution, but I compressed it using 25% quality to keep upload bandwidth reasonable... any blocky artifacts in the OOF regions are from that, not from the stitching. Also keep in mind that any IBIS-based multi-shot high-res modes will still work with this, so on an A7RIII or A7RIV....
As far as I'm concerned, this is just another low-hanging fruit on the way to a new generation of scanning technology... sort of like the APSC2 (APS-C Squared) Rotate-and-Stitch Adapter. Hopefully, people will get comfortable making and using these simpler devices and thus become comfortable with things like Lafodis160, which is currently using really cheap parts, but is really at least a full generation ahead of any previous scanning tech... and Lafodis160 is still coming as a fully open source DIY design. I have two undergraduate students working this Summer on making Lafodis160's software better and more user-friendly....