Space constraints in the thin bodies of modern smartphones mean camera engineers are limited in terms of the size of image sensors they can use in their designs. Manufacturers have therefore been pushing computational imaging methods to improve the quality of their devices' image output.
Google's Super Resolution algorithm is one such method. It involves shooting a burst of raw photos every time the shutter is pressed and takes advantage of the user's natural hand-shake, even if it is ever so slight. The pixel-level differences between the frames in the burst can be used to merge several images of the burst into an output file with optimized detail at each pixel location.
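As a toy illustration of why hand shake helps, the sub-pixel offsets between burst frames can be estimated with phase correlation. This is only a sketch of the measurement step, assuming OpenCV is available; it is not Google's raw-domain, motion-robust merge, and the file names are placeholders.

```python
# Toy check of the key ingredient described above: hand shake gives each
# burst frame a slightly different, non-integer (sub-pixel) offset, which
# is what lets multiple frames contribute independent samples per pixel.
import cv2
import numpy as np

burst = [cv2.imread(f"burst_{i}.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)
         for i in range(4)]                      # placeholder file names

ref = burst[0]
for i, frame in enumerate(burst[1:], start=1):
    (dx, dy), response = cv2.phaseCorrelate(ref, frame)
    print(f"frame {i}: shift ≈ ({dx:+.2f}, {dy:+.2f}) px, response {response:.2f}")
# Typical handheld output: fractional shifts such as (+0.37, -0.82), which a
# robust merge can exploit to recover detail between the original pixel sites.
```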
An illustration that shows how multiple frames are aligned to create the final image.
A visual representation of the steps used to create the final image from a burst of Raw input images.
Now Google has published the above video, which provides a great overview of the technology in just over three minutes.
'This approach, which includes no explicit demosaicing step, serves to both increase image resolution and boost signal to noise ratio,' write the Google researchers in the paper the video is based on. 'Our algorithm is robust to challenging scene conditions: local motion, occlusion, or scene changes. It runs at 100 milliseconds per 12-megapixel RAW input burst frame on mass-produced mobile phones.'
I think people can increase the resolution and contrast of their images more by using a cleaning cloth on the lens from time to time. The term super resolution is good for marketing, but diffraction blurs each point of the optical signal into a disk of a certain diameter. Common smartphones where one would use this technology have roughly these specs:
- f/1.8 optics
- 12MP on a 1/2.55" sensor
- 1.4μm pixel size
This leads to a diffraction disk diameter of 2 × 600nm × f-number = 2.16μm. The diffraction disk is already bigger than a pixel.
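To put numbers on that argument, here is the commenter's rule of thumb as a quick calculation (the full Airy-disk diameter would use a factor of 2.44·λ·N rather than 2·λ·N; the numbers below are the ones quoted in the comment):

```python
# Quick check of the rule of thumb above: blur diameter ≈ 2 * wavelength * f-number.
wavelength_nm = 600        # mid-visible light
f_number = 1.8             # typical smartphone main camera
pixel_pitch_um = 1.4       # e.g. 12MP on a 1/2.55" sensor

blur_diameter_um = 2 * wavelength_nm * f_number / 1000
print(f"diffraction blur ≈ {blur_diameter_um:.2f} µm vs. pixel pitch {pixel_pitch_um} µm")
# diffraction blur ≈ 2.16 µm vs. pixel pitch 1.4 µm
```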
Why not just use a 20MP sensor and be done with the "superresolution" talk? The diffraction would take care of the moire artefacts naturally.
The noise reduction part of the image stacking, however, seems very useful!
I think this goes hand-in-hand. If a picture is cleaned of noise (and also demosaicking artefacts) with the help of the super resolution workflow, then the ground is laid for reducing diffraction blur computationally.
Pentax does that for its super resolution (4 pictures, offset by a full pixel width). This reduces noise and demosaicking artefacts, and after that some special in-camera sharpening can kick in to increase sharpness, i.e. create the illusion of higher resolution.
The Pentax "super resolution" sharpening is a bit of a cheat though, I think. They don't do the full deconvolution math (a camera CPU is way too slow for that), but create some artificial aliasing or so.
Pixel shift / super resolution is a fool's feature on any camera, including Google phones, and this for two reasons: 1) It requires an absolutely still scene, including everything that is in the scene. Yes, the algorithm will remove artifacts (and the higher resolution) in areas that moved, but then if some parts of the image are high res. and other parts low res., what's the point of high res?
2) All lenses have their MTF (resolution) drop from the center toward the edges, and super resolution can't improve resolution beyond what the lens can provide. Usually, edges aren't pixel sharp in single frames, especially with small sensors with high pixel density such as the ones used on phones; pixel shift/super res. won't change that.
Yes, the algorithm does motion compensation by using single-frame pixels where motion happened; in that case super resolution isn't achieved and the resolution is that of a single frame. So in case of subject motion, some parts of the image are slightly better than others. Also, when the lens is stopped down, diffraction reduces the resolution before the light falls on the sensor, and in that case it doesn't matter how many frames are taken; the resolution is limited by lens diffraction. You need to understand these phenomena in order not to get sold on something that actually only works in very rare conditions.
Google's super resolution can indeed get more resolution out of their specially designed lens, so of course they already consider diffraction and other imperfections of the optics. By the way, Google's stacking algorithms are much more advanced than you think, as they can detect motion and compensate accordingly so they don't waste too many photos.
No one can get around diffraction, Google included. No matter how many frames are taken and how good the algorithm is, it is not possible to resolve more than what diffraction allows. Yes, they detect motion between frames, so in the areas where motion happens the resolution drops. Understood?
For a "normal" smart-phone, diffraction is greater than pixel size, 1.5-2 times. The "shift"-ing of sensor with 0.5 pixel size will keep in the same diffraction cercle. Sure, some differences will be captured - aka "more" data, but the real resolution is still 2-4 times less (aka a pixel is half diffraction cercle, 0.5x0.5=0.25).
If these new technologies were applied to larger sensor sizes (MFT, APS-C, FF, MF...), then a really better image could be acquired. Another solution: move the computing part of a smartphone into a real camera, even a "smaller" one.
This reminds me of the Samsung NX line...
And then, who needs 20+ Mpix when you "see" it on a TV? FHD has only 2 Mpix; a 4K TV has 8 Mpix.
Real professionals know what hardware they need...
You can use a larger aperture to get over small-pixel diffraction, and software can also compensate quite a bit. The IMX586 with its 0.8μm pixel size works OK-ish with F1.6.
Sounds very much like an adapted version of the good old dithering + stacking methods used in astrophotography. Very effective at boosting the signal-to-noise ratio.
Old school photographers and hipsters will run and hide in their darkroom if they hear the word 'computational'. This is too much science and technology for them. :)
These techniques are used by photographers all the time to lower noise and remove moving objects like people and cars from photos. I have used them in Photoshop myself on numerous occasions.
Because there's literally no science or technology involved in converting photons into a printed image using film, chemistry and paper, is there? It's just "magic"...
You obviously never bought any of the plethora of cheap 35mm cameras in the 1970s & 80s with plastic lenses, a free film that you got back in the post with your cheap film developing - some of the results were downright appalling - something pretty much any smartphone in the last 5 years can beat...
I installed a clone of Google Camera on my Pocophone F1. This clone was made in March 2019 and is based on not the latest, but still fresh, Google Camera code. The difference in detail, color and WB between the original (Xiaomi) camera and Google's is huge. Now, with "Google Camera" on my Pocophone, I take many more photos on my smartphone. And there is enough detail to do photo editing now (if needed).
3 years ago, I did an experiment with my Sony a7rii. I took five landscape photos handheld at 35mm and f8 during the day in RAW. I exported them to TIFF files with DxO Optics Pro. Then I used another piece of software to upsample the resolution from 42MP to 84MP. Then I aligned and stacked the photos into one 84MP photo. At 100%, the 84MP photo actually showed more detail than any of the 42MP photos. I also downsampled it back to 42MP and it showed more detail than the single photos at 100%.
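For anyone wanting to reproduce that workflow without dedicated software, here is a minimal sketch of the upsample, align and average steps, assuming OpenCV; the file names are placeholders and the alignment uses a simple Euclidean (shift + rotation) model.

```python
import cv2
import numpy as np

# Minimal upsample -> align -> average sketch (file names are placeholders).
frames = [cv2.imread(f"frame_{i}.tif") for i in range(5)]
up = [cv2.resize(f, None, fx=2, fy=2, interpolation=cv2.INTER_LANCZOS4).astype(np.float32)
      for f in frames]

ref_gray = cv2.cvtColor(up[0], cv2.COLOR_BGR2GRAY)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)
stack = [up[0]]

for img in up[1:]:
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    warp = np.eye(2, 3, dtype=np.float32)
    # ECC estimates the Euclidean warp that maps `gray` onto `ref_gray`
    _, warp = cv2.findTransformECC(ref_gray, gray, warp,
                                   cv2.MOTION_EUCLIDEAN, criteria)
    aligned = cv2.warpAffine(img, warp, (img.shape[1], img.shape[0]),
                             flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
    stack.append(aligned)

merged = np.mean(stack, axis=0)          # simple averaging: lifts SNR, keeps detail
cv2.imwrite("stacked_2x.tif", np.clip(merged, 0, 255).astype(np.uint8))
```

On full-size files this is slow, and plain averaging is far less robust to subject motion than Google's merge, but it reproduces the noise and detail gains described above.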
It works and this idea has been used in a similar but much more complex way for a long time in astrophotography.
Fortunately, for me, there isn't a need for 84MP photos... although I wonder how a crop of an 84MP photo would compare to a single 42MP photo of the same subject at a closer distance. Or better yet, if I did this test with my Sony APS-C a5100 (a 24MP camera), how would the processed, upsampled and stacked photo compare with one shot on my a7rii (using the same FF lens)?
Unless it's for scientific purposes where the extra detail is needed, the only thing that matters in the end is whether or not you captured a compelling image.
The other benefit (apart from the questionable resolution increase) could be the noise reduction effect of merging multiple pictures. Also elimination / reduction of moire. So the general idea of merging multiple hand-held shots is still interesting.
Even if we consider that under museum/gallery conditions (viewing distances) the human eye only utilizes about 6 megapixels, where the super resolution benefit is long gone, noise or moire would still be visible.
This works, as you mentioned, on landscapes and still scenes. But most of the time you want to take a picture of some action, sports, etc., and I do not believe this technique gives a good result there.
Yes, correct, but for action & sports neither resolution nor noise are of the slightest interest, because these pictures only ever appear in small media at low resolution, for an audience with no pixel-peeping intent whatsoever.
So for sports and action, even a Nikon D3 with its 12 megapixels is probably an unnecessary resolution luxury, never exploited by any media or audience.
Given that, nobody should bother with resolution-increasing techniques there anyway, or they would be insane ;)
I have also stacked 10 photos of a night landscape using a small tripod... no upsampling, just convert, align and stack... the result was a clean image with significant noise reduction. I agree that all this was educational, but it is of limited use unless it's for science or you plan to do a mural. It would still be nice to know if this technique would be good for a small and cheap APS-C camera (like my Sony a5100). I will perhaps do those tests this summer. It could mean great 24MP images... maybe even upsampled to 42MP... whether or not it matches an image taken with the a7rii (10x the price, 2x the weight/size)... we shall see. My a7rii can be a pain in the butt... a Sony a5100 with a small Leica lens makes it ideal for long hikes/climbs and even street photography. But the images can sometimes be smartphone-like depending on lighting. I wonder if images can be stacked to increase resolution and/or reduce noise, or even stitched together to make larger-MP photos... which can be downsampled.
Or you are working for a covert govt organization and need to take photos and video from 100km away and/or 10km up in the sky... in the dark. Then IBIS, AF, upsampling, NR, extreme optical quality, and algos to reduce atmospheric haze and extract detail are paramount.
I got the new Pixel 3. Daylight photos, while ok for cellphone viewing, show unacceptable mushiness and artifacts viewed even just on an iPad screen, so color me unimpressed. Google should spend more effort on just regular daylight imaging first.
@DarnGoodPhotos, well I thought the point of computational photography by Google was to do all that better. Me having to edit a single raw from the series rather defeats the entire purpose, doesn't it? Last year's crop of iOS devices, unlike the new Pixels (which I use), give pictures that don't fall apart at iPad size, which for me reflects really badly on Google's algorithms. If I don't have a camera with me, I will choose to take a photo with my iPad before the Pixel, because I can actually view/publish the iPad photo at a larger size. Pixel pics are only for cellphone viewing.
No, the point of computational photography is to automate the exposure stacking. C-Raw is just before NR is applied; so you get all the benefits of stacking and other C-P tricks combined with the ability to edit much further than JPGs.
If a camera can shoot a couple of photos per second, this can be done with any of them. Such cameras have existed for at least 10-15 years at the consumer level. The point is that now it is offered as a fully automated process in some cameras. If you lust after fine detail, maybe seek something other than a phone camera or compact camera.
So, you think that analogue images (i.e, film) were not processed? They certainly were, just in a different way. There's nothing about film that's not processed, from the exposure, burning/dodging, printing on certain papers, digital scanning, etc., etc.
Well, you are right about the overprocessing. But one thing this technology can achieve is reducing moire and false-color artefacts, which came with digital sensors. This brings us closer to the quality of analog film, which didn't have these issues.
So it seems people are even defending the heavily processed images :(
The arguments do not, however, seem to address the HEAVILY processed part, but never mind, you are winning; the time when camera manufacturers tried to make images look natural is ending.
It's interesting; I've been having similar thoughts for a while, but I also think it goes both ways. For one, I see sensors like Fuji's X-Trans, which to me produce some kind of artificial-looking images, while others say that to them they look more "analogue". Colors are another important part of how we 'feel' or interpret images.
For my tastes, Panasonic and Nikon are a bit over the top regarding their color science (cameras like the Canon 5D II just seemed more natural color-wise to me), while others love the cleaner or more 'separated' look of the newer mirrorless system cameras. Undoubtedly the video qualities of a camera like the GH5 are really loved by many, although the colors are probably more pleasing than they are real.
I think film has certain qualities (e. g. highlight roll-off etc.) that are mostly missing from digitally captured images (except sharpness which really is unrivalled, with good enough glass and sensor).
Sony have been doing this for years. It appeared in the NEX cameras when they were first introduced. They have a mode that takes multiple images and recompiles them in the same way as the Google program. At the time it was first released I argued that Sony should have made far more noise about it. Canon, a little later, came out with a version in some of their compact cameras, although it didn't work as well as the Sony version. The Sony version is restricted to JPEG.
I don't think they use neural networks like Google does. They use more traditional processes that are less advanced, and some that are more specialized (e.g. multi-shot NR).
He meant a compute capability that can be found in smartphones. The processors in cameras are ASICs, which are very efficient and cheap but lack the flexibility of the systems-on-chip (SoCs) in smartphones or PCs.
And why would you need that kind of flexibility? Most of this should be completed on a generic DSP like Qualcomm's and especially so if you design a more specific-use DSP.
"difficult, as the RX100 does not have a mini-computer attached to it ;) "
It's easy enough to put powerful hardware in the RX100; it's not like they aren't charging enough for the Mark V. But I think the bigger issue is that the software is written for Android, so Sony would need to change the RX100's OS.
DJI does a much better version of up-rezzing on their Mavic 2 Zoom, creating good 48MP photos from its 12MP camera by moving the camera around and then stitching the shots into a much higher resolution photo.
Interesting. I saw this video yesterday. Traditional camera manufacturers should take notice. See, if they can implement this, they can do without the expensive IBIS. IBIS is certainly superior for stabilization, but this technique has the benefit of increased resolution and reduced moire. If implemented, a handheld shot could be sharper than one from a camera on a tripod.
If you get Canon/Nikon/Sony to marry these bleeding-edge algorithms with their superior IBIS / lens IS technologies, the results would be staggering. I'm talking indoor 1/800s f/8 ISO 400 handheld kind of staggering.
Try reading up on current full frame lighting optimization, lens correction and de-noising algorithms, and you'll quickly realize that big camera brands are 10-15 years behind what Google is doing here.
@Michael B 66 Still, better image stacking algorithms help a lot, especially when you have a good sensor and optics. It's not just marketing. I hope camera manufacturers will learn from Google, put in more powerful image processing chips and use more advanced processing like what can be found in Google's/Apple's phone cameras. I mean heck, it might even be possible for a 1" compact camera to match one with an APS-C sensor with algorithms like this, and the readout speed shouldn't be a problem with a stacked, DRAM-integrated sensor such as the one in the RX100 M5A.
Sadly Thoughts R US is right - DSLRs have moved on to mirrorless, but the market & innovation are getting smaller & smaller, whereas mass-market smartphone "computational photography" is making a lot of progress. Unless the big manufacturers make some big changes, then 5 years from now I reckon most will be gone or merged/swallowed up.
Looks good. 1/10s per raw 12MP frame is pretty fast on a cell phone processor... but it must be a pretty fast cell phone processor or have some hardware assist. The ARM cores used inside some Canons can't even read 12M 12-bit pixel values in 1/10s! However, if you can grab a raw burst, you could do this in post....
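To put a rough number on that claim, here is a back-of-the-envelope calculation using the 12MP, 12-bit and 1/10s figures mentioned in the thread:

```python
# Back-of-the-envelope throughput needed just to touch every pixel of a
# 12MP, 12-bit raw frame within the 100 ms per-frame budget quoted above.
pixels = 12_000_000
bits_per_pixel = 12
budget_s = 0.100

bytes_per_frame = pixels * bits_per_pixel / 8        # = 18 MB per frame
throughput_mb_s = bytes_per_frame / budget_s / 1e6   # = 180 MB/s sustained
print(f"{bytes_per_frame / 1e6:.0f} MB per frame -> {throughput_mb_s:.0f} MB/s")
```

And that is before any alignment or merging work, which touches each pixel many more times.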
Yes, one problem that traditional camera manufacturers have is that their processors really are not nearly as powerful as those used in smartphones, and thus will not be able to do the same level of computations.
The M50 is way faster at converting a raw image into a JPEG than my PC (3rd-gen quad-core Core i7), so they must have put some routines in silicon. I am sure that the camera modules of modern smartphones have that too.
Maybe camera manufacturers should put some effort into lens design - why do a lot of computational manipulation (and there is still some amount) in post-processing?
As a physicist I always prefer good data, and the full amount of it - in camera speak, that leads to using raw exclusively.
All the Canon cameras I've crawled inside (about 20 models so far via CHDK or ML) have JPEG 1996 hardware inside, and probably every dedicated camera does... which is why pretty much no cameras implement JPEG 2000. Cell phones tend to have GPUs, so they can accelerate using that more general-purpose hardware. I think most camera companies still haven't accepted that digital cameras are computers, so they just do incremental enhancements of now-ancient computing resources. A few approached it the other way around, but Sony is really the only survivor from that set (e.g., Casio and Samsung also went heavier on the compute resources, but neither makes high-end cameras now). I don't know if Sony has a GPU in their cameras or if they just use ARM NEON and some dedicated function units.
The main problem is not the processor. The sensor is good enough without combining exposures. The problem is that dedicated cameras have bad algorithms (white balance, exposure, tone mapping).
Is there software (or a phone app) that can do this with short video files? If it can be done with a phone, I imagine one can do it with 4K (or better) video files as well. It would also be manufacturer independent.
And I just bought a new iPhone. WHY AM I SUCH AN IDIOT!!!!! Take me home Jesus, I’m too stupid to live in this world, but buying the wrong phone probably means I’m too dumb for Heaven. On the other hand I’ll probably be with more of my friends in Hell.
You're still getting a better pair of cameras than the Pixel's, all things (video recording, the presence of a telephoto lens, excellent compatibility with the LR Mobile camera, etc.) considered.
Religion (and regret) are much older than 2000 years. I was using Jesus as a stand-in for all the great religious figures past, present, and future. You can't be too inclusive nowadays.
@SmilerGrogan: At least you have the certainty that almost all apps worth using and almost all approaches to computational photography are available on your phone. In good quality too, and often with an alternative or two available.
I still use my iPhone 8 Plus as my walkabout camera - and my roughly five-year-old iPhone 6 is my spare phone/storage/pro digital audio recorder (up to 96kHz 24-bit broadcast-quality uncompressed stereo WAV).
Both were recently updated to the latest iOS version. None of my Androids over the last four years received more than 2-3 updates in all, and only in year one.
I like things that are working. Easy data exchange across phones, tablets and macs. In a way the whole Apple system can be seen as complete system of cooperating devices.
When my Windows 10 Pro gear is involved (Microsoft Surface Pro) or worse - Android - it becomes a complicated “fiddle” to get things to work together in a somewhat acceptable way - even with lots of effort and cursing ;-)
Nonetheless in good light conditions the zoom results of the Pixel 3 are not as good as the results of Apple's 52mm camera. The resolution advantage is very small, I tested it in a store and it was definitely not lossless at 1.5x zoom. I would say that it only gives you approximately 1.1x lossless zoom (like the difference between 16 and 12 megapixels).
"Lossless" 2x zoom from Pixel 3 XL... I would say it only looks slightly better than a conventional digital zoom but it's nowhere near as good as the image produced by a dedicated telephoto lens. If I had Pixel 3 and wanted to use the zoom function on it, l I would rather prefer to take every single photo with Night Sight mode and then crop them afterwards.
Huawei possibly used a super resolution algorithm when they still had a monochrome camera. The zoom results were better than the RGB+monochrome 20 megapixel mode. It was called hybrid zoom. Nowadays Huawei just crops the 40 megapixel sensor for 2x zoom. They still have a hybrid zoom, but it doesn't seem to have anything in common with the monochrome+RGB hybrid zoom; nowadays it is effectively nothing more than digital zoom or stitching. The P30 Pro's 10x hybrid zoom doesn't capture more detail than the periscope camera does at 125mm (≈5x), and that might not even be possible due to diffraction. The reason a few people praise the 10x = 270mm image quality is that you still get (125/270)² × 8 ≈ 1.7 effective megapixels, so nearly Full HD.
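Spelling out that arithmetic (my reading of the comment, taking the 8 to be the periscope module's 8MP output):

```python
# Cropping a 125mm-equivalent, 8MP frame down to a 270mm-equivalent field of
# view keeps only (125/270)^2 of the pixels.
native_mp = 8
crop_factor = (125 / 270) ** 2
print(f"{native_mp * crop_factor:.1f} effective MP")   # ~1.7 MP, close to Full HD's ~2.1 MP
```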
When it comes to multi-frame noise reduction, Google's MFNR has worked with moving objects for years. They released HDR+ in ≈2013 and it has no issues with moving objects, sports, night, etc. The "+" stands for low light.
Astronomers have used this "technique" for the last 100 years... it's just that you need to manually align the pictures, but this way they increase resolution and sensitivity. It was mostly used with analog photography. And Adobe does it too, although you "guide" the software.
I need to correct my comment: Sometimes Google's HDR+ leads to bad sharpening/noise reduction artifacts in extreme low-light conditions and the noise reduction can be too strong. Nonetheless it revolutionized smartphone low-light photography. Google's HDR+ was Google's first Night Sight mode.
I wouldn't say the Huawei 5x has 8 effective megapixels; even by Bayer standards it's much worse (at the same magnification) than any 1" compact with a zoom that goes that far, probably worse than a "3x" cropped.
@Archirum @noisephotographer P30 series already uses pixel shift image stacking between 1.8x and 2.9x zoom range, and it works even without an OIS. This means that the regular P30 has one as well, but obviously P30 Pro performs better in this regard because its OIS can mitigate the hand jitter to some extent and intentionally create "shake" when the phone is stationary.
@JochenIs "Why do you say its effectively only 10MP?" Because IMX650 used in P30 Pro is a quad bayer sensor, where each pixel is basically split into 4 smaller ones. By default settings it treats 4 pixels as 1 huge pixel for improved SNR and faster processing speed but you can get full 40MP if you really want to, although it's filled with noise and artifacts due to array conversion and color interpolation processes involved.
I don’t think it’s anywhere near as advanced though.
AFAIK all of the similar desktop software (including PhotoAcute) simply aligns the entire photos and takes an average (or median) of each pixel. Maybe they mask movement and just use a single frame for those areas. Google's algorithm is more sophisticated in that it doesn't have to discard areas that moved; it can realign them at a sub-frame level to make use of that valuable information.
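For comparison, here is a hedged sketch of that simpler desktop-style approach: globally aligned frames, a per-pixel median, and a crude motion mask that falls back to the reference frame. The function name and threshold are mine, purely for illustration, not taken from any specific product.

```python
import numpy as np

def merge_with_motion_mask(aligned, threshold=12.0):
    """aligned: list of already globally-aligned float32 images from a burst."""
    stack = np.stack(aligned)              # shape: (N, H, W) or (N, H, W, 3)
    merged = np.median(stack, axis=0)      # per-pixel median across the burst
    # Where any frame deviates strongly from the reference, assume motion
    # and keep the reference pixel instead of the merged one.
    deviation = np.abs(stack - stack[0]).max(axis=0)
    if deviation.ndim == 3:                # collapse color channels if present
        deviation = deviation.max(axis=-1)
    motion = deviation > threshold
    merged[motion] = stack[0][motion]
    return merged
```

Google's approach avoids the hard fallback: instead of discarding moved regions, it realigns them locally so they still contribute to the merge.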
And in a fraction of a second, on a handheld computer!
And yet, let's all write off the m43 system as dead. People compare this to medium format from a tiny sensor, when a larger m43 sensor has been doing it for quite a while. ¯\_(ツ)_/¯
I have been using this technique for a while already, but that requires PP for alignment and averaging. I am certainly not the first person to do that. Anyone with any camera can do that NOW.
Yes, it is a tedious and labor-intensive process, and all the frames must be perfectly aligned in a photo editing program to achieve it. Google's super resolution is simply a photo-stacking machine that also realigns the images using your phone's chips.
If you care to, you could split the channels with dcraw, use green for alignment, merge and then debayer. But Google is much more robust in scenes with movement. You could do that as well: even detect moving/rotating objects, extract them, reconstruct them in 3D via MVS (or similar), then use all that to construct the final image. Or give Google a few years to make it for you. But move quickly; they will probably kill it in another few years.
This is movement and occlusion resistant (within some limits, of course). Don't underestimate that. Read the paper before you say "this is just stacking, averaging, upscaling, etc". This is quite novel and on top of that, they have to do it within a 100ms budget on a phone. It is quite remarkable.
@piccolbo: No, the "budget" is 100ms per 12 megapixel RAW input burst frame. It's still fast, especially with processing and all, but a 4-image input set will still require between 400 and 500 ms before the result is fully created. Still fast, and in most cases with a really good result; especially compared to what "the old geezers" (like me) had to invest in time and effort to obtain even a half decent result with a "normal" camera.
That’s why I seldom use one of my “normal” cameras anymore. From a certain age and up, you don’t buy green bananas either ;-)
Exploiting our hands' tremor (instead of sensor IBIS and/or OIS in the lens) is very useful for 99% of shots - except those from a tripod (still needed in future? pun intended). But no more demosaicing is the best news!
"Subtle shifts from handheld shake and optical image stabilization (OIS) allow scene detail to be localized with sub-pixel precision, since shifts are unlikely to be exact multiples of a pixel.
... We get a red, green, and blue filter behind every pixel just because of the way we shake the lens, so there's no more need to demosaic ... The benefits are essentially similar to what you get when shooting pixel shift modes on dedicated cameras."
This made me think about an app called Hydra a couple years back on iOS with the same idea. Would love to see a Google Camera app for iPhone with this tech to try out.
I am hoping at least one struggling camera maker makes an Android ILC as a desperation move. My phone runs circles around my camera in terms of software convenience, computational photography, uploading and sharing. Taking a panorama on my phone is fast, guided, reliable. On my camera it's trial and error, many times over. Night mode on the phone is unbelievable. HDR is always on and will do the right thing if there is too much movement in the shot. It's day and night. The only problem: tiny sensor and lens.
It's a damn shame. SOOC, for general snapshot photography newer phones blow standalone cameras away. Even when comparing dSLR and high end enthusiast cameras. I've got nothing against them, they've just GOT to step up their game.
It doesn't have to do with Android so much as the computations (though Android would be nice). Yes, Samsung did it, but it wasn't so much the marriage of camera and Android that failed as the implementation and timing (back then computational photography was waaaaaay behind where it is now).
Team up with Nokia or some such to gain IP rights, develop their own. Sony's already got it. Implement it! Jeez.
Remember reading an article not too long ago that Sony was "thinking about getting into more computational side of things to compete". Dear God. Get on the damn ball!
I did not mean the results are not good... I just said that software needs hardware to run, when hardware can do things on its own. That's why, IMHO, saying software is superior to hardware is somewhat questionable...
We have cameras with the readout-speed and framerate required. All we need now is a camera company that is willing to invest some money into software... so we are f#cked
Only the A9 comes anywhere close to the multiframe bursting capability of mobile sensors, and even that one is crippled by a weak ISP. It's also incredibly expensive.
Stacked BSI sensors become extremely expensive when scaled up to APS-C/FF sizes. This is why there is a grand total of TWO in current production - the A9 sensor and VENICE (which *might* possibly actually be the same sensor just backed by a much better ISP). Meanwhile stacked BSI (Exmor RS) has basically been standard in phones for years.
Panther fan’s comment just made me think that it wouldn’t surprise me if Sony were actually to do this in the future (not necessarily with whatever they’ve released up to this point). They’ve innovated a lot in the past 10 years (maybe more than anyone else?). It seems they’re due for another big release.
Ok, let's see - Olympus, Panasonic, Pentax and Sony already have a more controlled version of this (pixel shift).
If it's a static scene - just fire away, scale to 2x linear resolution, align and median merge. You can do it in PS or use any of the scripts utilizing ImageMagick. It's a slow process though.
What could be done with minimal additional hardware is to have a very precise 6-axis sensor and use that for alignment (many cameras already have one for EIS in video). Also, a camera might be able to use OIS to introduce a controlled shift even without IBIS, but this depends on the exact implementation of OIS.
"Stacked BSI sensors become extremely expensive when scaled up to APS-C/FF sizes. " This where the m43 can come into play when exploiting this type of technique. It's the goldilocks zone for this kind of technology to be adapted first for an ilc cameras that could hopefully match the quality of FF equivalent. Simulation of bokeh could be done on software anyway.
Well, it is a little different from older methods like super-resolution stacking and Google's own previous HDR+ stacking. The algorithm now replaces the debayering step and is more comparable to the Bayer drizzle used in real high-end astrophotography.
That said, stacking algorithms have been around for a long time and are even available to the amateur enthusiast. The "game changer" here is that it is far, far more robust against artifacts, which makes it possible to use it by default instead of in some very rare edge cases.
Not quite... burst-with-stacking requires the scene not to move at all, and ideally the camera shouldn't move either... though some systems do a better job of re-aligning the image. This is a big jump beyond that: it can stack a series of images where stuff moves a lot in the frame. If you had actually watched past the 90-second mark of the video, you would see that the algorithm can handle stacking photos taken from a moving vehicle, where lots of objects are moving at different speeds (cars in front move a lot, buildings in the midground move a bit, the background moves little but is obscured by foreground elements). The algorithm can "figure out" what objects are moving from frame to frame (with parallax/rotation), so it can actually re-align them and still stack them to improve the image.
The Casio cameras nearly always provide a sharp image from a burst, no matter how much movement. Artifacts? Yes, but not enough to not always have hi-res switched on.
Whilst I'm sure there would be no double image problem, not being a woman, I cannot drive & shoot @ the same time. However I do have seagull pictures taken with SRZoom. Not sure one can post pictures in comments....
You are never a passenger in a vehicle? OK, how about taking a photo of a two-way road with cars going in both directions at 15 mph or higher (better yet if there are bicyclists moving too)? The algorithm described in the video above (did you watch it?) avoids ghosting/artifacts when you have motion happening at multiple distances (e.g. foreground/midground/background), and that's where it is better than simple burst mode plus stacking with single-pixel realignment.
No problem in daylight. Gets trickier in low light due to lower burst speed due to lower shutter speed. Anyway: hold on to those amazing Casio cameras, caus they don't make'm anymore :-(
I had a Casio FC100 and I loved using it. "No problem in daylight. Gets trickier in low light due to lower burst speed due to lower shutter speed. "
@telefunk I'm confused why you seem so reluctant to back up your claims. It would take all of a few minutes. If it works as great as you said... you could easily just blindly point it out your driver-side window without looking... it would be about as distracting as opening/closing your window while driving. Similarly, walking outside your home/office to take a photo of moderate-speed traffic should take at most five minutes of your time. The only reason I can think of is that you aren't so sure anymore about your claims of how it works on fast-moving subjects.
@telefunk Sure, a photo of a bird flying in front of a non-sky background (some trees, etc.) would work. I just picked car traffic since 90% of the US population lives near car traffic and it has what I wanted to test (i.e. a subject moving at moderate speed in front of a variety of foreground, midground and background elements that move at different rates as you pan the camera).
Found exactly what you are looking for in my archives. A picture of traffic, taken from a moving car on the highway, in atrocious weather conditions at the very long end (+450mm) in SR resolution mode.
@telefunk Sure... does the forum let you link to it, or can you title it in a way that makes it easy for me to find via search, like "telefunk sr zoom bird example"?
While I can see why you would always keep it on, as you said "Artifacts? Yes, but not enough to not always have hi-res switched on," as a matter of preference I would probably avoid doing it, as I personally don't like how the third and fourth images came out, with the triple-ghosting effect on the light poles and the way the road looks like it has had a heavy clone tool applied to it. Obviously YMMV, but I think even you would agree that, on a purely objective level, if you could do this type of SR zoom without as many artifacts... it would be even better.
Some have "doubles" because of low shutter speed, or halo's because of my processing. Point being that no other camera would come near in those circumstances (1000mm, moving car, through windscreen, moving subject, low light, haha Canon or Fuji)...
'Some have "doubles" because of low shutter speed, or halo's because of my processing. Point being that no other camera would come near in those circumstances (1000mm, moving car, through windscreen, moving subject, low light, haha Canon or Fuji)..'
No camera... *yet*. I agree with you up to that point. My point is that the algorithm described in the article above could allow a camera to remove the doubles caused by low shutter speed and the halos due to processing. That's why, when you originally said "same as what Casio has been doing for years, i.e. SR zoom?", people tried to explain to you that this is a further improvement of that technique. As when Panther Fan first replied to you:
'The "Game changer" here is that it is far far more robust against artifacts. Which makes it possible to be used by default instead of some very rare edge cases'