Does super resolution using Photoshop really work?

d3xmeister

We all know this technique, there are hundreds of tutorials on how to do it, but basically you shoot multiple shots (20-30), load them into Photoshop as layers, upscale them to 200% or 400%, align the layers, then convert them to a Smart Object and apply the Mean or Median stack mode.

All say this should result in higher resolution and less noise. While I used variants of this method a lot in the past for reducing noise (back when we had cameras where ISO 400 was the max you could shoot at), for me this method never worked to increase the resolution.

Many tutorials compare this technique with the High-Res mode in cameras, but is it really comparable? How can the resolution increase if you align the layers before you do the averaging? It makes no sense to me. Doesn't this completely cancel the slight movements you made with the camera at capture time, which are exactly what is supposed to be the source of the higher resolution?
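For reference, stripped of the Photoshop UI, the recipe from those tutorials amounts to roughly the following. This is only a rough sketch using OpenCV and NumPy under my own assumptions (a folder of handheld frames, translation-only ECC alignment); it is not code from any particular tutorial.

```python
import glob
import cv2
import numpy as np

paths = sorted(glob.glob("burst/*.tif"))   # hypothetical folder of 20-30 handheld frames
frames = [cv2.imread(p, cv2.IMREAD_COLOR) for p in paths]

# 200% nearest-neighbour upscale (roughly quadruples the pixel count)
frames = [cv2.resize(f, None, fx=2, fy=2, interpolation=cv2.INTER_NEAREST) for f in frames]

ref = frames[0]
ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY).astype(np.float32)
h, w = ref_gray.shape
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)

aligned = [ref.astype(np.float32)]
for f in frames[1:]:
    gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY).astype(np.float32)
    warp = np.eye(2, 3, dtype=np.float32)
    # estimate the small camera shift of this frame relative to the reference
    _, warp = cv2.findTransformECC(ref_gray, gray, warp, cv2.MOTION_TRANSLATION, criteria, None, 5)
    aligned.append(cv2.warpAffine(f.astype(np.float32), warp, (w, h),
                                  flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP))

# Median blend, i.e. what the Smart Object's Median stack mode does
result = np.median(np.stack(aligned), axis=0)
cv2.imwrite("superres_median.tif", np.clip(result, 0, 255).astype(np.uint8))
```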
 
My take:

As you note, I don't think that technique is relevant any longer, but no one stops anyone from trying it to see what they get. It's comparable to focus stacking and HDR stacking.

That kind of stacking to reduce noise is irrelevant given the higher quality of modern sensors and the often amazing results of software-based noise reduction from Adobe and especially DxO. The world is awash in uber-high-megapixel sensors if you really need them.

As you are likely aware, some cameras, like some OM models, have in-camera high-resolution modes that do the stacking internally and output a higher-megapixel image. In my experience that is best done on a tripod. The resolution differences are easy to see with modest pixel peeping. Stacking individual images in PS might yield comparable results, and user skill is always a factor, but the in-camera systems, as I recall, also shift the sensor slightly during capture, so different images are being stacked.

Upscaling is a thornier proposition. Results are certainly better with some AI-driven products than others, but expectations need to be tempered by real-world experience. Upscaling does not really match cropping from a high-megapixel sensor to yield the same magnification, but it can look good enough.
 
Thank you for replying, though what you said does not address the topic directly. I would also say I don't agree at all with what you imply. Just because there are high-resolution cameras available (natively or through sensor shift), or other ways to get higher resolution and low noise (AI-based software, etc.), it doesn't mean that other methods or other cameras become irrelevant. That doesn't make sense to me. Everyone uses what they want, what they have, what they enjoy and what they can afford in terms of cameras, software and computers.
 
We all know this technique, there are hundreds of tutorials on how to do it, but basically you shoot multiple shots (20-30), load them into Photoshop as layers, upscale them to 200% or 400%, align the layers, then convert them to a Smart Object and apply the Mean or Median stack mode.

All say this should result in higher resolution and less noise. While I used variants of this method a lot in the past for reducing noise (back when we had cameras where ISO 400 was the max you could shoot at), for me this method never worked to increase the resolution.

Many tutorials compare this technique with the High-Res mode in cameras, but is it really comparable? How can the resolution increase if you align the layers before you do the averaging?
Not quite. What is being aligned is the image projected onto the respective layers. Due to the random movement of the camera/sensor, the position of a specific fine detail in each sub-frame (layer) will differ slightly with respect to the precise position of image details relative to the pixel grid. In one sub-frame a specific detail may span across several pixels. In another sub-frame, the detail may be perfectly centered within a single green pixel. In yet another frame, the image detail may have been perfectly centered within a pixel, but the pixel is a red one. Thus, the random movement ensures that each pixel of each sub-frame is contributing at least slightly different spatial and color information to the aggregated whole. More spatial and color information ultimately means increased resolution.
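A toy 1-D sketch of that idea (my own construction, not anything from a camera maker): a pattern only a few coarse pixels across is captured at random sub-pixel offsets, and averaging the aligned, upsampled frames gets closer to the original than any single frame does.

```python
import numpy as np

rng = np.random.default_rng(0)
pixel = 20                                        # one coarse sensor pixel = 20 fine samples
fine = np.sin(2 * np.pi * np.arange(4000) / 100)  # "scene" detail a few pixels across

def capture(offset):
    """One frame: box-average the (sub-pixel shifted) scene over coarse pixels."""
    shifted = np.roll(fine, offset)
    return shifted.reshape(-1, pixel).mean(axis=1)

offsets = rng.integers(0, pixel, size=16)         # random handheld jitter, in fine samples
stack = np.zeros_like(fine)
for off in offsets:
    upsampled = np.repeat(capture(off), pixel)    # nearest-neighbour upscale of one frame
    stack += np.roll(upsampled, -int(off))        # align back to the reference position
stack /= len(offsets)

single = np.repeat(capture(0), pixel)             # one frame, upscaled the same way
print("single-frame error :", np.abs(single - fine).mean())
print("aligned-stack error:", np.abs(stack - fine).mean())   # noticeably smaller
```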
It makes no sense to me. Doesn't this completely cancel the slight movements you made with the camera at capture time, which are exactly what is supposed to be the source of the higher resolution?
 
What is being aligned is the image projected onto the respective layers. Due to the random movement of the camera/sensor, the position of a specific fine detail in each sub-frame (layer) will differ slightly with respect to the precise position of image details relative to the pixel grid. In one sub-frame a specific detail may span across several pixels. In another sub-frame, the detail may be perfectly centered within a single green pixel. In yet another frame, the image detail may have been perfectly centered within a pixel, but the pixel is a red one. Thus, the random movement ensures that each pixel of each sub-frame is contributing at least slightly different spatial and color information to the aggregated whole. More spatial and color information ultimately means increased resolution.
Correct.
 
We all know this technique, there are hundreds of tutorials on how to do it, but basically you shoot multiple shots (20-30), load them into Photoshop as layers, upscale them to 200% or 400%, align the layers, then convert them to a Smart Object and apply the Mean or Median stack mode.

All say this should result in higher resolution and less noise. While I used variants of this method a lot in the past for reducing noise (back when we had cameras where ISO 400 was the max you could shoot at), for me this method never worked to increase the resolution.

Many tutorials compare this technique with the High-Res mode in cameras, but is it really comparable? How can the resolution increase if you align the layers before you do the averaging?
Not quite. What is being aligned is the image projected onto the respective layers. Due to the random movement of the camera/sensor, the position of a specific fine detail in each sub-frame (layer) will differ slightly with respect to the precise position of image details relative to the pixel grid. In one sub-frame a specific detail may span across several pixels. In another sub-frame, the detail may be perfectly centered within a single green pixel. In yet another frame, the image detail may have been perfectly centered within a pixel, but the pixel is a red one. Thus, the random movement ensures that each pixel of each sub-frame is contributing at least slightly different spatial and color information to the aggregated whole. More spatial and color information ultimately means increased resolution.
But does that actually increase the number of pixels?
It makes no sense to me. Doesn't this completely cancel the slight movements you made with the camera at capture time, which are exactly what is supposed to be the source of the higher resolution?
 
I have run tests and it works. Requirements: a static subject (essentially motionless while images are being captured), a static camera on a steady platform or tripod, and a remote trigger or a 10-second delay so you don't move the camera when you shoot.

These are the same restrictions you must adhere to when using in-camera IBIS High Resolution shooting (if your camera has that mode). I have done this using my EOS R5.

Firmware update 1.8.1 gave Canon EOS R5 users the ability to capture a stunning 400-megapixel image.
 
I have run tests and it works. Requirements: a static subject (essentially motionless while images are being captured), a static camera on a steady platform or tripod, and a remote trigger or a 10-second delay so you don't move the camera when you shoot.
The OP is talking about the hand-held technique that can be used with any camera. It relies on slight movement to work.

https://www.dpreview.com/articles/0727694641/here-s-how-to-pixel-shift-with-any-camera

 
My take:

As you note, I don't think that technique is relevant any longer, but no one stops anyone from trying it to see what they get. It's comparable to focus stacking and HDR stacking.

That kind of stacking to reduce noise is irrelevant given the higher quality of modern sensors and the often amazing results of software-based noise reduction from Adobe and especially DxO. The world is awash in uber-high-megapixel sensors if you really need them.

As you are likely aware, some cameras, like some OM models, have in-camera high-resolution modes that do the stacking internally and output a higher-megapixel image. In my experience that is best done on a tripod. The resolution differences are easy to see with modest pixel peeping. Stacking individual images in PS might yield comparable results, and user skill is always a factor, but the in-camera systems, as I recall, also shift the sensor slightly during capture, so different images are being stacked.

Upscaling is a thornier proposition. Results are certainly better with some AI-driven products than others, but expectations need to be tempered by real-world experience. Upscaling does not really match cropping from a high-megapixel sensor to yield the same magnification, but it can look good enough.
My understanding is that stacking is primarily used for enhanced depth of field, not for noise removal.
 
We all know this technique, there are hundreds of tutorials on how to do it, but basically you shoot multiple shots (20-30), load them into Photoshop as layers, upscale them to 200% or 400%, align the layers, then convert them to a Smart Object and apply the Mean or Median stack mode.

All say this should result in higher resolution and less noise. While I used variants of this method a lot in the past for reducing noise (back when we had cameras where ISO 400 was the max you could shoot at), for me this method never worked to increase the resolution.

Many tutorials compare this technique with the High-Res mode in cameras, but is it really comparable? How can the resolution increase if you align the layers before you do the averaging?
Not quite. What is being aligned is the image projected onto the respective layers. Due to the random movement of the camera/sensor, the position of a specific fine detail in each sub-frame (layer) will differ slightly with respect to the precise position of image details relative to the pixel grid. In one sub-frame a specific detail may span across several pixels. In another sub-frame, the detail may be perfectly centered within a single green pixel. In yet another frame, the image detail may have been perfectly centered within a pixel, but the pixel is a red one. Thus, the random movement ensures that each pixel of each sub-frame is contributing at least slightly different spatial and color information to the aggregated whole. More spatial and color information ultimately means increased resolution.
But does that actually increase the number of pixels?
Yes, it does. The usual strategy requires that you take a bunch of handheld shots of the scene (at least 8 and up to 20 or so). Then you import them as a stack into Photoshop and perform a 200% upsizing using the nearest-neighbor interpolation option. This step has the effect of (more or less) doubling linear resolution and (more or less) quadrupling the pixel count. The "more or less" qualification is due to the fact that the alignment step that follows the upsizing will probably leave some slight gaps along the image perimeter that necessitate a bit of cropping. After the alignment step, you can manually blend the layers by progressively reducing the opacity of each layer, but my preferred method for doing the blending is to convert the stack into a layer-based Smart Object and then select either Mean or Median as the stack mode.

YMMV, but it usually does, indeed, improve the visible resolution and detail of the image (to say nothing of the reduced noise, moiré and aliasing). The same basic concept is used in the handheld high-resolution mode pioneered by Olympus and now available in numerous cameras of recent vintage.
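As an aside, the progressive-opacity trick is exactly the Mean stack mode in disguise: compositing the k-th layer from the bottom at opacity 1/k produces a running average. A quick NumPy check (my own illustration, with made-up data):

```python
import numpy as np

rng = np.random.default_rng(1)
layers = rng.random((8, 4, 4))          # eight toy "layers" of a 4x4 image

composite = layers[0].copy()            # bottom layer at 100% opacity
for k, layer in enumerate(layers[1:], start=2):
    alpha = 1.0 / k                     # k-th layer from the bottom gets opacity 1/k
    composite = alpha * layer + (1 - alpha) * composite

print(np.allclose(composite, layers.mean(axis=0)))   # True
```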
It makes no sense to me. Doesn't this completely cancel the slight movements you made with the camera at capture time, which are exactly what is supposed to be the source of the higher resolution?
 
It never really worked for me, that's why I asked. Yes, there is a small difference, especially in zones with moiré or in noisy areas of the photo (like pulled shadows with fine details), but it is so small that you have to squint for it at 200% zoom. To me that qualifies as "no difference".

Like I said, I used this technique a lot in the past to reduce noise and get more DR for lifted shadows (without the upscaling step), and it worked great, but the claim of more resolution never made sense to me, and every time I tried it, it did not result in more resolution in any significant way.

Then I thought, how can it result in higher resolution if you align the photos before the averaging process?

Using the Olympus high-res mode when I shot Olympus, yes, that did result in significantly more resolution than a single file, but there I can see why, as the process does not align the photos.
 
It never really worked for me, that's why I asked. Yes, there is a small difference, especially in zones with moiré or in noisy areas of the photo (like pulled shadows with fine details), but it is so small that you have to squint for it at 200% zoom. To me that qualifies as "no difference".
I agree that the improvement in terms of increased resolved detail is usually modest. In terms of decreased aliasing and reduction of false detail (which are also resolution-dependent), the improvement is usually pretty obvious. Of course, YMMV depending on technique, scene, lens used and probably other factors I'm not thinking about, but, yes, it's not going to be miraculous. Using specialized software, such as PhotoAcute, to do the aligning and interpolation (super-resolution) always worked considerably better for me than using a general-purpose toolset like ACR/Photoshop. Alas, support for PhotoAcute has long been abandoned. In its day, it was impressive.
Like I said, I used this technique a lot in the past to reduce noise and get more DR for lifted shadows (without the upscaling step), and it worked great, but the claim of more resolution never made sense to me, and every time I tried it, it did not result in more resolution in any significant way.

Then I thought, how can it result in higher resolution if you align the photos before the averaging process?

Using the Olympus high-res mode when I shot Olympus, yes, that did result in significantly more resolution than a single file, but there I can see why, as the process does not align the photos.
As you know, there are two HiRes modes available in the most recent generations of the Oly/OM cameras: Tripod HR and Handheld HR. The tripod-dependent mode assumes absolute stillness of the camera and scene and relies on controlled sensor shift to increase the spatial and color information collected. The alignment step is greatly simplified because the movement is controlled, but it's still done. This method differs from how some other cameras (e.g., some Hasselblads and Pentax cameras) utilize full one-pixel shifts to collect additional color-only information. This pixel shift technique doesn't increase spatial resolution but it eliminates the need for demosaicing, which otherwise introduces some blurring.

The HHHR mode works much like the super-resolution technique in that it relies on random sensor movement and then uses an algorithm to align the sub-frames. The point here is that both Oly/OM HiRes modes do, in fact, align the photos.
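To make the full one-pixel shift idea above concrete, here is a toy NumPy sketch (my own construction, nothing resembling actual camera firmware): four frames taken with the sensor moved by one photosite give every scene point one R, two G and one B sample, so full colour can be rebuilt with no demosaicing at all.

```python
import numpy as np

rng = np.random.default_rng(2)
H, W = 8, 8
scene = rng.random((H + 1, W + 1, 3))       # ground-truth RGB scene (padded by one row/column)

# RGGB colour filter array: which channel each photosite (r, c) records
cfa = np.empty((H, W), dtype=int)
cfa[0::2, 0::2] = 0    # R
cfa[0::2, 1::2] = 1    # G
cfa[1::2, 0::2] = 1    # G
cfa[1::2, 1::2] = 2    # B

shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]   # the four one-photosite sensor positions
rows = np.arange(H)[:, None]
cols = np.arange(W)[None, :]

# Each frame stores only the CFA-filtered value each photosite sees at that shift
frames = [scene[dy:dy + H, dx:dx + W][rows, cols, cfa] for dy, dx in shifts]

# Rebuild full colour by routing every sample back to the scene point it came from
recon = np.zeros_like(scene)
count = np.zeros_like(scene)
rr, cc = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
for (dy, dx), frame in zip(shifts, frames):
    recon[rr + dy, cc + dx, cfa] += frame   # photosite (r, c) saw scene point (r+dy, c+dx)
    count[rr + dy, cc + dx, cfa] += 1
recon[count > 0] /= count[count > 0]

# In the fully covered region every point got 1 R, 2 G and 1 B sample:
# full colour with no demosaicing, matching the scene exactly.
print(np.allclose(recon[1:H, 1:W], scene[1:H, 1:W]))   # True
```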
 
It never really worked for me, that's why I asked. Yes, there is a small difference, especially in zones with moiré or in noisy areas of the photo (like pulled shadows with fine details), but it is so small that you have to squint for it at 200% zoom. To me that qualifies as "no difference".
I agree that the improvement in terms of increased resolved detail is usually modest. In terms of decreased aliasing and reduction of false detail (which are also resolution-dependent), the improvement is usually pretty obvious. Of course, YMMV depending on technique, scene, lens used and probably other factors I'm not thinking about, but, yes, it's not going to be miraculous. Using specialized software, such as PhotoAcute, to do the aligning and interpolation (super-resolution) always worked considerably better for me than using a general-purpose toolset like ACR/Photoshop. Alas, support for PhotoAcute has long been abandoned. In its day, it was impressive.
I agree. Interestingly, now that you mention it, I remember playing with software like that a very long time ago (back when we still had 6MP cameras), and the results were impressive. I don't remember what it was called, though; it could have been PhotoAcute. There was also an app for mobile some years later, I think it was called Hydra? I remember playing with it on my iPhone 4S, but I don't remember how well it worked.
Like I said, I used this technique a lot in the past to reduce noise and get more DR for lifted shadows (without the upscaling step), and it worked great, but the claim of more resolution never made sense to me, and every time I tried it, it did not result in more resolution in any significant way.

Then I thought, how can it result in higher resolution if you align the photos before the averaging process?

Using the Olympus high-res mode when I shot Olympus, yes, that did result in significantly more resolution than a single file, but there I can see why, as the process does not align the photos.
As you know, there are two HiRes modes available in the most recent generations of the Oly/OM cameras: Tripod HR and Handheld HR. The tripod-dependent mode assumes absolute stillness of the camera and scene and relies on controlled sensor shift to increase the spatial and color information collected. The alignment step is greatly simplified because the movement is controlled, but it's still done. This method differs from how some other cameras (e.g., some Hasselblads and Pentax cameras) utilize full one-pixel shifts to collect additional color-only information. This pixel shift technique doesn't increase spatial resolution but it eliminates the need for demosaicing, which otherwise introduces some blurring.

The HHHR mode works much like the super-resolution technique in that it relies on random sensor movement and then uses an algorithm to align the sub-frames. The point here is that both Oly/OM HiRes modes do, in fact, align the photos.
I only used the standard high-res mode on Olympus, the E-M5 II and E-M1 II, but it did result in visibly more detail. Text on air conditioning units in a frame, for example, could not be read in the standard photo but looked clear and readable in the high-res photos. This is what I remember. I'm not exactly sure how it does it; maybe it does align the photos, but the results were far more impressive than what I got in Photoshop.
 
We all know this technique, there are hundreds of tutorials on how to do it, but basically you shoot multiple shots (20-30), load them into Photoshop as layers, upscale them to 200% or 400%, align the layers, then convert them to a Smart Object and apply the Mean or Median stack mode.

All say this should result in higher resolution and less noise. While I used variants of this method a lot in the past for reducing noise (back when we had cameras where ISO 400 was the max you could shoot at), for me this method never worked to increase the resolution.

Many tutorials compare this technique with the High-Res mode in cameras, but is it really comparable? How can the resolution increase if you align the layers before you do the averaging? It makes no sense to me. Doesn't this completely cancel the slight movements you made with the camera at capture time, which are exactly what is supposed to be the source of the higher resolution?
Be aware that Adobe has a feature called "Super Resolution" that can be pretty effective at doubling each linear dimension when working with RAW files shot with good technique and sharp lenses. I had good upscaling results with a Sony 28-60 @ f5.6 on a tripod-mounted a7RIII. OTOH, I saw no benefit in an image shot hand-held with a Panasonic FZ1000.

--
Event professional for 20+ years, travel & landscape enthusiast for 30+.
http://jacquescornell.photography
http://happening.photos
 
We all know this technique, there are hundreds of tutorials on how to do it, but basically you shoot multiple shots (20-30), load them into Photoshop as layers, upscale them to 200% or 400%, align the layers, then convert them to a Smart Object and apply the Mean or Median stack mode.

All say this should result in higher resolution and less noise. While I used variants of this method a lot in the past for reducing noise (back when we had cameras where ISO 400 was the max you could shoot at), for me this method never worked to increase the resolution.

Many tutorials compare this technique with the High-Res mode in cameras, but is it really comparable? How can the resolution increase if you align the layers before you do the averaging? It makes no sense to me. Doesn't this completely cancel the slight movements you made with the camera at capture time, which are exactly what is supposed to be the source of the higher resolution?
Be aware that Adobe has a feature called "Super Resolution" that can be pretty effective at doubling each linear dimension when working with RAW files shot with good technique and sharp lenses. I had good upscaling results with a Sony 28-60 @ f5.6 on a tripod-mounted a7RIII. OTOH, I saw no benefit in an image shot hand-held with a Panasonic FZ1000.
I don't use Lightroom and I probably never will. However, the Gigapixel AI 8 I tested this week works wonders for upscaling. Still, I was more interested in this specific technique, since I always see it presented as a method to increase resolution, and it never really worked for me when I tried it.
 
