Why is there no progress in dynamic range?

Sensor technology is at its upper limit in terms of DR. To improve on that, you would need a completely different technology not based on Bayer sensors, which doesn't exist.

Personally, I think that the next meaningful step in improved DR will come from speed. The Sony A9iii has something called composite RAW where it takes 4, 8, 16, or 32 RAW photos in rapid succession (at 120 fps) which you can later merge on a computer using Sony Imaging Edge to reduce noise. AFAIK this currently does not allow you to include exposure bracketing.

However with a sensor like the one in the A9iii, if you add enough CPU power for let's say a maximum full RAW fps of 240, you could in theory take 2 full RAW photos at 1/120s each at different exposures and merge them into a single image with more DR and less noise. Or 4 RAWs at 1/60s each.
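A minimal sketch of what such a two-frame merge could look like, assuming linear RAW data already loaded as NumPy arrays. The frame values, exposure gap, and clip threshold here are all made up for illustration; this is not Sony's actual composite RAW processing.

```python
import numpy as np

def merge_bracket(short_f, long_f, stops=1.0, clip=0.98):
    """Merge a short and a long linear exposure into one float image.

    Where the long frame is clipped, fall back to the short frame
    scaled up by the exposure difference; elsewhere average both
    (after scaling), which also reduces noise.
    """
    gain = 2.0 ** stops                 # exposure ratio between the frames
    scaled_short = short_f * gain       # bring short frame to the long frame's scale
    clipped = long_f >= clip            # highlight mask in the long frame
    return np.where(clipped, scaled_short, 0.5 * (long_f + scaled_short))

# Toy example: a gradient scene where the long frame clips at 1.0
scene = np.linspace(0.0, 1.5, 8)
long_frame = np.clip(scene, 0.0, 1.0)          # 1 stop more exposure, clips
short_frame = np.clip(scene / 2.0, 0.0, 1.0)   # 1 stop less, keeps highlights
out = merge_bracket(short_frame, long_frame)
print(out.max())  # 1.5: highlight detail above the clip point is recovered
```

The merged float image carries tones above the single-frame clip point, which is exactly the extra DR the bracketing would buy.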

Phone cameras do this all the time of course, they have a lot less sensor data to process and a lot more processing power. Cameras will move in that direction and one day, we'll be able to exposure bracket at useful shutter speeds; somewhere down the road, the cameras will even do the merging in-camera on the fly.
If you simply stack exposures you reduce noise, but you don't exceed the maximum dynamic range limit set by the bit depth of the camera, which is 14 stops.

At some point you will have removed all the noise, but that is pretty much it.

Generally, for daylight exposures at base ISO, stacking has little benefit, and so does noise reduction, as read noise has limited impact on the exposure. Either way, you can simply apply noise reduction to improve your image a little.

In order to get greater dynamic range you need higher bit depth, for example 16-bit RAW; then you have the headroom to allow other techniques to kick in.
That is why exposure bracketing needs to be included in composite RAW for increased dynamic range.
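The noise-versus-ceiling point can be checked numerically. This toy simulation (arbitrary signal and noise values, not any real sensor's) shows averaging 16 identical frames cutting random noise by sqrt(16) = 4 while the clipping point stays exactly where it was:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 0.5                        # constant mid-grey signal
read_noise = 0.02                   # per-frame noise, arbitrary units
N = 16

frames = signal + read_noise * rng.standard_normal((N, 100_000))
stacked = frames.mean(axis=0)       # average the stack in floating point

print(round(frames[0].std(), 3))    # single-frame noise, ~0.02
print(round(stacked.std(), 3))      # ~0.005, i.e. 0.02 / sqrt(16)
# The brightest representable value has not moved: stacking cleans the
# shadows but cannot record anything above the clip point.
```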
 
Alexa does it with video. I don't know which methods they use but it is possible. I've seen some comments suggesting that they combine multiple parallel signals from the sensor, similar to what Panasonic has tried with dynamic range boost.

 
https://www.arri.com/en/camera-systems/cameras/alexa-35/alexa-35-high-dynamic-range
This is the development we want :)
I think the budget required to improve dynamic range is more than the camera industry finds reasonable. Otherwise, nothing is impossible.
 
This old thread on the underlying tech may be worth reading: https://www.dpreview.com/forums/thread/4308592
 
Since the Sony a7rii in 2015, there has been no revolutionary improvement in dynamic range. Will it always be like this? Not everyone needs a very fast camera, yet the improvements have always been in speed.
Actually a very fast camera is the way to get better DR.

Sony has a feature called Composite RAW Shooting which requires speed, and in fact even the A9III is not fast enough to fully take advantage of this. It would be better if the camera could shoot at 960fps or even faster.

So yeah, if you want improved DR way beyond what is currently possible with slow cameras, then you absolutely want more speed.
 
There is no need to do it in camera; you can do it in post. Most of the time I shoot bracketed, I select one shot and discard the others, ending up not bracketing at all.

You can also stack in post by just shooting a burst.

Generally, Sony cameras can't blend and need external software; at that point you do it yourself as you wish. It makes no difference, RAW or not, if you have a small number of exposures.

Olympus has been doing in-camera processing for a long time, and on 20 megapixels it made a lot of sense.

Panasonic has done it well with handheld high resolution in full frame.

Sony is lagging, adding features that you can easily replicate in post and that are not interesting.

If you shoot a burst at 20 fps you can take 16 shots in 0.8 seconds. Those are around 800 MB, which the camera should be able to average, but if not you just process them in post with Photoshop. The con is that they become huge TIFF files; however, by the time you have stacked and compressed them, you have a single large TIFF and can discard the rest.

This capability already exists today, and I would rather stack with many bits of precision in floating point than have the camera round up or down due to processing limitations.
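A sketch of that floating-point stacking, using a running mean so the whole burst never has to sit in memory at once. The frame sizes and pixel values here are toy numbers, not real RAW data:

```python
import numpy as np

def running_mean_stack(frames):
    """Average an iterable of same-shaped frames as a float64 running mean."""
    acc, n = None, 0
    for frame in frames:
        f = frame.astype(np.float64)
        n += 1
        acc = f if acc is None else acc + (f - acc) / n  # incremental mean
    return acc

# Toy burst: 16 noisy "14-bit" frames of the same static scene
rng = np.random.default_rng(1)
burst = (8000 + 50 * rng.standard_normal((16, 4, 4))).astype(np.int16)
stacked = running_mean_stack(burst)
print(stacked.shape)   # (4, 4): one averaged frame out, noise down ~4x
```

Keeping the accumulator in float64 is what avoids the round-up-or-down loss an integer in-camera pipeline might introduce.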

 
I have always wanted camera technology to approach/equal/surpass human vision. Or at least the perception we have with our vision. Things have improved but as noted, progress has been slow in comparison to speed and megapixels.

Find me a photographer that does not want to be able to point a camera at a scene with a bright sky, expose for it and have the rest of the image be properly exposed as well.

Right now, the rest of the scene will be underexposed, requiring post corrections to lift shadows, etc. While the tech arguments are against it, I will point out that those same arguments were around back when DR was in the 8-9 stop range.

Maybe the terminology is being confused, but I am just tired of always having to "fix" photographs from my expensive gear because they look nothing like what I saw in person until I bend them in Lightroom.

To read this thread, it would seem what I am after will never happen.
 
The A7R II introduced the 42-megapixel sensor, but its DR is a full stop less than current models.

The A7 III was a significant improvement, and since then there have been only minor increases.

The sensor ranking of the 42-megapixel A7R III is the same as the current A7R V.

So in effect it has been pretty static for 8 years, as back-illuminated sensors have matured.

Now I think stacked sensors will improve to match.

The other question is: do you need more peak DR, or other things like low-light performance, color depth, etc.?
I photograph people living in harsh nature, so dynamic range is very important to me. I have a Sony a7iii and a Sony a7riii. Unfortunately, there have been no developments that increase dynamic range and ISO performance.

(attached image: one of the sample photos I took)
In that case, all you can do is get the A9III and shoot exposure brackets at the highest speed possible. That's how speed returns the favor, though at the cost of processing and slight motion blur.
This is the only way to increase DR, and the A9III will, in a firmware update, shoot brackets at 1/40 sec and process in camera. Otherwise DR will remain the same, unless the new A7V has a variable clock speed for scenes 🤔😁
 
Also big fan of dynamic range. DXOMark lists the dynamic range of most cameras, and my Nikon D810 gets a touchdown in that department. It's much easier to tame blown out highlights and deep shadows when using Nikon's 'Active D-Lighting'. Try it, you'll like it. ;-)


D810 windowlight shot at the Select Models shoot.
 
Yes. My point was, when your camera can do 240 fps, you start to get high enough speeds to be useful for those of us who only shoot moving targets, event photographers in particular.

Personally, I would like 960 fps with the camera instantly merging 4 x 1/240s RAWs into a single low noise, high DR RAW photo so I don't need to even think about the camera doing exposure bracketing on the fly.

But that tech is many years away on full frame cameras. 2035 maybe.

 
Actually, that's a trick where the camera won't allow you to use the lowest (base) ISO, to prevent highlights from getting blown out. Sony's version is called D-Range Optimizer. It doesn't expand dynamic range; it shifts it.

https://www.learningwithexperts.com...7ShIXJgE8bb4wireMGSY0IUkqBd-fleOCWilRfEQ9-VTH
 
In reality it's still the base ISO, just displayed/mapped to a higher ISO value so the auto-exposure system doesn't overexpose the highlights. Then the JPEG engine recovers the underexposed areas accordingly, at the cost of noise of course.
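A sketch of that underexpose-then-lift behavior, with a made-up tone curve; this is illustrative only, not Sony's or Nikon's actual processing. The highlight that would clip in a straight capture survives, while shadows are pushed back up digitally:

```python
import numpy as np

def dro_style(linear, protect_stops=1.0):
    """Capture darker to protect highlights, then lift with a toy curve."""
    under = linear / (2.0 ** protect_stops)          # effective underexposure
    lifted = under ** (1.0 / (1.0 + protect_stops))  # crude shadow/midtone lift
    return np.clip(lifted, 0.0, 1.0)

scene = np.array([0.05, 0.25, 0.5, 1.0, 1.8])  # 1.8 would clip a normal capture
print(np.clip(scene, 0.0, 1.0))  # straight capture: top value clips at 1.0
print(dro_style(scene))          # top value stays below 1.0, shadows lifted
```

The lift amplifies whatever noise the shadows contain, which is exactly the cost mentioned above.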
 
 
You’re correct. I oversimplified and stated it wrong.
 
OM live composite is a great example of how in-camera processing can be useful in the field. You can see each interval shot's impact in real time.
 
You can do the same by stacking a burst in post.

When it comes to noise reduction, it scales with the square root of the number of frames taken, best case. This is why I said you need at least 16 shots.

Likewise, if you stack 16 exposures together you get roughly the effect of a 4-stop ND, and with 64 exposures about 6 stops.

This has been available for some time; it is the same story as light painting, with or without live composite.

The problem is that you need a lot of RAWs to stack 64 frames, or you need to average them one on top of the other, which loses data.

But in general, Olympus/OM System's computational photography features are light years ahead of Sony and others.
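The sqrt(N) and simulated-ND arithmetic above can be tabulated: N averaged frames of shutter t render motion like one frame of N*t, i.e. roughly log2(N) stops of neutral density, and the conventional ND number is 2 to that power.

```python
import math

# Map a stacked frame count to its approximate ND-filter equivalent.
for n in (4, 16, 64):
    stops = math.log2(n)           # 16 frames -> 4 stops, 64 -> 6 stops
    print(f"{n} frames ~ {stops:.0f}-stop ND (ND{2 ** int(stops)})")
```

The same log2(N) figure is also the best-case noise improvement in stops, since noise drops by sqrt(N).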
 
My primary interest was in the live feature for time-based smoothing of water scenes (an ND alternative).

Additive exposure with very short exposures is another interesting technique.

Sure. And, as has been discussed, only having 20 MP helps.
 
This is also where higher readout speed helps. Digital-ND stacking needs the readout to take no more than about half of the current shutter speed; otherwise the intervals between frames would be very noticeable.
 
If the shutter speed is high, there is no point stacking identical exposures.

There are computational ways to remove movement; see Panasonic's high-resolution shot mode.
 
The DR of all current cameras should cover much more than the tonal range shown here.
There is nothing better than exposing your RAW files carefully for the highlights if you want the best image quality. This also preserves as much shadow detail as possible for single-frame capture (as opposed to multi-frame HDR techniques).
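The expose-for-the-highlights idea can be sketched as picking the exposure that puts the brightest pixel just under clipping. The scene values and headroom here are illustrative, not a metering algorithm from any real camera:

```python
import numpy as np

def ettr_exposure(scene, clip=1.0, headroom=0.02):
    """Exposure multiplier that puts the brightest pixel just below clipping."""
    return (clip - headroom) / scene.max()

scene = np.array([0.01, 0.2, 0.4, 0.7])  # relative linear scene luminances
exposed = scene * ettr_exposure(scene)
print(exposed.max())  # just below 1.0: highlights kept, shadows maximized
```

Pushing everything up until the highlights nearly clip gives the shadows the most photons a single frame can collect, which is what protects their detail.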
 