What new computational feature would you want?

The ability to remove the 4/3 sensor and slide in a full-frame sensor would be awesome. It would be like having a TC for any lens.
 
1) Handheld night (starlight) mode with output to raw and control over ISO and shutter speed.
You're kidding, right? HH mode for starlight focus would need shutter times far in excess of IBIS capacity. This is a contradictory request. Handheld shooting relies on IBIS, and once the timing lines cross, one can no longer handhold. So the “control over shutter speed” ask would push timing beyond IBIS capacity.
2) Improved HDR mode with output to raw.
It’s already done. RTFM. RAW is not a format. It’s a digital file and each OEM turns it into a format. HDR is a format composited from a series of RAW files, multiple exposures averaged. That’s called exposure bracketing if you want multiple RAWs. The problem with keeping it “RAW” the way you propose is that you’re effectively roundtripping into the JPEG process to combine images, and then re-outputting to RAW. There’s a reason why this is not done in-camera as a RAW.
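For concreteness, here is a minimal sketch of the kind of merge being asked for in item 2: combining a bracketed set of exposures while the data is still linear and raw-like, rather than after JPEG processing. It assumes the frames have already been decoded into linear float numpy arrays normalized to 0-1; the function name, the saturation threshold, and the synthetic check are illustrative only, not any camera maker's pipeline.

```python
# Sketch only: exposure-bracket merge in linear space (not any OEM's actual pipeline).
import numpy as np

def merge_bracket(frames, exposure_times, sat_level=0.95):
    """Weighted average of bracketed frames on a common radiance scale.

    frames: list of float arrays in [0, 1], already demosaicked/linearized.
    exposure_times: shutter time of each frame in seconds.
    Clipped pixels are excluded, so highlights come from the shorter exposures.
    """
    acc = np.zeros_like(frames[0], dtype=np.float64)
    weight = np.zeros_like(frames[0], dtype=np.float64)
    for frame, t in zip(frames, exposure_times):
        w = (frame < sat_level).astype(np.float64)  # ignore blown pixels
        acc += w * (frame / t)                      # scale to scene radiance
        weight += w
    return acc / np.maximum(weight, 1e-9)

# Tiny synthetic check: a two-pixel "scene" captured at three shutter speeds.
scene = np.array([0.2, 200.0])                          # radiance units
times = [1 / 100, 1 / 400, 1 / 1600]
frames = [np.clip(scene * t, 0.0, 1.0) for t in times]  # longest exposure clips pixel 2
print(merge_bracket(frames, times))                     # ~[0.2, 200.0] recovered
```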
3) Improved HHHR with motion compensation.
Again, a contradiction. IBIS cannot control for subject motion. In theory an extremely advanced AI could do so, but that would be substituting pixels that were not in the original capture: fake data, not a computational assist. You seem to fundamentally misunderstand “handheld”, and the basic laws of physics.
4) A general frame-averaging mode with output to raw and selection as to how many frames are averaged, including frame alignment.
More nonsense.
 
after reading all the previous posts (for inspiration) my wishes would be:

- multi spot metering ala OM4. I had the OM4 a million years ago and used this all the time with K64 film

- Hand Held Starlight (if that's the correct name), multiple shots, high ISO, low noise, 20meg output at 14 bits. Actually, thinking about this, isn't this just HHHR with a smaller file?

- ability to double up on the current offerings, e.g. HiRes plus GND together

- output current CP functionality to RAW as well as JPEG

--
Alan Scott
"Everything we hear is an opinion, not a fact. Everything we see is a perspective, not the truth" - Marcus Aurelius
 
This would be my top two priorities:
  • Much faster equivalent to handheld hi-res to significantly improve SNR (and all the benefits that come with that) while decreasing motion artifacts, both in good and especially in low light. This has been discussed earlier in this thread.
Removing motion artifacts in HR modes decreases SNR where the artifacts initially occurred.
Was not referring to the utilization of software-driven motion artifact removal. Was referring to markedly increasing the rate of multiple image acquisition to decrease the necessity for such removal in the first place.
The selected shutter speed will always limit the rate of image acquisition.
Exactly, and I suspect too often folks new to multi-shot features like HHHR revert to their old habit of using way too slow a shutter speed to keep the ISO low in low-light situations. In fact, the gem of multiple-image stacking is that it allows faster shutter speeds, even if high ISO is needed, and lets the stacking take care of the noise. The faster shutter speeds also go a long way toward mitigating motion blur in the image.
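A quick numerical check of that point, under the simple assumption of independent noise of equal strength in every frame: averaging N frames cuts the noise by roughly sqrt(N), so a 16-frame stack of fast, high-ISO shots lands well ahead of any single frame at the same settings. The numbers below are synthetic, purely to illustrate the scaling.

```python
# Illustration only: sqrt(N) SNR improvement from averaging N equally noisy frames.
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0        # arbitrary linear signal level
noise_sigma = 20.0    # per-frame noise (std dev), e.g. a fast high-ISO exposure
n_frames = 16

frames = signal + rng.normal(0.0, noise_sigma, size=(n_frames, 100_000))
single_snr = signal / frames[0].std()
stacked_snr = signal / frames.mean(axis=0).std()

print(f"single frame SNR ~ {single_snr:.1f}")              # ~5
print(f"{n_frames}-frame stack SNR ~ {stacked_snr:.1f}")    # ~20, i.e. sqrt(16) = 4x better
```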
  • A full resolution, much faster, and AI-driven version of Panasonic's 4K Defocus feature. Select the item you want on the touchscreen or via the joystick, have AI detect that, and then map distance in front and behind it. Plug in desired DOF (shallow or deep) and have it rapidly capture multiple exposures and subsequently bake the desired DOF into a RAW, JPEG, and/or HEIF file.
The ND/grad ND is a bonus, but that functionality can easily be replaced with an inexpensive filter.
A long exposure (vs a series of short exposures) has the disadvantage of increased correlated noise that may need LENR. Also, LiveND/frame-averaging allows fine-tuning the long exposure without swapping a series of ND filters. There are also advantages of ND filters.
Yes. Despite that, it would not be my first priority. And with item 1 (above), there would be a reduced impact of increased noise with signal averaging- including with a physical filter. So while I wouldn't say no to having the LiveND feature in a camera, its utility is of marginal value for me.
I think Olympus owners underestimate the usefulness of LiveND. Only Phase One has something similar, and it is a highly valued feature.
The two options above would mitigate the need for a FF or larger sensor camera. It would, however, require a healthy processor and memory upgrade given the processing speed and storage space required to make the calculations.

Put these two features in a "smallish" rangefinder MFT camera and you've got a winner.
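On the 4K Defocus-style wish above: the deep-DOF half is essentially in-camera focus stacking, and a rough sketch of that part is easy to write down. The snippet below keeps, per pixel, the locally sharpest frame from an already-aligned focus bracket; the shallow-DOF half would additionally need a depth map to decide what to blur, which is beyond this sketch. The sharpness measure and function name are illustrative only.

```python
# Sketch of the deep-DOF half: composite an aligned focus bracket by local sharpness.
import numpy as np
from scipy import ndimage

def stack_for_deep_dof(frames):
    """frames: list of aligned float grayscale arrays from a focus bracket."""
    # Local focus measure: smoothed absolute Laplacian (high where detail is sharp).
    sharpness = np.stack([
        ndimage.uniform_filter(np.abs(ndimage.laplace(f)), size=9)
        for f in frames
    ])
    best = np.argmax(sharpness, axis=0)              # sharpest frame index per pixel
    stacked = np.stack(frames)
    return np.take_along_axis(stacked, best[None], axis=0)[0]
```

A real implementation would also blend across seams and handle the shallow-DOF case via depth estimation, but the core per-pixel selection step really is this simple.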
 
after reading all the previous posts (for inspiration) my wishes would be:

- multi spot metering ala OM4. I had the OM4 a million years ago and used this all the time with K64 film

- Hand Held Starlight (if that's the correct name), multiple shots, high ISO, low noise, 20meg output at 14 bits. Actually, thinking about this, isn't this just HHHR with a smaller file?
Yes… and no. The concept is essentially the same: multiple high-ISO shots stacked in-camera to produce a noise-free image. The difference is that with HH Starlight there is little or no ability to alter the camera-selected shooting parameters; for instance, the ISO used is entirely at the discretion of the in-camera algorithm.

But as a quick-and-dirty, almost point-and-shoot solution for low-light situations it is pretty darn handy, with amazingly good results (JPEG only).
- ability to double up on the current offerings, e.g. HiRes plus GND together

- output current CP functionality to RAW as well as JPEG
 
after reading all the previous posts (for inspiration) my wishes would be:

- multi spot metering ala OM4. I had the OM4 a million years ago and used this all the time with K64 film

- Hand Held Starlight (if that's the correct name), multiple shots, high ISO, low noise, 20meg output at 14 bits. Actually, thinking about this, isn't this just HHHR with a smaller file?
Yes… and no. The concept is essentially the same: multiple high-ISO shots stacked in-camera to produce a noise-free image. The difference is that with HH Starlight there is little or no ability to alter the camera-selected shooting parameters; for instance, the ISO used is entirely at the discretion of the in-camera algorithm.

But as a quick-and-dirty, almost point-and-shoot solution for low-light situations it is pretty darn handy, with amazingly good results (JPEG only).
How does it fare when compared to a single raw shot with applied AI NR?
- ability to double up on the current offerings, e.g. HiRes plus GND together

- output current CP functionality to RAW as well as JPEG
 
after reading all the previous posts (for inspiration) my wishes would be:

- multi spot metering ala OM4. I had the OM4 a million years ago and used this all the time with K64 film

- Hand Held Starlight (if that's the correct name), multiple shots, high ISO, low noise, 20meg output at 14 bits. Actually, thinking about this, isn't this just HHHR with a smaller file?
Yes… and no. The concept is essentially the same: multiple high-ISO shots stacked in-camera to produce a noise-free image. The difference is that with HH Starlight there is little or no ability to alter the camera-selected shooting parameters; for instance, the ISO used is entirely at the discretion of the in-camera algorithm.

But as a quick-and-dirty, almost point-and-shoot solution for low-light situations it is pretty darn handy, with amazingly good results (JPEG only).
How does it fare when compared to a single raw shot with applied AI NR?
- ability to double up on the current offerings, e.g. HiRes plus GND together

- output current CP functionality to RAW as well as JPEG
It has been some time since I compared the processed raw (Topaz NR) to the stacked JPEG, but I gave up on the raw as being more work to get even close to the JPEG. I routinely use JPEGs and raws interchangeably, so the JPEGs were fine with me. I suspect if I were one who avoids anything JPEG then I most likely would never use this feature anyway.

I should note that I mostly use HH Starlight with cameras and lenses that don't have as good IBIS, where one might be OK with handholding lower ISO/slower shutter speeds, like an OM-1 with a sync-stabilized 100-400.
 
This would be my top two priorities:
  • Much faster equivalent to handheld hi-res to significantly improve SNR (and all the benefits that come with that) while decreasing motion artifacts, both in good and especially in low light. This has been discussed earlier in this thread.
Removing motion artifacts in HR modes decreases SNR where the artifacts initially occurred.
Was not referring to the utilization of software-driven motion artifact removal. Was referring to markedly increasing the rate of multiple image acquisition to decrease the necessity for such removal in the first place.
The selected shutter speed will always limit the rate of image acquisition.
Exactly, and I suspect too often folks new to multi-shot features like HHHR revert to their old habit of using way too slow a shutter speed to keep the ISO low in low-light situations. In fact, the gem of multiple-image stacking is that it allows faster shutter speeds, even if high ISO is needed, and lets the stacking take care of the noise. The faster shutter speeds also go a long way toward mitigating motion blur in the image.
FYI, this response is primarily for SrMi, but I'm including it here since I agree with Gary's point.

Of course shutter speed would matter in image acquisition. But as Gary notes, if you can do multiple acquisitions, and if you can read out the sensor and process the information fast enough, then you get the benefits of signal averaging and reduced motion artifacts, both within a given acquisition and, to a lesser extent, between acquisitions.
  • A full resolution, much faster, and AI-driven version of Panasonic's 4K Defocus feature. Select the item you want on the touchscreen or via the joystick, have AI detect that, and then map distance in front and behind it. Plug in desired DOF (shallow or deep) and have it rapidly capture multiple exposures and subsequently bake the desired DOF into a RAW, JPEG, and/or HEIF file.
The ND/grad ND is a bonus, but that functionality can easily be replaced with an inexpensive filter.
A long exposure (vs a series of short exposures) has the disadvantage of increased correlated noise that may need LENR. Also, LiveND/frame-averaging allows fine-tuning the long exposure without swapping a series of ND filters. There are also advantages of ND filters.
Yes. Despite that, it would not be my first priority. And with item 1 (above), there would be a reduced impact of increased noise with signal averaging- including with a physical filter. So while I wouldn't say no to having the LiveND feature in a camera, its utility is of marginal value for me.
I think Olympus owners underestimate the usefulness of LiveND. Only Phase One has something similar, and it is a highly valued feature.
Funny, I think you got that backwards: I think Olympus users overestimate the usefulness of LiveND.

I'm neither a fan nor a detractor. I see benefits for some, but not significant enough for me to get excited about it.
The two options above would mitigate the need for a FF or larger sensor camera. It would, however, require a healthy processor and memory upgrade given the processing speed and storage space required to make the calculations.

Put these two features in a "smallish" rangefinder MFT camera and you've got a winner.
 
This would be my top two priorities:
  • Much faster equivalent to handheld hi-res to significantly improve SNR (and all the benefits that come with that) while decreasing motion artifacts, both in good and especially in low light. This has been discussed earlier in this thread.
Removing motion artifacts in HR modes decreases SNR where the artifacts initially occurred.
Was not referring to the utilization of software-driven motion artifact removal. Was referring to markedly increasing the rate of multiple image acquisition to decrease the necessity for such removal in the first place.
The selected shutter speed will always limit the rate of image acquisition.
Exactly, and I suspect too often folks new to multi-shot features like HHHR revert to their old habit of using way too slow a shutter speed to keep the ISO low in low-light situations. In fact, the gem of multiple-image stacking is that it allows faster shutter speeds, even if high ISO is needed, and lets the stacking take care of the noise. The faster shutter speeds also go a long way toward mitigating motion blur in the image.
FYI, this response is primarily for SrMi, but I'm including it here since I agree with Gary's point.

Of course shutter speed would matter in image acquisition. But as Gary notes, if you can do multiple acquisitions, and if you can read out the sensor and process the information fast enough, then you get the benefits of signal averaging and reduced motion artifacts, both within a given acquisition and, to a lesser extent, between acquisitions.
Processing occurs after image acquisition and therefore should not have any influence on motion artifacts. Even with a global shutter sensor, motion artifacts can be an issue. On the other hand, why not accept motion artifacts as part of the image, rather than considering them "ugly?"
  • A full resolution, much faster, and AI-driven version of Panasonic's 4K Defocus feature. Select the item you want on the touchscreen or via the joystick, have AI detect that, and then map distance in front and behind it. Plug in desired DOF (shallow or deep) and have it rapidly capture multiple exposures and subsequently bake the desired DOF into a RAW, JPEG, and/or HEIF file.
The ND/grad ND is a bonus, but that functionality can easily be replaced with an inexpensive filter.
A long exposure (vs a series of short exposures) has the disadvantage of increased correlated noise that may need LENR. Also, LiveND/frame-averaging allows fine-tuning the long exposure without swapping a series of ND filters. There are also advantages of ND filters.
Yes. Despite that, it would not be my first priority. And with item 1 (above), there would be a reduced impact of increased noise with signal averaging- including with a physical filter. So while I wouldn't say no to having the LiveND feature in a camera, its utility is of marginal value for me.
I think Olympus owners underestimate the usefulness of LiveND. Only Phase One has something similar, and it is a highly valued feature.
Funny, I think you got that backwards: I think Olympus users overestimate the usefulness of LiveND.

I'm neither a fan nor a detractor. I see benefits for some, but not significant enough for me to get excited about it.
LiveND is the only way to extract the most DR from the camera (1/3 stops better than a7RV at base ISO, limited only by 12-bit raws in m43), while being free of motion artifacts. You can get deep, noise-free shadows and, if IBIS can handle it, also shoot handheld.
The two options above would mitigate the need for a FF or larger sensor camera. It would, however, require a healthy processor and memory upgrade given the processing speed and storage space required to make the calculations.

Put these two features in a "smallish" rangefinder MFT camera and you've got a winner.
 
after reading all the previous posts (for inspiration) my wishes would be:

- multi spot metering ala OM4. I had the OM4 a million years ago and used this all the time with K64 film

- Hand Held Starlight (if that's the correct name), multiple shots, high ISO, low noise, 20meg output at 14 bits. Actually, thinking about this, isn't this just HHHR with a smaller file?
Yes… and no. The concept is essentially the same: multiple high-ISO shots stacked in-camera to produce a noise-free image. The difference is that with HH Starlight there is little or no ability to alter the camera-selected shooting parameters; for instance, the ISO used is entirely at the discretion of the in-camera algorithm.

But as a quick-and-dirty, almost point-and-shoot solution for low-light situations it is pretty darn handy, with amazingly good results (JPEG only).
How does it fare when compared to a single raw shot with applied AI NR?
- ability to double up on the current offerings, e.g. HiRes plus GND together

- output current CP functionality to RAW as well as JPEG
It has been some time since I compared the processed raw (Topaz NR) to the stacked JPEG, but I gave up on the raw as being more work to get even close to the JPEG. I routinely use JPEGs and raws interchangeably, so the JPEGs were fine with me. I suspect if I were one who avoids anything JPEG then I most likely would never use this feature anyway.

I should note that I mostly use HH Starlight with cameras and lenses that don't have as good IBIS, where one might be OK with handholding lower ISO/slower shutter speeds, like an OM-1 with a sync-stabilized 100-400.
Adobe AI Denoise does not yet work on JPEGs, and maybe never will. It uses the full data of the raw and includes "raw detail". With JPEGs being compressed, traditional noise reduction may be as good as it gets.
 
This would be my top two priorities:
  • Much faster equivalent to handheld hi-res to significantly improve SNR (and all the benefits that come with that) while decreasing motion artifacts, both in good and especially in low light. This has been discussed earlier in this thread.
Removing motion artifacts in HR modes decreases SNR where the artifacts initially occurred.
Was not referring to the utilization of software-driven motion artifact removal. Was referring to markedly increasing the rate of multiple image acquisition to decrease the necessity for such removal in the first place.
The selected shutter speed will always limit the rate of image acquisition.
Exactly, and I suspect too often folks new to multi-shot features like HHHR revert to their old habit of using way too slow a shutter speed to keep the ISO low in low-light situations. In fact, the gem of multiple-image stacking is that it allows faster shutter speeds, even if high ISO is needed, and lets the stacking take care of the noise. The faster shutter speeds also go a long way toward mitigating motion blur in the image.
FYI, this response is primarily for SrMi, but I'm including it here since I agree with Gary's point.

Of course shutter speed would matter in image acquisition. But as Gary notes, if you can do multiple acquisitions, and if you can read out the sensor and process the information fast enough, then you get the benefits of signal averaging and reduced motion artifacts, both within a given acquisition and, to a lesser extent, between acquisitions.
Processing occurs after image acquisition and therefore should not have any influence on motion artifacts. Even with a global shutter sensor, motion artifacts can be an issue. On the other hand, why not accept motion artifacts as part of the image, rather than considering them "ugly?"
"Motion" comes in two flavors, IMO; primarily subject motion, like moving foliage, and then camera motion, like I'm not a too steady octogenarian.

My experience is that as the AI internal processing gets better and better, the ability to mitigate or "mask" the subject motion is becoming less of an issue. However, even with strides in camera stabilization, handholding without camera motion remains my bigger concern. The process alignment pre-stack is pretty good if the camera movement is constrained to within the focus plane. Once the movement results in varying the subject distance, the algorithm doesn't have sufficient scaling facility to negate ghosting. It's getting better, or the IBIS is getting better, but it still is my biggest problem
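To illustrate why the in-plane part is the "easy" case: pre-stack alignment is typically some form of shift estimation between frames, and a bare-bones version using phase correlation is sketched below (grayscale numpy frames of equal size assumed). It recovers only x/y translation, so a change in subject distance between frames, i.e. a scale change, is exactly the case it cannot fix, which matches the ghosting described above. This is a generic technique, not OM System's actual algorithm.

```python
# Bare-bones translational alignment via phase correlation (x/y shift only).
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the (dy, dx) pixel translation between two equally sized grayscale frames."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    cross /= np.abs(cross) + 1e-12                   # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the halfway point correspond to negative shifts (FFT wrap-around).
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```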
  • A full resolution, much faster, and AI-driven version of Panasonic's 4K Defocus feature. Select the item you want on the touchscreen or via the joystick, have AI detect that, and then map distance in front and behind it. Plug in desired DOF (shallow or deep) and have it rapidly capture multiple exposures and subsequently bake the desired DOF into a RAW, JPEG, and/or HEIF file.
The ND/grad ND is a bonus, but that functionality can easily be replaced with an inexpensive filter.
A long exposure (vs a series of short exposures) has the disadvantage of increased correlated noise that may need LENR. Also, LiveND/frame-averaging allows fine-tuning the long exposure without swapping a series of ND filters. There are also advantages of ND filters.
Yes. Despite that, it would not be my first priority. And with item 1 (above), there would be a reduced impact of increased noise with signal averaging- including with a physical filter. So while I wouldn't say no to having the LiveND feature in a camera, its utility is of marginal value for me.
I think Olympus owners underestimate the usefulness of LiveND. Only Phase One has something similar, and it is a highly valued feature.
Funny, I think you got that backwards: I think Olympus users overestimate the usefulness of LiveND.

I'm neither a fan nor a detractor. I see benefits for some, but not significant enough for me to get excited about it.
LiveND is the only way to extract the most DR from the camera (1/3 stops better than a7RV at base ISO, limited only by 12-bit raws in m43), while being free of motion artifacts. You can get deep, noise-free shadows and, if IBIS can handle it, also shoot handheld.
The two options above would mitigate the need for a FF or larger sensor camera. It would, however, require a healthy processor and memory upgrade given the processing speed and storage space required to make the calculations.

Put these two features in a "smallish" rangefinder MFT camera and you've got a winner.
 
  • In-camera panorama
  • High number of multiple exposures with the option to average exposures or combine (great for astro, see next wish)
  • Star tracking (though HHHR can align images (stars) to a small extent); an actual tracking / sky-rotate function would be better.
  • In-camera AI Noise Removal
  • Improved control of focus stacking range
  • Selective axes on in-camera defishing with multiple options of projection
  • A selection of frame styles
  • Antique image rendering effect by time period
  • In-camera B&W Silver Efex-type photo styles
Addendum
  • Hyperfocal focus. Select an aperture, and the camera automatically focuses at the hyperfocal distance to maximize DoF to a preset far distance.
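The hyperfocal item is the most mechanical one on the list, since it is just the standard formula H ≈ f²/(N·c) + f. A small sketch, assuming a circle of confusion of roughly 0.015 mm for Four Thirds (a common but debatable choice):

```python
# Hyperfocal distance from the textbook formula; the CoC value is an assumption.
def hyperfocal_m(focal_length_mm: float, f_number: float, coc_mm: float = 0.015) -> float:
    """H = f^2 / (N * c) + f, converted to metres."""
    h_mm = focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm
    return h_mm / 1000.0

print(f"{hyperfocal_m(25, 8):.1f} m")  # 25 mm at f/8 -> about 5.2 m
```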
--
Roger
 
Bring back the Hand Held Starlight mode with some improvements

Robin Wong has a new video on this feature

Allan
OM-5 (v1) has it. Scene mode on the dial > Nightscapes, and it's in there.
Nightscapes (Night Scene) is not the same as Hand-Held Starlight, which is what I was talking about.

The 1-series cameras do not have Scene modes and therefore no Hand-Held Starlight, which I think is a pity.

Robin Wong is talking about this mode and how it could be improved.

Allan
 
1) Handheld night (starlight) mode with output to raw and control over ISO and shutter speed.
You're kidding, right? HH mode for starlight focus would need shutter times far in excess of IBIS capacity. This is a contradictory request. Handheld shooting relies on IBIS, and once the timing lines cross, one can no longer handhold. So the “control over shutter speed” ask would push timing beyond IBIS capacity.
But it is not for "starlight focus"; that is different.

Hand-Held Starlight is a completely different mode and is used for taking photos when it is dark.

2) Improved HDR mode with output to raw.
It’s already done. RTFM. RAW is not a format. It’s a digital file and each OEM turns it into a format. HDR is a format composited from a series of RAW files, multiple exposures averaged. That’s called exposure bracketing if you want multiple RAWs. The problem with keeping it “RAW” the way you propose is that you’re effectively roundtripping into the JPEG process to combine images, and then re-outputting to RAW. There’s a reason why this is not done in-camera as a RAW.
3) Improved HHHR with motion compensation.
Again, a contradiction. IBIS cannot control for subject motion. In theory an extremely advanced AI could do so, but that would be substituting pixels that were not in the original capture: fake data, not a computational assist. You seem to fundamentally misunderstand “handheld”, and the basic laws of physics.
Hand-Held Starlight does this, so it may be possible.

4) A general frame-averaging mode with output to raw and selection as to how many frames are averaged, including frame alignment.
Hand-Held Starlight does this, so it may be possible.
More nonsense.
No, I do not think so, just possibilities. Check out the Hand-Held Starlight mode and see what can be done.

Allan
 
This really isn't a computational mode, but I wish there were a more complex 'P' mode.

Now, in 'P' mode, the camera controls the three traditional corners of the exposure triangle:
  • The aperture that the lens is set to;
  • The shutter speed; (and)
  • The ISO used.
What I want is an option where I can constrain all 3 corners and choose the order in which the camera picks them.

For example, one time I might want to pick a shutter speed in a range as the most important item, the aperture range second, and the ISO third. A lot of times in this kind of mode, I want the fastest shutter speed and there is a setting for the minimum shutter speed to use in auto modes. But I may intentionally want to use a slower shutter speed to show movement.

Another time, I might be shooting for a given depth of field and aperture is the most important selection criterion, but at the same time I might need the shutter speed to be within a range. If I wanted a single aperture, I could shoot in aperture-priority mode, but if, say, f/4 to f/7 were acceptable, there isn't an auto mode to select between these apertures.

True, if I am setting up the shot I have time to select all three things. But at events, where I'm looking to capture the decisive moment (DM), I don't have time to iterate through the options before the DM goes away. I've been at events that have both indoor and outdoor parts, and I prefer not to have to keep changing things as I move within the event.
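As a sketch of what such a "ranged priority" P mode could look like internally: give the camera a metered EV plus an acceptable range for each of shutter, aperture, and ISO, and let it pick the combination that lands closest to correct exposure, with the user's priority order left as a tie-breaker. Everything below, the value tables, the ev100 helper, and the brute-force search, is illustrative only, not any manufacturer's firmware logic.

```python
# Illustrative "ranged priority" auto-exposure: pick settings within user ranges
# that best match a metered EV (referenced to ISO 100).
import itertools
import math

SHUTTERS = [1/4000, 1/2000, 1/1000, 1/500, 1/250, 1/125, 1/60, 1/30, 1/15]  # seconds
APERTURES = [2.8, 4.0, 5.6, 8.0, 11.0]
ISOS = [200, 400, 800, 1600, 3200, 6400]

def ev100(shutter, aperture, iso):
    """Scene EV (ISO 100 reference) that these settings would correctly expose."""
    return math.log2(aperture ** 2 / shutter) - math.log2(iso / 100)

def pick_exposure(target_ev, shutter_range, aperture_range, iso_range):
    candidates = [
        (s, a, i)
        for s, a, i in itertools.product(SHUTTERS, APERTURES, ISOS)
        if shutter_range[0] <= s <= shutter_range[1]
        and aperture_range[0] <= a <= aperture_range[1]
        and iso_range[0] <= i <= iso_range[1]
    ]
    # Closest match to the metered EV wins; ties could then be broken by the
    # user's stated priority order (e.g. prefer the fastest allowed shutter).
    return min(candidates, key=lambda c: abs(ev100(*c) - target_ev))

# Example: shutter must stay between 1/1000 and 1/250 s, aperture between
# f/4 and f/8, ISO between 200 and 1600, for a metered EV of 11.
print(pick_exposure(11.0, (1/1000, 1/250), (4.0, 8.0), (200, 1600)))
```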
 
This would be my top two priorities:
  • Much faster equivalent to handheld hi-res to significantly improve SNR (and all the benefits that come with that) while decreasing motion artifacts, both in good and especially in low light. This has been discussed earlier in this thread.
Removing motion artifacts in HR modes decreases SNR where the artifacts initially occurred.
Was not referring to the utilization of software-driven motion artifact removal. Was referring to markedly increasing the rate of multiple image acquisition to decrease the necessity for such removal in the first place.
The selected shutter speed will always limit the rate of image acquisition.
Exactly, and I suspect too often folks new to multi-shot features like HHHR revert to their old habit of using way too slow a shutter speed to keep the ISO low in low-light situations. In fact, the gem of multiple-image stacking is that it allows faster shutter speeds, even if high ISO is needed, and lets the stacking take care of the noise. The faster shutter speeds also go a long way toward mitigating motion blur in the image.
FYI, this response is primarily for SrMi, but I'm including it here since I agree with Gary's point.

Of course shutter speed would matter in image acquisition. But as Gary notes, if you can do multiple acquisitions, and if you can read out the sensor and process the information fast enough, then you get the benefits of signal averaging and reduced motion artifacts, both within a given acquisition and, to a lesser extent, between acquisitions.
Processing occurs after image acquisition and therefore should not have any influence on motion artifacts. Even with a global shutter sensor, motion artifacts can be an issue. On the other hand, why not accept motion artifacts as part of the image, rather than considering them "ugly?"
"Motion" comes in two flavors, IMO; primarily subject motion, like moving foliage, and then camera motion, like I'm not a too steady octogenarian.

My experience is that as the AI internal processing gets better and better, the ability to mitigate or "mask" the subject motion is becoming less of an issue. However, even with strides in camera stabilization, handholding without camera motion remains my bigger concern. The process alignment pre-stack is pretty good if the camera movement is constrained to within the focus plane. Once the movement results in varying the subject distance, the algorithm doesn't have sufficient scaling facility to negate ghosting. It's getting better, or the IBIS is getting better, but it still is my biggest problem
Gary gets it. Definitely an area for CP-derived improvement.
  • A full resolution, much faster, and AI-driven version of Panasonic's 4K Defocus feature. Select the item you want on the touchscreen or via the joystick, have AI detect that, and then map distance in front and behind it. Plug in desired DOF (shallow or deep) and have it rapidly capture multiple exposures and subsequently bake the desired DOF into a RAW, JPEG, and/or HEIF file.
The ND/grad ND is a bonus, but that functionality can easily be replaced with an inexpensive filter.
A long exposure (vs a series of short exposures) has the disadvantage of increased correlated noise that may need LENR. Also, LiveND/frame-averaging allows fine-tuning the long exposure without swapping a series of ND filters. There are also advantages of ND filters.
Yes. Despite that, it would not be my first priority. And with item 1 (above), there would be a reduced impact of increased noise with signal averaging- including with a physical filter. So while I wouldn't say no to having the LiveND feature in a camera, its utility is of marginal value for me.
I think Olympus owners underestimate the usefulness of LiveND. Only Phase One has something similar, and it is a highly valued feature.
Funny, I think you got that backwards: I think Olympus users overestimate the usefulness of LiveND.

I'm neither a fan nor a detractor. I see benefits for some, but not significant enough for me to get excited about it.
LiveND is the only way to extract the most DR from the camera (1/3 stops better than a7RV at base ISO, limited only by 12-bit raws in m43), while being free of motion artifacts. You can get deep, noise-free shadows and, if IBIS can handle it, also shoot handheld.
Doesn't sound plausible that you can't get this DR with a physical ND filter and multiple signal acquisitions (e.g. the proposed HHHR modifications we're discussing).
The two options above would mitigate the need for a FF or larger sensor camera. It would, however, require a healthy processor and memory upgrade given the processing speed and storage space required to make the calculations.

Put these two features in a "smallish" rangefinder MFT camera and you've got a winner.
 
LiveND is the only way to extract the most DR from the camera (1/3 stops better than a7RV at base ISO, limited only by 12-bit raws in m43), while being free of motion artifacts. You can get deep, noise-free shadows and, if IBIS can handle it, also shoot handheld.
Doesn't sound plausible that you can't get this DR with a physical ND filter and multiple signal acquisitions (e.g. the proposed HHHR modifications we're discussing).
LiveND can average up to 128 images, significantly more than HR can. The measured PDR difference between HR and LiveND64 ranges only from 0.5 to 1 stop, primarily due to the 12-bit format's limitations on LiveND.

Of course, LiveND does not have motion artifact issues.
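For reference on the numbers: averaging N frames simulates an exposure N times longer, i.e. about log2(N) stops of ND, which is why LiveND64 behaves like a 6-stop filter and a 128-frame average would be about 7 stops. Total capture time is simply N times the per-frame shutter speed. A trivial illustration (the 1/15 s per-frame value is just an example):

```python
# Frame count -> equivalent ND strength and total capture time (example shutter speed).
import math

per_frame_shutter = 1 / 15  # seconds, example value only
for n in (8, 16, 32, 64, 128):
    stops = math.log2(n)
    total = n * per_frame_shutter
    print(f"{n:>3} frames: ~{stops:.0f} stops of ND equivalent, ~{total:.1f} s total")
```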
 
<snip>

LiveND is the only way to extract the most DR from the camera (1/3 stops better than a7RV at base ISO, limited only by 12-bit raws in m43), while being free of motion artifacts. You can get deep, noise-free shadows and, if IBIS can handle it, also shoot handheld.
Doesn't sound plausible that you can't get this DR with a physical ND filter and multiple signal acquisitions (e.g. the proposed HHHR modifications we're discussing).
LiveND can average up to 128 images, significantly more than HR can. The measured PDR difference between HR and LiveND64 ranges only from 0.5 to 1 stop, primarily due to the 12-bit format's limitations on LiveND.

Of course, LiveND does not have motion artifact issues.
Great. Then no reason software shouldn't be able to do 128 signal averages without the digital ND part.
 
