Shooting at lower resolution

doremon313
Does anyone know if shooting at lower resolution creates a better result? Does the camera combine the pixels on the sensor to make a less dense sensor, and therefore a better quality photo, or does shooting at low res have no effect at all?
--
San
http://www.sanweng.com
 
Does anyone know if shooting at lower resolution creates a better result? Does the camera combine the pixels on the sensor to make a less dense sensor, and therefore a better quality photo, or does shooting at low res have no effect at all?
It depends on your exact meaning of 'better' - but the short answer is really 'No', it doesn't.

In general, shooting at lower resolutions than the maximum sensor resolution is exactly the same as resizing (re-sampling) an image downward in image editing software on a PC.

The only benefit to 'resizing (re-sampling)' in the camera, is a smaller file size so more images can be stored on your memory card - otherwise the images might as well be shot at full resolution and resized later on a PC.

Resizing an image downward could make an image 'appear better' due to the averaging out of noise, but of course image resolution/detail is lost at the same time - so whether you consider it to be 'better' just depends on your requirements, i.e. for either 'maximum detail' or 'less noise but less detail'.
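(A quick way to convince yourself of the averaging effect is the Python sketch below, which simulates pure noise and averages it 4:1 - the noise magnitude is an illustrative assumption, not a measurement from any camera.)

    import random
    import statistics

    random.seed(0)
    # Assumed per-pixel noise: zero-mean Gaussian with sigma = 10 (arbitrary units).
    noise = [random.gauss(0.0, 10.0) for _ in range(100_000)]

    # A 4:1 downsample averages groups of four neighbouring values.
    averaged = [sum(noise[i:i + 4]) / 4 for i in range(0, len(noise), 4)]

    print(f"noise before resize: {statistics.stdev(noise):.2f}")         # ~10
    print(f"noise after 4:1 averaging: {statistics.stdev(averaged):.2f}")  # ~5, i.e. halved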

However, there are exceptions - for example, the Canon G11's 'Low Light Mode' actually combines the signal from 4 pixels into 1 output value, on the sensor, thereby increasing the effective ISO sensitivity by 4x, but of course reducing the resolution to 1/4 (from 10MP to 2.5MP).

But again, the only difference here is that the resizing is being done on the sensor, to achieve higher sensitivity (or lower noise than same full resolution sensitivity) - but the same compromise is being made, of giving up resolution/detail in exchange for higher sensitivity/lower noise.

Incidentally, the reduced resolution of this special 'Low Light Mode' has a side benefit of shorter processing time, which allows a faster 'frames per second' rate in this 'Low Light' (4x sensitivity, 2.5MP) mode.
 
However, there are exceptions - for example, the Canon G11's 'Low Light Mode' actually combines the signal from 4 pixels into 1 output value, on the sensor, thereby increasing the effective ISO sensitivity by 4x, but of course reducing the resolution to 1/4 (from 10MP to 2.5MP).
This does not increase sensitivity at all. It does have the potential to give a slight reduction in noise because you pay the noise penalty from the amplifier, A/D converter, etc. just once.

--
Ron Parr
Digital Photography FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
Ken Rockwell swears by it. I can't find the article on his site.
I believe that the G11 uses it (14 mpx back to 10 mpx).

I've used the 6 MPx M1 setting on the Canon G11, and the 8 MPx setting on the Olympus 620 DSLR, and didn't notice much difference.

I like Rockwell's Stuff, and I feel if he says it, it's worth a try.

Vjim
 
Ken Rockwell swears by it.
Ugh... I suspect he does this as part of (what I perceive to be) his campaign to willfully fail to understand why higher resolution files are useful.
I believe that the G11 uses it (14 mpx back to 10 mpx).
The G11 has a 10MP sensor. Nothing is ever 14MP on the G11.
I've used the 6 MPx M1 setting on the Canon G11, and the 8 MPx setting on the Olympus 620 DSLR, and didn't notice much difference.
On nearly all cameras, the lower resolution modes just read out the sensor at full resolution and downsample the resulting image, offering no advantage other than a smaller file size.

The G11 and S90 are rare in that they might actually do some binning on the sensor in low light mode.

--
Ron Parr
Digital Photography FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
On nearly all cameras, the lower resolution modes just read out the sensor at full resolution and downsample the resulting image, offering no advantage other than a smaller file size.
I guess it depends on some factors.

For example, if the lens is bad, you don't get any extra detail anyway at full resolution. That's especially true at the long end of the huge tele cameras.

Also, it depends on when the downsampling is done. Is it done before or after NR? Is there less NR applied if the resolution is smaller? How about the sharpening?
 
However, there are exceptions - for example, the Canon G11's 'Low Light Mode' actually combines the signal from 4 pixels into 1 output value, on the sensor, thereby increasing the effective ISO sensitivity by 4x, but of course reducing the resolution to 1/4 (from 10MP to 2.5MP).
This does not increase sensitivity at all. It does have the potential to give a slight reduction in noise because you pay the noise penalty from the amplifier, A/D converter, etc. just once.
I disagree - the mechanism of combining (summing) 4 pixels' signals into 1 sensor output signal is primarily/fundamentally about increasing the signal level presented to the A/D, Digic, etc. by a factor of 4x - the noise benefits, whilst an important factor, are only a secondary issue.

Regarding the actual term - of 'sensitivity' - I can see that it can be argued two different ways depending upon the particular context/definition chosen...

1. Summing the signals of 4 pixels into 1 sensor output signal value clearly produces a signal presented to the A/D, the Digic, and the photographer of 4x the signal level for a given exposure - therefore it facilitates shooting at 4x lower light levels whilst still presenting the same signal level (the same as 4x more light) to the camera's electronics for processing (albeit at reduced image resolution).

As far as the camera's signal processing, and the photographer, are concerned, this 4x increased signal output from the sensor (per 'aggregated pixel') is effectively increased sensitivity - hence the G11's 'Low Light Mode' is designated the ISO range ISO320 to ISO12800, exactly 4x the regular range of ISO80 to ISO3200.

2. By another definition, since the sensor's 'signal output per unit area' is still always the same (i.e. in Volts/Metre^2) for any given exposure - by this definition the 'sensitivity' is not increased at all.

However, whilst the latter is 'technically correct' by its own definition, it nevertheless does not equate to the real and tangible effect seen at the camera's signal processing/exposure control input, or to that experienced by the photographer, who really does experience a 4x increase in sensitivity and a 4x ISO range increase (repeat: albeit at a reduced image resolution).
 
This does not increase sensitivity at all. It does have the potential to give a slight reduction in noise because you pay the noise penalty from the amplifier, A/D converter, etc. just once.
I disagree - the mechanism of combining (summing) 4 pixels' signals into 1 sensor output signal is primarily/fundamentally about increasing the signal level presented to the A/D, Digic, etc. by a factor of 4x - the noise benefits, whilst an important factor, are only a secondary issue.
The sensitivity of the sensor does not change when you bin pixels on chip. What changes is a slight increase in SNR because you've reduced one source of noise relative to the signal.

Let's say you combine two pixels off chip (after readout). When you do this, your SNR will look like:

(signal1 + signal2) / (readout_noise1 + readout_noise2 + per_area_noise1 + per_area_noise2 + pixel_noise1 + pixel_noise2)

where:
  • readout_noise includes things like A/D converter noise and amplifier noise
  • pixel_noise includes things like reset noise and charge transfer losses
  • per_area_noise includes things like photon shot noise and dark current noise
Now, let's say you bin on chip. When you do this your SNR will look like:

(signal1 + signal2) / (readout_noise + per_area_noise1 + per_area_noise2 + pixel_noise1 + pixel_noise2)

The difference here is that the readout_noise is now paid once instead of twice, so SNR increases. Any apparent increase in sensitivity is due to this.
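To make those two expressions concrete, here is a minimal numeric sketch in Python. The noise magnitudes are invented purely for illustration (not measured values for any sensor), and independent noise sources are combined in quadrature:

    import math

    # Illustrative, assumed magnitudes (electrons RMS) - not measured values.
    signal_per_pixel = 400.0                      # photoelectrons in one pixel
    readout_noise = 10.0                          # amplifier + A/D noise, per readout
    per_area_noise = math.sqrt(signal_per_pixel)  # photon shot noise
    pixel_noise = 4.0                             # reset noise, transfer losses, etc.

    n = 2                                         # pixels combined (the G11 mode would use 4)
    signal = n * signal_per_pixel

    # Off chip: every pixel is read out separately, so each noise term is paid n times.
    noise_off_chip = math.sqrt(n * readout_noise**2 + n * per_area_noise**2 + n * pixel_noise**2)

    # On chip: the combined charge passes through the amplifier/A-D once,
    # so readout noise is paid a single time.
    noise_on_chip = math.sqrt(readout_noise**2 + n * per_area_noise**2 + n * pixel_noise**2)

    print(f"SNR combining off chip: {signal / noise_off_chip:.1f}")
    print(f"SNR binning on chip:    {signal / noise_on_chip:.1f}")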
1. Summing the signals of 4 pixels into 1 sensor output signal value, clearly produces a signal presented to the A/D, the Digic, the photographer, of 4x the signal level for a given exposure - therefore it facilitates shooting at 4x lower light levels whilst still presenting the same signal level (same as 4x more light) to the cameras electronics for processing (albeit at reduced image resolution).

As far as the camera's signal processing, and the photographer, sees it - this 4x increased signal output from the sensor (per 'agregated pixel') is effectively increased sensitivity, and hence the G11's 'Low Light Mode' is designated the ISO range ISO320 to ISO12800 - exactly 4x the regular range of ISO80 to ISO3200.
I can see how one might think that increasing the effective pixel sizes increases the ISO but it doesn't. If you replaced each block of 4 pixels with a single, larger pixel (and, of course, changed the filter pattern appropriately), you'd cut the resolution to 1/4 but you would not change the base ISO of the sensor significantly.

Actually, there would indeed be small changes resulting from factors such as improved microlens efficiency and other efficiencies gained by having larger pixels, but the change would be tiny compared to the change in area. Think about this: a Nikon D5000 has pixels with over 8x the area of the G10's pixels, but the base ISO goes up by just a little over 1 stop, not over 3 stops.
2. By another definition, since the sensor's 'signal output per unit area' is still always the same (i.e. in Volts/Metre^2) for any given exposure - by this definition the 'sensitivity' is not increased at all.

However, whilst the later is 'technicaly correct' 'by its own definition' - it nevertheless does not equate to the real and tangible effect seen at the camera's signal processing/exposure control input, or to that experienced by the photographer, who actually do experience a real 4x increase in sensitivity, 4x ISO range increase (repeat, albeit at a reduced image resolution).
Keep in mind that the sensitivity of the sensor does not change with ISO; it is constant.

You might think: If the pixels are binned, then I'm getting 4X the signal off the sensor, so I need less amplification and the ISO has therefore increased. Keep in mind, however, that ISO is not tied to specific signal levels. At any given ISO a large sensor like that in a digital SLR or digital medium format camera will have a many times larger signal coming off the chip, but this doesn't mean that the large sensor actually has a higher ISO; it just means that it will have higher SNR at any given ISO.

--
Ron Parr
Digital Photography FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
Ken Rockwell swears by it. I can't find the article on his site.
I believe that the G11 uses it (14 mpx back to 10 mpx).
No, the G11 only has a 10 MPx sensor - you're muddling things by thinking of the previous G10 model, which has 14 MPx.

The G10 with its 14 MPx definitely 'out-resolves' (produces more detail than) the G11 with its 10 MPx - but that's not the issue being discussed here.
I've used the 6 MPx M1 setting on the Canon G11, and the 8 MPx setting on the Olympus 620 DSLR, and didn't notice much difference.

I like Rockwell's Stuff, and I feel if he says it, it's worth a try.
The lower resolution output setting of any given camera cannot produce 'technically better' images - that's absolutely impossible - although they might look 'subjectively better', due to some noise aggregation/averaging and the fact that the full size image will, when viewed at 100%, always look softer by comparison, because it is made up of two-thirds interpolated pixel data.

Because of the latter point (interpolated data), it's quite possible that the first resolution setting below maximum may not be significantly different from the full resolution setting...

Remember that a 10 Mpx camera sensor is actually only a '2.5 MPx RED sensor' interleaved with a '5 MPx GREEN sensor' and a '2.5 MPx BLUE sensor'.

So although the camera outputs a 'full colour' 10 MPx image, 67% (two thirds) of the data output from the camera is 'filled in' (essentially 'fabricated', or 'made up') by interpolation.

It's not too difficult to appreciate that, since the highest resolving colour channel (green) is only ever half the full sensor pixel count, there probably isn't that much more real 'full colour' information than this amount in the image anyway.

It's easy to test the theory - re the G11 - just take any full 10 MPx image, resize (high quality resample) it down to 6 MPx in an image editor like Photoshop, then resample it back up to 10 MPx, and compare the 'down-then-up' 10 MPx copy with the original 10 MPx image and look at the difference(s).

I tried this many years ago with my first 5 MPx camera, and it was pretty difficult to see differences.
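If you'd rather script the test than do it in Photoshop, here's a rough sketch using the Pillow library - the filename is a placeholder, and 0.775 is the linear scale factor corresponding to a 10 MPx to 6 MPx reduction:

    from PIL import Image, ImageChops

    # Placeholder filename - use any full-resolution image from your camera.
    original = Image.open("full_res.jpg").convert("RGB")
    w, h = original.size

    # 10 MPx -> 6 MPx is a linear scale of sqrt(6/10), roughly 0.775.
    small = original.resize((round(w * 0.775), round(h * 0.775)), Image.LANCZOS)
    round_trip = small.resize((w, h), Image.LANCZOS)

    # Difference image: a mostly black result means little real detail was lost.
    diff = ImageChops.difference(original, round_trip)
    diff.save("difference.png")
    print("max per-channel difference:", max(hi for lo, hi in diff.getextrema()))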
 
It's not too difficult to appreciate that, since the highest resolving colour channel (green) is only ever half the full sensor pixel count, there probably isn't that much more real 'full colour' information than this amount in the image anyway.

It's easy to test the theory - re the G11 - just take any full 10 MPx image, resize (high quality resample) it down to 6 MPx in an image editor like Photoshop, then resample it back up to 10 MPx, and compare the 'down-then-up' 10 MPx copy with the original 10 MPx image and look at the difference(s).

I tried this many years ago with my first 5 MPx camera, and it was pretty difficult to see differences.
I remember KR making similar arguments, and I also remember some examples people have posted where it's hard to tell the difference. There are also examples where the difference is quite noticeable. I grabbed a crop of the G11 resolution test image from dpreview's G11 test and did the test. The leftmost image is the original, the middle is downsampled and then upsampled using Photoshop's bicubic resampling, and the rightmost one is downsampled and then upsampled using "bicubic sharper":

[image: three comparison crops]

I'll note that resolution charts are, in some sense, the best case for Bayer interpolation algorithms because they are more or less optimized for these sorts of tests. Still, it would be nice to retain sharp detail like that when possible.

--
Ron Parr
Digital Photography FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
The sensitivity of the sensor does not change when you bin pixels on chip....
Yes, as I clearly said in my preceding post - the 'surface sensitivity' (i.e. Volts/Metre^2) obviously does not change - it's a constant.

But it's equally obvious that the bigger the pixel size, the bigger the potential signal per pixel. 'Binning' several pixels' signals together produces a bigger signal, directly in proportion to the number of pixels binned together.
....What changes is a slight increase in SNR because you've reduced one source of noise relative to the signal.
That is NOT the primary reason/motivation. The binning of 4 pixels together is primarily done to get a fuller signal level to the signal processing inputs (A/D, Digic, etc.) whilst only exposing with 25% of the 'normal' light level.

If the signal was not brought up by the 4x factor via binning, then at least 2 bits of the A/D would be unused, with significant detriment to the image processing quality.

I fully understand the effects on noise of 'binning' are also beneficial (statistical combining of random noise, and only one 'readout noise' contribution) but this is only a 'secondary' consideration, after the actual 'signal level'.
....The difference here is that the readout noise is now paid once instead of twice, so SNR increases
The SNR benefits because the 'signal' components combine additively, whereas the 'random noise' components combine statistically, by the square root of the sum of the squares - plus some benefit due to only having one 'read noise' contribution (although the pixel is now 4x the size, so the read noise is actually 4x bigger 'spatially').
....Any apparent increase in sensitivity is due to this.
No - 'sensitivity' here is all about increasing the signal level, in the context of low light and a lack of signal level, and trying to make the fullest use of the A/D's bit range beyond the point where the available amplifier gains have reached their maximum.
I can see how one might think that increasing the effective pixel sizes increases the ISO but it doesn't.
Yes, it does.
If you replaced each block of 4 pixels with a single larger pixel … … you would not change the base ISO of the sensor significantly.
You are failing to differentiate between a camera system's 'sensor-derived base ISO' and any wider 'camera system ISO'.

The sensor's 'exposure saturation point' is the main determinant of the camera system's 'base ISO'.

Larger pixels do produce higher signals (that's indisputable) - but yes, they will still 'saturate' at the same exposure level regardless of their size.

Suddenly increasing a camera's 'effective pixel size' at 'base ISO', whilst keeping all other design parameters constant would certainly be a problem, because the signal per 'effective pixel' would increase and the 'saturation point' would then 'effectively' be moved lower relative to the increased signal level (result - a lower highlight clipping point).

However, at 'higher than base ISO' exposure settings, it's quite safe to 'increase the effective pixel size', therefore 'increasing the signal size per effective pixel', because at 'higher ISO exposure values', the sensor is no longer being exposed near its saturation point.

As long as the sensor is being exposed well clear of its saturation point, binning pixels increases the signal level for any given exposure level - and 'increased signal level for the same exposure level' is the very definition of increased system ISO.
...Think about this: A Nikon D5000 has pixels with over 8x the area as the G10's pixels, but the base ISO goes up but just a little over 1 stop, not over 3 stops.
That isn't relevant to the argument re 'increased ISO' via 'pixel binning' in the G11's 'Low Light Mode' - we are not debating the issue of a camera's/sensor's 'base ISO'.
2. By another definition, since the sensor's 'signal output per unit area' is still always the same (i.e. in Volts/Metre^2) for any given exposure - by this definition the 'sensitivity' is not increased at all.
Keep in mind that the sensitivity of the sensor does not change with ISO; it is constant.
Yes, that is exactly what I'd just said.
You might think: If the pixels are binned, then I'm getting 4X the signal off the sensor, so I need less amplification and the ISO has therefore increased.
Yes, that's exactly right - 4x the signal for the same exposure level - by any definition, the camera system's ISO has increased. The 'amplification' isn't even relevant to the point - it's simply that the signal is 4x higher for the same exposure value, so the effective system ISO is 4x higher - it's that simple.
Keep in mind, however, that ISO is not tied to specific signal levels. At any given ISO a large sensor like that in a digital SLR or digital medium format camera will have many times larger signal coming off the chip, but this doesn't mean that that the large sensor actually has a higher ISO; it just means that it will have higher SNR at any given ISO.
Absolutely - I've said as much myself.

The point of debate doesn't relate to any 'absolute' signal level - it's a matter of 'relative' signal levels - and clearly, if a sensor mode (the G11's 'Low Light Mode') can be made to output the same signal level at just 25% of the exposure level of another 'standard' mode, then the 'Low Light Mode' is effectively 4x the ISO of the 'standard' mode.

One last reiteration - SNR is not directly relevant to the argument - it's entirely about relative signal levels vs. exposure level.
 
But it's equally obvious that the bigger the pixel size, the bigger the potential signal per pixel. 'Binning' several pixels' signals together produces a bigger signal, directly in proportion to the number of pixels binned together.
Per logical pixel, yes, but this is also true if you bin in software after the fact.
That is NOT the primary reason/motivation. The binning of 4 pixels together is primarily done to get a fuller signal level to the signal processing inputs (A/D, Digic, etc.) whilst only exposing with 25% of the 'normal' light level.

If the signal was not brought up by the 4x factor via binning, then at least 2 bits of the A/D would be unused, with significant detriment to the image processing quality.
It sounds like you're forgetting about the analog, variable gain amplification that occurs before things hit the A/D converter.
I fully understand the effects on noise of 'binning' are also beneficial (statistical combining of random noise, and only one 'readout noise' contribution) but this is only a 'secondary' consideration, after the actual 'signal level'.
You mention two things above. The first is the purported benefit of a reduction in random noise from binning, but there is no benefit in doing this on chip. The effect is identical if you do it after readout. Since there is no difference in the reduction of random noise for binning on chip vs. after readout, the only advantage can be in A/D converter and amplifier noise.

As I argued earlier, the main advantage is that you pay this price once. However, I would agree that there are also advantages to dealing with a larger signal to start with: You will experience less amplifier and A/D converter noise if you are feeding them a larger signal.
The SNR benefits because the 'signal' components combine additively, whereas the 'random noise' components combine statistically, by the square root of the sum of the squares - plus some benefit due to only having one 'read noise' contribution (although the pixel is now 4x the size, so the read noise is actually 4x bigger 'spatially').
I don't know what you mean here with terms like random noise and statistically in scare quotes, but the breakdown I gave in my previous response is correct. Compared to off chip reduction, the only things that change are the A/D converter and amplifier noise.
I can see how one might think that increasing the effective pixel sizes increases the ISO but it doesn't.
Yes, it does.
No. It decreases one of several noise sources which may increase your tolerance for high ISO images because SNR is higher.
You are failing to differentiate between a camera system's 'sensor-derived base ISO' and any wider 'camera system ISO'.
I raised the issue to point out that absolute signal level and ISO value are decoupled. As you move away from the base ISO, SNR and not saturation is the limiting factor. In neither case is absolute signal the determining factor.
Suddenly increasing a camera's 'effective pixel size' at 'base ISO', whilst keeping all other design parameters constant would certainly be a problem, because the signal per 'effective pixel' would increase and the 'saturation point' would then 'effectively' be moved lower relative to the increased signal level (result - a lower highlight clipping point).
The only thing that changes here is the amount of amplification needed for the A/D converter.
As long as the sensor is being exposed well clear of its saturation point, binning pixels increases the signal level for any given exposure level - and 'increased signal level for the same exposure level' is the very definition of increased system ISO.
Actually, this is not at all the definition of increased ISO. Have you read ISO 12232:1998?
That isn't relevant to the argument re 'increased ISO' via 'pixel binning' in the G11's 'Low Light Mode' - we are not debating the issue of a camera's/sensor's 'base ISO'.
The issue is the mistaken impression that ISO is directly connected to the magnitude of the signal coming off the chip. That's a false assumption and my example was intended to point this out.
Yes, that's exactly right - 4x the signal for the same exposure level - by any definition, the camera system's ISO has increased. The 'amplification' isn't even relevant to the point - it's simply that the signal is 4x higher for the same exposure value, so the effective system ISO is 4x higher - it's that simple.
By any definition except, perhaps, the official one for digital ISO? Cameras offer a range of ISO values but changing the ISO value does not change the signal coming off the chip. Well, actually, it does because you wind up underexposing, so doubling the ISO and compensating by halving the shutter speed has the effect of halving the signal coming off the chip. Note that this is the exact opposite of what you are claiming.

Of course, I'm not claiming that higher ISO necessarily means less signal coming off the chip; I'm pointing out that these two concepts are decoupled.
The point of debate doesn't relate to any 'absolute' signal level - it's a matter of 'relative' signal levels - and clearly, if a sensor mode (the G11's 'Low Light Mode') can be made to output the same signal level at just 25% of the exposure level of another 'standard' mode, then the 'Low Light Mode' is effectively 4x the ISO of the 'standard' mode.
This is clearly false since the signal level coming off the chip is not directly related to ISO.
One last reiteration - SNR is not directly relevant to the argument - it's entirely about relative signal levels vs. exposure level.
SNR is the only thing that is relevant to the discussion, since we're outside the realm of saturation-limited ISO and, by the definition of ISO, we're required to consider SNR.

--
Ron Parr
 
Just as increasing amplifier gain increases the signal to the A/D, 'binning pixels' also increases the signal to the A/D. Both methods increase the signal to the A/D, so both methods are providing a higher ISO setting.

No, I haven't read ISO 12232:1998 - but I do understand that the real-world ISO definition is wide open to the design decisions of the camera manufacturers and their exposure algorithms.

Again, I'm not discussing any sort of 'absolute ISO' definition - I'm only discussing relative ISO settings within any one camera, as experienced by, and made available to, the camera operator to actually use.

Now - don't tell me that ISO320-ISO12800 isn't actually there on the G11 in its 'Low Light Mode' - it isn't a figment of Canon's or my imagination, you know!
Ron Parr wrote:

It sounds like you're forgetting about the analog, variable gain amplification that occurs before things hit the A/D converter.
No - I'm absolutely not forgetting about 'variable gain amplification'...

In general, as ISO settings on a camera are increased, the programmable gain amplifiers' gains are increased, in order to compensate for the decrease in signal read off the sensor as either the exposure settings or the light level change.

The aim is to keep the signal level fed to the A/D converter at broadly the same coverage/utilisation of the A/D converter's range as the exposure settings and/or light level change.

Of course, cameras have an upper limit of maximum amplifier gain.

Once the maximum amplifier gain ISO setting is reached, some cameras extend the effective ISO settings available by simply numerically multiplying the lower RAW A/D values upward by a numeric factor before further processing to JPEG, or outputting as RAW.

Operationally, this 'extended ISO' allows the photographer to easily meter and capture at higher ISO exposure settings - but it has the disadvantage that only a decreasing range, and therefore decreasing resolution, of the A/D is used.

[Incidentally - my first digital camera achieved all its ISO settings, at every level from ISO100 to ISO800, entirely by numerical multiplication of its 13-bit A/D data - but that's a whole other story.]

'Binning pixels' in the way the G11's 'Low Light Mode' does overcomes the disadvantage of only utilising a decreased range of the A/D when 'extending' the ISO.

Additional amplifier gain is effectively substituted for/replaced by increasing the signal size to the A/D by 'binning' the pixel signals 4 into 1 - increasing the signal size by summing/addition instead of by amplification - thereby maintaining better utilisation of the A/D's range/resolution.

Repeat - just as increasing amplifier gain increases the signal to the A/D, 'binning pixels' also increases the signal to the A/D. Both methods increase the signal to the A/D, so both methods are providing a higher ISO setting.
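A toy calculation of what 'better utilisation of the A/D range' means here - the 12-bit converter and the quarter-level signal are assumptions chosen purely for illustration:

    ADC_BITS = 12
    FULL_SCALE = 2**ADC_BITS - 1        # 4095 counts

    # Assume low light leaves one pixel's signal at about 1/4 of full scale.
    pixel_signal = FULL_SCALE // 4      # 1023 - fits in the lower 10 bits

    # 'Extended ISO': digitize the small signal, then multiply by 4 in software.
    # The output can only ever be a multiple of 4, so the bottom 2 bits of the
    # 12-bit range carry no information.
    extended_iso = pixel_signal * 4     # output codes spaced 4 apart

    # Binning: sum 4 pixels *before* the A/D, so the converter sees a near
    # full-scale signal and every one of the 4096 output codes is reachable.
    binned = 4 * pixel_signal           # same level, but codes spaced 1 apart

    print(f"extended ISO output: {extended_iso} (quantization step = 4)")
    print(f"binned output:       {binned} (quantization step = 1)")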
 
As detailed in this thread:
http://forums.dpreview.com/forums/read.asp?forum=1000&message=23352171

You get very little gain in S/N ratio in return for squishing effective resolution down to 1.11 megapixels. (You DO notice that the half-resolution Low Light mode images are blocky and artifacted at 100%? Reduce them to 1216x912 and they look better...)
No, in my experience you don't see anything 'blocky' from the G11's reduced resolution 'Low Light Mode' - if you have actually seen 'blocky' samples from the G11, I'd be very interested to see them.

I've occasionally seen a little 'jagginess' on some diagonals in the 'Low Light Mode' output - this may be an artifact arising from the fact that the anti-alias filter isn't going to be ideally matched to the effectively coarser resolution of the 'binned pixels', or maybe because the Bayer demosaic interpolation used isn't ideally optimised for the effect of the binning.

Question - where do you get your assertion that the resultant "resolution is 1/9th, not 1/4th"? I don't see anything in that thread that concludes as such.

I've read the key parts of that thread fairly carefully, and I am of the view that it is flawed and misleading in some parts of its basic assertions.

Regardless of the theory and/or speculation, I find the output of the G11's 'Low Light Mode' to be quite acceptable/tolerable - taking into account that you are generally using this special mode at what should be considered 'extreme ISO' settings for such a small sensor size.

As the saying goes - 'The proof of the pudding, is in the eating' - and I would recommend people try the feature out experimentally, before dismissing it out of hand.
 
As detailed in the excerpt from the Kodak datasheet, two pixel rows spaced one row apart are first combined together (because they have the same colour layout), and then two pixels spaced one pixel apart on the combined row are combined together to form one superpixel (because they have the same colour).

Given the above, you have now combined four pixels each spaced one pixel apart--ie on the corners of a 3x3 pixel square--into one.

Since each superpixel consists of pixels in a 3x3 pixel square added together, it cannot resolve any detail smaller than that captured by a 3x3 square, ie dividing resolution by 3 in each spatial direction, dividing 2D resolution by 9.

This is despite the fact that the total number of superpixels is only divided by 4. Although there are 1/4 the total number of superpixels, they just (partially) overlap each other in spatial location and all sample fuzzy 3x3 pixel areas. Even if you had a full 10 million of such superpixels, you would still end up with only 1.11 megapixels of real resolution.
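The geometry is easy to verify in a few lines of code; this is only a sketch of the colour-preserving binning pattern described above, not Kodak's actual readout logic:

    # Same-colour Bayer pixels sit two apart, so the four pixels summed into one
    # superpixel lie on the corners of a 3x3 square.
    def superpixel_sources(row, col):
        """Sensor pixels combined into the binned superpixel anchored at (row, col)."""
        return [(row, col), (row, col + 2), (row + 2, col), (row + 2, col + 2)]

    for anchor in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        sources = superpixel_sources(*anchor)
        rows = [r for r, c in sources]
        cols = [c for r, c in sources]
        footprint = (max(rows) - min(rows) + 1, max(cols) - min(cols) + 1)
        print(anchor, "->", sources, "footprint", footprint)

    # Every superpixel covers a 3x3 area, which is the basis for the 1/9th
    # (rather than 1/4th) effective-resolution argument.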
 
I shot these in good light to eliminate the effect of noise and NR:

Low Light mode: 1/500, f/6.3, ISO 640
[image; full size version linked in the original post]

Manual mode, full image size: 1/500, f/6.3, ISO 640 (ie same exposure settings)
[image; full size version linked in the original post]

I resized the manual mode image to 50% (1/4 size) and to 33% (1/9 size) for comparison.

Some 100% crops:

Low Light at full res vs normal mode at 50%:
[image]
The Low Light mode image is clearly 'blocky and artifacted' in comparison.

Low Light mode reduced to 1216x912 vs normal mode at 33% (ie same magnification, at the resolution where I predicted Low Light mode would have full resolution):
[image]
Now we're getting closer, but the lines (eg the outlines of the air conditioners) in Low Light mode are still noticeably less well defined than in the downsampled normal image. This can be for any number of reasons: resampling losses from outputting to 50% in-camera then downsampling to 33%; artifacts from the imperfect binned Bayer array (as I mentioned in the thread I linked to, the central locations of the binned superpixels are in uneven clusters rather than spread evenly on the sensor); but most importantly the fact that the binned sensor may only have 1/9th Bayer resolution, which is not the same as full 1/9th resolution for all 3 colours. On the other hand, since the normal mode image has been heavily downsampled, any Bayer discount has long since gone out of the picture and the downsampled image may be treated as having full resolution for all 3 colours.

In comparisons between Bayer and Foveon cameras (with full colour sensing at each pixel) it was found that Foveon cameras roughly matched Bayer cameras with double the pixel count. So for a more fair comparison, I halved and redoubled the resolution of the comparison pairs (ie image size down to 70.7% then resampled up to original size):

Low Light mode at full res vs normal mode at 50% (both resized to 70.7% (1/8th size) then resampled back to original size):
[image]
The normal mode image still looks sharper IMHO--of course, since we're effectively looking at a 1/8th comparison blown up, if you agreed that the 1/9th comparison above put the normal mode image ahead, you'll agree that the Low Light mode image has no more chance here either.

Low Light mode reduced to 1216x912 vs normal mode at 33% (both resized to 70.7% then resampled back to original size):
[image]
Now these are comparable for all purposes.

So, if you agree with my evaluation of the last two pictures, this puts the Low Light mode resolution squarely at 1/9th Bayer resolution--which is less resolution than a normal mode image downsampled to 1/9th, and certainly less than 1/4th resolution.
 
No - I'm absolutely not forgetting about 'variable gain amplification'...

In general, as ISO settings on a camera are increased, the programmable gain amplifiers' gains are increased, in order to compensate for the decrease in signal read off the sensor as either the exposure settings or the light level change.

The aim is to keep the signal level fed to the A/D converter at broadly the same coverage/utilisation of the A/D converter's range as the exposure settings and/or light level change.

Of course, cameras have an upper limit of maximum amplifier gain.

Once the maximum amplifier gain ISO setting is reached, some cameras extend the effective ISO settings available by simply numerically multiplying the lower RAW A/D values upward by a numeric factor before further processing to JPEG, or outputting as RAW.

Operationally, this 'extended ISO' allows the photographer to easily meter and capture at higher ISO exposure settings - but it has the disadvantage that only a decreasing range, and therefore decreasing resolution, of the A/D is used.
You ignore the fact that by the time you hit the top 'real' ISO, the lower several bits of the ADC are already swamped in noise, under the sensor's noise floor. As you push the ISO further up in software (ie underexpose), you just lose further bits from the top until all you have left are bits under the noise floor. Bit-shifting after the fact does not incur any real quantization loss - you can duplicate the effects of a higher physical gain by filling the zeroes under the upshifted LSB of each pixel with random noise.
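The 'fill the empty bits with random noise' trick is essentially dithering; here is a minimal sketch of the idea (the function name is made up for illustration):

    import random

    def software_push(adc_value: int, stops: int) -> int:
        """Multiply an A/D value by 2**stops after readout ('extended ISO').

        Bit-shifting leaves the low bits zero; filling them with random dither,
        as suggested above, mimics what higher analog gain would have produced
        when those bits would sit below the sensor's noise floor anyway.
        """
        return (adc_value << stops) | random.getrandbits(stops)

    print(software_push(513, 2))  # 2052..2055, depending on the dither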
 
I have the following issues with your presentation:
  • It assumes a tight coupling between the signal level hitting the A/D converter and the ISO, when this is not the case. It is true that increasing the signal to the A/D converter (via analog amplification) is one method that manufacturers use to increase the S part of the SNR to achieve higher ISO, but it is not the only method. I realize that you acknowledge this in your response, but it doesn't seem like you've internalized this point. The signal strength is a red herring as far as ISO is concerned. What matters is the manufacturer's ability to provide a certain SNR given a certain exposure level.
  • By focusing on the S part of SNR, your argument fails to distinguish between on chip binning and off-chip reduction since both have the exact same effect on the signal. You might claim that off-chip reduction can't have the same effect on the signal because a weak signal is too corrupted by noise - but that's precisely my point. The only reason for doing on chip binning is to reduce the effect of amplifier and A/D converter noise. Except for A/D converter and amplifier noise there would be no difference between on-chip binning and off chip reduction.
  • Since on-chip binning and off-chip reduction both have the exact same effect on the signal, the only advantage to binning can be in the N part of SNR, due to fewer amplification and A/D conversion steps and doing these steps with a larger signal.
I'm trying to imagine a case where we might sort of agree. How about this: Suppose you have no amplifier and an A/D converter with 0 noise (other than quantization noise in the LSB) up to k bits (for some smallish k), and then no information beyond k bits when fed signals directly from the CCD.

In this case if you combine four pixels off chip, quantization noise could be a significant factor because the error in the LSB from the A/D converter is no longer confined to just the LSB in your file and it is now spread over the lower 3 bits.

Now let's compare this with binning. For binning, we quadruple the signal before it hits the A/D converter, so the signal coming out is already 4X larger but we now have just the one bit of quantization noise because we're using the entire range of the A/D converter.

So, in the case where we have a noiseless (or non-existent) amplifier and an ideal A/D converter, we see that the difference between combining pixels off-chip and binning on-chip is in the quantization noise. The signal is the same in both cases, but the quantization noise is essentially 4X larger in the off-chip case because you are summing together 4 pixels, each with a quantization error in its LSB.

Now suppose that we have some additional sources of noise such as amplifier noise and more subtle A/D converter errors. An additional benefit of on-chip binning is that we only pay for these errors once if we combine pixels before they hit the amplifier and A/D converter.

On the other hand, we still have sources of noise like dark current and photon shot noise. These will be the same whether we bin on chip or combine pixels off chip. If these are the dominant noise sources, then knocking down the other errors may not help much.

So, if we're trying to answer the question of why on-chip binning might help vs. off-chip reduction, focusing entirely on the signal strength doesn't give the answer because both methods provide the same signal level in your final file. The difference is in the noise.
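A quick simulation of this idealized case bears the comparison out; the 8-bit converter and uniformly random signals are assumptions for illustration. With four independent rounding errors the off-chip RMS error comes out roughly twice as large (four times in variance), and up to four times larger in the worst case:

    import random

    K = 8                                   # assumed A/D bit depth for the experiment
    FULL = 2**K - 1

    def quantize(x):
        """Ideal A/D: rounding is the only error source."""
        return min(FULL, max(0, round(x)))

    random.seed(1)
    trials = 100_000
    sq_err_off = sq_err_on = 0.0
    for _ in range(trials):
        pixels = [random.uniform(0, FULL / 4) for _ in range(4)]
        true_sum = sum(pixels)
        off_chip = sum(quantize(p) for p in pixels)   # four rounding errors
        on_chip = quantize(true_sum)                  # one rounding error
        sq_err_off += (off_chip - true_sum) ** 2
        sq_err_on += (on_chip - true_sum) ** 2

    print(f"off-chip RMS quantization error: {(sq_err_off / trials) ** 0.5:.3f}")
    print(f"on-chip  RMS quantization error: {(sq_err_on / trials) ** 0.5:.3f}")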

Make sense?

I suppose your point of view is that the increase in signal before we hit the A/D converter with binning is essential to the entire thing working and that the increased signal at this point should, in some sense, "get credit" for making the rest possible. I'm fine with that interpretation. What I don't like is the notion that binning directly increases the sensitivity of the sensor or that it directly increases the ISO because focusing on the signal alone does not explain how binning is different from combining in software. The key thing is that adding at a particular point in the image processing pipeline allows you to target a few specific noise sources - specifically, those introduced by amplification and A/D conversion.

--
Ron Parr
Digital Photography FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
