Baffling Pentax KP Read Noise

Hello,

As a disclaimer, I'm not a sensor chip designer (although I do have a related patent on electron-leakage reduction for night-vision sensors); I'm an analog/RF chip/system designer. So please take my comments lightly and don't rip me to shreds :-|

With the pixel Source Follower (SF) as the first amplifier in the sensor's voltage readout chain, it seems to be a major source of electronically induced noise (like the LNA in RF receiver systems) that can't be reduced with post gain; it sets the floor of the overall system "noise figure". Since the SF has less than unity voltage gain, the system input-referred noise is rather high, higher than it would be with a Common Source (CS) stage, which can have greater than unity voltage gain.

I've captured a screen shot from the Aptina White Paper mentioned above, shown below. The left side is Figure 5 from the white paper and the right side is a modification shown in orange. The SF amplifier is replaced with a CS amplifier by using a PMOS instead of an NMOS device, and the source is returned to the RS line, which I assume sits at a higher voltage when active than the supply Vaa_PIX.

The voltage output Vout is inverted, since the SF is a follower and the CS is an inverting amplifier. Vout can swing a higher peak-to-peak voltage than the SF, which may be beneficial since less post amplification is required, and since the gain magnitude may be greater than unity, the overall system input-referred noise could be lower as well. No additional control, power, or signal lines are required either.
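To illustrate the input-referral argument above, a minimal sketch with made-up numbers (not measurements of any real pixel): an amplifier's output noise referred back to its input is divided by the magnitude of its voltage gain, so a below-unity SF gain inflates the input-referred figure while an above-unity CS gain shrinks it.

```python
# Illustrative (hypothetical) numbers: input-referred noise of the first
# pixel amplifier stage for a source follower (SF) vs a common source (CS).
# Input-referred noise = stage output noise / |stage voltage gain|.

def input_referred_noise_uv(stage_noise_uv: float, voltage_gain: float) -> float:
    """Refer a stage's output noise back to its input by dividing by |gain|."""
    return stage_noise_uv / abs(voltage_gain)

# Assume both stages contribute the same 100 uV RMS at their outputs.
stage_noise_uv = 100.0

sf = input_referred_noise_uv(stage_noise_uv, 0.8)   # SF: gain < 1
cs = input_referred_noise_uv(stage_noise_uv, 2.5)   # CS: gain can exceed 1

print(f"SF input-referred: {sf:.0f} uV RMS")  # 125 uV
print(f"CS input-referred: {cs:.0f} uV RMS")  # 40 uV
```

The 0.8 and 2.5 gain figures are arbitrary assumptions chosen only to show the direction of the effect.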

Anyway, this was just a thought. Please let me know what you think.

Best,

Pixel Level Common Source Amplifier Concept shown in Orange
Hi Mike,

Thank you for sharing your CS idea. Generally a CS amp will work, but any improvement is offset by a few factors: (1) the CS drain is at roughly Vout, so you get gate-drain Miller capacitance; (2) current fluctuations in the CS device (the origin of 1/f noise) are also amplified in this configuration; and (3) implementing PMOS in the pixel is not convenient due to layout rules. It is a common suggestion (usually NMOS CS), and some people have made it work, although not really with any compelling advantage; thus it has not made it into widespread use. Actually, I am not sure anyone has tried a PMOS CS; perhaps the reduced intrinsic 1/f noise would help.

Anyway, try simulating this and see what you get!

Eric
Eric,

Thanks, I'm very familiar with 1/f noise and the Miller effect; I always try to use cascode stages to help when possible.

If the output of the PMOS CS amplifier were a current rather than a voltage, routed through the NMOS RS device, then with a variable effective load a variable voltage gain referenced to the pixel could be achieved. A low-noise cascode stage in the common readout path could make the impedance seen from the PMOS drain very low, minimizing the Miller effect and isolating a variable-transconductance stage for voltage-gain control.

Yes, I probably should try to get this simulated sometime.

Thanks for the reply and notes.

Best,

--
~Mike~
Mike, it looked to me like RS was not biased as a cascode device but perhaps that was your intention all along. On the other hand, the CS source voltage will be hard to keep quiet in an array configuration and I believe that voltage level is quite important.

Variable gain for one pixel is nice -- the problem often occurs when you try to make an entire array of millions of pixels behave exactly the same way without FPN or PRNU.

I had one PhD student who probably spent a month trying to make a CS amplifier work and could not get better input-referred noise than by making the input cap to the SF as small as possible.

If you are successful in your simulations, please write me at my Dartmouth email address, which is easy to find, for further discussions. Maybe you have already done the simulations since it is a relatively easy circuit.

BTW, current-mode readout has also been explored over the years. So far no real success. Buffer followers seem to work the best, so far...
 
The Pentax K-1 II has drawn a great deal of attention lately because of strong Noise Reduction (NR) applied starting at ISO 640.
Does the processing applied by the KP appear to be different (in nature and/or strength) to what the K-1 II is applying?
The K-1 II appears to be stronger. This isn't a totally objective opinion.
It's based on the difficulty I'm having getting Photographic Dynamic Range (PDR) measurements. No other camera (even the KP) has given me so much trouble.
DPReview praised the KP for its "excellent high ISO performance in both Raw and JPEG" and only mentioned suspected RAW baking in passing. The K-1 II, on the other hand, got slapped for its processing. I'm wondering whether that's due to different processing or a different assessment of the same situation.
Who knows? Some of this is subjective and if the KP review were revisited perhaps a caveat would be inserted. Noise Reduction (NR) does make many images look better at first blush.
FWIW, I'm against any in-camera destructive RAW baking in any shape or form.
Good to know you're a member of the choir.
Of course there are uncontentious system-noise reduction strategies performed within the sensor, but post-A/D "denoising" should be left to out-of-camera processing, AFAIC.
Ditto.
Am I right in assuming that the "accelerator" chip Pentax has been employing could most likely be replaced by out-of-camera processing? Given the "closed black boxes" modern Sony sensors appear to be, I have difficulty believing that the "accelerator" chip leverages any sensor-internal data or processes.
Potentially, the "accelerator" chip could make use of the equivalent of "dark frames", etc., i.e., data that is normally not available outside a camera (but could be provided as secondary data), but from what I've seen in the analyses conducted so far, it seems that the "accelerator" attempts "beautification" with a nearest-neighbour smoothing component as part of its data massaging.
Naturally, if the processor uses information that isn't available later in the raw data, then its operation cannot be duplicated later.
In this case, I suspect the algorithm could be applied later.
In any case, most people agree that there should be a way to turn it off.
I hope this is not considered to be an off-topic post.
Not by me.
BTW, @bclaff, have you noticed my proposal to use "image stacking" for analysis purposes?
While FT plots and power spectra are useful in detecting image manipulation, when applied to images of pure noise they are not informative regarding the retention of signal. In other words, if some image manipulation did a miraculous job of almost not harming signal while combating noise, it may deserve less scolding than an alternative that simply attenuates high spatial frequencies.
I'll be the first to argue that in general it is impossible to distinguish signal (here meant to be "information" present in a scene) from noise (given the stochastic nature of light itself) by evaluating a single image, but could it still be useful to compare denoising strategies by averaging over many images (in an image stack) and then evaluating which denoising strategy supports the recovery of signal through averaging better than others?
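The stacking idea can be sketched with a toy experiment: a synthetic checkerboard "scene" stands in for real detail, and a naive 3x3 box blur stands in for a destructive NR step (both are illustrative assumptions, not any camera's actual processing). Averaging a stack of unprocessed noisy frames recovers the scene; averaging pre-blurred frames cannot, because the blur destroyed high-frequency signal before stacking.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth "scene": a high-frequency checkerboard.
truth = np.indices((64, 64)).sum(axis=0) % 2 * 1.0

def noisy_frame():
    return truth + rng.normal(0.0, 0.5, truth.shape)

def box_blur(img):
    """Naive 3x3 nearest-neighbour smoothing (the 'destructive' strategy)."""
    out = img.copy()
    out[1:-1, 1:-1] = sum(
        img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ) / 9.0
    return out

def rms_error(stack_mean):
    return float(np.sqrt(np.mean((stack_mean - truth) ** 2)))

n = 64
raw_mean = np.mean([noisy_frame() for _ in range(n)], axis=0)
blurred_mean = np.mean([box_blur(noisy_frame()) for _ in range(n)], axis=0)

# Averaging recovers the scene from unprocessed noisy frames ...
print("raw stack RMS error:     ", rms_error(raw_mean))
# ... but not from pre-smoothed frames: the blur is a bias, not noise.
print("smoothed stack RMS error:", rms_error(blurred_mean))
```

The key point is that noise averages away with the stack size while smoothing-induced detail loss does not, so the stack comparison separates the two.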
I do analyze stacks of images to test for Fixed Pattern Noise (FPN) and develop values for things like Full Well Capacity (FWC).
I haven't done this at ISO 640 and that might be interesting.

Jim Kasson does frequency-domain analysis and has noted a sharp drop at higher frequencies. As I recall, the drop is equivalent to about a 3-pixel spacing, but Jim could speak better to that.
In any case, we don't seem to have a good objective measure for "detail" lost but the noise reduction is so dramatic it's hard (but possible) to imagine that there isn't significant smoothing.
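As a sketch of the kind of frequency analysis being discussed (synthetic noise, not actual KP or K-1 II data): a radially averaged power spectrum of white read noise is flat, while even mild adjacent-pixel averaging visibly depresses the highest-frequency rings, which is the signature the FT plots appear to show.

```python
import numpy as np

rng = np.random.default_rng(1)

def radial_power_profile(img, nbins=16):
    """Average the 2-D power spectrum in rings of spatial frequency."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2)
    r_max = min(h, w) / 2
    bins = np.clip((r / r_max * nbins).astype(int), 0, nbins - 1)
    weighted = np.bincount(bins.ravel(), power.ravel(), nbins)
    counts = np.bincount(bins.ravel(), minlength=nbins)
    return weighted / counts

white = rng.normal(size=(128, 128))                     # unprocessed read noise
smoothed = 0.5 * (white + np.roll(white, 1, axis=1))    # adjacent-pixel averaging

p_white = radial_power_profile(white)
p_smooth = radial_power_profile(smoothed)

# White noise is flat; smoothing suppresses the highest-frequency rings.
print("high/low power ratio, white:   ", p_white[-1] / p_white[1])
print("high/low power ratio, smoothed:", p_smooth[-1] / p_smooth[1])
```

The high-to-low ratio is a crude one-number stand-in for the full profile; a real analysis would plot the whole curve.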
BTW, attempting to work with scenes that have content (as opposed to "lens cap" shots which I'm assuming you are using) could circumvent potential optimisations by some manufacturers.
Sony, for instance, had a line of CD players that switched off analogue circuitry upon detecting a "zero" stream of signal. This led to phenomenal dynamic range measurements which were, of course, of no practical relevance. As soon as any information was fed to the A/D converters, the noise floor was significantly raised by the re-activated circuitry.
Are we sure something similar is not happening with "lens cap" images?
Not all the images I analyze are "lens/body" cap shots, but the others aren't too exciting either.
For my data collection I need a protocol that can be easily duplicated by normal photographers since I get raw files from people around the world.
So what you propose would be better suited to a DPReview than to PhotonsToPhotos.

--
Bill ( Your trusted source for independent sensor data at PhotonsToPhotos )
 
The K-1 II appears to be stronger. This isn't a totally objective opinion.
First, thanks a lot for your reply!

Second, I believe it would be desirable to have a measure of "detail loss".
I believe MTF measurements of image stacks could be helpful in this regard. I'm proposing image stacks as it would be good to remove variable noise from the equation as much as possible.

Destructive algorithms would be exposed by not allowing the reconstruction of detail through stacking. I understand your problem is that your contributors would have to provide shots of very specific scenes (e.g., resolution test charts). I can see the many, many pitfalls in such a requirement.
It's based on the difficulty I'm having getting Photographic Dynamic Range (PDR) measurements. No other camera (even the KP) has given me so much trouble.
What is the issue?

I hope Ricoh have not started to truncate black values (like Nikon).
Some of this is subjective and if the KP review were revisited perhaps a caveat would be inserted.
Yes, perhaps. Possibly DPReview's glowing KP review even encouraged Ricoh to play the same tricks with the K-1 II, little did Ricoh know what was to come...
Noise Reduction (NR) does make many images look better at first blush.
Not as far as I'm concerned. The smoothing looks unnatural, and squinting one's eyes (a poor man's LPF) often quickly reveals that there is more detail in an untreated noisy image than in a processed one.
I do analyze stacks of images to test for Fixed Pattern Noise (FPN) and develop values for things like Full Well Capacity (FWC).
I haven't done this at ISO 640 and that might be interesting.
I would expect the difference between a single ISO 100 file versus a 32 image ISO 100 stack to be bigger than the difference between a single ISO 640 file and its corresponding 32 image stack.
In any case, we don't seem to have a good objective measure for "detail" lost but the noise reduction is so dramatic it's hard (but possible) to imagine that there isn't significant smoothing.
It is very interesting that you consider the possibility of no significant smoothing. Aren't your 2D FT plots incompatible with this idea? They clearly seem to indicate high spatial frequency attenuation and I've never heard of hardware-induced sensel correlation (but then I'm not an expert in this field). Furthermore, I have a lot of difficulty believing Ricoh managed to find a way to attenuate system-noise only.
Not all the images I analyze are "lens/body" cap shots, but the others aren't too exciting either.
I understand.

It would just be good to avoid manufacturers pulling a fast one by recognising extreme conditions (one criterion could be that exposed sensels deliver the same results as masked sensels) and performing beautification accordingly.
For my data collection I need a protocol that can be easily duplicated by normal photographers since I get raw files from people around the world.
Sure. If I can help, please let me know. I have a K-1, K-5 II, and K100D in my possession. No K-1 II, sadly, and I won't acquire one if the processing remains mandatory.

--
http://www.flickr.com/photos/class_a/
 
It's based on the difficulty I'm having getting Photographic Dynamic Range (PDR) measurements. No other camera (even the KP) has given me so much trouble.
What is the issue?

I hope Ricoh have not started to truncate black values (like Nikon).
The strength of the Pentax Noise Reduction (NR) seems to be a function of brightness; not necessarily in a smooth way but perhaps using a threshold.

In any case, the test images I use for PDR need to be significantly darker (looking like 4 stops darker) than normal for this camera.
I hope to have a new set of images within 24hrs if my collaborator doesn't give up on me.
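A brightness-dependent threshold of the kind suspected here would show up as a kink in a photon-transfer sweep (measured sigma versus mean signal). The model below is purely hypothetical: the threshold value, suppression factor, and shot-noise-only model are assumptions for illustration, not Pentax's actual behaviour.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical model: the camera suppresses noise only below a DN threshold.
THRESHOLD_DN = 400
SUPPRESSION = 0.3   # noise multiplier applied below the threshold

def simulated_patch(mean_dn, n=10000):
    """Flat patch with shot-like noise, then threshold-gated 'NR'."""
    noise_dn = np.sqrt(mean_dn)          # shot-noise-limited sigma
    if mean_dn < THRESHOLD_DN:
        noise_dn *= SUPPRESSION
    return rng.normal(mean_dn, noise_dn, n)

# A photon-transfer sweep: measured sigma vs mean reveals the kink.
for mean_dn in (50, 100, 200, 400, 800, 1600):
    patch = simulated_patch(mean_dn)
    print(f"mean {mean_dn:5d} DN  sigma {patch.std():6.1f} DN "
          f"(shot-noise expectation {np.sqrt(mean_dn):.1f})")
```

Below the assumed threshold the measured sigma falls well under the shot-noise expectation; above it the two agree, which is exactly the kind of discontinuity a PDR fit would stumble over.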
I do analyze stacks of images to test for Fixed Pattern Noise (FPN) and develop values for things like Full Well Capacity (FWC).
I haven't done this at ISO 640 and that might be interesting.
I would expect the difference between a single ISO 100 file versus a 32 image ISO 100 stack to be bigger than the difference between a single ISO 640 file and its corresponding 32 image stack.
I make no comparison between a single image and the stack.
I use them for different purposes.
In any case, we don't seem to have a good objective measure for "detail" lost but the noise reduction is so dramatic it's hard (but possible) to imagine that there isn't significant smoothing.
It is very interesting that you consider the possibility of no significant smoothing. Aren't your 2D FT plots incompatible with this idea? They clearly seem to indicate high spatial frequency attenuation and I've never heard of hardware-induced sensel correlation (but then I'm not an expert in this field). Furthermore, I have a lot of difficulty believing Ricoh managed to find a way to attenuate system-noise only.
I think there is significant smoothing but since I have no way to quantify that I have to be open to the (highly unlikely) possibility that the smoothing isn't significant.
Not all the images I analyze are "lens/body" cap shots, but the others aren't too exciting either.
I understand.

It would just be good to avoid manufacturers pulling a fast one by recognising extreme conditions (one criterion could be that exposed sensels deliver the same results as masked sensels) and performing beautification accordingly.
Although the measurements at PhotonsToPhotos are somewhat technical they are generally as observed in the final image; the intent is to show values that could be photographically relevant.
For example, read noise measurements are from single frames not subtracted pairs.

I'm not in the business of "catching cheaters" but do try to accurately identify where Noise Reduction (NR) is unavoidable or raw data is digitally scaled.
This is reflected in the symbols on the charts.
For my data collection I need a protocol that can be easily duplicated by normal photographers since I get raw files from people around the world.
Sure. If I can help, please let me know. I have a K-1, K-5 II, and K100D in my possession. No K-1 II, sadly, and I won't acquire one if the processing remains mandatory.
I'll keep this in mind if files for a reasonable A/B test become available in the future.
 
The strength of the Pentax Noise Reduction (NR) seems to be a function of brightness; not necessarily in a smooth way but perhaps using a threshold.

In any case, the test images I use for PDR need to be significantly darker (looking like 4 stops darker) than normal for this camera.
I hope to have a new set of images within 24hrs if my collaborator doesn't give up on me.
I found a while back that their NR seemed best explained by the application of NR in a non-RGB space, and that the difference between Pentax and, say, Fuji was more apparent in an L-Y-X space like CIE Lab or HSL. I am not claiming that is happening in this case; however, it is a mode of analysis that may paint a different picture, especially since it looks weird.

-- Bob
http://bob-o-rama.smugmug.com -- Photos
http://www.vimeo.com/boborama/videos -- Videos
http://blog.trafficshaper.com -- Blog
 
The strength of the Pentax Noise Reduction (NR) seems to be a function of brightness; not necessarily in a smooth way but perhaps using a threshold.

In any case, the test images I use for PDR need to be significantly darker (looking like 4 stops darker) than normal for this camera.
I hope to have a new set of images within 24hrs if my collaborator doesn't give up on me.
I found a while back that their NR seemed best explained by the application of NR in a non-RGB space, and that the difference between Pentax and, say, Fuji was more apparent in an L-Y-X space like CIE Lab or HSL. I am not claiming that is happening in this case; however, it is a mode of analysis that may paint a different picture, especially since it looks weird.
That's an interesting observation but in this case we're talking about signal processing of the raw data so I'm not sure how that would apply.
 
That's an interesting observation but in this case we're talking about signal processing of the raw data so I'm not sure how that would apply.
So I was suggesting it may make it easier to identify whether this is a common NR method being applied to the data once read out into the frame buffer, or an uncommon NR method, which seems to be the question at hand.

-- Bob
http://bob-o-rama.smugmug.com -- Photos
http://www.vimeo.com/boborama/videos -- Videos
http://blog.trafficshaper.com -- Blog
 
That's an interesting observation but in this case we're talking about signal processing of the raw data so I'm not sure how that would apply.
So I was suggesting it may make it easier to identify whether this is a common NR method being applied to the data once read out into the frame buffer, or an uncommon NR method, which seems to be the question at hand.
It happens between the pixel and the raw file; are you proposing a way to narrow that down?
 
It happens between the pixel and the raw file; are you proposing a way to narrow that down?
If you see a turtle on a fence post you know it didn't get there by itself.

If you see correlations / entropy loss along one axis, that speaks to something or several somethings happening in the serial ( 1-D ) handling of the signal.

If that something turns on / off abruptly you know it is not an intrinsic part of the sensor hardware, like crosstalk between adjacent pixels or quantization errors.

If that something happens preferentially or over a domain following the CFA, that is probably a tell as well.

There are some other things, but the most definitive tell is 1-D manipulation where row- or column-adjacent samples are more highly correlated.
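That row-versus-column "tell" can be sketched as follows (synthetic noise; the half-sum row filter is an assumed stand-in for any 1-D serial processing in the readout chain): lag-1 autocorrelation measured separately along rows and columns exposes processing that acted along only one axis.

```python
import numpy as np

rng = np.random.default_rng(3)

def lag1_correlation(img, axis):
    """Correlation between each pixel and its neighbour along the given axis."""
    a = img - img.mean()
    b = np.roll(a, 1, axis=axis)
    # Drop the wrapped-around edge so we only compare true neighbours.
    sl = [slice(None)] * 2
    sl[axis] = slice(1, None)
    a, b = a[tuple(sl)], b[tuple(sl)]
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

raw = rng.normal(size=(256, 256))                       # uncorrelated read noise
row_filtered = 0.5 * (raw + np.roll(raw, 1, axis=1))    # 1-D processing along rows

for name, img in (("raw", raw), ("row-filtered", row_filtered)):
    print(f"{name:13s} row lag-1 {lag1_correlation(img, 1):+.2f}  "
          f"column lag-1 {lag1_correlation(img, 0):+.2f}")
```

The filtered image shows a strong lag-1 correlation along rows but not columns, pointing at a serial (per-row) stage rather than an isotropic one.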

For some of these, you need to throw different types of patterns at the sensor to exercise them, especially anything involving a bandwidth bottleneck between the pixels and the ADC, where it might be convenient to interpose some analog signal processing in addition to an amplifier.

Some of these tells point to different stages in the pipeline from pixel to RAW file.

Then, looping back: what is "weird" may be less weird when you look at the data from the perspective of a different color space. Processing only luminance, for example, turns into processing all three RGB scalars; that may make the FFT findings harder to interpret.

-- Bob
http://bob-o-rama.smugmug.com -- Photos
http://www.vimeo.com/boborama/videos -- Videos
http://blog.trafficshaper.com -- Blog
 
