Noiseless photo.

I have thought about the 'dead time' in the Quanta Image Sensor for some time. By 'dead time' I mean that the jots could lose photons between frames in a stack.
Well, the target is about a billion jots/sensor, so that is about 24,000 lines or rows. At 1000fps, the rolling shutter readout time per row is 1/24000/1000 sec = 42 nsec. So this is the dead time for a row - while it is being read out. That means a dead duty cycle of 42 nsec/1 msec = 0.0042% (or 1/24000), which is mighty small.
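A quick numeric check of that arithmetic (a Python sketch, using just the strawman row count and frame rate from above):

# Rolling-shutter dead-time estimate for the strawman QIS figures above
rows = 24_000                         # ~1 Gjot sensor -> about 24,000 rows
fps = 1000                            # field readout rate
frame_time = 1.0 / fps                # 1 msec per field
row_readout_time = frame_time / rows  # time a row is "dead" while being read out
duty_cycle = row_readout_time / frame_time
print(row_readout_time * 1e9, "nsec") # ~42 nsec
print(duty_cycle * 100, "%")          # ~0.0042%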

Is this what you are concerned about? I think it is ok.
Thanks for the reply, Prof. Fossum. It is interesting to know this figure of merit. However, by 'dead time' I mean the saturation effects, as you recognize below.
Jots are binary with just {0,1} values, i.e., once they saturate ({1}), any further photons are effectively ignored ('dead time') until the next frame, when the jot is ready again.
OK, so by dead time maybe you mean a sort of saturation effect.
Yes.
Also, I should mention that we don't stack one jot over a number of fields. We sum up over an x-y-t "cubicle" of jots in the collected binary data cube. Strawman dimensions are a 16x16x16 cubicle size giving a FW for a single bit QIS of 4095e-.
That is good to know.
So 2 things besides just rescaling the white level in your jot image. (1) try summing cubicles of 16x16x16. (and you can overlap cubicles if you want) (2) feel free to play with a multibit jot concept.
I played with the multibit jot and produced the following simulation. I shall try the 16x16x16 and various combinations thereof later. Right now I'm doing a very basic aggregation along pixels in the stack. In the following, the white level was not scaled. Max exposure means the photon flux that can produce the max value in the pixel.
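For reference, this is roughly what my simulation does (a simplified numpy sketch, not the exact code; the function and variable names are just illustrative):

import numpy as np

def simulate_jot_stack(ground_truth, n_fields, bit_depth=1, rng=None):
    # ground_truth: linear-light image scaled so its maximum equals the
    # "max exposure" photon count over the whole stack
    rng = np.random.default_rng() if rng is None else rng
    full_well = 2 ** bit_depth - 1            # 1 for single-bit jots, 15 for 4-bit, etc.
    mean_per_field = ground_truth / n_fields  # expected photons per jot per field
    stack_sum = np.zeros(ground_truth.shape, dtype=float)
    for _ in range(n_fields):
        photons = rng.poisson(mean_per_field)        # shot noise
        stack_sum += np.minimum(photons, full_well)  # jot saturates: the "dead time" loss
    return stack_sum                                 # basic aggregation along the stack

With bit_depth=1 and a high flux per field, the clipping term is where the saturation ('dead time') loss shows up.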

(The 1-bit, FPS = 0.1 x Max Exp is not totally black. There is some image that can be seen by increasing screen brightness.)

Jots sensor in various FPS vis-a-vis Max Exposure configurations.
There is a lot of flexibility in image formation once you have the jot data cube collected (or generated in the case of your simulation). It is something that the computer science community is excited about, especially when there is motion in the image because the options for image formation are sort of limitless, except for processor power.
Yes, and that is why the jots sensor seems so interesting.
I can point you to people (papers) that have been trying more complex methods with improved results if you want to know more.
Any pointers will be helpful. Thanks.

--
Dj Joofa
http://www.djjoofa.com
 
Jack Hogan wrote: Starting with an already noisy image makes things less relevant. Why don't you use a synthetic image in linear space to start with, like one of the ones that Jim K linked to?

Then it is relatively easy to generate a Poisson-noisy version. If the noise free image is MxN and you stick to 0-255 intensities for instance, you could generate 256 uniform frames with poissrnd(I,M,N) with I intensity from 0-255 - and simply substitute the pixel intensity in the noise free image with the same pixel of the Poisson frame generated with that intensity.
Shot noise is typically explained assuming uniform illumination. In particular, each pixel is illuminated uniformly. If the image is well oversampled, that is OK as an assumption. It was not clear to me that what you suggest would work for an actual image that varies enough over a pixel.

You are right (ignoring the quantization error in the "noise free" image) and here is why. We can model the spatial variations of the illumination as an inhomogeneous Poisson process:

https://en.wikipedia.org/wiki/Poisson_point_process#Inhomogeneous_Poisson_point_process

Then the second displayed formula there is what you say. The reference is a book but my library does not provide electronic access to it. Its proof is not obvious to me at all, but I am not an expert in that field.
Phew. You know me, knowing just enough to be dangerous :-)
BTW, the "noise free" and the "aliasing free" (according to the author) images posted above have visible aliasing along the edges of the color squares. That impression might be wrong and a good test would be to upscale by a large factor using Lanczos, which I have not done.
I can't see how aliasing free would be possible for a finite pixel image. Why not use a crop of the 16-bit gamma 1.0 TIF and go do something else while Matlab generates 65k Poisson frames. What the heck, computing power is cheap.

Jack
 
We all know that a noiseless photo does not exist. Let's consider it the same as the supremum or infimum of a sequence. For example, the infimum of {1/n} is zero, even though zero is not in the sequence.

In the case of a noiseless photo, I would describe the sequence as stacked photos. We take a single exposure. Then we stack and average it with another exposure. Then we stack and average another. We continue this process until none of the values in the image file change.

Does this work for a working definition of a "noiseless photo"?
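As a toy numerical version of that procedure (a numpy sketch; the frame generator, bit depth, and stopping rule are just illustrative assumptions, and, as discussed further down the thread, "no values changed on this step" does not guarantee none will change later):

import numpy as np

def stack_until_stable(mean_image, max_frames=100_000, seed=0):
    # Average Poisson-noisy exposures of mean_image (linear light) until the
    # rounded-to-integer average stops changing from one added frame to the next.
    rng = np.random.default_rng(seed)
    running_sum = rng.poisson(mean_image).astype(np.float64)
    prev = np.rint(running_sum)
    for n in range(2, max_frames + 1):
        running_sum += rng.poisson(mean_image)
        quantized = np.rint(running_sum / n)   # the values that go into the image file
        if np.array_equal(quantized, prev):
            return quantized, n                # no value changed: call it "noiseless"
        prev = quantized
    return prev, max_frames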
What if instead of stacking n photos to achieve a noiseless photo you downsampled an nx larger photo (made by an nx sensor) to achieve the same sized final photo? Would it be noiseless?
The downsampling wouldn't change the amount of light the photo was made from, so it would merely blur out higher frequencies of noise, as opposed to being less noisy.
 
We all know that a noiseless photo does not exist. Let's consider it the same as the supremum or infimum of a sequence. For example, the infimum of {1/n} is zero, even though zero is not in the sequence.

In the case of a noiseless photo, I would describe the sequence as stacked photos. We take a single exposure. Then we stack and average it with another exposure. Then we stack and average another. We continue this process until none of the values in the image file change.

Does this work for a working definition of a "noiseless photo"?
What if instead of stacking n photos to achieve a noiseless photo you downsampled an nx larger photo (made by an nx sensor) to achieve the same sized final photo? Would it be noiseless?
The downsampling wouldn't change the amount of light the photo was made from, so it would merely blur out higher frequencies of noise, as opposed to being less noisy.
If the amount of light captured remains the same, but is now compressed (by downsampling) to the same size as the smaller stacked photo, then both photos end up containing the same density of light (same amount of light, same size). Normalization of both photos should then push the relative noise level equally as low. Why do you consider one process blurring and the other reduction if the SD and SNR is the same? The amount of detail should be comparable as well.
 
We all know that a noiseless photo does not exist. Let's consider it the same as the supremum or infimum of a sequence. For example, the infimum of {1/n} is zero, even though zero is not in the sequence.

In the case of a noiseless photo, I would describe the sequence as stacked photos. We take a single exposure. Then we stack and average it with another exposure. Then we stack and average another. We continue this process until none of the values in the image file change.

Does this work for a working definition of a "noiseless photo"?
What if instead of stacking n photos to achieve a noiseless photo you downsampled an nx larger photo (made by an nx sensor) to achieve the same sized final photo? Would it be noiseless?
The downsampling wouldn't change the amount of light the photo was made from, so it would merely blur out higher frequencies of noise, as opposed to being less noisy.
If the amount of light captured remains the same, but is now compressed (by downsampling) to the same size as the smaller stacked photo, then both photos end up containing the same density of light (same amount of light, same size). Normalization of both photos should then push the relative noise level equally as low. Why do you consider one process blurring and the other reduction if the SD and SNR is the same? The amount of detail should be comparable as well.
OK -- my bad. I misinterpreted your post. You mean if an nx sized photo made with the same exposure on an nx sized sensor were downsampled to the x size photo on the x size sensor where both sensors had the same sized pixels. Then, yes, that should be an equivalent situation.
 
We all know that a noiseless photo does not exist. Let's consider it the same as the supremum or infimum of a sequence. For example, the infimum of {1/n} is zero, even though zero is not in the sequence.

In the case of a noiseless photo, I would describe the sequence as stacked photos. We take a single exposure. Then we stack and average it with another exposure. Then we stack and average another. We continue this process until none of the values in the image file change.

Does this work for a working definition of a "noiseless photo"?
What if instead of stacking n photos to achieve a noiseless photo you downsampled an nx larger photo (made by an nx sensor) to achieve the same sized final photo? Would it be noiseless?
The downsampling wouldn't change the amount of light the photo was made from, so it would merely blur out higher frequencies of noise, as opposed to being less noisy.
If the amount of light captured remains the same, but is now compressed (by downsampling) to the same size as the smaller stacked photo, then both photos end up containing the same density of light (same amount of light, same size). Normalization of both photos should then push the relative noise level equally as low. Why do you consider one process blurring and the other reduction if the SD and SNR is the same? The amount of detail should be comparable as well.
OK -- my bad. I misinterpreted your post. You mean if an nx sized photo made with the same exposure on an nx sized sensor were downsampled to the x size photo on the x size sensor where both sensors had the same sized pixels. Then, yes, that should be an equivalent situation.
Ok, but even if you started with the same sized photo as the stacked, then downsample it to a smaller size, would you then consider that blurring?
 
We all know that a noiseless photo does not exist. Let's consider it the same as the supremum or infimum of a sequence. For example, the infimum of {1/n} is zero, even though zero is not in the sequence.

In the case of a noiseless photo, I would describe the sequence as stacked photos. We take a single exposure. Then we stack and average it with another exposure. Then we stack and average another. We continue this process until none of the values in the image file change.

Does this work for a working definition of a "noiseless photo"?
What if instead of stacking n photos to achieve a noiseless photo you downsampled an nx larger photo (made by an nx sensor) to achieve the same sized final photo? Would it be noiseless?
The downsampling wouldn't change the amount of light the photo was made from, so it would merely blur out higher frequencies of noise, as opposed to being less noisy.
If the amount of light captured remains the same, but is now compressed (by downsampling) to the same size as the smaller stacked photo, then both photos end up containing the same density of light (same amount of light, same size). Normalization of both photos should then push the relative noise level equally as low. Why do you consider one process blurring and the other reduction if the SD and SNR is the same? The amount of detail should be comparable as well.
OK -- my bad. I misinterpreted your post. You mean if an nx sized photo made with the same exposure on an nx sized sensor were downsampled to the x size photo on the x size sensor where both sensors had the same sized pixels. Then, yes, that should be an equivalent situation.
Ok, but even if you started with the same sized photo as the stacked, then downsample it to a smaller size, would you then consider that blurring?
Yes. That was what my original response was saying.
 
We all know that a noiseless photo does not exist. Let's consider it the same as the supremum or infimum of a sequence. For example, the infimum of {1/n} is zero, even though zero is not in the sequence.

In the case of a noiseless photo, I would describe the sequence as stacked photos. We take a single exposure. Then we stack and average it with another exposure. Then we stack and average another. We continue this process until none of the values in the image file change.

Does this work for a working definition of a "noiseless photo"?
What if instead of stacking n photos to achieve a noiseless photo you downsampled an nx larger photo (made by an nx sensor) to achieve the same sized final photo? Would it be noiseless?
The downsampling wouldn't change the amount of light the photo was made from, so it would merely blur out higher frequencies of noise, as opposed to being less noisy.
If the amount of light captured remains the same, but is now compressed (by downsampling) to the same size as the smaller stacked photo, then both photos end up containing the same density of light (same amount of light, same size). Normalization of both photos should then push the relative noise level equally as low. Why do you consider one process blurring and the other reduction if the SD and SNR is the same? The amount of detail should be comparable as well.
OK -- my bad. I misinterpreted your post. You mean if an nx sized photo made with the same exposure on an nx sized sensor were downsampled to the x size photo on the x size sensor where both sensors had the same sized pixels. Then, yes, that should be an equivalent situation.
Ok, but even if you started with the same sized photo as the stacked, then downsample it to a smaller size, would you then consider that blurring?
Yes. That was what my original response was saying.
Hmmm, but they are both identical situations, just scaled. Either there is blurring for both or not for either.
 
We all know that a noiseless photo does not exist. Let's consider it the same as the supremum or infimum of a sequence. For example, the infimum of {1/n} is zero, even though zero is not in the sequence.

In the case of a noiseless photo, I would describe the sequence as stacked photos. We take a single exposure. Then we stack and average it with another exposure. Then we stack and average another. We continue this process until none of the values in the image file change.

Does this work for a working definition of a "noiseless photo"?
What if instead of stacking n photos to achieve a noiseless photo you downsampled an nx larger photo (made by an nx sensor) to achieve the same sized final photo? Would it be noiseless?
The downsampling wouldn't change the amount of light the photo was made from, so it would merely blur out higher frequencies of noise, as opposed to being less noisy.
Exactly.

And you get blotchy noise instead of speckly noise. Blotchy tends to be more unpleasant.

However, if you sum the power of the noise across all frequencies, it will go down, simply because you removed some frequencies. So will the power of the signal.
 
I have thought about the 'dead time' in the Quanta Image Sensor for some time. By 'dead time' I mean that the jots could lose photons between frames in a stack.
Well, the target is about a billion jots/sensor, so that is about 24,000 lines or rows. At 1000fps, the rolling shutter readout time per row is 1/24000/1000 sec = 42 nsec. So this is the dead time for a row - while it is being read out. That means a dead duty cycle of 42 nsec/1 msec = 0.0042% (or 1/24000), which is mighty small.

Is this what you are concerned about? I think it is ok.
Thanks for the reply, Prof. Fossum. It is interesting to know this figure of merit. However, by 'dead time' I mean the saturation effects, as you recognize below.
Jots are binary with just {0,1} values, i.e., once they saturate ({1}), any further photons are effectively ignored ('dead time') until the next frame, when the jot is ready again.
OK, so by dead time maybe you mean a sort of saturation effect.
Yes.
Also, I should mention that we don't stack one jot over a number of fields. We sum up over an x-y-t "cubicle" of jots in the collected binary data cube. Strawman dimensions are a 16x16x16 cubicle size giving a FW for a single bit QIS of 4095e-.
That is good to know.
So 2 things besides just rescaling the white level in your jot image. (1) try summing cubicles of 16x16x16. (and you can overlap cubicles if you want) (2) feel free to play with a multibit jot concept.
I played with the multibit jot and produced the following simulation. I shall try the 16x16x16 and various combinations thereof later. Right now I'm doing a very basic aggregation along pixels in the stack. In the following, the white level was not scaled. Max exposure means the photon flux that can produce the max value in the pixel.
One of the things my former MS student did was to take the original baseline (groundtruth as you call it) image, and increase its resolution by some clever interpolation scheme (e.g. 256x256 -> 4096x4096), and then use each new pixel as a jot. That way when she did the 16x16x16 cubicles, she got back to 256x256 resolution after image formation (non-overlapping cubicles).

I have not put her thesis online yet. I still need to get a bona fide research page for my group set up. Too much to do, too little time.

She also looked at the effect of various reconstruction algorithms on MTF using synthetic images.
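A minimal sketch of that pipeline (numpy; the interpolation choice and the flux scaling here are just illustrative assumptions, not necessarily what she used):

import numpy as np
from scipy.ndimage import zoom   # one possible interpolation scheme for the upsizing step

def jot_cubicle_pipeline(baseline, n_fields=16, cubicle=16, flux_per_jot=0.5, seed=0):
    # Upsample the baseline (groundtruth) image so each new pixel is one jot
    # (e.g. 256x256 -> 4096x4096), expose single-bit jots over n_fields fields,
    # then sum non-overlapping cubicle x cubicle x n_fields blocks to get back
    # to the baseline resolution.
    rng = np.random.default_rng(seed)
    jots_mean = zoom(baseline / baseline.max(), cubicle, order=3) * flux_per_jot
    jots_mean = np.clip(jots_mean, 0.0, None)   # cubic interpolation can undershoot below zero
    h, w = jots_mean.shape
    out = np.zeros((h // cubicle, w // cubicle))
    for _ in range(n_fields):
        bits = rng.poisson(jots_mean) >= 1      # single-bit jot: 0 or 1 per field
        out += bits.reshape(h // cubicle, cubicle, w // cubicle, cubicle).sum(axis=(1, 3))
    return out   # full well per output pixel = cubicle * cubicle * n_fields

Overlapping cubicles would just slide the block-sum window instead of tiling it.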
(The 1-bit, FPS = 0.1 x Max Exp is not totally black. There is some image that can be seen by increasing screen brightness.)
Nice cats. Looks like things are working well, at least at this resolution.

I guess you should be able to see the effect of overexposure latitude (related to your dead time or saturation) depending on how you set the photon flux relative to white in the baseline image. I am not sure we tried that with a non-synthetic baseline image, like your cat. (and we mostly used Lena or some other classic images).
Jots sensor in various FPS vis-a-vis Max Exposure configurations.
There is a lot of flexibility in image formation once you have the jot data cube collected (or generated in the case of your simulation). It is something that the computer science community is excited about, especially when there is motion in the image because the options for image formation are sort of limitless, except for processor power.
Yes, and that is why the jots sensor seems so interesting.
I can point you to people (papers) that have been trying more complex methods with improved results if you want to know more.
Any pointers will be helpful. Thanks.
OK. The main work started at EPFL with Martin Vetterli (coincidentally a friend from when we were both asst. profs at Columbia, and now head of EPFL) and his team when they looked at "gigavision" cameras. See:

Sbaiz, L.; Yang, F.; Charbon, E.; Süsstrunk, S.; Vetterli, M. The gigavision camera. In Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal, ICASSP 2009, Taipei, Taiwan, 19–24 April 2009.

Yang, F.; Sbaiz, L.; Charbon, E.; Süsstrunk, S.; Proc. SPIE 2010, 7537, doi:10.1117/12.840015.

Yang, F.; Lu, Y.M.; Sbaiz, L.; Vetterli, M. Bits from photons: oversampled image acquisition using binary Poisson statistics. IEEE Trans. Image Process. 2012, 21, 1421–1436.

Then Yue Lu went to Harvard and Stan Chan was his postdoc: (but they miswrote the title as "quantum" instead of "quanta"...oops)

S. H. Chan and Y. M. Lu, “Efficient image reconstruction for gigapixel quantum image sensors,” in Proc IEEE Global Conf. on Signal and Information Processing (GlobalSIP’14), Dec 2014, pp. 312–316.

And most recently, Stan is now an asst. prof. at Purdue. See this:


Also, Neale Dutton et al. at ST Micro and a recent PhD from Univ. Edinburgh, published this work (among several):

Gyongy, I.; Dutton, N.; Parmesan, L.; Davies, A.; Saleeb, R.; Duncan, R.; Rickman, C.; Dalgarno, P.; Henderson, R.K. Bit-plane processing techniques for low-light, high-speed imaging with a SPAD-based QIS. In Proceedings of the 2015 International Image Sensor Workshop (IISW), Vaals, The Netherlands, 8–11 June 2015.

Dutton, N.A.W.; Parmesan, L.; Gnecchi, S.; Gyongy, I.; Calder, N.; Rae, B.R.; Grant, L.A.; Henderson, R.K. Oversampled ITOF imaging techniques using SPAD-based quanta image sensors. In Proceedings of the 2015 International Image Sensor Workshop (IISW), Vaals, The Netherlands, 8–11 June 2015.

Dutton, N.A.; Gyongy, I.; Parmesan, L.; Gnecchi, S.; Calder, N.; Rae, B.; Pellegrini, S.; Grant, L.A.; Henderson, R.K. A SPAD-based QVGA image sensor for single-photon counting and quanta imaging. IEEE Trans. Electron Dev. 2016, 63, 189–196.

Some related papers are in this special issue, but not really on QIS image reconstruction. Still you might find it interesting to browse. My own paper should be published this week or next in this issue. I am not involved whatsoever in the review and editorial process for my own paper so not sure of the timing. Stay tuned for that.


And after I spent some time with Perona at Caltech, he and his PhD student Bo Chen did this related work, which I personally find pretty interesting. It is down near the bottom of the list of the special issue articles:

Chen B, Perona P. Vision without the Image. Sensors. 2016; 16(4):484
 
Jack Hogan wrote: Starting with an already noisy image makes things less relevant. Why don't you use a synthetic image in linear space to start with, like one of the ones that Jim K linked to?

Then it is relatively easy to generate a Poisson-noisy version. If the noise free image is MxN and you stick to 0-255 intensities for instance, you could generate 256 uniform frames with poissrnd(I,M,N) with I intensity from 0-255 - and simply substitute the pixel intensity in the noise free image with the same pixel of the Poisson frame generated with that intensity.
Shot noise is typically explained assuming uniform illumination. In particular, each pixel is illuminated uniformly. If the image is well oversampled, that is OK as an assumption. It was not clear to me that what you suggest would work for an actual image that varies enough over a pixel.

You are right (ignoring the quantization error in the "noise free" image) and here is why. We can model the spatial variations of the illumination as an inhomogeneous Poisson process:

https://en.wikipedia.org/wiki/Poisson_point_process#Inhomogeneous_Poisson_point_process

Then the second displayed formula there is what you say. The reference is a book but my library does not provide electronic access to it. Its proof is not obvious to me at all, but I am not an expert in that field.
Phew. You know me, knowing just enough to be dangerous :-)
You know me, too. Seeing problems where others see solutions. :-)
BTW, the "noise free" and the "aliasing free" (according to the author) images posted above have visible aliasing along the edges of the color squares. That impression might be wrong and a good test would be to upscale by a large factor using Lanczos, which I have not done.
I can't see how aliasing free would be possible for a finite pixel image.
It could be a finite crop of an almost aliasing free image.
Why not use a crop of the 16-bit gamma 1.0 TIF and go do something else while Matlab generates 65k Poisson frames. What the heck, computing power is cheap.
Why not randomize each pixel one by one? This seems to be what is suggested here:

https://arxiv.org/pdf/1412.4031.pdf

in 3.1.1. I am not familiar with that Matlab function, however.
 
Let us take as an example a deep shadow spot in an overall normally bright image. Say that the mean values there are around 4. After 20 stacks, to change that number, you need values >60.
If the value of convergence is 3.9999999999999999999999999, the average value is going to keep walking and walking back and forth from 3 to 4. The less precision, the sooner any given percentage of pixel values will stabilize, but those right at the rounding borders will just keep on walking; the more precision, the longer.
That is quite improbable; how improbable depends on the size of the dark area. So you stop at the 20th image.

If you keep the averages in a higher precision format, then you can do much better.
It will take longer for any number of pixel values to stabilize, but some might sit on the quantization cusps and never stabilize, even with coarse quantization.

This whole thing is crazy though. You can only stabilize with quantization, and only for values converging safely in the middle of a quantization bin. If you do not quantize, you never stabilize any pixel values, as when you simply add the image series with the scale of the values increasing with each addition, or simulate it with on-demand precision.
EDIT: I edited some of the text above. I will post numerical simulations. The noise stabilizes; the image looks quite good but when you plot the difference, it is not too small even with 1,000 images.
I would expect at least a few pixels to keep twinkling back and forth from one value to another. The underlying analog convergent image, not just the number of averaged/quantized frames, determines how long it takes for any percentage of pixel values to stabilize, and very few images converge purely towards underlying values that are easy to stabilize, like 0.5, 1.5, 2.5, 100.5, etc.

Again, a non-quantized output will never stabilize.
 
Let us take as an example a deep shadow spot in an overall normally bright image. Say that the mean values there are around 4. After 20 stacks, to change that number, you need values >60.
If the value of convergence is 3.9999999999999999999999999, the average value is going to keep walking and walking back and forth from 3 to 4. The less precision, the sooner any given percentage of pixel values will stabilize, but those right at the rounding borders will just keep on walking; the more precision, the longer.
To change an integer value, you need to perturb it by at least 0.5. With GB's algorithm, the perturbation enters through the step j/(j+1) (multiplying the old average) and 1/(j+1) (multiplying the new noisy image). With max values of 255 and j>1024, each change will be at most about 1/4, so no change will happen.
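To make that bound concrete (a small numeric sketch; by GB's algorithm I mean the running average avg_new = avg_old * j/(j+1) + frame/(j+1)):

# Maximum possible change of the running average when frame j+1 is added,
# for 8-bit values (0..255): |avg_new - avg_old| = |frame - avg_old| / (j+1) <= 255 / (j+1)
for j in (100, 510, 1024):
    print(j, 255 / (j + 1))   # < 0.5 from j = 510 on, and about 1/4 at j = 1024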
 
We all know that a noiseless photo does not exist. Let's consider it the same as the supremum or infimum of a sequence. For example, the infimum of {1/n} is zero, even though zero is not in the sequence.

In the case of a noiseless photo, I would describe the sequence as stacked photos. We take a single exposure. Then we stack and average it with another exposure. Then we stack and average another. We continue this process until none of the values in the image file change.

Does this work for a working definition of a "noiseless photo"?
What if instead of stacking n photos to achieve a noiseless photo you downsampled an nx larger photo (made by an nx sensor) to achieve the same sized final photo? Would it be noiseless?
The downsampling wouldn't change the amount of light the photo was made from, so it would merely blur out higher frequencies of noise, as opposed to being less noisy.
If the amount of light captured remains the same, but is now compressed (by downsampling) to the same size as the smaller stacked photo, then both photos end up containing the same density of light (same amount of light, same size). Normalization of both photos should then push the relative noise level equally as low. Why do you consider one process blurring and the other reduction if the SD and SNR is the same? The amount of detail should be comparable as well.
OK -- my bad. I misinterpreted your post. You mean if an nx sized photo made with the same exposure on an nx sized sensor were downsampled to the x size photo on the x size sensor where both sensors had the same sized pixels. Then, yes, that should be an equivalent situation.
Ok, but even if you started with the same sized photo as the stacked, then downsample it to a smaller size, would you then consider that blurring?
Yes. That was what my original response was saying.
Hmmm, but they are both identical situations, just scaled. Either there is blurring for both or not for either.
Perhaps you're misinterpreting my misinterpretation. ;-) What you initially said was correct; what I mistakenly thought you said was not.
 
Why not use a crop of the 16-bit gamma 1.0 TIF and go do something else while Matlab generates 65k poisson frames. What the heck, computing power is cheap.
Why not randomize each pixels one by one? This seems to be what is suggested here:

https://arxiv.org/pdf/1412.4031.pdf
Nice find! I wish I had seen this paper before all my elucubrations. Saved for later review.
in 3.1.1. I am not familiar with that matlab function however.
Poissrnd is the function I suggested above. The difference between my approach and theirs is that they seem to imply that poissrnd can take a 2D image as the input intensity (I). Didn't know that. I wonder what happens if I is a different size than MxN. If it does work like that it may very well be the best way to poissonize a noiseless image in Matlab.
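For what it's worth, the numpy equivalent definitely takes a per-pixel mean directly, which amounts to randomizing each pixel one by one (a sketch; I have not checked whether Matlab's poissrnd behaves the same way with a 2D intensity argument):

import numpy as np

def poissonize(noise_free, photons_at_white, white_level=65535, seed=None):
    # Treat each linear-light pixel value as a Poisson mean, scaled so that
    # white_level corresponds to photons_at_white expected photons.
    rng = np.random.default_rng(seed)
    mean = noise_free.astype(np.float64) * (photons_at_white / white_level)
    return rng.poisson(mean)   # lam may be an array: one independent draw per pixel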

Jack
 
If the value of convergence is 3.9999999999999999999999999, the average value is going to keep walking and walking back and forth from 3 to 4. The less precision, the sooner any given percentage of pixel values will stabilize, but those right at the rounding borders will just keep on walking; the more precision, the longer.
To change an integer value, you need to perturb it by at least 0.5. With GB's algorithm, the perturbation enters through the step j/(j+1) (multiplying the old average) and 1/(j+1) (multiplying the new noisy image). With max values of 255 and j>1024, each change will be at most about 1/4, so no change will happen.
I wouldn't even put such an algorithm into the realm of consideration. An 8-bit working space for such a project is a disaster. 8 bits can't even handle low-noise imagery; it requires a minimum amount of noise to prevent visible posterization.

I was thinking along the lines of maintaining a floating point or deep-bit summation working image and a lower-precision integer representation of it.
 
We all know that a noiseless photo does not exist. Let's consider it the same as the supremum or infimum of a sequence. For example, the infimum of {1/n} is zero, even though zero is not in the sequence.

In the case of a noiseless photo, I would describe the sequence as stacked photos. We take a single exposure. Then we stack and average it with another exposure. Then we stack and average another. We continue this process until none of the values in the image file change.

Does this work for a working definition of a "noiseless photo"?
What if instead of stacking n photos to achieve a noiseless photo you downsampled an nx larger photo (made by an nx sensor) to achieve the same sized final photo? Would it be noiseless?
The downsampling wouldn't change the amount of light the photo was made from, so it would merely blur out higher frequencies of noise, as opposed to being less noisy.
The downsampled image shouldn't look any blurrier if you are viewing it at a proportionately smaller size. (Assuming that the goal is to get a smaller, less noisy test image from a larger source image.)
Exactly.

And you get blotchy noise instead of speckly noise. Blotchy tends to be more unpleasant.
Similarly, the noise will still appear to be "speckly" when viewed at the proportionate size.
However, if you sum the power of the noise across all frequencies, it will go down, simply because you removed some frequencies. So will the power of the signal.
I think the variance of the downsampled noise will be reduced by the downsampling ratio, but for many or most realistic images the variance of the signal will hardly be affected.
 
We all know that a noiseless photo does not exist. Let's consider it the same as the supremum or infimum of a sequence. For example, the infimum of {1/n} is zero, even though zero is not in the sequence.

In the case of a noiseless photo, I would describe the sequence as stacked photos. We take a single exposure. Then we stack and average it with another exposure. Then we stack and average another. We continue this process until none of the values in the image file change.

Does this work for a working definition of a "noiseless photo"?
What if instead of stacking n photos to achieve a noiseless photo you downsampled an nx larger photo (made by an nx sensor) to achieve the same sized final photo? Would it be noiseless?
The downsampling wouldn't change the amount of light the photo was made from, so it would merely blur out higher frequencies of noise, as opposed to being less noisy.
Exactly.

And you get blotchy noise instead of speckly noise. Blotchy tends to be more unpleasant.

However, if you sum the power of the noise across all frequencies, it will go down, simply because you removed some frequencies. So will the power of the signal.
I was talking more idealistically, where the noise would be purely random. I realize actual image noise has a grain-like low-frequency component that makes it look blotchy (and that passes through the downsampling), which leads me to my next question. What causes that low-frequency blotchy noise? Is it an artifact of the demosaicing algorithm trying to correlate pixels to extract detail?
 
One of the things my former MS student did was to take the original baseline (groundtruth as you call it) image, and increase its resolution by some clever interpolation scheme (e.g. 256x256 -> 4096x4096), and then use each new pixel as a jot. That way when she did the 16x16x16 cubicles, she got back to 256x256 resolution after image formation (non-overlapping cubicles).
I think that is an interesting idea.
Nice cats.
Thanks. The adorable cat is the famous 'Goofs', who is perhaps the goofiest cat in the world.
Looks like things are working well, at least at this resolution.
Yes, things look good right now. I shall try to run something over the next weekend, that is when I get some time to do some hobbyist work, like this past weekend for writing code and generating images presented earlier.
I guess you should be able to see the effect of overexposure latitude (related to your dead time or saturation) depending on how you set the photon flux relative to white in the baseline image.
What I have noticed from the images presented in the post before is the following relation for good reproduction (approx.):

(FPS x 2^Jots_bit_depth / photon_flux_exposure_for_pixel_saturation) >= 1
I can point you to people (papers) that have been trying more complex methods with improved results if you want to know more.
Any pointers will be helpful. Thanks.
OK.
Thanks for the publication list. I really appreciate the history and context you provided for the researchers. That puts things into perspective. Your invention, the Quanta or Jots Sensor, is very interesting in the sense that, based upon the publication list you provided, it seems to be bringing together people in various fields such as sensor design and electronics, statistics, estimation theory, signal processing, and computer vision, to name a few. It is becoming a truly multi-disciplinary effort.

I have met Prof. Perona. And, Prof. Vetterli is well-known from his book on wavelets and other publications.

I shall try to access these publications. Hopefully some of them will be freely available online. I don't have the luxury of academic subscriptions.

Thanks again.

--
Dj Joofa
http://www.djjoofa.com
 
What I have noticed from the images presented in the post before is the following relation for good reproduction (approx.):
(FPS x 2^Jots_bit_depth / photon_flux_exposure_for_pixel_saturation) >= 1
I want to try to understand this. FPS is the field or frame readout rate, like, 1000fps.

2^n is the number of electrons that can be stored, where n=bit depth.

(actually 2^n -1 ...e.g. 15 for 4b)

I would say this is the FWC of the jot.

The denominator term also looks like the FWC of the jot. Is that right?

Unfortunately this interpretation of your equation does not make sense to me because it simplifies to FPS >= 1 and the units are wrong.

If the denominator is "time for saturation" (e.g. 10msec) then the equation makes sense.

A few years ago I defined the flux capacity (collected electrons/sec) as

flux_cap = FPS x ((2^n)-1)

Meaning this is the maximum incident photon flux (assuming QE=1 etc.) that can be handled by the QIS with good image quality. Actually, due to the overexposure latitude, this number could be 5x higher.

It seems like it is the same conclusion you reached, if my 2nd interpretation of your equation is correct.
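As a concrete example with the numbers already used in this thread (1000 fps fields and 4-bit jots; QE = 1 assumed):

# flux_cap = FPS x ((2^n) - 1): maximum collected electrons/sec per jot
fps = 1000
n = 4
flux_cap = fps * (2 ** n - 1)
print(flux_cap)   # 15000 e-/sec per jot; with overexposure latitude maybe ~5x more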
I can point you to people (papers) that have been trying more complex methods with improved results if you want to know more.
Any pointers will be helpful. Thanks.
OK.
Thanks for the publication list. I really appreciate the history and context you provided for the researchers. That puts things into perspective. Your invention, the Quanta or Jots Sensor, is very interesting in the sense that, based upon the publication list you provided, it seems to be bringing together people in various fields such as sensor design and electronics, statistics, estimation theory, signal processing, and computer vision, to name a few. It is becoming a truly multi-disciplinary effort.

I have met Prof. Perona. And, Prof. Vetterli is well-known from his book on wavelets and other publications.

I shall try to access these publications. Hopefully some of them will be freely available online. I don't have the luxury of academic subscriptions.

Thanks again.
Thanks for your interest in the QIS! Write me at Dartmouth if there are any you want to read but cannot get. Pay walls for academic publications are one of my pet peeves.
 
