I have thought about the 'dead time' in the Quanta Image Sensor for some time. By 'dead time' I mean that the jots could lose photons between frames in a stack.
Well, the target is about a billion jots per sensor, so that is about 24,000 lines or rows. At 1000 fps, the rolling-shutter readout time per row is 1/24,000/1,000 sec = 42 nsec. So this is the dead time for a row - while it is being read out. That means a dead duty cycle of 42 nsec / 1 msec = 0.0042% (or 1/24,000), which is mighty small.
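That arithmetic as a minimal Python sketch (the row count and frame rate are just the strawman numbers above):

    # Rolling-shutter dead time per row for a strawman ~1-gigajot QIS
    rows = 24_000                      # ~24,000 rows
    fps = 1_000                        # field rate in frames per second
    frame_time = 1.0 / fps             # 1 msec per field
    row_dead_time = frame_time / rows  # ~42 nsec while a row is read out
    duty_cycle = row_dead_time / frame_time  # = 1/24000, about 0.0042%
    print(f"{row_dead_time * 1e9:.1f} nsec per row, duty cycle {duty_cycle:.2e}")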
Is this what you are concerned about? I think it is ok.
Thanks for the reply, Prof. Fossum. It is interesting to know this figure of merit. However, by 'dead time' I mean the saturation effect, as you recognize below.
Jots are binary with just {0,1} values, i.e., once a jot saturates ({1}), any further photons are ignored - a 'dead time' lasting until the next frame, when the jot is ready again.
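To make that concrete, here is a minimal Monte Carlo sketch (my own illustration, not from this exchange): with Poisson photon arrivals at a mean of lam photons per jot per field, a single-bit jot reports 1 whenever one or more photons arrive, so its expected output is 1 - exp(-lam), and every photon beyond the first falls in this 'dead time':

    import numpy as np

    rng = np.random.default_rng(0)
    lam = 2.0                                   # mean photons per jot per field
    photons = rng.poisson(lam, size=1_000_000)  # true Poisson arrivals
    bits = (photons >= 1).astype(np.uint8)      # single-bit jot: clips at 1
    lost = photons.sum() - bits.sum()           # photons arriving after saturation
    print(bits.mean(), 1 - np.exp(-lam))        # ~0.8647 both ways
    print(lost / photons.sum())                 # fraction of photons ignored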
OK, so by dead time maybe you mean a sort of saturation effect.
Yes.
Also, I should mention that we don't stack one jot over a number of fields. We sum up over an x-y-t "cubicle" of jots in the collected binary data cube. Strawman dimensions are a 16x16x16 cubicle, giving a full well (FW) for a single-bit QIS of 4095 e-.
That is good to know.
So, two things besides just rescaling the white level in your jot image: (1) try summing cubicles of 16x16x16 (and you can overlap cubicles if you want); (2) feel free to play with a multibit jot concept.
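A minimal numpy sketch of suggestion (1), assuming the collected binary data cube is a (T, H, W) array of jot fields (non-overlapping cubicles; an overlapping version would slide the window instead):

    import numpy as np

    def sum_cubicles(cube, c=16):
        """Sum non-overlapping c x c x c (t, y, x) cubicles of a binary jot cube."""
        T, H, W = cube.shape
        cube = cube[:T - T % c, :H - H % c, :W - W % c]  # trim to multiples of c
        blocks = cube.reshape(T // c, c, H // c, c, W // c, c)
        return blocks.sum(axis=(1, 3, 5))                # -> (T//c, H//c, W//c)

    # e.g. 16 fields of 512x512 jots -> one 32x32 image with values in 0..4096
    cube = (np.random.default_rng(1).random((16, 512, 512)) < 0.3).astype(np.uint8)
    frames = sum_cubicles(cube)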
I played with the multibit jot and produced the following simulation. I shall try the 16x16x16 cubicles and various combinations thereof later. Right now I am doing a very basic aggregation along pixels in the stack. In the following, the white level was not scaled. 'Max exposure' means the photon flux that can produce the maximum value in a pixel.
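The basic aggregation described there might look like this (a sketch under my own assumptions: scene is a [0, 1] irradiance image, exposure scales the mean photon count, and an n-bit jot clips at 2**n - 1):

    import numpy as np

    def simulate_jots(scene, frames=16, exposure=2.0, nbits=1, seed=0):
        """Simulate a stack of n-bit jot fields and sum along the stack."""
        rng = np.random.default_rng(seed)
        full_well = 2**nbits - 1
        lam = exposure * scene                   # mean photons per jot per field
        stack = rng.poisson(lam, size=(frames,) + scene.shape)
        stack = np.minimum(stack, full_well)     # saturation ('dead time') clip
        return stack.sum(axis=0)                 # basic aggregation along the stack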
One of the things my former MS student did was to take the original baseline (ground truth, as you call it) image, increase its resolution by some clever interpolation scheme (e.g. 256x256 -> 4096x4096), and then use each new pixel as a jot. That way, when she did the 16x16x16 cubicles, she got back to 256x256 resolution after image formation (non-overlapping cubicles).
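That pipeline might look like the following sketch (my own assumptions: np.kron gives a crude block-replication upsample where her 'clever interpolation' would be something smarter, and the jots are single-bit):

    import numpy as np

    rng = np.random.default_rng(2)
    base = rng.random((256, 256))              # stand-in for the ground-truth image
    scene = np.kron(base, np.ones((16, 16)))   # 256x256 -> 4096x4096, one jot per pixel

    img = np.zeros((256, 256))
    for _ in range(16):                        # 16 fields = the t-extent of one cubicle
        bits = np.minimum(rng.poisson(2.0 * scene), 1)          # single-bit jot fields
        img += bits.reshape(256, 16, 256, 16).sum(axis=(1, 3))  # 16x16 spatial sums
    # img is back at 256x256, with values in 0..4096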
I have not put her thesis online yet. I still need to get a bona fide research page for my group set up. Too much to do, too little time.
She also looked at the effect of various reconstruction algorithms on MTF using synthetic images.
(The 1-bit, FPS = 0.1 x Max Exp image is not totally black. Some image can be seen by increasing screen brightness.)
Nice cats. Looks like things are working well, at least at this resolution.
I guess you should be able to see the effect of overexposure latitude (related to your dead time or saturation), depending on how you set the photon flux relative to white in the baseline image. I am not sure we tried that with a non-synthetic baseline image like your cat (we mostly used Lena or other classic test images).
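One way to see that overexposure latitude in simulation (a minimal sketch, using the standard single-bit jot response D = 1 - exp(-H)): sweep the photon flux H at 'white' and note how the bit density keeps creeping toward full well long after a linear sensor would have clipped:

    import numpy as np

    H = np.logspace(-2, 2, 9)   # mean photons per jot per field at 'white'
    D = 1 - np.exp(-H)          # expected bit density of a single-bit jot
    for h, d in zip(H, D):
        print(f"exposure {h:8.2f}  bit density {d:.4f}")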

Jot sensor output in various FPS vis-à-vis Max Exposure configurations.
There is a lot of flexibility in image formation once you have the jot data cube collected (or generated, in the case of your simulation). It is something the computer science community is excited about, especially when there is motion in the scene, because the options for image formation are nearly limitless, except for processor power.
Yes, and that is why the jot sensor seems so interesting.
I can point you to people (papers) who have been trying more complex methods with improved results, if you want to know more.
Any pointers will be helpful. Thanks.
OK. The main work started at EPFL with Martin Vetterli (coincidentally a friend from when we were both asst. profs at Columbia, and now head of EPFL) and his team when they looked at "gigavision" cameras. See:
Sbaiz, L.; Yang, F.; Charbon, E.; Süsstrunk, S.; Vetterli, M. The gigavision camera. In Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2009), Taipei, Taiwan, 19–24 April 2009.
Yang, F.; Sbaiz, L.; Charbon, E.; Süsstrunk, S.; Proc. SPIE 2010, 7537, doi:10.1117/12.840015.
Yang, F.; Lu, Y.M.; Sbaiz, L.; Vetterli, M. Bits from photons: oversampled image acquisition using binary Poisson statistics. IEEE Trans. Image Process. 2012, 21, 1421–1436.
Then Yue Lu went to Harvard, and Stan Chan was his postdoc (but they miswrote the title as "quantum" instead of "quanta"... oops):
Chan, S.H.; Lu, Y.M. Efficient image reconstruction for gigapixel quantum image sensors. In Proceedings of the IEEE Global Conference on Signal and Information Processing (GlobalSIP'14), December 2014; pp. 312–316.
And most recently, Stan is now an asst. prof. at Purdue. See this:
Also, Neale Dutton (a recent PhD from the Univ. of Edinburgh) et al. at ST Micro published this work (among several):
Gyongy, I.; Dutton, N.; Parmesan, L.; Davies, A.; Saleeb, R.; Duncan, R.; Rickman, C.; Dalgarno, P.; Henderson, R.K. Bit-plane processing techniques for low-light, high-speed imaging with a SPAD-based QIS. In Proceedings of the 2015 International Image Sensor Workshop (IISW), Vaals, The Netherlands, 8–11 June 2015.
Dutton, N.A.W.; Parmesan, L.; Gnecchi, S.; Gyongy, I.; Calder, N.; Rae, B.R.; Grant, L.A.; Henderson, R.K. Oversampled ITOF imaging techniques using SPAD-based quanta image sensors. In Proceedings of the 2015 International Image Sensor Workshop (IISW), Vaals, The Netherlands, 8–11 June 2015.
Dutton, N.A.; Gyongy, I.; Parmesan, L.; Gnecchi, S.; Calder, N.; Rae, B.; Pellegrini, S.; Grant, L.A.; Henderson, R.K. A SPAD-based QVGA image sensor for single-photon counting and quanta imaging. IEEE Trans. Electron Dev. 2016, 63, 189–196.
Some related papers are in this special issue, though not really on QIS image reconstruction; still, you might find it interesting to browse. My own paper should be published this week or next in this issue. I am not involved whatsoever in the review and editorial process for my own paper, so I am not sure of the timing. Stay tuned for that.
And after I spent some time with Perona at Caltech, he and his PhD student Bo Chen looked at this related work, which I personally find pretty interesting. It is down near the bottom of the list of the special issue articles:
Chen, B.; Perona, P. Vision without the Image. Sensors 2016, 16, 484.