duartix
Lives in
Lisboa, Portugal
Works as a
IT Engineer
Joined on
Mar 6, 2007

duartix: I've been a DSLR user for around 10 years and my view on this is a bit different.
The main reasons I've been favoring my smartphone instead of my camera are these:
1. It's always around.
2. I can do advanced PP on my pictures immediately.
3. I can share them with friends infinitely faster and more easily.
Regarding item 2: I've been using Capture One Pro on PC since version 5 and I still regard it as the best RAW developer around. Yet what I did in C1 to adjust my RAWs, I can do 5x quicker for my JPEGs in Snapseed for Android, with a plethora of filters that is 10x bigger.
95% of us aren't PROs like you, and thus have relaxed demands when it comes to resolution, per-pixel sharpness/detail, DR and the like.
To each his own.
Don't get me wrong. I still shoot RAW exclusively on my GH2 and I carry it with me when travelling and on special occasions, but I'm migrating downwards for 90% of my everyday photography.
It could also be that I'm taking 900% more photos, but the truth lies in the middle.
Cheers!
falconeyes: I've now studied the original research paper and the underlying math. Thinking about it now, the approach is more straightforward than it first appears.
The authors apply a graph cut algorithm ( https://en.wikipedia.org/wiki/Cut_%28graph_theory%29 ), which has been the new kid on the block for about 10 years. E.g., the algorithm can solve the noise reduction problem for a 1-bit-deep image *exactly*. They apply this algorithm about 2^(n+1) times to solve the problem for n extra bits in the clipped highlight region (for 2^n rollovers). The paper is all about this algorithm.
Unfortunately, the approach cannot provide infinite bit depth. It can less than double the bit depth of the modulo camera sensor (due to the statistical properties of photon shot noise). Moreover, it requires a high spatial resolution to work for higher numbers of extra bits, which means the algorithm quickly becomes expensive. I expect the practical limit to be around 12 bits extending to 18 bits which is ISO 6,
Good show. Unfortunately I can't read German. :(
P.S. Are you familiar with this?
https://helpx.adobe.com/photoshop/howto/focusmaskselections.html
falconeyes: I now studied the original research paper and underlying math. [...]
Thanks for the reply.
I didn't delve into the depths of the algo, but my first instinct would be to represent the rollover maps as raster bitmaps instead of contours. That would be quite cheap in terms of resources, as you would only need the equivalent of an 8-bit image to represent 8 rollovers.
Hum... that's just 3 stops above the unrolled precision... I see your point: every extra bit demands twice as much memory.
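To make the raster-bitmap idea concrete, here's a minimal sketch (my own, not from the paper; the 10-bit readout depth, names and array values are arbitrary assumptions): the rollover map is just a second, low-bit-depth image, and unwrapping is a single multiply-add per pixel.

```python
import numpy as np

# Hypothetical sketch: store per-pixel rollover counts as a small
# raster map (one byte per pixel) alongside the modulo readout, then
# unwrap by adding back one full-well worth of counts per rollover.

BITS = 10               # assumed sensor readout depth
FULL_WELL = 1 << BITS   # counts per rollover

def unwrap(modulo_img: np.ndarray, rollovers: np.ndarray) -> np.ndarray:
    """Recover true intensities from a modulo readout plus a rollover map."""
    return rollovers.astype(np.uint32) * FULL_WELL + modulo_img

# Tiny 2x2 example: a uint8 rollover map already covers 255 rollovers.
modulo = np.array([[100, 1023], [0, 512]], dtype=np.uint16)
rolls  = np.array([[0, 2], [7, 1]], dtype=np.uint8)
print(unwrap(modulo, rolls))
```

The memory point above holds: each extra stop of range needs one more bit per pixel in the rollover map, so the map grows linearly in bits while the expressible range grows exponentially.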
falconeyes: I now studied the original research paper and underlying math. [...]
Needs "high spatial resolution"? Why?
I read it too (in a hurry, I admit) and was under the impression that this was all doable with integer and set math, and highly parallelizable too. Expensive?
Lassoni: I'm sorry to say this, but the results look terribly bad. Only the first image looked interesting, but the rest are terrible.
What size shoes do you wear?
dennis tennis: As a DPR forum participant, I look down at anything that isn't mine. I don't understand the math, I don't understand the tech, but I'm sure it isn't good enough and that had I tried I would have done better. I could have looked up the original research papers but that would be a waste of my precious time, because I already know that if it isn't mine, it can't be good enough for me.
Amen!
PolarBear17: More on the technical side: as people have already mentioned, the "Modulo Camera", as it was "branded" by MIT, is not new:
Wang, Xiuling, Winnifred Wong, and Richard Hornsey. "A high dynamic range CMOS image sensor with in-pixel light-to-frequency conversion." IEEE Transactions on Electron Devices 53.12 (2006): 2988.
The pixel reset approach, I believe, goes back to JPL in the '90s. I'm not 100% sure about the origin.
Still, what ARE the differences between these similar approaches?
The MIT method is the most "sophisticated", as well as the most useless one. They read only the last value of the cell (hence "modulo"), and then need to go through a fragile computational process that reconstructs the image.
The JPL approach is simplicity itself: count the number of resets and return this number.
Wang's approach is different, yet effective: bright areas not only fill the "bucket" more times, they also fill it FASTER, thus the FREQUENCY of the pixel reset is returned.
I wouldn't say it's useless or fragile. Even though I can guess how it could possibly fail, when you can correctly estimate the #resets (which will be the case for 99.9% of the pictures people take) you can eliminate intermediate readouts, which is MAJOR when you are designing the circuitry.
As far as I can see, the modulus approach has two enormous advantages when compared to a frequency/sum of resets readout, provided the #resets are correctly estimated.
1. Hardware logic simplicity. That translates into less noise.
2. Precision. That translates directly into DR.
johnsmith404: I guess at this point everyone and his dog has thought about this...
In terms of results this isn't really different from a recent Olympus patent which is centered around the idea of outputting a normalized sum of several exposures.
This one would have the advantage that you could compress high intensity values into a simple number of resets + modulo but if you aren't memory limited it probably won't make any difference. Even if you took the less sophisticated approach of simply adding exposures, you only need to keep track of 2 full res images at most. Another advantage is that you could never blow out anything... but I guess that isn't really relevant when any approach gives you potentially unlimited DR.
I don't really care about the final implementation, but I'm quite excited about the prospect of getting super-low ISOs. No need to carry those 10-stop NDs anymore, plus much more DR.
It's quite trivial in terms of processing power. Mathematically it might sound somehow complex, but in essence it's really simple.
IIRC what it does is basically an iterative process where you try to minimize energy transitions (differences) between pixel overflows, exclusively through integer and set math. From what I understood, it might take as many iterations as overflows, but the hardware requirements are pretty low and it might be easily parallelizable.
http://web.media.mit.edu/~hangzhao/papers/moduloUHDR.pdf
TriezeA72: If it was German granite, it would have had no chance.
Leica cameras are made out of Portuguese granite.
http://leicarumors.com/2013/03/22/leicacameraagopensnewplantinportugal.aspx/
johnsmith404: I guess at this point everyone and his dog has thought about this... [...]
I guess the biggest advantage here is that the readout is still extremely simple (in terms of circuitry and control logic) since you can rely on just one readout, provided there is reasonable precision (8-bit should probably be enough).
OTOH, I wonder what their energy minimization algo would do to a starry night picture with stars nearly pixel sized, and the overflowed modulus pixels with low values... :O
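To put a number on that 8-bit figure (my own back-of-envelope, not from the paper or the Olympus patent): each bit of rollover information doubles the representable range, so it stacks directly on top of the ADC depth.

```python
import math

# Back-of-envelope sketch (my own assumption, not from the paper):
# a modulo sensor with READOUT_BITS of ADC depth plus a rollover
# estimate of COUNTER_BITS behaves like a readout of
# (READOUT_BITS + COUNTER_BITS) bits, since every counter bit
# doubles the expressible intensity range.

READOUT_BITS = 12   # assumed base ADC depth
COUNTER_BITS = 8    # an 8-bit rollover count (up to 255 rollovers)

max_rollovers = (1 << COUNTER_BITS) - 1
effective_bits = READOUT_BITS + math.log2(max_rollovers + 1)
print(f"{effective_bits:.0f} effective bits, i.e. ~{COUNTER_BITS} extra stops")
```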
ThePhilips: The "modulo" idea is so obvious that I think most makers have already thought about it but put it on the back burner due to some technical complication.
Otherwise, I prefer the other idea, where the pixel's charge data are read continuously. IOW, the sensor sends the data continuously, and the "shutter speed" is just how long the firmware keeps accumulating the data before saying "enough". That removes the overflow completely, and also allows selectively reading more/less from shadows/highlights.
I see what you mean, but it might overwhelm the sensor's local circuitry and also complicate the readout process. It might compromise the precision in highlights, which may or may not be a problem (posterization just occurred to me), but probably not.
It also introduces other issues, as different pixels get different exposure values (might be an issue with subjects in motion), and I can only begin to wonder what kind of weird artifacts might arise (things even stranger than rolling shutter).
The modulus camera isn't without issues either (search my post above for "gradients"), but in the end it looks like an extremely simple hardware approach, with very few limitations and demands on the sensor itself.
mikedodd44: Presumably you can't do this with a firmware update on existing cameras?
No.
It must be an inherent characteristic of the sensor. AFAIK commonly used sensors saturate and cannot reset on a per-pixel basis.
Amazing and simple concept, idea and implementation.
Now let's hope we can get an execution at similar standards.
Simulating the concept and developing it seems utterly trivial once they'd cracked the unwrapping, yet ...
... what I'm really curious about is how extensively they've run their algo and how it will work against long-exposure noisy images. I can't see a trivial way for their concept modulus camera to perform dark frame subtraction, and noisy images won't provide the smooth gradients their energy minimization algo thrives on, so this might become either too much of a challenge, a compromise, or a limitation.
Paleeeeeeease...
Enough with this "Deutschland über alles" prejudice already, as with the "Chinese stuff sucks" one!
Leica cameras have been made in Portugal for quite some time, and apart from wind turbines and toothpicks, there is nothing we (Portuguese) can't make out of granite either. ;)
https://upload.wikimedia.org/wikipedia/commons/9/94/Castelo_de_Sortelha%2C_Portugal__Apr_2011.jpg
Charlie boots: This is very interesting as it should also be very useful in criminal investigations to extract faces from the reflections.
You've been watching too much CSI. ;)
If you have multiple samples (which means we're talking video) you can just manually pick the frame of interest with fewer reflections.
Otherwise, it's just super-resolution, which has been around for ages.
johnsmith404: This might be the future of photography. Taking multiple 'sequential' exposures and extracting all kinds of information is the way the human brain does its vision job. Related mechanisms lead to high resolution (hyperacuity) and low noise.
I wonder if I'll live long enough to see the first neuromorphic processing engines in cameras.
When @newe mentions smart drones, he means that when at the beach, they will be able to only lock on to the topless women.
Hurray for tech.
Mike5076: Nifty, I thought I saw another application/function for taking multiple pics of the same scene (stationary camera) and the software would remove any transient objects (tourists). Similar but now with a moving camera?
The thing here is that you are shifting the position, but it shouldn't be much of a challenge. I've tried it in A Better Camera and it works very well; it's even orders of magnitude faster than its PC counterpart (PhotoAcute).
PhotoAcute has been doing this for 10+ years with stacking.
I've tested A Better Camera (it uses the same tech) and its object removal also works very well with obstructions like the fence.
Can't say about reflections, but I can't see why it shouldn't work.
ludwik123: Sounds like a fantastic new sensor.
Another nail in the coffin of the FF Canikon DSLRs.
Mobile phone IQ is approaching FF very quickly.
LOL.
I can see the fumes from this distance.