Hannu108: They claim nobody ever landed on the Moon. Even a photo of a waving flag is dismissed as a fake.
“On the moon, there's no air to breathe, no breezes to make the flags planted there by the Apollo missions flutter”
http://science.nasa.gov/science-news/science-at-nasa/2001/ast23feb_2/

"Not every waving flag needs a breeze -- at least not in space. When astronauts were planting the flagpole they rotated it back and forth to better penetrate the lunar soil (anyone who's set a blunt tent-post will know how this works). So of course the flag waved! Unfurling a piece of rolled-up cloth with stored angular momentum will naturally result in waves and ripples -- no breeze required!"
Edgar Matias: They should put an EVF in this thing.
That way you'd have the option of using it WITH or WITHOUT the iPhone.
quiquae: You might think so when you're still new to the art. But as you advance, you'll discover that remembering stuff is a waste of brain capacity. Now quit bothering me, my navel needs contemplating.
A real photographer should be able to visualise the histogram, horizon line, motion blur, DOF, and focusing area selection. And if you truly know RAW, you should have no problem creating a custom profile to batch-fix focusing and framing errors and camera-shake blur.
I can totally see people using this without iPhone. There's purity in hand held, all-auto blind shooting at 40MP.
Just Ed: The problem I have with Zeiss is that the lenses require manual focusing. That would be OK, but most DSLR OVF screens are not precise enough for quick, accurate focus; they seem mostly geared for brightness. To make good use of these lenses, one would do better with a precision matte screen or, if available, a split-prism element on the focusing screen, a la the 1960s. I think this would be particularly true at the 50MP level.
Modern DSLRs offer live view with magnification and focus peaking. Those are much more precise and easier to use than the old-style focusing aids.
sh10453: It looks like "they actually DO know what to do with it", if you look beyond consumer photography.
The technology seems to have already found its way to the consumer DSLR cameras.
Here is a 120MP DSLR already in development: http://www.canon.com/news/2015/sep08e2.html
As for surveillance, it certainly will not be mounted at a neighborhood gas station. Spy agencies, such as the CIA, its Russian/British/Chinese/... counterparts, and the like, are examples of likely customers, using it in their spy satellites or other aircraft. NASA, the military, mapping, and scientific research labs are other examples.
OK, OK, I agree, some can afford to install it in the baby's bedroom to watch the nanny remotely, on a cell phone!
sh10453: Could be, but it doesn't seem likely. This sensor's specs don't really suggest it was specifically designed to be part of a satellite sensor array.
Or at least, it's drastically different from what they currently use, which is wide-spectrum (from near-IR to UV), low-resolution (think 1000x256 pixels), low-wattage components.
Just a Photographer: 250MP and APS-H is definitely diffraction limited...
"At 250MP a lens might well be only usable between f/4 and f/5.6."

Not all shots require maximum sharpness. It depends on the intended use of the image. This is like saying that cameras with resolutions beyond 4MP are only usable on a tripod.
This is not nearly enough for a spy satellite (you don't mount an APS-H sensor behind a 2.4 meter mirror), but a spy drone could use one.
This could translate into a fixed-lens subcompact with digital zoom that's usable to 6x (in a pinch, to 10x). A 16mm f/2.8 lens could be made tiny (think Sony pancake). With the APS-H 1.3x crop factor, it would cover a 21 to 150mm equivalent range.
A fixed focal length lens could be made waterproof without the IQ compromises that come with the "tough" cameras' folded light path.
I'd buy one.
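A quick back-of-the-envelope check of that digital-zoom idea (the 19580x12600 pixel count, 16mm lens, and 1.3x crop factor are assumptions taken from this thread, not confirmed Canon specs):

```python
# Back-of-the-envelope: digital zoom on a ~250MP APS-H sensor.
# Assumed figures (from the discussion, not official specs):
#   19580x12600 pixels, 16mm lens, 1.3x crop factor.
SENSOR_W, SENSOR_H = 19580, 12600
FOCAL_MM = 16
CROP_FACTOR = 1.3

def zoomed(zoom):
    """Equivalent focal length and remaining megapixels at a digital zoom factor."""
    equiv_mm = FOCAL_MM * CROP_FACTOR * zoom
    mpix = (SENSOR_W / zoom) * (SENSOR_H / zoom) / 1e6
    return round(equiv_mm), round(mpix, 1)

for z in (1, 6, 10):
    print(f"{z}x zoom: ~{zoomed(z)[0]}mm equivalent, {zoomed(z)[1]} MP left")
```

By this rough math, 6x zoom still leaves about 7MP (quite usable), while 10x drops to about 2.5MP, which matches the "in a pinch" caveat.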
dwill23: Think facial recognition from far away, being able to 'see' hundreds of faces at once, (without having to zoom in on just a few). But you wouldn't be able to upload the images fast enough (maybe with fiber) to get feedback. Forget about local facial databases. So maybe this would work for that application but likely not in real-time because of huge files and thus bandwidth limitations.
But I'll take one and play around with it if Canon wants :)
They could preprocess each frame to extract just the recognizable facial characteristics (a set of measurements for the 80 or so "nodal points" that describe the face). Then download just those 80 bytes per face - with good compression, I'd say half that.
From what I've seen on the net, facial recognition needs at least 50 native pixels between the eyes, 75 is preferred. That means at least 200x200 pixels per person. So the most faces you could possibly extract from one frame is about six thousand (19580*12600/40000=6167.7).
That works out to about 250KBytes per frame - when the entire field of view is evenly filled with people, which is not a realistic scenario. At 5 frames per second, that's 1.25MB/s.
An IR comm laser will handle this many times over.
TL;DR: Should work just fine in real time, but a sizable crowd will overload the system.
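The arithmetic above can be sketched out in a few lines (the 200x200-pixel face crop, 40 bytes per face, and 5 fps are the same assumptions made in this comment, not vendor figures):

```python
# Bandwidth estimate for per-face feature extraction on a ~250MP sensor.
# Assumptions (from the discussion above, not measured figures):
#   - each face needs roughly a 200x200-pixel crop,
#   - each face compresses to ~40 bytes of features,
#   - the sensor streams 5 frames per second.
SENSOR_W, SENSOR_H = 19580, 12600
PIXELS_PER_FACE = 200 * 200
BYTES_PER_FACE = 40
FPS = 5

max_faces = SENSOR_W * SENSOR_H // PIXELS_PER_FACE   # frame packed solid with faces
bytes_per_frame = max_faces * BYTES_PER_FACE
bandwidth = bytes_per_frame * FPS                    # bytes per second

print(f"{max_faces} faces/frame, {bytes_per_frame} B/frame, {bandwidth} B/s")
```

Even the worst case (every frame packed with faces) stays near 1.25 MB/s, which is why an IR comm laser would handle it easily.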
BorisK1: Part of the problem is the intended use. In most common scenarios, if the image is purely for the web, a dedicated camera is overkill.
If you're making a 400x320 thumbnail, a $2000 lens will not do any better than a $20 software-corrected chunk of plexiglass. And it will be heavy and clunky.
@photofisher: "Most of my friends still hire pros for special occasions and for portraits. They easily appreciate the quality and skill of a pro with pro gear and are willing to pay for it even though they print very little and just enjoy them on their screens."
Right. But your friends don't buy pro-level studio setups themselves, do they? Which is the opposite of what happened to the camera market in general.
For a few years, a large number of people started buying digital cameras, creating a huge market surge. Then, as cellphone cameras became good enough for casual use, the bottom dropped out from that market.
It's not that the cellphones are as good as the dedicated cameras. They are merely good enough for the intended use.
@photofisher: "Resolution doesn't matter but things like dynamic range, depth of field, and rendering do."

They matter, but only to a tiny chunk of the market. They don't matter to enough people to sell dedicated cameras (I'm not even talking about camera systems).
@Henrik Herranen: Depth of field is related to magnification. With smaller images, magnification is smaller, so DOF is deeper. On a thumbnail-sized image you hardly get any shallow-DOF effects.

Besides, shallow-DOF effects are of interest only to pros and a handful of enthusiasts. The majority of the pictures shared on the web do not use, and would not benefit from, shallow DOF. Shallow DOF cannot sell cameras to every web user.
The difference has to do with the size of the market. The number of people who owned film cameras and printed pictures, even in 4x3 size, was an order of magnitude smaller than the number of people "sharing" pictures on the web today. The digital camera industry got a good chunk of this new market for a while, but it didn't last, because for this market's needs, a dedicated camera is overkill, as long as a cellphone is good enough.
Which caused a related problem: it's very difficult for a company to adapt to a shrinking market. If you have a cohesive team of 100 engineers in R&D, but the new market will only support 10, you can't just fire 90% of them and expect everything to keep going.
otto k: Regarding counting resets, this is not the first time this has been proposed; at least two forum members here have been working on a similar approach for years. It's not easy, as you somehow have to fit a complete counter behind every "pixel", which is not trivial (nor is some way to drain individual pixels very fast). A second approach discards the counting of resets and uses a software algorithm to reconstruct the original image from essentially just the 8 least significant bits of every pixel (you can try this yourself by zeroing the first 6 bits of a 14-bit raw file).
Now hush, because if Sony hears about this way to compress raws...
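That 8-LSB idea can be sketched as a toy 1-D experiment (pure Python; the unwrap heuristic is my own illustration of the "software reconstruction" approach, and only works when neighboring values differ by less than half the wrap period -- a real raw file would need an actual raw decoder):

```python
# Simulate keeping only the 8 least significant bits of a pixel value,
# i.e. the value modulo 256, then reconstruct by assuming neighboring
# pixels differ by less than half the wrap period (phase-unwrapping style).
WRAP = 256  # 2**8

def wrap(value):
    """What an 8-bit 'modulo' readout would report for a larger value."""
    return value % WRAP

def unwrap(residuals):
    """Reconstruct a smooth 1-D signal from wrapped residuals."""
    out = [residuals[0]]
    for r in residuals[1:]:
        prev = out[-1]
        # Shift the residual into the 256-wide period nearest the neighbor.
        k = round((prev - r) / WRAP)
        out.append(r + k * WRAP)
    return out

signal = [100, 180, 260, 350, 430, 500]   # smooth ramp, steps < 128
wrapped = [wrap(v) for v in signal]
print(wrapped)          # what the sensor would actually store
print(unwrap(wrapped))  # software reconstruction
```

Where the scene has hard edges (steps of 128 or more), the heuristic fails, which hints at why this approach is an algorithms problem rather than a hardware one.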
@otto k: I certainly don't have enough knowledge in VLSI design to have strong opinions, and I can do only so much arm waving before my shoulders get sore.
Agreed, let's leave it be :-)
@otto k: "single level comparator is called transistor"

As I understand it, sensor elements measure light by accumulating charge, like capacitors. If you have a capacitor attached to a transistor to measure the level of charge, wouldn't it cause a constant drain, making the sensor non-linear?

"but you don't need it, just a photo diode, a very very small one"

How would a photo diode compare two voltage levels? Sorry, I'm lost.
Anyway, what is the overall point you're making?
Are you saying that ADCs *are* easier to implement on sensor level than digital counters are?
If that's your field, and you know this for a fact, just say so - I'll take your word for it.
I've heard of this thing called SRAM memory, that seems to have a lot of single-bit storage elements without "heating up like crazy". I have not heard of sensors with pixel-level ADCs.
But again, you're probably right. I just can't figure out what it is that you're right *about*.
@otto k and Roland Karllsson: What I meant was that counters in CMOS are *easier* than comparators and ADCs. I don't think you're arguing that point, are you?
"just put a 24 bit counter with input tied to some sort of photon detector [...] That would heat up like crazy and introduce all sorts of problems. Even with 100 transistors per counter you are at 2.4 billion."

Yes, it would. A low-capacity photon detector would be constantly overflowing and triggering the nearest counter, so in your design, the low-bit counters in each pixel would be constantly flipping back and forth. Also, that photon detector would still need to decide when to trigger the transistor, so you'd still need a comparator - in each pixel. So I agree, the design you proposed doesn't sound workable.
Not sure why you'd need synchronous counters though - you're driven by the input, not by the clock.
A counter is easy. It's a digital circuit, and can be made tiny. Computer CPUs are full of the things.
The problem is the circuit that *decides* that the pixel is full and it's time to reset. That circuit is analog, and it's much harder to make small.
ThePhilips: The "modulo" idea is so obvious that I think most makers have already thought about it but put it on the back burner due to some technical complication.
Otherwise, I prefer the other idea, where the pixel's charge data is read continuously. IOW, the sensor sends data continuously, and the "shutter speed" is just how long the firmware keeps accumulating the data before saying "enough". That removes the overflow problem completely, and also allows selectively reading more or less from shadows and highlights.
"put it on the back burner due to some technical complication"

From what I've been told, it's very difficult to miniaturize a circuit that determines whether the pixel has reached a predetermined voltage. In essence, it's a full-blown analog-to-digital converter, albeit with 1-bit output. The real news is that the MIT geniuses managed to build one into a pixel.
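The reset-counting scheme itself is trivial to express in software, which is why the whole debate is about the hardware (a toy model; the full-well capacity here is an arbitrary illustrative number, not the MIT sensor's actual parameter):

```python
# Toy model of a self-resetting ("modulo") pixel.
# The pixel resets whenever it fills up; the chip records how many
# resets happened plus the leftover (residual) charge. The true
# exposure is then: resets * full_well + residual.
FULL_WELL = 1000  # arbitrary illustrative capacity, in electrons

def expose(photons):
    """Return (reset_count, residual) for a given photon count."""
    return photons // FULL_WELL, photons % FULL_WELL

def reconstruct(resets, residual):
    """Invert expose(): recover the original photon count."""
    return resets * FULL_WELL + residual

photons = 123456
resets, residual = expose(photons)
print(resets, residual, reconstruct(resets, residual))
```

With a counter per pixel, dynamic range is limited only by the counter's width, which is why the thread keeps circling back to how hard it is to fit that counter (and the reset comparator) behind every pixel.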
johnsmith404: I guess at this point everyone and his dog has thought about this...
In terms of results this isn't really different from a recent Olympus patent which is centered around the idea of outputting a normalized sum of several exposures.
This one would have the advantage that you could compress high intensity values into a simple number of resets + modulo but if you aren't memory limited it probably won't make any difference. Even if you took the less sophisticated approach of simply adding exposures, you only need to keep track of 2 full res images at most. Another advantage is that you could never blow out anything... but I guess that isn't really relevant when any approach gives you potentially unlimited DR.
I don't really care about the final implementation but I'm quite excited about the prospect of getting super low ISOs. No need to carry those 10 stop NDs anymore + much more DR.
"In terms of results this isn't really different from a recent Olympus patent which is centered around the idea of outputting a normalized sum of several exposures."

-- except with this approach, you don't get the merge artifacts. If something is moving, its dark and bright parts get the same amount of smearing.