alfpang: Looking forward to real world test samples of this, and to buying Mark III.
I wonder, if it works out esp for consumer / prosumer level imaging, whether we'll see a new race in number of compound eyes/cameras rather than megapixels.
Is there a physical limit to how many lenses can be used for this? Can it scale up?
"There, fixed it for you. :-)"
But seriously - what English word do you use to denote the present structure / composition of something, without reference to how it arrived at that structure / composition?
Evolved, designed, planned, constructed, "put together"...
Cephalopods have the least defective eyes on the planet!
Oh, and about the eye evolution: I just looked it up, and apparently, cephalopod fossils started appearing in the late Cambrian, 500 million years ago. Today, cephalopods have the best-designed eyes on the planet (unlike ours, their optic nerve connects to the retina from behind, so no nerves or vessels sit in front of the photoreceptors: they get better light sensitivity and no blind spot. A kind of biological BSI).
That's about 100 million years earlier than the oldest known insect fossil, and about 20 million years before the estimated time insects appeared as a class.
As I understand it, evolution doesn't necessarily give an optimal solution to a problem. It basically uses whatever's available to ease whatever pressure happens to be at the moment. It's suboptimal to have the nerves and blood vessels in front of the optic cells, but it was good enough at the time, and is really hard to change now ;-).
"Mixed focal lengths don't change the argument"
I wasn't arguing, just asking for your opinion on the reasonable size of such an array.
"Just consider each group of lenses for a given focal length as an array."
Yikes! If 5x5 is reasonable for a single FL, that would call for 75-100 sensors for 3-4 FLs. Doesn't sound realistic for a phone-sized device.
Zvonimir Tosic: It would require the power of a supercomputer to run even 4 lenses. And where would all the data be stored? I once drew up and planned a similar idea, but it had three lenses only - wide, normal and semi-tele - each with its own focusing point, fed by data from the widest lens. Each lens could be used separately, or in concert with the others. But when I calculated the computing power needed to make the thing work seamlessly and still remain compact, it was enormous. It is much easier to create just one great lens, insert one great sensor with enough MPs, keep the camera's temperature down, save on power, and do trimming as needed from the wide lens. Like the Leica Q. With an array of tiny lenses there is no DoF control, no aperture control, no shutter control for each; it's all just "snapshot photography", which is no fun at all.
"It would require the power of a supercomputer to run even 4 lenses"
The mobile CPU/GPU hardware has been advancing at a huge pace. Are you sure it's still not feasible these days? Especially if you leave the fancier tricks for postprocessing?
"It is much easier to create just one great lens, insert one great sensor with enough MPs"
Yes, but that great lens is not going to fit into a pocket, and won't record the depth map.
"With an array of tiny lenses there is no DoF control, no aperture control, no shutter control for each, all is just "snapshot photography" which is no fun at all."
A depth map allows simulating DOF control.
Aside from DOF, you use aperture to control the light levels - which in this case, will have to be done by ISO and exposure duration.
"no shutter control for each"
With an array, you very well could have a mode where different sensors use different ISO and perhaps even exposure durations. Not just HDR - you could adjust *motion blur* in postprocessing.
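The depth-map-based DOF simulation mentioned above can be sketched as a toy blur that grows with distance from the focal plane. This is purely illustrative (function and parameter names are made up, and real synthetic-bokeh pipelines use per-layer compositing with lens-shaped kernels, not a per-pixel box blur):

```python
import numpy as np

def simulate_dof(image, depth, focus_depth, max_radius=3):
    """Toy depth-map bokeh: average each pixel over a box whose
    radius grows with |depth - focus_depth|. A sketch of the idea
    only, not a production algorithm."""
    h, w = depth.shape
    out = np.zeros_like(image, dtype=float)
    # Per-pixel blur radius, proportional to the defocus amount.
    radius = np.clip(np.abs(depth - focus_depth) * max_radius,
                     0, max_radius).astype(int)
    for y in range(h):
        for x in range(w):
            r = radius[y, x]
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()
    return out

# Left half of the scene is "far" (depth 1), right half is in focus.
img = np.zeros((10, 10)); img[:, 5:] = 1.0
depth = np.zeros((10, 10)); depth[:, :5] = 1.0
out = simulate_dof(img, depth, focus_depth=0.0)
```

In-focus pixels (radius 0) come back unchanged, while pixels on the "far" side get smeared across the edge - which is exactly the kind of control a raw depth map gives you after the fact.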
"BTW, that's a reason why evolution eventually moved away from the compound eye. And settled for between 2 and 8 eyes"
Well, I wouldn't say that evolution "moved away" from insects - they are doing just fine, eyes and all ;-)
"Personally, I consider 5x5 arrays, maybe 6x6 to be a good balance."
What about the mixed-FL arrays, like the Light here? Is there a reasonable limit?
Reinhard136: It doesn't make you want to invest in conventional camera companies or their products in a big way, does it? Whether this one works or not, it makes it more than imaginable that in 5 or 10 years the companies on top of the heap now, or that cupboard full of their products, will look as valuable as an old film camera does now. And worse: with a bit of clever computing, it may not require any breakthrough technology, just the nous to go and get a bundle of 1/3" sensors and some fixed lenses, or something similar.
It depends on what you mean by "invest". If you make money off photography, a "light" camera is probably not the best tool, except for some very unique circumstances.
If it's a luxury item - a hobby/vacation camera - then it's not an investment.
If you mean that this camera will kill off conventional camera companies, I wouldn't think so. At most, a Sony or a Nikon will buy Light, or develop a similar solution in house. Like Nokia invested in Pelican Imaging.
sh10453: Light deserves credit for the innovation, but I doubt that this camera will appeal to many. Lytro is struggling to sell their camera, and recently it was selling at a very heavily discounted price.
I hope a start-up group/company will someday concentrate their effort on a Medium Format camera. I'd think there would be a lot of interest in such a camera if they approached the design with a new and innovative idea that keeps the field photographer in mind (as opposed to the studio / tripod photographer), along with a careful selection of lens mount. If they do it right, and the price is significantly less than that of the big names, that would be a game changer in the Medium Format category.
In that case, I'd be happy to send my $200 deposit.
This is no Lytro. Still shots from Lytro have very poor IQ and low resolution. It can't handle low light. Lytro's only selling point is the distance map.
Light promises the distance map *on top of* large-sensor-level light gathering capability, plus the form factor of a smartphone.
I don't see the price going down though, unless they fail completely and have to sell off the remaining stock. This body has to be quite complex mechanically, what with 16 lenses to autofocus in concert.
bmoag: This camera will be a very difficult sell to its intended target (indicated by the videos) in large volumes at this price point. It does not matter how good or innovative this camera is in a world where people have been taught to think their phones are high end cameras. The battle for the middle and low end of the camera market is over, the phones won. In any event with the popularity of ever larger phones there is room to adapt this technology into that form factor.
bmoag "intended target (indicated by the videos)"
Well, the scenarios in the video are about what you'd use a low-to-mid-range mirrorless body with a "standard" zoom for. The added capabilities can offset the higher price (pocketability, ruggedness, higher resolution, a depth map that allows things like bokeh simulation and post-shot selective focusing).
I'd say, it would be a welcome addition to the "enthusiast fixed-lens" and "travel camera" market segments, which are both relatively unaffected by the competition from the smartphone.
BTW, since they must be using off-the-shelf hardware, the back of the camera is very likely to *be* a smartphone, or at least a tablet.
If the battery life is any good, a backpacker or a mountaineer will be delighted.
Back to your point though - of course we don't know how large their target sales volume is, but they are not a huge company. I doubt they are planning to replace Sony. If they manage to sell enough units to make a living, good for them.
Sounds extremely promising. Their timing for this type of camera is excellent as well: The market is overflowing with cheap, high-performance, low-power mobile CPU/GPU systems.
The depth map data will allow for some new applications as well. 3D movies? One-shot models for 3D printers? Time will tell.
However, camera arrays aren't easy. Many have tried (cough Nokia cough), but so far, there aren't that many success stories aside from satellite sensor arrays. There are both hardware and software challenges. How do you focus 16 lenses all at the same time? How precisely do they need to be aligned when making the unit? How do you get reasonable battery life? How do you flash-sync 16 sensors? How do you handle bright light (kind of hard to handle 16 ND filters simultaneously)? How do you handle sixteen simultaneous video streams? What about AF during video?
falconeyes: From the video linked in my below comment:
They use:
5x 35mm F2.4
5x 70mm F2.4
6x 150mm F2.4
with up to 10 cameras contributing per image. No word on crop factor, but I would assume a 7.6x smartphone crop factor.
Because the lenses are up to 50mm apart, the bokeh will roughly correspond to 150mm F2.8 full frame, more with computed bokeh from the capturable depth map.
The combined noise (ISO capability) will roughly be that of an F7.4 lens full frame, a bit better than an APS-C kit lens would yield.
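For what it's worth, those equivalence figures can be reproduced in a few lines. Note the 7.6x crop factor and per-lens F2.4 are the assumptions stated above, not published specs, and combining frames by sqrt(N) is the usual idealized SNR model:

```python
import math

crop = 7.6        # assumed smartphone-module crop factor
f_number = 2.4    # per-lens aperture
n_tele = 6        # cameras at the longest (150mm-eq) focal length

# Noise: one small module behaves like an F(2.4 * 7.6) full-frame lens;
# averaging N frames improves SNR by sqrt(N), i.e. divides the
# equivalent f-number by sqrt(N).
ff_single = f_number * crop                  # ~18.2
ff_combined = ff_single / math.sqrt(n_tele)  # ~7.4, matching "F7.4"

# Bokeh: the synthetic aperture is limited by the ~50mm spread of the
# lenses, so at a 150mm-equivalent field of view:
baseline_mm = 50
bokeh_f = 150 / baseline_mm                  # ~F3, i.e. roughly "F2.8"

print(round(ff_combined, 1), round(bokeh_f, 1))
```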
However, the real competitor of this thing won't be dedicated cameras but smartphones which soon will sport camera arrays too.
Moreover, I do hope this thing isn't too thick (it is already thicker than a smartphone would be allowed to be), and that they can compute a 24mm (or wider) angle of view from their five 35mm lenses.
MeganV: "I wonder why none of the sample photographs released so far show an example of it?"
Camera arrays have a much bigger "software component" than single-sensor cameras. Most likely, they are using a first-draft beta version of the software.
Niceties like bokeh simulation may be still in the works. The raw files should already contain the distance map that makes it possible, but the conversion might not be ready for prime time yet.
Hannu108: Some people doubt that anyone ever landed on the Moon. To them, even a photo of a waving flag seems to be a fake.
“On the moon, there's no air to breathe, no breezes to make the flags planted there by the Apollo missions flutter”
http://science.nasa.gov/science-news/science-at-nasa/2001/ast23feb_2/
"Not every waving flag needs a breeze -- at least not in space. When astronauts were planting the flagpole they rotated it back and forth to better penetrate the lunar soil (anyone who's set a blunt tent-post will know how this works). So of course the flag waved! Unfurling a piece of rolled-up cloth with stored angular momentum will naturally result in waves and ripples -- no breeze required!"
Edgar Matias: They should put an EVF in this thing.
That way you'd have the option of using it WITH or WITHOUT the iPhone.
quiquae: You might think so when you're still new to the art. But as you advance, you'll discover that remembering stuff is a waste of brain capacity. Now quit bothering me, my navel needs contemplating.
A real photographer should be able to visualise the histogram, horizon line, motion blur, DOF, and focusing area selection. And if you truly know RAW, you should have no problem creating a custom profile to batch-fix focusing and framing errors and camera-shake blur.
I can totally see people using this without iPhone. There's purity in hand held, all-auto blind shooting at 40MP.
Just Ed: The problem I have with Zeiss is that the lenses require manual focusing. That would be OK, but most DSLR OVF screens are not precise enough for quick, accurate focus; they seem mostly geared for brightness. To make good use of these, one would do better with a precision matte screen or, if available, a split-screen element on the focusing screen, à la the 1960s. I think this would be particularly true at the 50MP level.
Modern DSLRs offer live view with magnification and focus peaking. Those are much more precise and easier to use than the old-style focusing aids.
sh10453: It looks like "they actually DO know what to do with it", if you look beyond consumer photography.
The technology seems to have already found its way to the consumer DSLR cameras.
Here is a 120MP DSLR already in development: http://www.canon.com/news/2015/sep08e2.html.
As for surveillance, it certainly will not be mounted at a neighborhood gas station. Spy agencies, such as the CIA and its Russian/British/Chinese/... counterparts, are examples of likely customers who would use it in their spy satellites or other aircraft. NASA, the military, mapping, and scientific research labs are other examples.
OK, OK, I agree, some can afford to install it in the baby's bedroom to watch the nanny remotely, on a cell phone!
sh10453: Could be, but doesn't seem likely. This sensor's specs don't really sound like it was specifically designed to be a part of a satellite sensor array.
Or at least, it's drastically different from what they currently use, which is wide-spectrum (from near IR to UV), low-resolution (think 1000x256 pixels), low-wattage components.
Just a Photographer: 250MP and APS-H is definitely diffraction limited...
"At 250MP a lens might well be only usable between f4 and f5.6."
Not all shots require maximum sharpness. It depends on the intended use of the image. This is like saying that cameras with resolutions beyond 4Mpix are only usable on a tripod.
This is not nearly enough for a spy satellite (you don't mount an APS-H sensor behind a 2.4 meter mirror), but a spy drone could use one.
This could translate into a fixed-lens subcompact with digital zoom that's usable to 6x (in a pinch, to 10x). A 16mm f:2.8 lens could be made tiny (think Sony pancake). At APS-H 1.3 crop factor, it would cover 21 to 150mm.
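The crop arithmetic behind that "usable to 6x (in a pinch, to 10x)" guess is easy to check: a linear digital-zoom factor z reduces the pixel count by z², so a 250MP frame still leaves a usable image after heavy cropping. A quick sketch (the cutoffs themselves are of course a judgment call):

```python
# Resolution left after a center crop ("digital zoom") of a 250MP sensor.
# A linear crop factor z reduces the pixel count by z**2.
full_mp = 250
results = {z: full_mp / z**2 for z in (1, 6, 10)}
for z, mp in results.items():
    print(f"{z}x crop -> {mp:.1f} MP")
```

At 6x you'd still have roughly a 7MP image (fine for prints), and at 10x about 2.5MP (web-sized, hence "in a pinch").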
A fixed focal length lens could be made waterproof without the IQ compromises that come with the "tough" cameras' folded light path.
I'd buy one.
dwill23: Think facial recognition from far away, being able to 'see' hundreds of faces at once, (without having to zoom in on just a few). But you wouldn't be able to upload the images fast enough (maybe with fiber) to get feedback. Forget about local facial databases. So maybe this would work for that application but likely not in real-time because of huge files and thus bandwidth limitations.
But I'll take one and play around with it if Canon wants :)
They could preprocess each frame to extract just the recognizable facial characteristics (a set of measurements for the 80 or so "nodal points" that describe the face). Then download just those 80 bytes per face - with good compression, I'd say half that.
From what I've seen on the net, facial recognition needs at least 50 native pixels between the eyes, 75 is preferred. That means at least 200x200 pixels per person. So the most faces you could possibly extract from one frame is about six thousand (19580*12600/40000=6167.7).
That works out to about 250KBytes per frame - when the entire field of view is evenly filled with people, which is not a realistic scenario. At 5 frames per second, that's 1.25MB/s.
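A quick sanity check of those numbers (the 200x200 pixels per face and ~40 bytes per face figures are the rough assumptions stated above, not properties of any particular recognition system):

```python
# Worst-case face count and bandwidth for a 19580x12600 (~250MP) frame,
# assuming every face needs at least 200x200 native pixels and each
# face reduces to ~40 bytes of nodal-point measurements.
sensor_px = 19580 * 12600
px_per_face = 200 * 200
max_faces = sensor_px / px_per_face   # ~6167 faces, frame packed solid

bytes_per_face = 40
fps = 5
bandwidth = max_faces * bytes_per_face * fps   # bytes per second

print(int(max_faces))                 # ~6167
print(round(bandwidth / 1e6, 2))      # ~1.23 MB/s, i.e. the ~1.25 MB/s above
```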
An IR comm laser will handle this many times over.
TL;DR: it should work just fine in real time, and even a sizable crowd won't overload the link.
BorisK1: Part of the problem is the intended use. In most common scenarios, if the image is purely for the web, a dedicated camera is overkill.
If you're making a 400x320 thumbnail, a $2000 lens will not do any better than a $20 software-corrected chunk of plexiglass. And it will be heavy and clunky.
@photofisher:"Most of my friends still hire pros for special occasions and for portraits. They easily appreciate the quality and skill of a pro with pro gear and are willing to pay for it even though they print very little and just enjoy them on their screens."
Right. But your friends don't buy pro-level studio setups themselves, do they? Which is the opposite of what happened to the camera market in general.
For a few years, a large number of people started buying digital cameras, creating a huge market surge. Then, as cellphone cameras became good enough for casual use, the bottom dropped out from that market.
It's not that the cellphones are as good as the dedicated cameras. They are merely good enough for the intended use.