New challenger to the DSLR market

Old article yes, but this is still obviously the future. Ten years from now I think it's safe to say that few if any new cameras will have one sensor and one lens.
Slaps head. Why of course carrying around 12 sensors and lenses will always be cheaper than one. Why didn't I think of that, it's so . . . obvious.
What do you mean "carrying around"? They're tiny. Please try to keep up.
They're $1500. More expensive than a superzoom and a nice iPhone, but without the features. Try to stay focused on the subject.

Oh, you do know that you can carry tiny objects, right? There is no size implied by the word.

car·ry /ˈkerē/, verb (3rd person present: carries; past tense: carried; past participle: carried; gerund or present participle: carrying): 1. support and move (someone or something) from one place to another.
You're wrong about weight and cost. The iPhone 7 camera costs Apple $26 per unit. An array of ten would only begin to approach what Canon spends making a single 24-70 zoom. A dictionary won't help you here. Time to slither away?
 
"5x optical zoom is true."

No, it isn't.

You can't have an optical zoom that jumps from one focal length to another, from 28mm to 70mm to 150mm.

What you have there are three separate focal lengths blended in software, so it isn't an optical zoom but a digital one. It takes 5x 28mm shots and 5x 70mm shots and fuses them, taking info from all of those. Yes, it's more like primes than a zoom.
Perhaps "zoom" is wrong in the sense that nothing zooms. But it does have different focal lengths. And one shot fires, say, the five 28mm and the five 70mm modules at once and then blends those. That way it has real info captured at different focal lengths and with different framing, as opposed to just cropping in post. It then combines that info.

Some here seem not to be too familiar with stacking, layering, and pixel shifting, or with the fact that these are existing technologies that work very well. Using arrays of lower-resolution cameras to create one high-resolution image isn't new at all. The real difference here is that the computer in the camera and the lenses/sensors have gotten good enough to put all of this in one package.
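Since stacking keeps coming up, here's roughly what the simplest version looks like in code. A minimal sketch, assuming numpy and imageio are installed and the frames are already aligned; the file names are made up for illustration:

```python
import numpy as np
import imageio.v3 as iio

# Load several already-aligned exposures of the same scene
# (illustrative file names, not anything the L16 actually produces).
frames = [iio.imread(f"shot_{i}.png").astype(np.float64) for i in range(5)]

# A plain mean stack: the signal is identical in every frame, so averaging
# leaves it alone, while random noise shrinks by roughly 1/sqrt(N).
stacked = np.mean(frames, axis=0)

iio.imwrite("stacked.png", stacked.clip(0, 255).astype(np.uint8))
```

Pixel-shift and super-resolution modes build on the same idea, except the frames are deliberately offset by sub-pixel amounts before merging.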

In the tech specs it lists the three lens focal lengths, not a zoom, btw, although some promo stuff says 5x zoom. I don't consider my bag of primes an x-times zoom, but I can see why they'd use that term in a promo instead of trying to explain what it really does.

I dunno why some people get so defensive about this stuff, especially since lots of us who use DSLRs and M43s and such employ these same techniques. I've already got a two-lens camera in my phone, too. And a two-lens 360 camera. Great stuff; I wish them success.

--
“Art is not what you see, but what you make others see.”
— Edgar Degas
OK, let's look at stacking.

This is the camera

[photo of the Light L16 camera]

My guess is that it is about 9x16cm.

Take a look at where the lenses are.

5x 28mm
No. If you zoom in, the text around the smallest lenses says "35mm wide angle".
5x 70mm

5x 150mm

Now make a print of that camera, punch a hole through each lens, and take photos with your mobile phone through the holes, keeping the phone parallel to the print but shifting your lens to each new position; crop to the center of your photos to simulate the 70mm and 150mm views.

Then stack those photos.

Let me know what happens.
Not what you think will happen. Stacking and stitching are done all the time today; this isn't science fiction. Look up the Brenizer Effect. The whole excitement is simply that in the near future this will be available in a small, somewhat affordable package (not this particular camera), with output that previously required large, heavy cameras and lenses.
 
That was a mock-up of the original idea. Since then the 35mm has become (in theory) a 28mm lens.

Stacking is certainly done, but not from photos taken from different positions, if I am not mistaken.

Put the equivalent of a 150mm lens on a camera on a tripod. Take a photo.

Now move that lens up/down or sideways to match the positions of the ones on the L16. Take a photo from each position, then stack them.

Keep in mind that the L16 lenses don't move up/down/sideways so you need to keep the lens parallel to the first shot.

The 150mm lenses are one on top of the other on the left side (the top one is offset to allow for the Light logo...); the fifth one is about 10cm away on the other side.

[photo of the L16 showing the lens positions]

Maybe I am not explaining the point very well.

Maybe this will help.

This is a typical (old-time) stereo camera.



[photo of a typical stereo camera]



The distance between the two lenses is about 6cm, approximating the distance between our eyes.

The L16 has lenses of the same type (say the 28mm) further from each other than that.

So my question is: if you took a photo with that stereo camera and moved it a few cm three times (3x 2 photos) and then stacked those photos, what would you get?
 
What do you get when you pan your iPhone around (by hand) in panorama mode? That's stitching (and yes, the stereo possibilities are there too, bonus). With stacking, the software will obviously have to correct for the slight offset in perspective. I take it you've never used modern post-processing tools? In Snapseed (a free phone app) you can now change the direction someone's face is posed, after the fact.
 
Yes, this is correct. In fact, stacking only offers resolution benefits when there are slight changes in alignment. Ideally this would be a sub-pixel shift, but slight changes in perspective are easily corrected for.

For example, many cameras and software packages can correct for lens distortion, so the output pixels are not exactly where they were captured, which affects the resolution of fine details.

Earlier, I posted a link to this article. That article provided a technique and sample images for stacking photos to produce high-resolution output. Handheld. With slightly varied perspective between shots. The claimed improvement in that article was up to a 4x spatial resolution increase--exactly the increase that this L16 is claiming (outputting a 52MP image from several 13MP sensors). Here are examples of the outputs from that article:

[two sample output images from the article]


One difference for the L16 is that the cameras may have slightly more of a perspective difference; but the distances between the cameras are known and the shots are taken simultaneously--meaning it's possible that the camera may be able to calculate subject distances and offer appropriate perspective corrections.

Even if this isn't the method they've used, it's not unheard-of technology to automatically align images that have varied perspectives. I've been doing it with Hugin for years (web examples):

[Hugin screenshot and example result]


Remember that the last L16 prototype took something like 1 minute to output a 52MP image. Most of that time is likely processing--things like aligning & stacking.
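For the curious, here's one way the "aligning" half is commonly done: match features between frames, fit a homography with RANSAC, warp, then average. To be clear, nobody outside Light knows their actual pipeline; this is just a generic OpenCV sketch of the technique, with made-up file names:

```python
import cv2
import numpy as np

def align_to_reference(ref, img):
    """Warp img onto ref using ORB feature matches and a RANSAC homography."""
    g_ref = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
    g_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(g_ref, None)
    k2, d2 = orb.detectAndCompute(g_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1)
    matches = sorted(matches, key=lambda m: m.distance)[:500]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = ref.shape[:2]
    return cv2.warpPerspective(img, H, (w, h))

# Illustrative file names: one frame per camera module.
frames = [cv2.imread(f"module_{i}.png") for i in range(5)]
aligned = [frames[0]] + [align_to_reference(frames[0], f) for f in frames[1:]]
stacked = np.mean(np.stack(aligned).astype(np.float64), axis=0)
cv2.imwrite("stacked.png", stacked.astype(np.uint8))
```

A homography is only exact for flat scenes or pure rotation; with a real baseline between modules you get parallax, which is why depth estimation (and a minute of processing) comes into it.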

I'm skeptical that the L16 will work flawlessly, and I do not believe that it will match the IQ of larger sensor cameras. But it is interesting in theory, and we'll see how it performs when it comes out.

My guess is that it will do better than many phone cameras and may be similar to larger-sensor compacts in many respects. Its main priority is compact size, followed by maximizing IQ in areas you can't/couldn't yet get from a phone: varied focal lengths, low-light performance, shallow DoF, higher resolution, etc.
 
Old article yes, but this is still obviously the future. Ten years from now I think it's safe to say that few if any new cameras will have one sensor and one lens.
Like how 30 years ago it was predicted all new cars by now would fly?

The thing is way too small to replace ILCs. 5x zoom? My travel kit has a zoom range of about 100x, and my total ILC kit has a zoom range of about 1500x. How would they implement that in a thing this size?
I'm not talking about this particular camera, which is just an early prototype. Array cameras are coming, no matter how many tantrums you throw. And your beloved DSLR won't suddenly stop working.
Nor will they be replaced by an array cam. If the market has an interest - assuming the product makes it that far - it will find success. ILCs, however, aren't going anywhere for quite some time.
I said a decade, so we're in agreement.
Nope, more than a decade. I doubt I'll see ILCs disappear in my lifetime.
Who said "disappear"? It's funny how worried DSLR guys are these days. You can still buy a horse and a buggy to go with it, yes even today in 2017. They didn't "disappear".
Worried? Hardly. Just because DSLR guys don't buy into every flavor-of-the-month technology doesn't make them Luddites. Maybe if there weren't so many fanboys eager to see the DSLR's demise, willing to post about it endlessly and buy into every flaky technology, these guys wouldn't appear so defensive. How many threads called "DSLRs rule!" were there before the ML and MFT guys started going after DSLRs? Not many that I remember.

I would have no qualms about ditching the heavy gear if my compacts did the same thing, and I will embrace it if we get there. And I don't think I'm alone. We just aren't there yet.
 
LOL. The entire world is literally binary for you: everything either existentially threatens your DSLR, or it doesn't. If it doesn't, it's OK.
No, there's room for plenty of different technologies. The problem is, we have these persistent dolts that espouse the next xyz whatever will replace SLRs, or replace ILCs or replace lenses or whatever, and they do so without understanding a bit about physics or the way ILCs and SLRs are actually used.
Totally agree with this. This L16 is a perfect example. I don't know the specs on the sensor size, but I'd guess it's something like a 1/2" sensor or smaller. If that's the size, then a 4-module camera would capture the same light as a 1" sensor. So that's the maximum level of shot-noise performance we could expect--it's a physical constraint.

The difference would obviously be that rather than concentrating all the pixels in one area, the sensor is split into 4 chunks at different locations. This allows for stacking and true 3-D distance calculations (just like phase-detect autofocus).

So assuming that the L16 uses a 1/2" sensor, will it compete with a large-sensor ILC? Nope.

For example: physically, if they did want to try to compete with a FF ILC @ 24-70 F/2.8 with this method, they'd need to do something like building an array of 4x micro-four-thirds cameras, each with a 12-35mm F/2.8 zoom lens. Good luck getting that to be pocketable. :) Or, I suppose they could do 16x 1/2" sensors, each with a 6-17mm F/2.8 lens. Doesn't seem to be what's happening here.
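The equivalence arithmetic behind that example is easy to script. A sketch (standard published sensor dimensions; the sqrt(N) credit for combining N modules assumes ideal merging, which is precisely the contested part of this thread):

```python
import math

FF_DIAG = math.hypot(36, 24)  # full-frame diagonal, mm

def equivalent(focal_mm, f_number, sensor_w_mm, sensor_h_mm, n_modules=1):
    """FF-equivalent focal length and f-number for an array of identical modules.

    The crop factor scales both focal length and f-number; N identical modules
    gather N times the light, worth sqrt(N) in f-stop terms if merged ideally.
    """
    crop = FF_DIAG / math.hypot(sensor_w_mm, sensor_h_mm)
    return focal_mm * crop, f_number * crop / math.sqrt(n_modules)

# The 4x micro-four-thirds example (17.3 x 13 mm sensors, 12mm F/2.8):
print(equivalent(12, 2.8, 17.3, 13.0, n_modules=4))
# ~ (24mm, F/2.8): four MFT modules do match a FF 24-70 F/2.8 at the wide end.
```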

So where will this L16 fit or compete? I think it will be competing:
  • With the 1" compacts & smaller (eg. phones) in terms of noise & portability.
  • In terms of resolving power, somewhere in the 'middle' sensor sizes (1" - APS-C).
  • In terms of DoF control, it should compete higher. Given its physical size, I'd estimate that in theory, it could compete somewhere around lenses with a 50mm aperture (eg. 100mm F/2, but with a wider FoV than this). This will be more interesting to see.
  • In terms of price, with ILCs.
How did I come to that conclusion? Simple: basic physics. See below (not to scale):

[diagram: the L16's spread-out lens array vs. a single large sensor and lens, not to scale]

The L16 will capture less total light in this example because of physical surface area. It will use sampling & stacking to improve output resolution (vs. a single conventional sensor). And since its optics are more spread out, it will offer shallower DoF control, similar to a large-aperture lens. For those who don't know, the outer portion of a lens (far from the center) is a fundamental contributor to shallow depth of field, and it's why large-aperture lenses produce shallower DoF. Tiny apertures don't have this 'varying distance' problem. Interestingly, the L16 does. :)
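For reference, the "50mm aperture" figure in the list above comes straight from the entrance-pupil formula (D = focal length / f-number); the claim that the L16's module spread approximates such a baseline is the poster's estimate, not a published spec:

```python
def pupil_diameter_mm(focal_mm, f_number):
    # DoF at a given framing tracks the physical pupil diameter.
    return focal_mm / f_number

print(pupil_diameter_mm(100, 2.0))  # 50.0mm, the reference lens above
print(pupil_diameter_mm(24, 1.4))   # ~17mm: even a fast wide prime is smaller
```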

And this is exactly what the L16 is doing. For all intents and purposes, it's essentially sampling a larger format and then reconstructing in software. Yes, there are pros & cons to this approach; but there are also practical & physical challenges.
 
It sure opens the door to new technology. Especially if four small sensors are less expensive than one FF sensor.

But will any of it catch on in the market, or be adopted by the big makers? Remember the Lytro? That was some interesting technology too, but no one cared.
 
Totally agree with this. This L16 is a perfect example. I don't know the specs on the sensor size, but I'd guess it's something like a 1/2" sensor or smaller. If that's the size, then a 4-module camera would capture the same light as a 1" sensor. So that's the maximum level of shot-noise performance we could expect--it's a physical constraint.
It doesn't have to match the same total light as the larger camera to get similar noise performance. Noise is random. If you take 5 simultaneous photos from the camera modules and merge them, the differences are removed, i.e. the noise.

You can already do this in Photoshop with your single lens/sensor camera. People use the technique to remove (moving) people from landscape and real estate photos.
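That Photoshop trick is a median stack, and it's simple enough to sketch (assuming numpy and imageio, with made-up file names):

```python
import numpy as np
import imageio.v3 as iio

# Several tripod shots of the same scene with people walking through it.
frames = np.stack([iio.imread(f"scene_{i}.png") for i in range(7)])

# Per-pixel median: anything present in fewer than half the frames
# (pedestrians, cars) gets rejected, while the static scene survives.
clean = np.median(frames, axis=0).astype(np.uint8)
iio.imwrite("clean.png", clean)
```

The same statistics apply to sensor noise, which is why merging frames reduces it; how far that reduction can go is the real argument here.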
 
It sure opens the door to new technology. Especially if four small sensors are less expensive than one FF sensor.

But will any of it catch on in the market, or be adopted by the big makers? Remember the Lytro? That was some interesting technology too, but no one cared.
Yes, totally agree. It's cool stuff, and I hope they do well; but they do need to execute and fulfill a need for consumers.

It's an interesting balancing act, and a pocketable compact that offers decent low-light, several focal lengths, and great degrees of DoF control would be a great supplemental system (for me at least). We already have some of these: The Panny GM-series, the Sony RX100, etc.--but this camera & its approach strike a different balance. Do the RX100 or GM1/5 (or others) not give me a shallow enough DoF or compact-enough size w/lenses for these focal lengths? If so, this may be a great fit. Music festival or hiking? This may be the one. Who knows?

As a consumer, I'm good with competition and innovation.
 
It doesn't have to match the same total light as the larger camera to get similar noise performance. Noise is random. If you take 5 simultaneous photos from the camera modules and merge them, the differences are removed, i.e. the noise.

You can already do this in Photoshop with your single lens/sensor camera. People use the technique to remove (moving) people from landscape and real estate photos.
Au contraire, mon frère. I am well aware of this technique and use it frequently. I last used it for a landscape shot just yesterday, and you can see my reply earlier in this thread about this technique as it applies to planetary photography.

This noise reduction was also referred to in the article I linked.

Yes, noise is random, and yes, stacking helps; but the noise still exists--and it falls off in a non-linear fashion (averaging N frames only reduces noise by roughly the square root of N).

If you take 5 simultaneous photos from a 1/2" sensor and merge them, you will still not have the low light performance of a single full-frame sensor, which has something like 25x the surface area of a 1/2" sensor.

The math is important here. You cannot take, for example, two shots from a tiny 1/4" sensor and expect it to have the same noise performance as a single medium format shot.
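To put numbers on it: shot-noise SNR scales with the square root of the total light collected, i.e. sqrt(sensor area x number of merged frames) at equal per-area exposure. A quick sanity check (sensor areas approximate):

```python
import math

def relative_snr(area_mm2, n_frames=1):
    # Shot-noise SNR grows as sqrt(total photons) ~ sqrt(area * frames),
    # assuming the same per-area exposure on every sensor.
    return math.sqrt(area_mm2 * n_frames)

half_inch = 6.4 * 4.8      # ~31 mm^2
full_frame = 36.0 * 24.0   # 864 mm^2

print(relative_snr(half_inch, 5) / relative_snr(full_frame))
# ~0.42: five merged 1/2" frames still collect ~2.5 stops less total light
# than a single full-frame shot.
```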

And btw, this has been somewhat confirmed. See this article on the L16 that I linked to earlier. A quote from that article:
  • The company showed me sample photos comparing the image quality between an iPhone, Canon 5D Mark III and the Light L16. I have to admit, the sample image taken by the L16 looked pretty good with lots of details when zoomed in, but it also looked like it had a lot of image noise.
I was using 1/2" as an example, but I'd guess that this uses sensors even smaller than that, since it claims to have 150mm-equivalent lenses in that thin body (even with angled light paths, the aperture would suffer).
 
Old article yes, but this is still obviously the future. Ten years from now I think it's safe to say that few if any new cameras will have one sensor and one lens.
Funny thing is that I've been taking photographs for 62 years and have yet to use a "future camera." I use cameras that are available today or were available a few years ago and still produce great images. When the future becomes today then we'll see what's best for a particular use. Until then we're all stuck with today's or yesterday's real cameras.
 
Au contraire, mon frère. I am well aware of this technique and use it frequently. I last used it for a landscape shot just yesterday, and you can see my reply earlier in this thread about this technique as it applies to planetary photography.

This noise reduction was also referred to in the article I linked.

Yes, noise is random, and yes, stacking helps; but the noise still exists--and it falls off in a non-linear fashion (averaging N frames only reduces noise by roughly the square root of N).

If you take 5 simultaneous photos from a 1/2" sensor and merge them, you will still not have the low light performance of a single full-frame sensor, which has something like 25x the surface area of a 1/2" sensor.

The math is important here. You cannot take, for example, two shots from a tiny 1/4" sensor and expect it to have the same noise performance as a single medium format shot.

And btw, this has been somewhat confirmed. See this article on the L16 that I linked to earlier. A quote from that article:
  • The company showed me sample photos comparing the image quality between an iPhone, Canon 5D Mark III and the Light L16. I have to admit, the sample image taken by the L16 looked pretty good with lots of details when zoomed in, but it also looked like it had a lot of image noise.
I was using 1/2" as an example, but I'd guess that this uses sensors even smaller than that, since it claims to have 150mm-equivalent lenses in that thin body (even with angled light paths, the aperture would suffer).
Neither one of us knows if this particular prototype even uses this processing technique.

I think fixating on the surface area comparisons between array cameras and traditional cameras is a waste of time. They simply don't approach the problem in the same way. We know image-merge NR already works; it's just a question of improving it.
 
I didn't say that it did (though I'd say it's a pretty safe bet, since it was confirmed to have merged all the images, which by definition would blend the noise).

You can't just merge images for resolution without merging for noise. That's not how it works.

There are some pretty basic physics involved here.
 
Funny thing is that I've been taking photographs for 62 years and have yet to use a "future camera." I use cameras that are available today or were available a few years ago and still produce great images. When the future becomes today then we'll see what's best for a particular use. Until then we're all stuck with today's or yesterday's real cameras.
The idea that you could have a 12MP 30mm f/2.2 camera in a 6mm-thick phone computer, with more processing power (and utility) than the then-current laptops, would have seemed like Buck Rogers stuff in 2006. I'm holding that "future camera" in my hand right now.

So the future does come eventually. Some just vigorously deny its existence.
 
The idea that you could have a 12MP 30mm f/2.2 camera in a 6mm-thick phone computer, with more processing power (and utility) than the then-current laptops, would have seemed like Buck Rogers stuff in 2006. I'm holding that "future camera" in my hand right now.

So the future does come eventually. Some just vigorously deny its existence.
2006 was not some distant dark-ages past. We all knew technology would improve. In 2006, the iPhone was on the horizon, phones were already thin and powerful, and they all had pretty decent cameras. My 2006 phone had a camera that could rotate for selfies, and produced images that exceed current Instagram posting sizes (and exceed 1080 resolution).

We all knew they'd improve.

The future still can't overcome basic physics, as you seem to suppose it will.

[image]

Yes, that's the 5D (released in 2005).
 
The idea that you could have a 12MP 30mm f/2.2 camera in a 6mm-thick phone computer, with more processing power (and utility) than the then-current laptops, would have seemed like Buck Rogers stuff in 2006. I'm holding that "future camera" in my hand right now.

So the future does come eventually. Some just vigorously deny its existence.
I see clearly that you don't get it. When something is not yet a product you can buy, you cannot take it out for a spin and know whether it fills your needs. You have only descriptions, which are often lacking in detail. It's fine to think about future camera technology, but I'm taking photos with present technology, and I'm not waiting for the future to arrive. In my experience the "future" often arrives only after multiple changes in realizable technology.

My first laboratory computer cost $20K, had a whopping 64KB of memory and dual 8-inch floppies, and no hard drive. I/O was a DECwriter. Waiting for "future" computer technologies was not an option; I needed to take data and analyse it immediately. The huge upgrade came a year or two later: an original DEC VT100 that cost $1900 and gave me a screen editor for writing programs and grant proposals. Networking? We strung serial cables here and there. Ethernet was a concept and a patent, but not a technology that was readily available until the "future."
 
You're wrong about weight and cost. The iPhone 7 camera costs Apple $26 per unit. An array of ten would only begin to approach what Canon spends making a single 24-70 zoom. A dictionary won't help you here. Time to slither away?
Yes--larger sensors are not just linearly more expensive than smaller sensors. Several small sensors covering the same surface area as a single large sensor are typically much cheaper to produce. This is counter-intuitive, but true and not as preposterous as it sounds.

Here's an example from a Canon whitepaper. See page 11, "The Economics of Image Sensors", where they describe not only the shapes that can fit onto a wafer but also the defect rate--which can be a significant proportion of sensor cost. It's a dated paper, but here's a quote from it:
  • "For now, appreciate that a full-frame sensor costs not three or four times, but ten, twenty or more times as much as an APS-C sensor."
That's despite the fact that the full-frame sensor is only 2.5x the area of the APS-C sensor.
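The mechanism is easy to demonstrate with the classic dies-per-wafer and Poisson yield approximations. The defect density below is made up for illustration; only the shape of the result matters:

```python
import math

def dies_per_wafer(wafer_d_mm, die_w_mm, die_h_mm):
    """Standard gross-die estimate for rectangular dies on a round wafer."""
    area = die_w_mm * die_h_mm
    r = wafer_d_mm / 2
    return int(math.pi * r * r / area - math.pi * wafer_d_mm / math.sqrt(2 * area))

def good_dies(wafer_d_mm, die_w_mm, die_h_mm, defects_per_cm2):
    """Poisson yield: the chance of a die escaping all defects falls
    exponentially with die area, so big dies get hit much harder."""
    area_cm2 = die_w_mm * die_h_mm / 100.0
    yield_frac = math.exp(-defects_per_cm2 * area_cm2)
    return dies_per_wafer(wafer_d_mm, die_w_mm, die_h_mm) * yield_frac

# 300mm wafer, same (illustrative) defect density for both sensor sizes:
print(good_dies(300, 36.0, 24.0, 0.1))   # full frame: ~25 good dies
print(good_dies(300, 22.5, 15.0, 0.1))   # APS-C: ~123 good dies
```

Since a processed wafer costs roughly the same regardless of what's printed on it, the per-die cost gap ends up far larger than the 2.5x area ratio alone would suggest.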
 
Neither one of us knows if this particular prototype even uses this processing technique.

I think fixating on the surface area comparisons between array cameras and traditional cameras is a waste of time.
Well, that's science. Which I suspect is why you don't want to fixate on it.

Four F2.0 lenses on, say, a 10x crop-factor sensor are equivalent to an F5.0 lens. Not terrible, but it gets worse, because the only way you can get a 100mm view is by cropping the 70mm module, which gives you roughly an F7.5 lens. Not horrible or anything, but worth $1500?

  • On the prototype, the photo stitching took a little while to work and froze. In the end, I didn't get to see how fast it was. When I asked Dr. Rajiv Laroia, Light's co-founder and Chief Technology Officer, how long it will take to generate a 52-megapixel image on the final product, he told me they're shooting for under a minute.

Who is going to wait a minute for one photo? And nothing in the scene can be moving. Sounds like a landscape camera marketed to people who only want to take selfies.

Doesn't Oly's pixel shift work better? Cheaper, faster, and it has the exact same advantage. I mean, you could pixel-shift an existing cell phone w/o much cost or size impact--if anyone really wanted a 50MP cell phone image. And then you could zoom in as well.
They simply don't approach the problem in the same way. We know image-merge NR already works; it's just a question of improving it.
This is math and science, which work the same for all cameras. The trade-offs are very well understood: noise versus resolution versus DoF versus lens size versus exposure time.

You can't simply "improve" image stacking; it's already theoretically perfect. That's like saying we know a tiny sensor works, so now all we have to do is make it act like a bigger one.
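On the "theoretically perfect" point: for independent noise, plain averaging is indeed the optimal merge, and the sqrt(N) behavior is trivial to verify by simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 100.0
for n in (1, 4, 16, 64):
    # n noisy frames of the same signal, noise sigma = 10.
    frames = signal + rng.normal(0.0, 10.0, size=(n, 100_000))
    print(n, round(frames.mean(axis=0).std(), 2))  # ~10 / sqrt(n)
```

So the merge step itself has no headroom left; any real improvement has to come from the optics, the sensors, or smarter handling of scene motion.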

I don't understand why people are fixated on a camera that has not a single test image and no theoretical justification. Is there some wish-fulfillment fantasy going on?

The next few months should be fun, once the camera is out and people test it. I do wish them luck. If they could get it down to $200 it might make a nice toy to replace a point-and-shoot. I am kind of excited, just to see what it does and doesn't do. Science at work, minus the smoke and mirrors.
 
I see clearly that you don't get it. When something is not yet a product you can buy, you cannot take it out for a spin and know whether it fills your needs. You have only descriptions, which are often lacking in detail. It's fine to think about future camera technology, but I'm taking photos with present technology, and I'm not waiting for the future to arrive. In my experience the "future" often arrives only after multiple changes in realizable technology.

My first laboratory computer cost $20K, had a whopping 64KB of memory and dual 8-inch floppies, and no hard drive. I/O was a DECwriter. Waiting for "future" computer technologies was not an option; I needed to take data and analyse it immediately. The huge upgrade came a year or two later: an original DEC VT100 that cost $1900 and gave me a screen editor for writing programs and grant proposals. Networking? We strung serial cables here and there. Ethernet was a concept and a patent, but not a technology that was readily available until the "future."
I really don't know what your point is. We take photos today with what we have on hand, and we speculate and bicker about what might come in the future in threads like this. Nobody is "waiting".
 
