Mister Roboto: Cross your fingers that they will put sensor bigger than a pinhead otherwise this will be another setback for iPhone. You can buy a $50 camera and it is far better than what iPhones have nowadays.
Yes, sensor size matters. But behind the same short lens, a bigger sensor takes in a wider view, lousy for many purposes, unless the camera is ALSO a lot thicker.
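The geometry behind that "wider view" point is simple to check; here's a quick sketch (the 4.8 mm sensor width and 4.1 mm focal length are illustrative guesses, not any maker's spec):

```python
import math

def fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view for a simple thin-lens model."""
    return 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Hypothetical phone sensor ~4.8 mm wide behind a 4.1 mm lens,
# versus a sensor twice as wide behind the SAME lens:
print(round(fov_deg(4.8, 4.1), 1))   # ≈ 60.7 degrees
print(round(fov_deg(9.6, 4.1), 1))   # ≈ 99.0 degrees -- near-fisheye territory
```

Keeping the view constant with the bigger sensor means a longer focal length, i.e. a thicker camera.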
That's why P&S cameras, despite advantages in image quality, aren't selling: they're not as pocketable and not enough better quality.
SLR—higher quality—cameras have ALSO come down in sales, but they still take ENOUGH better pix, especially in dim light or high-speed sports shots, that pros keep buying them.
No reason to worry, I'd say: image quality is important to Apple. Whether they continue the pretty-much-standard sensor size, or have some magic multi-sensor or multi-snap software, nobody expects a regression.
RichRMA: You can't bend the laws of physics. Sensors can be made any which way, but the small ones will never, ever match the larger ones. Back illuminated, electron-multiplying, it doesn't matter. They are modest enhancements that produce a slightly better product, but a 1/2.3" sensor will never be a m4/3, APS, etc.
But that's the point of this advance… by putting circuitry behind the sensor, rather than letting it take up area that should be capturing photons, you get an effectively bigger sensor, all else equal.
Then, there's the constant improvement in the physics of converting photons that impinge on a chip, into electrons. And optical improvements that get more photons onto the chip.
Lots of improvements here, and ahead. Hardware. Haven't even touched smart software.
Lhermine: Very interesting article! Thanks very much, DPR.
Like many people here, I was wondering whether the number of photons can be so low that shot noise becomes significant.
Based on the so-called "sunny 16 rule", I've found that for a proper exposure on a 24 MP full-frame sensor, around 50,000 photons should hit each pixel. That leads to shot noise with an amplitude of about 0.45% (1/√50,000).
That noise amplitude should be too small to be noticed. However, in darker conditions you may have to raise the ISO speed, say to 6400, which means fewer photons: the corresponding shot noise amplitude will be around 3.5%. It reaches 14% at ISO 100,000!
And that's the bad news: ISO 1,000,000 will never be as good as ISO 100, whatever the quality of the sensor, because of shot noise.
(for those who are interested in the computation, please mail me; you may point out some mistakes ;-) )
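A quick sanity check of that scaling, assuming 100% photon capture and a photon count that falls inversely with ISO (both simplifications):

```python
import math

def shot_noise_pct(photons):
    # Poisson statistics: relative noise = sqrt(N)/N = 1/sqrt(N)
    return 100 / math.sqrt(photons)

base_photons = 50_000  # the per-pixel sunny-16 estimate, at base ISO
for iso in (100, 6400, 100_000):
    photons = base_photons * 100 / iso  # assumed inverse scaling with ISO
    print(iso, round(shot_noise_pct(photons), 1))  # ≈ 0.4%, 3.6%, 14.1%
```

The outputs line up with the ~3.5% and 14% figures quoted above.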
I can't confirm/deny your calculations but I *WILL* note that it assumes 100% efficiency of capturing incoming photons. That's way too high.
I'm not the expert to describe the percentage that is captured, but in many sensors a good fraction of photons fall on insensitive support electronics and are lost. More importantly, the fraction that hits the sensitive parts may fail to be recorded for a variety of reasons.
Using a SWAG (scientific, wild-assed guess) of 5% detection efficiency, the photon count drops twentyfold, and the shot noise grows by the square root of 1/0.05, about 4.5×. That doesn't seem unlikely for a well-exposed scene.
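The multiplier in that back-of-envelope step is just:

```python
import math

# A 5% detection efficiency leaves 1/20 of the photons, so shot noise
# grows by sqrt(20) relative to the 100%-efficiency estimate.
multiplier = math.sqrt(1 / 0.05)
print(round(multiplier, 2))  # ≈ 4.47
```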
Note that RAW conversion techniques essentially ALWAYS employ noise-reduction. For example, a green pixel might take the average of the 4 nearest green pixels; this would smush fine details, but reduce the noise by half (again, the square root). In areas with lower signal, the RAW conversion might spread the average further, hoping to create a more acceptable image.
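That square-root behavior is easy to simulate; here's a minimal sketch using Gaussian noise as a stand-in for shot noise (the photon count and sample size are arbitrary examples):

```python
import random
import statistics

random.seed(0)
SIGNAL = 1000            # hypothetical photons per pixel on a flat patch
SIGMA = SIGNAL ** 0.5    # shot-noise sigma = sqrt(N)
SAMPLES = 10_000

def noisy_pixel():
    # Gaussian stand-in for Poisson shot noise at this signal level
    return random.gauss(SIGNAL, SIGMA)

singles = [noisy_pixel() for _ in range(SAMPLES)]
averages = [sum(noisy_pixel() for _ in range(4)) / 4 for _ in range(SAMPLES)]

ratio = statistics.stdev(singles) / statistics.stdev(averages)
print(ratio)  # close to 2: averaging four samples halves the noise
```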
Think of the Very Large Array of multiple radio telescopes that are now state-of-the-art. Separate sensors covering a wider radius, so able—with some not-actually-horrible computations—to get the equivalent of a big sensor/lens w/o the depth that others here aptly note.
Combining the separate images is a bit trickier than for radio telescopes due to the much more challenging depth-of-field issues. Note that clever as Lytro was, its images never had the fine resolution that people increasingly expect from even snapshot cameras.
makofoto: The iPhone6 optics are already pretty flat ... and impressive
@makofoto wrote, “iPhone6 optics are already pretty flat… and impressive”
That they are… but the lens DOES protrude from the back of the case, in the most un-Apple-like feature I've seen.
A shallower lens would allow Apple, or ANY smartphone maker, to have a good, not-just-fisheye lens AND a decently large sensor for better quality images. (Getting a “normal” focal length now would require an even thicker lens or a much smaller sensor and higher resultant noise.)
This one is worth watching. There are one or two things danced around (“…greatly improves its efficiency” suggests “not as much light goes where it should”) but that could come.
Fredy Ross: I guess I will stick with galaxy note. Very convenient to have a stylus.
Yes, very important in shooting pictures. Highly relevant to DPR readers.
I don't understand the claim that the “Dual Disk Redundancy” feature saved your bacon.
Any (common) RAID setup continues working if one disk dies or is removed. I recently migrated from one NAS to another by buying 4 higher-capacity disks, replacing the old ones one at a time. True to claims, the new disks held the identical info, and the old ones went into a second NAS box (identical brand), giving me two copies of my files as of that time.
I use that NAS as my Time Machine target. A week ago, my laptop's boot disk (a 240GB SSD) failed suddenly while I was traveling. As it was only 2 years old, the vendor replaced it gratis (even cross-shipping), and a couple hours after I got home, the Apple software had re-created the failed laptop disk. (The work I did while traveling stayed on the spinner, which was easily NOT restored, as Apple made it easy to separately restore partitions.)
RAID seems a valuable feature in smart backup/storage. You get 3/4 of the nominal space—a better value.
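The space arithmetic, for anyone counting (parity-style RAID assumed; real NAS filesystem overhead shaves a bit more):

```python
def usable_fraction(disks, parity_disks):
    # Parity-style RAID devotes roughly 'parity_disks' worth of capacity to redundancy
    return (disks - parity_disks) / disks

print(usable_fraction(4, 1))  # single redundancy (RAID 5-style): 0.75
print(usable_fraction(4, 2))  # dual-disk redundancy (RAID 6-style): 0.5
```

Which is why the 3/4 figure implies single-disk, not dual-disk, redundancy on a four-bay box.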
NK777: It is joke?
You didn't read the patent, did you?
A hell of a lot more work went into this mechanism than your cheap shot joke.
nawknai: I truly cannot tell if this is a joke.
It sounds like an April Fools thing, but then I see a link to the Patent Office. Unless the US Patent Office, which I'm sure is a barrel of laughs, has put up an April Fools prank, this may be the dumbest patent I've seen in a long time.
You may think the magnet idea is better, but that's for buyers, not the patent office, to decide. This approach is interesting… novel… in the best tradition of mechanical patents for innovation. And if nobody wants the extra precision or security of a similar mount to what real cameras insist on, why then it has cost nobody a penny.
LensBeginner: Haha, was about to fall for it, then looked at the date...
You know what would be REAL nice? If some of you people who claim this idea is obvious or not innovative could describe how it actually works.
Go ahead and show us that you have the mechanical engineering chops to understand it, rather than act as Boffo the Clown.
Dr_Jon: Looks a lot like the Moment Kickstarter project (which is lenses and a bayonet mount adapter): https://www.kickstarter.com/projects/584288471/moment-amazing-lenses-for-mobile-photography
Let's be clear: the patentable part of the invention is for a bayonet mechanism that locks, but ALSO pops loose in response to sufficient pressure.
Right now, the Moment page shows a nice bayonet type design, but NOT one that even locks in place (still iterating) OR pops loose in a controlled manner.
Yes, the *idea* of a detachable lens is the same. Patents are never granted for ideas, only the specific implementations.
Infared: Rut Roah!!!!!!!!!....but I love "real" cameras soooooo much...this could be the beginning of the end!!!!!!!!!!!!!!!!!!!!!!!!!!
Yes, because a “normal” lens for APS-C is something like 25mm. That means the focal point (pinhole) has to be an inch in front of the sensor, and you need more glass in front of that, likely another inch. You now have a cellphone that's almost as deep as it is wide.
Justtimthen: The lens and sensor should be a two in one module that just clips on to the top side.
This clever bayonet isn't necessary for the QX type attachments, that don't actually need ANY physical connection.
As for the sensor, the iPhone's is pretty good… for its tiny size. Bigger, and you need more depth; a non-starter, I'd guess.
Houseqatz: xlnt, this will be a step in the right direction, provided they deliver a decent lens selection at launch, and at least a 2/3" sensor.
I guess there's a possibility, if the camera were only to use the center part of that (relatively) huge sensor.
Smartphone cameras are already wide angle due to their (lack of) thickness. The focal length of the iPhone is 4.1mm, 30mm equivalent. A bigger sensor would deliver almost a fisheye image unless the lens protruded WAY out of the body.
As this mechanism might allow, but only with the understanding that your big sensor could only be utilized by the small fraction of phone buyers who used the add-on lens. Unlikely, I'd guess.
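The crop-factor arithmetic behind the fisheye worry, sketched with an assumed ~5.9 mm sensor diagonal (a guess chosen to reproduce the 30mm-equivalent figure above):

```python
FF_DIAGONAL_MM = 43.3  # full-frame diagonal

def equivalent_focal(focal_mm, sensor_diagonal_mm):
    # "35mm equivalent" scales by the ratio of sensor diagonals (the crop factor)
    return focal_mm * FF_DIAGONAL_MM / sensor_diagonal_mm

# An assumed ~5.9 mm diagonal makes the 4.1 mm lens about 30mm-equivalent:
print(round(equivalent_focal(4.1, 5.9)))   # ≈ 30
# Double the diagonal behind the SAME lens and you get ~15mm-equivalent:
print(round(equivalent_focal(4.1, 11.8)))  # ≈ 15
```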
Peiasdf: "....It's also designed to release the lens on a hard impact without a twist in order to minimize potential damage to the lens and device if it was dropped...."
Name another lens mount that has this capability. Samsung fanboys... I don't know if they can read or not.
“Literally all of them.”
I had the misfortune of a bicycling accident a while back, and landed atop my Canon DSLR. Probably cracked a rib. (It hurt quite a bit but I was far from a hospital & knew the standard treatment is to ride it out.)
I had my prime lens attached, fortunately capped and even a lens hood reversed, covering much of the body. So despite the accident, *it* survived my weight crashing down atop it.
Looking it over, I can't imagine how you'd expect the mounting mechanism to give way gracefully… the only way would've been for the lens to have been destroyed. That is ABSOLUTELY NOT how this patent reads.
I know you want to knock Apple's inventiveness, but this thing was granted by an examiner who actually knows something about mechanical engineering and saw (a) a unique mechanism, and (b) a description that matched the design.
Raul: Have you any idea how many patents' subject matter never sees the light of day? What a waste of time on your side!
Count me among the doubters. But with a tweak or two—maybe recess the mount so the back is flat without the add-on lens—there's a chance it'd be useful.
Most DP readers are comfortable carrying around spare lenses; a decent telephoto might be well under an inch long—hardly any issue at all. Note that almost all smartphone cameras have a VERY wide angle view—the iPhone's is about 30mm equivalent, longer than some but quite short for most of us—so normal, portrait and honest-to-God telephotos can't fit *in* the tiny phone without shrinking the sensor, and the quality, dramatically.
The breakaway feature seems paramount: no way I'd want a bit of mishandling to mangle the back of my phone.
So, no, not a ridiculous concept at all. Whether it sees light of day is nonetheless a question.
Maybe on a pixel-by-pixel basis, that theater shot is a bit grainy. But how many thousand pixels wide is it? If this were downsampled to any other camera's capability, it'd be incredible. I'm quite impressed by a shot in theater-level light.
My old Canon G series had a stitching feature that allowed multiple photos in a rectangular grid, resulting (on my laptop) in a huge increase in pixels for a shot (assuming the stitching was perfect, which it often wasn't). If this camera were to guide the shooter in scanning multiple rows of panos, and especially if one were to get a bit of a telephoto lens, this could make for some awesome hi-res shots.
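Rough arithmetic for what a guided grid stitch could buy (the frame size, grid, and overlap below are made-up examples, and perfect stitching is assumed):

```python
def stitched_size(cols, rows, frame_w, frame_h, overlap=0.2):
    # Each added frame contributes (1 - overlap) of its width/height.
    w = frame_w * (1 + (cols - 1) * (1 - overlap))
    h = frame_h * (1 + (rows - 1) * (1 - overlap))
    return round(w), round(h)

# Hypothetical 3x2 grid of 4000x3000 frames with 20% overlap:
w, h = stitched_size(3, 2, 4000, 3000)
print(w, h, round(w * h / 1e6))  # 10400 5400 56 -> ~56 MP from 12 MP frames
```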
WaltFrench: I was a bit surprised by the evaluation of these photos. First, the 100% view that included the trees, looked quite artificial on the One, although it clearly had more detail. The trees looked like an undistinguished smush of green in the Sammy view, whereas the iPhone & Nokia seemed much more natural. In other words, totally different take on the shots.
Repositioning the HTC and Nokia images caused me to wonder why the Nokia & HTC cameras so suddenly took on different characteristics. Disconcerting, and needless.
Finally, let me vote that the very different pixel counts and focal lengths made it much more difficult to interpret the differences due to the 100% and same position rules. In real life, a photog wouldn't post or print a much bigger photo just because of more pixels, and would move closer or farther to get the desired view (when possible). I'd much prefer to see the same part of the subject to get a sense of how a closeup looks, even if it meant not having 1:1 images here.
Odd... the specific shot I was commenting on, a park of trees in front of a gray bldg with red trim, seems to have disappeared; it had been the second group of photos on the first page.
Anyhow, perhaps you'd comment on the closeup of your model's eye (“Portrait; Sunlight”). I thought the HTC closeup made it look like she had serious skin troubles; it looked like the Samsung sharpened her pores to make them look like shaving stubble.
I can't quite tell how visible that'd be in, say, a 5×7 or 4×6 print, but with just a bit of cropping, pretty obviously, I'd guess.
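The print-resolution arithmetic, with a made-up 4000-pixel-wide frame (anything much above ~300 ppi is generally beyond what the eye resolves at arm's length):

```python
def print_ppi(pixels_on_edge, print_inches_on_edge):
    # Pixels per inch when the image edge is scaled to the print edge
    return pixels_on_edge / print_inches_on_edge

print(round(print_ppi(4000, 7)))  # whole frame on a 5x7: ≈ 571 ppi
print(round(print_ppi(2000, 7)))  # after a 50% crop: ≈ 286 ppi, flaws far more visible
```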
Do these uglifications really show in the images?
rocklobster: As another poster said, the contrast is out of whack because the wide-dynamic-range picture hasn't had a proper tone-curve adjustment. And anyway, does revealing the detail in the shadows (or the highlights) really make it a better picture? Is it really what the eye sees? The old HDR arguments may resurface.
Good simple solution to the 'problem' though. Wish Fujifilm had thought of this.
I likewise think the right-hand image looks awful. But you can never have too much data; if the technique manages to supply more bits of data — and I wonder how well it'll deal with nonlinearity around its reset point and/or moving subjects — then software will fix it.