Back in December, a rumor went around that Samsung’s Galaxy S11+ smartphone would use a new 108-megapixel sensor with ‘Nonacell’ technology. Little is still known about the sensor, but the rumor carries a bit more weight now that Samsung has officially trademarked the ‘Nonacell’ name.
As first discovered by Dutch technology site LetsGoDigital, Samsung has filed a trademark in the United Kingdom for the ‘Nonacell’ name. As addressed in our initial coverage, ‘nona’ is Latin for ‘nine.’ As such, it’s expected that Samsung’s ‘Nonacell’ technology will use a three-by-three color filter array to merge nine pixels into a single, larger one. If combined with the much-anticipated 108MP ISOCELL Bright HMX sensor, the ‘Nonacell’ technology would yield a 12MP still.
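For a sense of what that merge involves, here is a minimal sketch of 3x3 binning. It ignores the color filter array, black levels and clipping, and the 12032 x 9024 full-sensor layout mentioned in the comments is an assumption, so a small synthetic frame is used for illustration.

```python
import numpy as np

# Minimal sketch of 3x3 ("nonacell"-style) binning; the CFA, black levels and
# clipping are ignored, and the 12032 x 9024 layout is only an assumption,
# so a small synthetic frame stands in for the real raw data.
def bin_3x3(raw):
    h, w = raw.shape
    h9, w9 = h - h % 3, w - w % 3                         # trim to multiples of 3
    blocks = raw[:h9, :w9].reshape(h9 // 3, 3, w9 // 3, 3)
    return blocks.mean(axis=(1, 3))                       # one output pixel per 3x3 group

raw = np.random.randint(0, 1024, size=(90, 120)).astype(np.float32)
print(bin_3x3(raw).shape)   # (30, 40); at 12032 x 9024 this would be ~4010 x 3008, i.e. ~12MP
```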
Plenty of details about the rumored S11+ smartphone remain unknown, but we shouldn’t have to wait much longer, with the series expected to launch sometime this month.
It was indeed fabulous, with the best camera phone photos by far. It could shoot RAW and 4K... back in 2015. It was plagued by a dim screen and no upgrade path for Android OS updates, and I finally had to stop using it in late 2018. Would love to see an updated version.
If you want to call it anything other than that, then for consistency, you would also divide most camera sensors by 4 due to their 4-pixel color filter arrays. So a 48MP camera becomes a 12MP camera, while 24MP cameras become 6MP.
I don't see the point of having 108MP to process one way or the other. In any case, the lens resolution will act as a low-pass filter.
At least if they could create a 12MP image of good quality in daylight, it would be a tremendous improvement over what smartphones currently deliver.
It's not just delivering 12-megapixel output; that's just the best you can get from it. They're doing on-sensor cropping for digital "zoom" without upscaling, so any time it's not using the full sensor area, it's doing some kind of rematrixing with the 3x3 cells, which can't be that pretty. They're also supporting UltraHD 8K video, which uses about a 1.5x sensor crop and similar rematrixed deBayering.
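For rough numbers, and taking a 12032-pixel-wide active area as an assumption, the crop needed for 8K readout does indeed come out at about 1.5x:

```python
# Rough crop-factor estimate; the 12032-pixel sensor width is an assumption
sensor_width, uhd8k_width = 12032, 7680   # 8K UHD frames are 7680 pixels wide
print(sensor_width / uhd8k_width)         # ~1.57, i.e. roughly a 1.5x crop for 8K video
```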
I'm all for the tricks; my modified GCam version does 36-exposure merges that are a perfect match with LR Android. I think it's superior even to the original on Pixels.
@LensBeginner - that is actually the thing. With the micro lenses, and especially since they moved all the circuitry beneath the photo pixels (with the stacked tech), the gap surface has shrunk to an insignificant value. Ten years ago these gaps would have been a significant percentage, so you would basically lose total sensor surface, thus increasing noise per total image. With this gap virtually eliminated, there is no disadvantage. And you can apply some tricks by dividing the pixel mask further, or even applying different amplifications to subpixels to increase DR.
Having different subpixel amplifications is, AFAIK, something that's always been touted as feasible, but we've never had confirmation that it's actually used in a real-world phone.
Don't get me wrong, I like the results from my 48 MP sensor + GCam, but it's mostly due to the GCam HDR+ mode which, as I understand it, is diachronic and not synchronic.
Fair point. However, I was more into the "more pixels mean more noise" part. And since back-illuminated and stacked tech, we've pretty much dismissed that, to the point where the only relevant factor is the sensor size and not the number of pixels. DPR had an article some years ago comparing an A7S with an A7R (not sure which gen) and showed that, after downsampling, the much higher-res sensor gave basically the same noise performance per image at every ISO, and of course, especially at low ISOs, the higher-res sensor had a great advantage in resolving detail.
Samsung said that: 108MP that outputs 12MP photos, 3x3 instead of the 2x2 used by quad Bayer.
Sony created the quad Bayer tech. For example, when you group 4 pixels (same colour) together, you could have two using +0.7 EV while the other two use -0.7 EV.
So I suppose Samsung wants to do something similar with this sensor: for example, 3 pixels using +1.0 EV, 3 pixels using 0 EV and 3 pixels using -1.0 EV within each group of 9 pixels, for higher DR.
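Purely as an illustration of that speculation, merging one 3x3 same-color group split +1 / 0 / -1 EV might look like this; the EV assignment and the sample values are hypothetical.

```python
import numpy as np

# Hypothetical merge of one 3x3 same-color group exposed at +1 / 0 / -1 EV.
# The EV split and the sample values below are made up purely for illustration.
def merge_group(values, ev, full_scale=1023):
    vals = np.asarray(values, dtype=np.float32)
    normalized = vals / 2.0 ** np.asarray(ev, dtype=np.float32)  # common exposure scale
    weights = np.where(vals < 0.98 * full_scale, 1.0, 1e-6)      # ignore clipped sub-pixels
    return np.average(normalized, weights=weights)

group = [1023, 1023, 1023, 600, 610, 605, 300, 310, 295]   # the +1 EV sub-pixels clipped
ev    = [+1, +1, +1, 0, 0, 0, -1, -1, -1]
print(merge_group(group, ev))   # ~604, recovered despite the blown highlights at +1 EV
```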
Ok, so all you know is what everybody else knows. Actually, I was looking for information rather than speculation about how Samsung groups their pixels.
That they make 9 adjacent pixels the same color is just ONE option, you know ...
The number of pixels per microlens varies from 10 to 36. The Lytro Illum used ~14 pixels per microlens.
That would mean that with this 108MP sensor we would already be at about 7.8 megapixels of final resolution, and that with perfect depth information: something multicam / dual-pixel / TOF and other technologies have struggled with for years (although they have gotten a lot better lately).
I think that would be an incredible leap forward for smartphone photography. Maybe even put a light field camera beside a "normal" camera on the back of a phone and enjoy the benefits of both.
[And although a smartphone is much more likely, a slightly larger 1" sensor with the same pixel density would offer 191MP, or 13.6MP at 14 pixels per microlens. That would make an incredible 14MP compact camera with real DOF adjustment.]
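Spelling that arithmetic out (every figure below is the commenter's assumption):

```python
# The commenter's arithmetic, spelled out; every figure here is their assumption
pixels_per_microlens = 14             # roughly what the Lytro Illum used
print(108 / pixels_per_microlens)     # ~7.7 MP of per-microlens resolution
print(191 / pixels_per_microlens)     # ~13.6 MP for the hypothetical 1" sensor
```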
Can you share your source about the geometry of micro-lenses?
Note also that even if you are right, it is still not Lytro. Lytro was pure snake oil: it creates a grid of “hyperfocally focused” subpixels. The result is a blurry (since hyperfocal) picture (equivalent to about 0.5MPix) which can be blurred yet more, depending on distance, at will, with a very simple RELIABLE algorithm.
With the design you propose, it is essentially Lytro behind autofocus! You get the possibility of a (relatively) sharp picture, which may be either • blurred at will (but not much) by a simple RELIABLE algo, or • depth-calculated (by a complicated algo), then blurred (by a complicated algo).
Still, if your subpixel can only “see” a small part of the entry pupil, the diffraction is going to be magnified correspondingly. :-(
@ilza Yes, the refocus part was snake oil. But that's not the part that is interesting. Having a proper depth map is what is interesting, to reliably create "fake bokeh". For a proper light field capture that is able to refocus the image, you would need far more subpixels per microlens. The suggested 10-36 pixels per microlens are only enough for a depth map.
Frankly speaking, your reference looks like a collection of platitudes. (Maybe it would seem different if I saw it 15 years ago…)
They are missing the main component of snake oil: that the diffraction is determined NOT by the TOTAL size of the lens, but only by the part CONTRIBUTING to a given photosite…
@ilza You still don't understand it. If you want a "true" light field, yes you are correct. But then you would use far far far fewer microlenses, and would position them far away from the sensor.
What I suggested, however, is a "dual pixel on steroids" variant, similar to what is described in the second half of the link I provided. You only get good resolution at the "real focal plane", meaning you still need to focus your shots before capture. But you get a very accurate depth map in return. (You can also, in theory, refocus the image as described in the second third. You will, however, lose a lot of resolution that way.)
A depth map with a sharp picture: very much possible and a major advancement. And this is what counts for me. In the focus plane, you get a resolution that matches 100% what a sensor with the same number of MP as you have microlenses would produce, except that you can use the detailed depth map for proper "fake bokeh". And since we now have so many pixels, we can get the number of microlenses close to current acceptable phone MP numbers, so we are not really losing anything.
I think that what you call “a depth map” I call “a complicated and unreliable algorithm”. ;-)
Essentially, to me it seems that what you want is already possible with DPAF; and don't you agree that more than 2 pixels under a microlens would have only a marginal advantage?
AFAIU, a vertical-only PDAF can often beat a cross-type SLR autofocus; for me, that is a datapoint suggesting that even TPAF (tetra-sub-pixel AF) would not give you much more flexibility than DPAF… And you want many more vantage points (inside the entry pupil); I wonder how they would help?!
Samsung should step up their game on sensor tech; they are the only one with a chance to challenge or even surpass Sony, thanks to their resources.
C'mon, Samsung's current "micro" LED (it's not really micro, but that's irrelevant for practical purposes) has far superior pixel density and brightness and is much more efficient than Sony's; just look at the power consumption and the thickness of the modules, which mostly consist of the heat sink.
Well, both have had quad Bayer forever. So it's not unlikely that Sony has a nonacell sensor in the pipeline. They usually have pretty matching product portfolios.
ISOCELL, however, refers to a specific light barrier in the color filter array, so I guess you are confusing technologies here a bit.
An APS-C size Nonacell 108 would be nice. A 12MP merged size would be more than enough. Actually all phone tricks should be transferred to "normal size" photography. Sony has the knowledge to go there.
I was thinking of a 2x2 layout for the CFA with the binning done 3x3 (it needs a complex demosaicing method but should provide more color resolution); then there remains the option of simply doing 2x2 binning, which means a 27MP Bayer output.
Interesting concept. With nine input pixels for every output pixel, it would be pretty cool if each row/column of pixels was tuned to a different level of gain for a native HDR capture. For example, row 0: -2 EV, row 1: 0 EV, row 2: +2 EV.
I imagine it wouldn't even be necessary to run a demosaicing algorithm if the CFA is configured as a striped array, since each color would be sampled for every output pixel; you could rely instead on chroma/luma values averaged over each 3x3 pixel group. But oversampling with a conventional Bayer CFA may yield more accurate/sharper results. They probably tested both methods when developing the sensor.
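One way to picture the no-demosaicing case: with a hypothetical striped CFA where each row of a 3x3 group carries one color, every output pixel already gets a full RGB sample. This is only an illustration of the idea, not the actual sensor layout.

```python
import numpy as np

# One hypothetical reading of the "no demosaicing" idea: a striped CFA where each row
# of a 3x3 group carries one color, so every output pixel gets a full RGB sample.
# Illustration of the concept only, not a description of the actual sensor.
def group_to_rgb(group_3x3):
    return group_3x3.mean(axis=1)          # average along each color stripe -> [R, G, B]

group = np.array([[210.0, 200.0, 205.0],   # red-filtered row
                  [400.0, 410.0, 395.0],   # green-filtered row
                  [120.0, 118.0, 122.0]])  # blue-filtered row
print(group_to_rgb(group))                 # ≈ [205, 401.7, 120]
```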
Sony and Samsung already do that on their quad Bayer cells. So it is highly likely they will do it here again.
They can switch between low and high gain (modern BSI sensors only have two gain stages) for individual sub-pixels, or they can even expose different sub-pixels for different exposure times for "real" single-frame HDR.
The primary reason for variable gain is to deal with the shortcomings of 12- and 14-bit sensors. If the full well exceeds what the bit depth can count, then reading out all electrons results in arithmetic rounding/truncation (e.g. 1-4e- = 1 ADU). This becomes insurmountable read noise when reading out the full well (low ISO). Reducing read noise requires reading out only a portion of the full well (high ISO).
There are now a few CMOS sensors with 16-bit A/D, and as a result they are really "ISO invariant". Multi-gain pixel clusters are a hack that will become irrelevant as CMOS bit depth improves.
Note also that smaller pixels have shallower well depth, and when well depth drops below the 12/14-bit capacity, variable gain becomes unnecessary. So Bayer clusters of very small pixels may be ISO invariant. The resulting "color pixels" from the clusters will have good pseudo well depth and somewhat low noise (noise over all summed pixels).
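Some back-of-envelope numbers behind that argument (the 50ke- full well is just an illustrative value):

```python
# Illustrative only: electrons per ADU when digitizing the full well at a given bit depth.
# When this exceeds the read noise, quantization adds meaningful error at low ISO.
full_well = 50_000                          # electrons; illustrative value
for bits in (12, 14, 16):
    print(f"{bits}-bit: {full_well / 2**bits:.1f} e-/ADU")
# 12-bit: 12.2 e-/ADU   14-bit: 3.1 e-/ADU   16-bit: 0.8 e-/ADU
```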
@AstroStan: you (irrecoverably) mix up read noise and quantization errors. (They ARE related, but not the way you think they are.)
The truth is that with the full well and read noise of currently available MF and FF sensors (and even more for APS-C), the benefits of more bits per pixel saturate somewhere at 13.5 bits. (This means that 14-bit readout is better than 13-bit, but only by an almost unmeasurable amount.)
Simulations (and the actual measurements by Jim Kasson, IIRC) show that there is absolutely no reason to have more than 14 bit readout.
DPR, home of know-it-alls vying to one-up each other! <g>
“quantization error” is a form of noise. Information theory defines noise as uncertainty. Study basic information theory and photonics.
“more bits per pixel” is an incomplete notion without considering pixel capacity and other noises. If digitization is incapable of counting those electrons, then the result is noisier than comprehensive digitization.
“absolutely no reason to have more than 14 bit” – “absolutely”? You really felt the need to assert that? This isn’t religion.
Thank you a lot for your illiterate opinions! You are welcome to continue — but I recommend that you first learn enough to write your first simulation of the physics of a photon sensor.
Note that I never said that 16-bit output is NOT POSSIBLE. A lot of (maybe even practically every!) Sony sensor is capable of this — however, this readout mode is not used by cameras. Because it is USELESS.
Beyond a certain bit count, more bits would slow down the readout without any benefits. The specific number depends on the full well and the readout noise. See my previous post for the answer for contemporary sensors.
A popular opinion 15-20 years ago was that there was absolutely no reason to have more than 12 megapixels and anything more was a waste.
However, the mega-pixel wars continue and as pixels shrink, full well decreases correspondingly, needing fewer bits to accurately measure. So it may not be long before 16 bit is actually excessive for some cameras. That may have already happened for phone cams.
Why so nasty? I guess if you are overwhelmed by the issues and unyielding in naive opinions then personal insults will somehow salvage your reputation?
@AstroStan In any case, readout times are largely irrelevant for astrophotography, whereas bit depth is extremely important in discriminating between levels of luminosity and in avoiding star saturation. I would love to be able to afford one of those ZWO 16-bit cameras.
Right. I come from the perspective of astro-imaging, where these are often nontrivial issues, though they may not be significant for terrestrial photography.
12 or 14 bit works very well in conjunction with variable gain for most terrestrial (and many astro) purposes. But looking forward, it seems likely there will be a place for 16-bit DSLR/ILC cameras because there are advantages. Current disadvantages (e.g. download time) can be expected to decrease as processors and sensors continually evolve.
@timothya: > “whereas bit depth is extremely important”
Please educate me why it is so EXTREMELY IMPORTANT. Suppose you have a 16-bit readout, and it says that a particular photosite generated −1±3 electrons. How are you going to use this information?
Why would the corresponding info from a 14-bit readout not satisfy you?
"Extremely" may be a bit of an overstatement (though you are similarly inclined, as witnessed by ALL CAPS). But keep reading for the answer:
"in discriminating between levels of luminosity and in avoiding star saturation".
In astro-imaging it is often important to detect very dim objects without blowing out brighter objects. In many situations, read noise determines limiting magnitude, so it is beneficial to have the lowest read noise possible. And it is desirable to avoid saturating brighter objects. An optimal method to achieve this is to set gain to access the entire full well without hobbling readout with increased noise. But if the full well is significantly more than 16ke-, then that is mathematically impossible for a 14-bit A/D.
CCDs use 16 bit (and higher) because they typically have larger well depth. If the well depth is only 2^12 x noise level, which is fairly typical for CMOS, there is no point in having a >12 bit ADC.
A very popular 12-bit CMOS astro-cam is the ASI1600, which has base noise = 1.2e- with a full well of 20ke-. So 12 bits is inadequate to handle the full range without increased noise. It is necessary to set a particular gain to accommodate the target, conditions and intent (similar to DSLR ISO settings).
The new 16bit ASI6200 camera (which will be popular) has base noise = 1.2e- with full well = 50ke-...
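A quick check of those figures, under the (debated) assumption that you want roughly one ADU per read-noise electron when spanning the full well:

```python
import math

# Bits needed to span the full well at ~1 ADU per read-noise electron, using the
# figures quoted above; whether that criterion matters is exactly the debate here.
for name, full_well, noise in [("ASI1600", 20_000, 1.2), ("ASI6200", 50_000, 1.2)]:
    print(name, math.ceil(math.log2(full_well / noise)))   # 15 and 16 bits
```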
@kkoba: your calculation is wrong by up to 2 bits. • All that matters is to have ADEQUATE dithering. • If the noise of the input is not enough to provide this dithering, then for quality quantization one needs to add noise to the input. • Adding noise decreases the S/N ratio: we do not want this!
As simulations show, Gaussian dithering with σ of 0.4 of the quantization step is already more or less perfect; a smaller σ is also possible without significant loss of information. (Real sensors’ noise is non-Gaussian; the high percentage of outliers, AFAIK, improves dithering.)
Another issue is the “admissible S/N ratio”. With typical workflows, signal below 2σ is going to be ignored anyway. For example, with read noise = 2e-, the “interesting” parts of the image start at exposures of about 6.5e-, with noise of 3.2e-. So this would successfully dither quantization steps up to 8e-.
CONCLUSION: with noise of 2e-, a 14-bit readout supports a full well of up to 128ke-.
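The numbers behind that conclusion, using the values from the comment above:

```python
# The dithering argument in numbers: with sigma ~0.4 of a quantization step considered
# sufficient dithering, and the "interesting" signal floor carrying ~3.2 e- of total
# noise, a 14-bit ADC can span roughly the claimed full well.
step = 3.2 / 0.4            # largest quantization step still adequately dithered (e-)
print(step * 2**14)         # 131072 e-, i.e. roughly the claimed 128 ke- full well
```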
I don't want to disturb your highly technical and theoretical discussion. But why don't you just look at a real-life example?
The Fuji GFX100 has a 16-bit readout with 3.76µm pixels. It brings nearly zero benefit over the 14-bit readout, and, more importantly for this discussion, the 16-bit readout is noisier than the 14-bit readout with the second gain stage applied.
This means there is still a place for gain stages in modern Sony/Samsung sensors, which both use the Aptina dual-stage model. Within a given stage it makes no difference, but switching between the two does. That's also why quad Bayer sensors currently support low/high gain set differently for subpixels.
All of that is, however, really irrelevant to the dynamic range gains you get with two different exposure times for different subpixels, which is also the main focus of quad/nonacell HDR tech.
It is a crazy world. Start with a 108MP sensor. Combine every 9 pixels to get down to 12MP. Shoot a picture. Crop it to 3MP for a 2x zoom. Use AI to increase resolution back to 12MP.
..then post it on IG at 1080x1080 (2MP total) and watch it on a 5.5" screen!
Basically, your everyday 2008 iPhone 3G with its 2MP camera can deliver all the resolution needs of 99% of the photos taken and viewed every day. Awesome.
Zoom zoom zoom wrote: "..then post it on IG at 1080x1080 (2MP total) and watch it on a 5.5" screen!" And don't even dare ask her/him to rotate to landscape.
Yield (which enormously affects cost) makes scaling up very difficult. For example, Sony's stacked sensor tech is in lots of moderately priced phones, but only in their very top cameras. The difficulty of production goes up exponentially with sensor size.
Additionally, large sensors are significantly slower to read out because of signal travel distances.