
Rambus unveils 'Binary Pixel' sensor tech for expanded dynamic range

By dpreview staff on Feb 27, 2013 at 12:51 GMT

US technology company Rambus has unveiled 'Binary Pixel' sensor technology, promising greatly expanded dynamic range for the small sensors used in devices such as smartphones. Current image sensors are unable to record light above a specific saturation point, which results in clipped highlights. Binary Pixel technology gets around this by recording when a pixel has received a certain amount of light, then resetting it and in effect restarting the exposure. The result is significantly expanded dynamic range from a single-shot exposure. The company has demonstrated the technology using a low resolution (128 x 128 pixel) sensor, and says it can easily be incorporated into CMOS sensors using current manufacturing methods.
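The sketch below illustrates the reset-and-count principle with made-up numbers (Rambus has not published thresholds, slice counts or full-well figures; everything here is an assumption for illustration only):

    import numpy as np

    def expose_with_resets(photon_flux, exposure_s, full_well=1000, n_slices=100, seed=0):
        """One pixel simulated two ways: a conventional pixel that clips at
        full_well, and a 'binary' pixel that is reset each time it reaches
        full_well, with the resets counted so the signal can be reconstructed."""
        rng = np.random.default_rng(seed)
        dt = exposure_s / n_slices
        clipped = 0.0      # conventional pixel: stops collecting at full_well
        resets = 0         # how many times the binary pixel hit the threshold
        residual = 0.0     # charge in the binary pixel since its last reset
        for _ in range(n_slices):
            arrivals = rng.poisson(photon_flux * dt)
            clipped = min(clipped + arrivals, full_well)
            residual += arrivals
            while residual >= full_well:       # threshold reached: reset and count
                residual -= full_well
                resets += 1
        return clipped, resets * full_well + residual

    # A highlight bright enough to blow out a conventional pixel:
    print(expose_with_resets(photon_flux=50000, exposure_s=0.1))
    # -> roughly (1000.0, 5000.0): the conventional reading is clipped at full
    #    well, while counting resets recovers the ~5000 e- that actually arrived.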

Aside from the 'temporal oversampling' described above, Binary Pixel technology employs a couple of further innovations. It uses Binary Operation, sensing photons using discrete thresholds, which the company says works similarly to the human eye and gives better sensitivity across the gamut from dark to bright. It also employs Spatial Oversampling, meaning the individual pixels are sub-divided to capture more data and improve dynamic range. The technology isn't restricted to phone sensors, and in principle should work equally well for all sensor sizes.
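A similarly hedged sketch of the binary operation and spatial oversampling ideas, treating each sub-pixel in each time slice as a single yes/no photon detector (the sample counts and threshold are guesses, not Rambus specifications):

    import numpy as np

    def binary_oversampled_read(mean_electrons, sub_pixels=16, time_slices=32, seed=1):
        """Each of sub_pixels x time_slices tiny exposures reports only 0 or 1
        ('did at least one photoelectron arrive?'). The pixel value is the
        count of 1s across all of those binary samples."""
        rng = np.random.default_rng(seed)
        samples = sub_pixels * time_slices
        hits = rng.poisson(mean_electrons / samples, size=samples)  # arrivals per sample
        return int((hits >= 1).sum())   # binary threshold, then sum the decisions

    for e in (5, 50, 500, 5000, 50000):
        print(e, binary_oversampled_read(e))

Summing hundreds of tiny binary decisions gives a response that climbs almost linearly in dim light and rolls off gradually toward the 512-sample ceiling rather than clipping abruptly, which is the eye-like behaviour the company describes.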

Rambus lists the key advantages of Binary Pixel sensors as follows:

Ultra-High Dynamic Range
• Optimized at the pixel level for DSLR-quality dynamic range in mobile and consumer cameras 

Single-Shot HDR Photos & Videos
• Operates in a single exposure period to capture HDR images real-time with no post processing

Improved Low-Light Sensitivity
• Spatial and temporal oversampling reduces noise and graininess

Works with Current Mobile Platform
• Designed to integrate with current SoCs, be manufactured using current CMOS technology, and fit in a comparable form-factor, cost and power envelope

Press release:

Rambus Unveils Binary Pixel Technology For Dramatically Improved Image Quality in Mobile Devices

Image comparison illustrating the theoretical benefits of the Binary Pixel Imager

Breakthrough Technology Provides Single-Shot High Dynamic Range and Improved Low-Light Sensitivity in a Single Exposure

SUNNYVALE, CALIFORNIA AND BARCELONA, SPAIN – February 25, 2013 – Rambus Inc. (NASDAQ: RMBS), the innovative technology solutions company that brings invention to market, today unveiled breakthrough binary pixel technology that dramatically improves the quality of photos taken from mobile devices. The Rambus Binary Pixel technology includes image sensor and image processing architectures with single-shot high dynamic range (HDR) and improved low-light sensitivity for better videos and photos in any lighting condition.

“Today’s compact mainstream sensors are only able to capture a fraction of what the human eye can see,” said Dr. Martin Scott, chief technology officer at Rambus. “Our breakthrough binary pixel technology enables a tremendous performance improvement for compact imagers capable of ultra high-quality photos and videos from mobile devices.”

As improvements are made in resolution and responsiveness, more and more consumers are using the camera functionality on their smart phone as the primary method for taking photos and capturing memories. However, high contrast scenes typical in daily life, such as bright landscapes, sunset portraits, and scenes with both sunlight and shadow, are difficult to capture with today’s compact mobile sensors - the range of bright and dark details in these scenes simply exceeds the limited dynamic range of mainstream CMOS imagers.

This binary pixel technology is optimized at the pixel level to sense light similar to the human eye while maintaining comparable form factor, cost and power of today’s mobile and consumer imagers. The results are professional-quality images and videos from mobile devices that capture the full gamut of details in dark and bright intensities.

Benefits of binary pixel technology:

  • Improved image quality optimized at the pixel level
  • Single-shot HDR photo and video capture operates at high-speed frame-rates
  • Improved signal-to-noise performance in low-light conditions
  • Silicon-proven technology for mobile form factors
  • Easily integratable into existing SoC architectures
  • Compatible with current CMOS image sensor process technology

The Rambus binary pixel has been demonstrated in a proof-of-concept test-chip and the technology is currently available for integration into future mobile and consumer image sensors. For additional information visit www.rambus.com/binarypixel

Comments

Total comments: 193
Steen Bay
By Steen Bay (Feb 27, 2013)

Increasing the saturation capacity by resetting the pixels is in practice the same as lowering the sensor's base ISO. A 1/2.3" sensor could have the same IQ at ISO 3 as a FF camera has at ISO 100, but the downside is that shooting at ISO 3 will most often require a rather long/slow shutter speed, so it'll only work with static scenes/subjects.
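A rough back-of-the-envelope check of that equivalence, using assumed typical sensor dimensions (the exact figures are not from the article or the comment):

    full_frame_mm2 = 36.0 * 24.0      # ~864 mm^2
    type_1_2_3_mm2 = 6.17 * 4.55      # ~28 mm^2 for a 1/2.3"-type sensor
    area_ratio = full_frame_mm2 / type_1_2_3_mm2
    print(round(area_ratio, 1))       # ~30.8x less light-gathering area
    print(round(100 / area_ratio, 1)) # ISO 100 / ~31 ~= ISO 3, matching the comment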

0 upvotes
Greg Lovern
By Greg Lovern (Feb 27, 2013)

How do you figure that resetting a pixel during exposure, to allow collecting more photons without blowing out to white, is the same as lowering the base ISO and so would require much longer shutter speeds?

Compare two otherwise identical cameras taking the same shot with the same settings; one camera has this technology and the other does not.

In the camera without this technology, some highlights are blown to white. In the camera with this technology, with the same shutter speed and other settings, more photons are collected in the highlights, each photosite collecting a slightly different number of photons than the next, so the highlights are not blown to white.

Where is the longer shutter speed in that scenario?

0 upvotes
Steen Bay
By Steen Bay (Feb 27, 2013)

If the highlights are blown, then the shutter speed was too long/slow or the ISO too high. The solution is to use a faster shutter speed or a lower ISO.

0 upvotes
bobbarber
By bobbarber (Feb 27, 2013)

I'm not sure you're right about that Steen Bay. Lowering ISO or increasing shutter speed doesn't increase dynamic range. You save your highlights, but the shadows become black or noisy, especially with a small sensor. The difference with this technique, I'm assuming, is that you give sufficient exposure to the low end.

I've been wondering for a long time why somebody hasn't just done this. I don't understand the physics of sensors, but it seems obvious that you should know when a given pixel blows out to white and be able to do something with that information.

1 upvote
MarkInSF
By MarkInSF (Feb 27, 2013)

It's technically complex. This definitely allows longer exposures that would capture additional shadow detail. Currently that's not possible without blown highlights, but this gives the choice of a longer exposure and more shadow detail or a shorter one with less. Highlights would be fine either way, but the longer exposure would have increased dynamic range. ISO becomes a very different issue than it is currently.

0 upvotes
tompabes2
By tompabes2 (Feb 27, 2013)

Great idea! HDR with one shot! The naysayers have already gathered and are posting at full speed... ;)

0 upvotes
Nikolaï
By Nikolaï (Feb 27, 2013)

typo:

"Low-Light Sensitivity in a Single Explosure"... this thing will be da bomb when it comes out... sorry couldn't help myself.

0 upvotes
joyclick
By joyclick (Feb 27, 2013)

Let us give it to them. OK folks, let us have some small sensor cameras in our hands and 'see it' for ourselves. Whenever that may be. Sooner the better, Rambus.

0 upvotes
Jono2012
By Jono2012 (Feb 27, 2013)

Another great idea in theory.....

3 upvotes
Rachotilko
By Rachotilko (Feb 27, 2013)

I won't criticize it since I don't understand the details. However, I do have some superficial hypothetical opinions regarding the idea.

1, From the short description it seems as if it actually chooses to overexpose and takes care of the overexposed pixels via the resetting mechanism.

2, It sounds like a smarter version of the Fujifilm EXR mechanism. Similarly to the EXR sensor, one group of the pixels is used for capturing the shadows, the other group is used for capturing the highlights. But in the case of the EXR, the membership of a pixel in either group is predetermined (the well known EXR pixel layout), while in the case of Rambus' Binary Pixel technology the membership is decided based on the actual exposure process taking place.

3, Practice has shown that EXR approach works in expanding the DR, but some sensor area is actually wasted. The Rambus technology essentially means you'll get the benefits of EXR without the infamous EXR drop in resolution.

2 upvotes
NinjaSocks
By NinjaSocks (Feb 27, 2013)

Looks promising even if the name of the company that created it sounds like a porn site.

2 upvotes
Osiris30
By Osiris30 (Feb 27, 2013)

Rambus is a company that started out designing memory chips.. they were adopted by Intel for the very first Pentium 4s. They were expensive, hard to make and had tech licensing costs that were way too high. The rest of the industry shunned Rambus memory and the company nearly went broke...

1 upvote
OniMirage
By OniMirage (Feb 27, 2013)

More specifically, they made extremely high-bandwidth memory that, yes, was expensive. It was used on high end systems that required bandwidth from cached processes rather than low-latency RAM. The PS2 and PS3 used Rambus technology. GDDR is based on the theory of high bandwidth memory used in high end graphics cards.

2 upvotes
mr_ewok
By mr_ewok (Feb 27, 2013)

ehm, 180px width preview images? srsly?

0 upvotes
tkbslc
By tkbslc (Feb 27, 2013)

Proof of concept. Sensor fab is expensive.

2 upvotes
AngryCorgi3G
By AngryCorgi3G (Feb 27, 2013)

Me like. But is this the same Rambus that came up with RDRAM?? These guys are known patent trolls. It's possible that nothing substantial may come of this.

Comment edited 2 times, last edit 2 minutes after posting
10 upvotes
KrisPix
By KrisPix (Feb 27, 2013)

Unfortunately this is the same Rambus ... one of the few companies I would never work for.

2 upvotes
Osiris30
By Osiris30 (Feb 27, 2013)

Ding ding ding, we have a winner. Even if they do try and license it, the cost will be huge, just like the RDRAM debacle.

2 upvotes
tkbslc
By tkbslc (Feb 27, 2013)

That RAM thing was a decade ago, guys.

0 upvotes
AngryCorgi3G
By AngryCorgi3G (Feb 27, 2013)

It was long ago, but Rambus garnered the "Patent Troll" reputation since.

http://en.wikipedia.org/wiki/Rambus

2 upvotes
Kim Letkeman
By Kim Letkeman (Feb 27, 2013)

A brilliant idea ... ultimately far more promising than, say, Fuji's EXR technology because it enables much better exposures in hyper contrasty situations (expose for the shadows could become the norm.) Further, it does not use kinky-weird demosaicing algorithms and thus promises much cleaner images than specialized filter patterns etc. A great first step.

1 upvote
s_grins
By s_grins (Feb 27, 2013)

This is a very promising first step that leads to new frontiers.
I have time to wait for further developments.

0 upvotes
AV Janus
By AV Janus (Feb 27, 2013)

That picture looks familiar...
is that just a simulation or did they actually take that shot?

0 upvotes
neo_nights
By neo_nights (Feb 27, 2013)

Well, it says "Image comparison illustrating the theoretical benefits of the Binary Pixel Imager ". So I think it's just a simulation.

0 upvotes
mgrum
By mgrum (Feb 27, 2013)

It's just a rough mock-up for people who don't know what dynamic range is. A point missed by at least a third of the comments here.

3 upvotes
ageha
By ageha (Feb 27, 2013)

Of course it wasn't taken by the sensor, they even said that. How can a 128x128 pixel sensor take this shot?

Comment edited 54 seconds after posting
0 upvotes
GURL
By GURL (Feb 27, 2013)

Besides cost and megapixel availability, the main point is whether images better than the usual low-dynamic-range ones people get when using a phone are possible. If the answer is yes, this should help solve the "flash is not powerful enough" problem.

0 upvotes
HowaboutRAW
By HowaboutRAW (Feb 27, 2013)

And since this sensor is not actually in any cell phone cameras, the cell phone/smart phone camera makers could improve the images from existing gear by allowing the capture of raw data. (No new unperfected sensor needed, just a software update for the phone.)

0 upvotes
hc44
By hc44 (Feb 27, 2013)

Hey critics, they've said their prototype sensor is 128 x 128. The sample above is bigger than that and is described as a theoretical comparison.

So that ain't even it!

2 upvotes
Nikonworks
By Nikonworks (Feb 27, 2013)

Of course the colors are muted, notice the deep shadows they are in.

Every day I deal with people sticking their cell phones in, trying to get my shot.

This technology will make matters worse for me, but it should enable those cell phone users to get much better shots than they are getting now.

For me Light is what keeps me ahead of the cell phone users.
Now they will edge closer in results.

Instagram and other sites better gear up for a large increase in uploads.

More exchanged photos can only help this world of ours.

0 upvotes
expressivecanvas
By expressivecanvas (Feb 27, 2013)

This looks horrendous... of course, the "Current" imager example looks horrendous too but on the opposite extreme. This is worthy of publishing in DPReview? Gimme a break!

4 upvotes
Devendra
By Devendra (Feb 27, 2013)

lots of things look horrendous when starting.

0 upvotes
Aibenq
By Aibenq (Feb 27, 2013)

Maybe that "current mobile imager" sample is actually the REAL result of our imager. Remember, after our sensor captures the image, our phone processes the RAW file before it shows the final result, which we see as JPEG files.

So that CURRENT MOBILE IMAGER sample may be an un-processed raw image.

0 upvotes
HowaboutRAW
By HowaboutRAW (Feb 27, 2013)

Aibenq--

Um, well current cell phone cameras toss out a lot of raw data, if that's what you mean by "process".

I'd prefer to process my own data. (True for any digital camera, yes including the Fuji XTrans sensored cameras.)

Now with this still proof of concept sensor, we don't have access to the raw data so we can't really draw conclusions about what's in the raw files from this demonstration unit.

0 upvotes
mgrum
By mgrum (Feb 27, 2013)

If you read the article you'll have noticed that the images posted are an "illustration" not the actual result (the prototype is only 128x128 pixels).

5 upvotes
Jan Privat
By Jan Privat (Feb 27, 2013)

We have now come to the point where cellphone camera innovations push digital photography. LOL. But okay, lezz go!

2 upvotes
neo_nights
By neo_nights (Feb 27, 2013)

It's simply because smartphone cameras are more popular than 'regular' P&S. After all, all the new tech we know (or at least most of it) is tested first on small-sensored cameras and then goes to a more advanced level.

0 upvotes
ageha
By ageha (Feb 27, 2013)

That happened a long time ago and makes total sense.

0 upvotes
Deleted pending purge
By Deleted pending purge (Feb 27, 2013)

If it gets to be user-adjustable and visible in setting up the shot (e.g., not effective only during exposure), from 0 to the level shown in the samples above, I think it may have significant potential. The way it was presented, it would require the same amount of PP as all other high-contrast images...

0 upvotes
anthonyGR
By anthonyGR (Feb 27, 2013)

Guys, stop trying to imagine what this will do to your DSLR photography. This is aimed at cellphone cameras. It's for teenagers taking shots of their buddies LOLing and wanting to capture some of the background too. The colors being muted, or badly tonemapped, is irrelevant here.

0 upvotes
abrunete
By abrunete (Feb 27, 2013)

It doesn't really look HDR-ish to me; you do get the shadows, but it looks kinda washed out in the example shown here.
But in principle, it's promising...

Comment edited 2 times, last edit 2 minutes after posting
0 upvotes
madsector
By madsector (Feb 27, 2013)

And what exactly is the difference from Fuji's EXR technology?

0 upvotes
AEndrs
By AEndrs (Feb 27, 2013)

Have you actually read the article? (And the source?)

0 upvotes
mgrum
By mgrum (Feb 27, 2013)

EXR won't introduce weird motion artifacts, but it is limited in how far the DR can be extended. This approach can potentially yield unlimited DR if the pixels can be reset many times.
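Some illustrative arithmetic for that last point (the article does not say how many resets are possible, so the counts below are arbitrary):

    import math

    # If a pixel can be reset k times in one exposure, its effective full well
    # becomes (k + 1) x the physical full well, i.e. log2(k + 1) extra stops of
    # highlight headroom on top of the normal saturation point.
    for k in (1, 3, 15, 255):
        print(k, "resets ->", round(math.log2(k + 1), 1), "extra stops")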

0 upvotes
dimsgr
By dimsgr (Feb 27, 2013)

EXR is about different pixels (e.g. half of the sensor) having different exposure times, e.g. half of them having 15 sec and the other half 1/60 sec, which results in capturing bright as well as dark objects, but obviously introduces synching problems, especially with relatively fast moving objects,
greets

0 upvotes
Kuv
By Kuv (Feb 27, 2013)

I don't see this working when exposure needs to build up over time and may have motion present (i.e. long exposure landscape shots at bay).

0 upvotes
steve_hoge
By steve_hoge (Feb 27, 2013)

Remember, all pixels will be gathering light during the same exposure period. You won't have some shutting off before or after others, so there shouldn't be any temporal effects.

0 upvotes
mpgxsvcd
By mpgxsvcd (Feb 27, 2013)

That is really cool if you don't mind the HDR look. If they can tone it down a little and get it closer to the Dynamic Range our eye sees that would be good.

This could be really cool for video as well.

Comment edited 1 minute after posting
1 upvote
falconeyes
By falconeyes (Feb 27, 2013)

I described this technology in various forum and blog posts a long time ago. It is a straightforward way to improve current sensors. Esp. if one wants to sell memory ...

In particular, I described a way to improve the current column-parallel ADC technology incorporated in Sony sensors. Rather than recording the clipping and resetting it (which induces tonal errors), I proposed doing continuous ADC operation and adding up the results digitally in pixel registers to be read out digitally over a memory interface. This adds a lot of memory to a sensor chip.

So, Rambus just described the obvious. A 128^2 px prototype is not relevant. Sony doing a sensor with embedded pixel registers would be.
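A minimal sketch of the digital-accumulation scheme described in the previous paragraphs, with invented parameters since no concrete numbers are given:

    import numpy as np

    def digital_accumulation(photon_flux, exposure_s, full_well=1000, n_reads=100, seed=0):
        """The pixel is digitized and reset many times per exposure and the
        readings are summed in a per-pixel register, so far less information
        is lost to clipping or to the coarseness of a simple reset count."""
        rng = np.random.default_rng(seed)
        dt = exposure_s / n_reads
        total = 0
        for _ in range(n_reads):
            charge = min(int(rng.poisson(photon_flux * dt)), full_well)  # one short read
            total += charge          # accumulate digitally in the pixel register
        return total

    print(digital_accumulation(photon_flux=50000, exposure_s=0.1))  # ~5000 e-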

Comment edited 2 times, last edit 2 minutes after posting
4 upvotes
LensBeginner
By LensBeginner (Feb 27, 2013)

Interesting...
That could be a way of stopping the MP race for good: I reckon a 36MP sensor would need a humongous amount of memory.
I think the hardware will quite easily reach the point where large-scale production is no longer feasible/practical/marketable, effectively blocking production (at least for the time being) of cameras sporting that (hypothetical) technology you're talking about (not the one in the article) with higher-than-practical-MP sensors.
Is that correct?

Comment edited 39 seconds after posting
1 upvote
dimsgr
By dimsgr (Feb 27, 2013)

hi
continuous ADC operation and what Rambus (hypothetically) does look the same to me, with the most difficult part being the electronics requirements and the noise they probably introduce, hence touted as "high DR" and not "great low light"... anyway, I am saying hypothetically, since Rambus has a record of chasing patents, and the demonstration presented here makes me think that they have nothing in hand but rather a paper launch
greets

3 upvotes
steve_hoge
By steve_hoge (Feb 27, 2013)

I don't really see tonal errors (I assume you mean discontinuities) being introduced unless the clip detection/reset time becomes an appreciable fraction of the exposure time. Presumably this will be somewhere in the microsecond range at the most, so not a problem for anything but the most high-speed exposures.

The RAMBUS scheme will still need pixel registers whose width (i.e., # of guard bits) equals the number of full stops of overexposure you want to accommodate.
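A worked example of that register-width point, with an assumed 10-bit base conversion (illustrative numbers only):

    full_well_codes = 2 ** 10                  # say 10 bits digitize one full well
    for extra_stops in range(5):
        max_count = full_well_codes * 2 ** extra_stops
        print(extra_stops, "extra stops ->", (max_count - 1).bit_length(), "bit register")
    # Each additional stop of overexposure headroom costs one more bit per pixel.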

Comment edited 2 times, last edit 3 minutes after posting
1 upvote
falconeyes
By falconeyes (Feb 27, 2013)

@dimsgr I agree that Rambus is probably chasing patents. Which is why I mentioned here and in public that I described a similar technology beforehand (prior art).

I agree that Rambus' approach needs less memory. But it is less capable than what I described too.

3 upvotes
dubstylz
By dubstylz (Feb 27, 2013)

Add RAW capture to a device with this sensor and I think it could be pretty awesome.

Comment edited 3 minutes after posting
1 upvote
hc44
By hc44 (Feb 27, 2013)

At the technology site Slashdot a running gag used to be "does it run Linux?", asked of everything that had the slightest hint of a processor inside it.

I suppose the equivalent over here is "does it capture RAW?".

New camera bag released: does it capture RAW?

1 upvote
Paul Guba
By Paul Guba (Feb 27, 2013)

This is pretty amazing tech. Not sure it will be something most on this forum will appreciate, as the skill level here is somewhat higher than that of the average smart phone photographer. For that lowest-common-denominator photographer I think it will be great. It will allow them to capture more images with less difficulty. That is a formula for success that goes back to George Eastman.

0 upvotes
noegd
By noegd (Feb 27, 2013)

Interesting. This looks like an easy and low cost way of improving dynamic range in small sensors.

Hopefully RAMBUS will be more successful than when they provided the proprietary and expensive RDRAM technology for the first Intel Pentium 4 motherboards.

1 upvote
arqomx
By arqomx (Feb 27, 2013)

the colour is kind of muted..

0 upvotes
sojo76
By sojo76 (Feb 27, 2013)

in the shadows..

0 upvotes
LensBeginner
By LensBeginner (Feb 27, 2013)

Yeah, a little washed out.
Potentially interesting, though.
One thing is for sure, even with the economic downturn the research in mobile photo technology is flourishing as never before.
That is an exciting thing per se.

0 upvotes
mgrum
By mgrum (Feb 27, 2013)

If you read the article you'll have noticed that the images posted are an "illustration" not the actual result (the prototype is only 128x128 pixels).

3 upvotes
Photomonkey
By Photomonkey (Feb 27, 2013)

HDR images will be flat in order to fit all the tones into a narrow gamut display. Two things HDR users do to add the snap back: one is to oversaturate the image (we have all seen that done to death), and the other is to actually labor over the image, selecting those tones that create the impression of snap by selectively losing certain tones.

2 upvotes