
Rambus unveils 'Binary Pixel' sensor tech for expanded dynamic range

By dpreview staff on Feb 27, 2013 at 12:51 GMT

US technology company Rambus has unveiled 'Binary Pixel' sensor technology, promising greatly expanded dynamic range for the small sensors used in devices such as smartphones. Current image sensors are unable to record light above a specific saturation point, which results in clipped highlights. Binary Pixel technology gets around this by recording when a pixel has received a certain amount of light, then resetting it and in effect restarting the exposure. The result is significantly expanded dynamic range from a single-shot exposure. The company has demonstrated the technology using a low-resolution (128 x 128 pixel) sensor, and says it can easily be incorporated into CMOS sensors using current manufacturing methods.

Aside from the 'temporal oversampling' described above, Binary Pixel technology employs a couple of further innovations. It uses 'Binary Operation', sensing photons against discrete thresholds, which the company says works in a manner similar to the human eye and gives better sensitivity across the gamut from dark to bright. It also employs 'Spatial Oversampling', meaning the individual pixels are sub-divided to capture more data and improve dynamic range. The technology isn't restricted to phone sensors, and in principle should work equally well for all sensor sizes.
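
To make the reset-and-count mechanism concrete, here is a minimal Python sketch of how a single pixel's response could be simulated. The full-well capacity, step count and photon rates are illustrative assumptions, not Rambus's actual design parameters.

import numpy as np

# Illustrative simulation of temporal oversampling: the pixel resets whenever it
# reaches a saturation threshold, and a per-pixel counter records how many times.
# All values below (full well, steps, photon rates) are made up for illustration.

FULL_WELL = 1000     # electrons a conventional pixel holds before clipping
STEPS = 100          # sub-intervals of one exposure

def expose(photon_rate_per_step, rng):
    """Return (conventional_signal, binary_pixel_signal) for one pixel."""
    conventional = 0.0
    charge = 0.0
    resets = 0
    for _ in range(STEPS):
        arrivals = rng.poisson(photon_rate_per_step)             # photon shot noise
        conventional = min(conventional + arrivals, FULL_WELL)   # clips at saturation
        charge += arrivals
        if charge >= FULL_WELL:                                  # threshold reached: reset and count
            charge -= FULL_WELL
            resets += 1
    return conventional, resets * FULL_WELL + charge             # reconstructed, unclipped signal

rng = np.random.default_rng(0)
for rate in (5, 20, 80):                                         # dim, mid and very bright pixels
    conv, binary = expose(rate, rng)
    print(f"rate {rate:3d}/step  conventional {conv:6.0f}  binary pixel {binary:6.0f}")

In this toy model the brightest pixel clips at 1,000 electrons in the conventional case but is reconstructed to roughly 8,000 with the reset counter, which is where the claimed dynamic-range gain comes from.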

Rambus lists the key advantages of Binary Pixel sensors as follows:

Ultra-High Dynamic Range
• Optimized at the pixel level for DSLR-quality dynamic range in mobile and consumer cameras 

Single-Shot HDR Photos & Videos
• Operates in a single exposure period to capture HDR images in real time with no post-processing

Improved Low-Light Sensitivity
• Spatial and temporal oversampling reduces noise and graininess

Works with Current Mobile Platform
• Designed to integrate with current SoCs, be manufactured using current CMOS technology, and fit in a comparable form-factor, cost and power envelope

Press release:

Rambus Unveils Binary Pixel Technology For Dramatically Improved Image Quality in Mobile Devices

 Image comparison illustrating the theoretical benefits of the Binary Pixel Imager 

Breakthrough Technology Provides Single-Shot High Dynamic Range and Improved Low-Light Sensitivity in a Single Exposure

SUNNYVALE, CALIFORNIA AND BARCELONA, SPAIN – February 25, 2013 – Rambus Inc. (NASDAQ: RMBS), the innovative technology solutions company that brings invention to market, today unveiled breakthrough binary pixel technology that dramatically improves the quality of photos taken from mobile devices. The Rambus Binary Pixel technology includes image sensor and image processing architectures with single-shot high dynamic range (HDR) and improved low-light sensitivity for better videos and photos in any lighting condition.

“Today’s compact mainstream sensors are only able to capture a fraction of what the human eye can see,” said Dr. Martin Scott, chief technology officer at Rambus. “Our breakthrough binary pixel technology enables a tremendous performance improvement for compact imagers capable of ultra high-quality photos and videos from mobile devices.”

As improvements are made in resolution and responsiveness, more and more consumers are using the camera functionality on their smartphone as the primary method for taking photos and capturing memories. However, high contrast scenes typical in daily life, such as bright landscapes, sunset portraits, and scenes with both sunlight and shadow, are difficult to capture with today’s compact mobile sensors - the range of bright and dark details in these scenes simply exceeds the limited dynamic range of mainstream CMOS imagers.

This binary pixel technology is optimized at the pixel level to sense light similar to the human eye while maintaining comparable form factor, cost and power of today’s mobile and consumer imagers. The results are professional-quality images and videos from mobile devices that capture the full gamut of details in dark and bright intensities.

Benefits of binary pixel technology:

  • Improved image quality optimized at the pixel level
  • Single-shot HDR photo and video capture operates at high-speed frame-rates
  • Improved signal-to-noise performance in low-light conditions
  • Silicon-proven technology for mobile form factors
  • Easily integratable into existing SoC architectures
  • Compatible with current CMOS image sensor process technology

The Rambus binary pixel has been demonstrated in a proof-of-concept test chip and the technology is currently available for integration into future mobile and consumer image sensors. For additional information visit www.rambus.com/binarypixel.

Comments

Total comments: 193
zycamaniac
By zycamaniac (Mar 18, 2013)

About time Rambus does something useful.

2 upvotes
mutatron
By mutatron (Mar 2, 2013)

Pretty cool! Reminds me of the space-based ion detectors I used to work on at the Center for Space Sciences. Those were a pain to decipher with their 6 or 7 bit words. I would imagine this technology would be easier, but software for that will make the transition from Bayer filters to Fuji's X-Trans look like a walk in the park.

What's needed now is a printer that can deal with the extra dynamic range. I've long wondered when someone would be able to create a sensor/printer technology that would be comparable to b&w of yore, something Ansel Adams might have used for his prints.

0 upvotes
Turbguy1
By Turbguy1 (Mar 1, 2013)

I predicted a time-based sensor months ago...

http://forums.dpreview.com/forums/post/42448738

0 upvotes
CNY_AP
By CNY_AP (Mar 1, 2013)

I wonder exactly how it is done...I have been thinking for eons if only they could read (perhaps with a comparator) each pixel and then set gain or something in order to increase the SNR. I surmise they reset the accumulating charge once it reaches a certain level, and increase a counter each time they do the reset. Perhaps that requires not a lot of circuitry nowadays (size-wise).

0 upvotes
Kpatel55
By Kpatel55 (Mar 1, 2013)

We need that in DSLRs. Sony, Canon and Nikon should use this technology in their cameras.

0 upvotes
bipixel
By bipixel (Mar 1, 2013)

I think this is very useful. I hope it will be used in smartphones as soon as possible.

0 upvotes
bipixel
By bipixel (Mar 1, 2013)

The following URLs explained this kind of technology earlier.

http://skydoc.googlecode.com/svn/trunk/Reference/Recovering%20high%20dynamic%20range%20radiance%20maps%20from%20photographs.pdf

http://rr.epfl.ch/12/1/Meylan_TIP2006.pdf

0 upvotes
rocklobster
By rocklobster (Feb 28, 2013)

Just think how this could be applied to speed/red-light cameras for revealing registration plates or even facial features for identifying drivers of vehicles. Many US states will be up in arms about this prospect.

Cheers

0 upvotes
Jack Simpson
By Jack Simpson (Feb 28, 2013)

Shadow/Highlight in PS revisited? ;)

0 upvotes
Wilmark
By Wilmark (Feb 28, 2013)

You cannot trust a company like Rambus. They have a terrible history. They are a patent troll of the highest order. Genuine companies should steer clear of them. They join technology consortiums under the guise of cooperation and knowledge sharing, then they run off and patent these shared ideas or incorporate them into their own patents. Then they mount a battery of lawsuits against companies that would rather give in than expose them. The worst was when Intel partnered with them in the famous RDRAM fiasco; after RDRAM failed they sued almost all the memory manufacturers, having joined JEDEC, taken their ideas and patented them. This company needs to be put out to pasture. I wouldn't be surprised if any technology they show off is fluff meant only to draw in other corporations.

15 upvotes
Alberto Tanikawa
By Alberto Tanikawa (Mar 4, 2013)

Ditto. No need to do business with a troll company.

0 upvotes
ProfHankD
By ProfHankD (Feb 28, 2013)

This has some of the properties of a new sensor technology I've been working on in my research for several years, and I think it's a great general approach.

However, they don't give quite enough info to distinguish what they're doing from the sensors that have been made primarily for the auto industry to use in rear-view cameras. You may have noticed that some of those have far greater dynamic range than normal cameras, and this type of sampling logic is the reason. I don't know who's making which versions now; a little company making such sensors was acquired by Cypress Semiconductor some years ago and they were producing these sensors, but I believe Cypress stopped doing that in the past year or two....

0 upvotes
HubertChen
By HubertChen (Feb 28, 2013)

The idea is great for sure. But how practical in terms of space is this on a die? If you want to increase the dynamic range by 4 stops you would need an additional 4-bit counter plus the reset logic. Say that would be 30 transistors. It would be interesting to know how much space 30 transistors take relative to the photocell, to understand whether this approach has merit with the current CMOS sensor manufacturing process, or only once transistor sizes shrink further. More dynamic range on mobile phones is surely a good thing.
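
As a rough companion to this back-of-envelope estimate, the sketch below compares the assumed logic area to a typical mobile pixel; the pixel pitch and per-transistor footprints are invented round numbers purely for illustration, and vary enormously with the process node.

# Rough area check for HubertChen's question above. All numbers are assumptions.
pixel_pitch_um = 1.1                      # assumed mobile pixel pitch
pixel_area_um2 = pixel_pitch_um ** 2
transistors = 30                          # HubertChen's estimate for counter + reset logic

for node, t_area_um2 in (("older 180nm-class logic", 0.5), ("modern 45nm-class logic", 0.05)):
    logic_area = transistors * t_area_um2
    print(f"{node}: ~{logic_area:.2f} um^2 of logic vs {pixel_area_um2:.2f} um^2 pixel "
          f"({logic_area / pixel_area_um2:.0%})")

Under these assumptions the logic is comparable to or larger than the photodiode itself, which is why stacked designs, shared per-pixel-group logic, or very small counters matter here.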

0 upvotes
HubertChen
By HubertChen (Feb 28, 2013)

On another note, it is worth noting that Rambus makes its income by filing patents on ideas that border on common knowledge and then suing chip makers to pay licenses to Rambus, making common products more expensive. Let's hope the intention here is not for this patent to lay the foundation for making sensors more expensive in the future.

1 upvote
ProfHankD
By ProfHankD (Feb 28, 2013)

For what it's worth, the nanocontroller architecture I've been working on for a decade has less logic than a 4-bit counter (see aggregate.org/KYARCH/20070914/), but one needs more than 4 bits local to each pixel to do this right and memory cells are somewhat problematic. My reference sensel size is 5um, and between 100-300 transistors fit under that. With the smaller sensel pitch they seem to be targeting, I doubt they're really doing much more than simple threshold output & reset with per-sensel logic.

1 upvote
huyzer
By huyzer (Feb 28, 2013)

GREAT! I've always hated the lack of DR in digital, other than the S5 Pro that I have. Thank you! :D

0 upvotes
BaconBit
By BaconBit (Feb 28, 2013)

I think this is a great idea. It's not about making the end result look like the picture on the right all the time. It's about getting the most, and cleanest, information from the scene to be able to work with. It's about having the flexibility to get great detail from both shadows and highlights where previous sensors would either clip or present lots of noise. In this way, it's a powerful direction to go in.

0 upvotes
Grev
By Grev (Feb 28, 2013)

SuperCCD? :P

0 upvotes
Mescalamba
By Mescalamba (Feb 28, 2013)

Well, Super CCD is two pixels merged into one. This is just one pixel (cell). Though it's not that different, and I think maybe Super CCD was the better idea. Unfortunately, Fuji being Fuji, they did it a bit too early and f*cked it up with various quirks and problems..

Btw, I have an S5 Pro right now; while it has amazing DR it's still just a 6 MP camera.. and not exactly the sharpest 6 MP either.

0 upvotes
Nikonb
By Nikonb (Feb 28, 2013)

look at this
http://www.design-reuse.com/articles/7411/variable-integration-time-image-sensor-for-wide-dynamic-range.html

1 upvote
Eric Fossum
By Eric Fossum (Feb 28, 2013)

check the last reference on that paper.
There are many ways to do HDR. Using multiple exposure times is an old idea and we did one of the first sensors using this method nearly 20 years ago. The Rambus binary pixel approach goes well beyond this.

0 upvotes
Nikonb
By Nikonb (Feb 28, 2013)

Yes, it is an "old" idea, but it does not use different exposure times; instead it partially empties photosites that are approaching saturation too fast, and all in a single exposure time.
This technique was not practically feasible at the time because it required an internal circuit a bit too large for the fill factor.

0 upvotes
CyberAngel
By CyberAngel (Feb 28, 2013)

The illustration does not come out of the proof-of-concept 128x128 sensor.

A new sensor is needed with an ultra-fast reset time
and 1 bit of memory per pixel for the one-time overexposure reset information (binary)

Since Nigel_L already had this idea earlier:

http://forums.dpreview.com/forums/post/18895569

this could be used against the patent troll

0 upvotes
B1ackhat
By B1ackhat (Feb 28, 2013)

I think the bigger news is that Rambus is still in business.

5 upvotes
RichRMA
By RichRMA (Feb 28, 2013)

I find many of these wide DR images are lifeless, flat and lacking in dynamics. If blacking out some shadows results in a more dynamic, engaging image, why go HDR?
As for Rambus, things aren't that pretty right now.

http://hothardware.com/News/Third-Most-Important-Rambus-Patent-Invalidated-Company-Share-Price-Craters/

4 upvotes
hc44
By hc44 (Feb 28, 2013)

I played around with the contrast and brightness and made it look better. Contrast especially.

0 upvotes
Mescalamba
By Mescalamba (Feb 28, 2013)

Well, DR is about "options"... when you have burned highlights and non-liftable shadows (due to banding noise), then you are pretty much "done". Obviously if you know how to shoot it's not an issue. But this allows a bit of extra tweaking.

Tho to be really usable, 16-bit output is needed or it will be really "flat" (14-bit is ok-ish, but more is better in this case).

0 upvotes
Neimo
By Neimo (Mar 1, 2013)

OLED has tremendous dynamic range and more displays with it are becoming available. If it becomes popular enough, some folks will use it to reproduce high contrast scenes better. It'll be nice to have that data in photos we take today or next year.

0 upvotes
Spectro
By Spectro (Feb 28, 2013)

the image to the left looks like a canon, the one to the right looks like a nikon...oop wrong topic.

I am all for getting the highest DR, like film. We can add contrast and saturation in post production if we want. Just give us the most raw data possible. Engineers, keep it up. Rambus are those guys I remember suing other memory makers over RAM.

0 upvotes
Karl Günter Wünsch
By Karl Günter Wünsch (Mar 12, 2013)

Film has less DR than current sensors!

0 upvotes
rocklobster
By rocklobster (Feb 28, 2013)

As another poster said, the contrast is out of whack because the wide-dynamic-range picture has not had a proper tone-curve adjustment. And anyway, does revealing the detail in the shadows (or the highlights) really make it a better picture? Is it really what the eye sees? The old HDR arguments may resurface.

Good simple solution to the 'problem' though. Wish Fujifilm had thought of this.

Cheers

0 upvotes
WaltFrench
By WaltFrench (Mar 1, 2013)

I likewise think the right-hand image looks awful. But you can never have too much data; if the technique manages to supply more bits of data — and I wonder how well it'll deal with nonlinearity around its reset point and/or moving subjects — then software will fix it.

1 upvote
pavluha
By pavluha (Feb 28, 2013)

This is a neat idea. Blown highlights are a bigger problem than shadow noise because they are absolutely unrecoverable, while noisy shadows can be smoothed/binned post-exposure. Or even during exposure! In such a hypothetical sensor, highlight pixels could reset multiple times during the exposure to improve headroom, while at the same time shadow pixels could be binned to improve shadow S/N ratio (at the expense of resolution), all at once.

1 upvote
vladimir vanek
By vladimir vanek (Feb 27, 2013)

contrast+ :)))

1 upvote
LarryK
By LarryK (Feb 27, 2013)

Rambus is not so much a technology company, but rather a patent lawsuit mill.

Best to walk away from anything they develop, it's likely just something their attorneys will use to sue people.

14 upvotes
OneGuy
By OneGuy (Feb 27, 2013)

Ownership protection is the cornerstone of the Western civilization.

If what you say is true I say Go Rambus!

1 upvote
tlinn
By tlinn (Feb 27, 2013)

That all sounds good until someone sues you because they own a patent on all objects with square corners.

10 upvotes
Tee1up
By Tee1up (Feb 27, 2013)

Lawyers and judges are spectacularly ignorant when it comes to technology, and it gets worse when parades of engineers are brought into courtrooms to try to demonstrate proof of original ownership and design.

I'm with larryK, these guys had an ethical bypass at birth.

11 upvotes
raincoat
By raincoat (Feb 27, 2013)

Specifically Rambus held back desktop ram technology for 5 years.

11 upvotes
LarryK
By LarryK (Feb 27, 2013)

Don't confuse "Ownership Protection" with extortion.

These guys are the "Scientologists" of Technology.

Last time I checked "Western Civilization" wasn't all that impressive anyway.

13 upvotes
Timmbits
By Timmbits (Feb 28, 2013)

they can only make money if they license it.
if they sit on a patent and refuse to license it at reasonable market cost, they can be sued and forced to share.
such are the patent laws.

1 upvote
Peter Kwok
By Peter Kwok (Feb 28, 2013)

Rambus does not spread technology by licensing it. They look for technology similar to theirs and extort $$$ from innovators. They call this licensing, just like mobs selling "protection".

4 upvotes
LarryK
By LarryK (Feb 28, 2013)

Even the mob has some scruples.

0 upvotes
Najinsky
By Najinsky (Feb 28, 2013)

Dat's-a because we respect-a da family.

0 upvotes
OneGuy
By OneGuy (Feb 28, 2013)

Just one week ago -- and on this site:

'Nikon and Microsoft sign patent deal over Android-based camera'

Maybe MS got some Rambus ideas...

0 upvotes
forpetessake
By forpetessake (Feb 27, 2013)

Taken to its logical end, a single pixel can work in binary mode (detecting light or darkness), similar to the dithered images of printers. Once they create pixels with a 50 nm pitch, it may be possible to get image quality close to today's sensors.

But pretty soon they will hit the law of diminishing returns: the quantum (shot) noise is determined by the total light collected by the surface of the sensor. For example, even an ideal (noiseless) Micro Four Thirds sensor will not be able to match the performance of today's (non-ideal) FF sensor.
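
A short worked version of this shot-noise argument, assuming equal f-number and exposure so that collected photons scale with sensor area; the photon count for the full-frame case is an arbitrary illustrative figure.

import math

# Shot-noise SNR scales with the square root of the collected photons, and the
# collected photons scale with sensor area for the same scene and exposure.
sensor_area_mm2 = {"Full frame": 36.0 * 24.0, "Micro Four Thirds": 17.3 * 13.0}
photons_ff = 1_000_000                    # arbitrary total for the full-frame sensor

for name, area in sensor_area_mm2.items():
    photons = photons_ff * area / sensor_area_mm2["Full frame"]
    snr = math.sqrt(photons)              # ideal (shot-noise-limited) SNR
    print(f"{name:18s} {area:6.0f} mm^2  photons {photons:9.0f}  ideal SNR {snr:6.0f}")

The roughly 3.8x area gap works out to about a 2x shot-noise SNR advantage for the larger sensor, which is the ceiling forpetessake is describing for even a noiseless smaller sensor.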

1 upvote
Eric Fossum
By Eric Fossum (Feb 27, 2013)

why 50 nm? What are your assumptions?
You are also forgetting about improvements in QE.

0 upvotes
plasnu
By plasnu (Feb 27, 2013)

I prefer the left picture (conventional) to the right picture (HDR).

3 upvotes
WilliamJ
By WilliamJ (Feb 27, 2013)

You're right that photography is about playing with light, showing and hiding, stimulating the imagination by playing with the shadows. Showing everything is to photography what an instruction booklet is to literature.

Ideally, a nice camera should let you choose the degree of dynamic range, as Fujifilm does with its EXR. With an EXR camera you can choose on a scale of 100%, 200% and 400%, which proves by the way that the Fujifilm engineers are not just "camera for fun" designers, but really do understand what photography is about.

1 upvote
xMichaelx
By xMichaelx (Feb 27, 2013)

This is "HDR" the same way that Britney Spears is speed metal. IOW, not at all.

It simply allows smaller, cheaper sensors to approach the dynamic range of larger, more expensive sensors.

1 upvote
Timmbits
By Timmbits (Feb 28, 2013)

@michael:
they'll use this in large sensors as well, of course!
don't you worry, the gap will remain the same.

1 upvote
sportyaccordy
By sportyaccordy (Feb 27, 2013)

I feel like a lower pixel density could yield a much wider dynamic range. But if this tech lets us have the best of both worlds, I can't complain.

0 upvotes
Eric Fossum
By Eric Fossum (Feb 27, 2013)

I should mention here that I have been working with Rambus for a few years and they fund our Quanta Image Sensor R&D at Dartmouth, along with other projects elsewhere. I believe their strategy is to invest in R&D that yields fundamental IP in advanced areas. It is an interesting business model and one that actually supports innovation and technology development even if they are ultimately a non-practicing entity.

3 upvotes
Luke Kaven
By Luke Kaven (Feb 27, 2013)

If I recall correctly, they developed DDR RAM, is that right? I believe I saw them announce it about 20 years ago. It was of course state of the art at the time, and still in use today.

So I noticed Nikon's D4 has a 120k e- well capacity. Does it have deep wells, or does it use something like what Rambus has here, which looks more like a "flag and flush" for oversampling?

0 upvotes
Naveed Akhtar
By Naveed Akhtar (Feb 27, 2013)

No Luke! Rambus, if it's the same company, developed Rambus RAM (RDRAM), something very different from DDR, with very expensive licensing. Intel got in huge trouble by adopting Rambus RAM, as AMD adopted DDR and got very competitive against the Intel/Rambus partnership.

5 upvotes
Luke Kaven
By Luke Kaven (Feb 27, 2013)

Thanks for the clarification.

0 upvotes
Eric Fossum
By Eric Fossum (Feb 28, 2013)

There is only one Rambus. I think they have had a strategic change in the way they do business since their initial infamous foray. I like the new business model and support it. On the other hand, patent trolls (non inventive NPEs) are truly a parasite on our economy.

2 upvotes
Timmbits
By Timmbits (Feb 28, 2013)

they have to change since the patent laws were changed as a result of such practices.

@EF: it certainly makes one proud and is good for the ego to have one's institution's research and inventions validated and endorsed, but ultimately you are the suckers these companies prey on. ...unless of course you tell us that you are getting shares in Rambus in return, as well as royalties for any future licensing revenues.

2 upvotes
Eric Fossum
By Eric Fossum (Feb 28, 2013)

@Timmbits. Thanks for looking out for me. Fortunately I am fairly experienced in these matters.

0 upvotes
Wilmark
By Wilmark (Feb 28, 2013)

I would say the Medellín Cartel has an "interesting business model". Rambus are downright patent trolls of the highest order.

0 upvotes
Eric Fossum
By Eric Fossum (Feb 28, 2013)

With all due respect Wilmark, I don't think you really understand what a patent troll is. And even the most evil patent trolls out there are perfectly legal in what they do. Suggesting otherwise is just silly. Anyway, a patent troll does not spend money to develop new technology and teach it to others. Of course they want people to adopt the technology so they can generate licensing revenue, but it is an eyes-wide-open scenario. Just like Nikon wants you to buy their technology so they generate revenue. No trickery or underhanded activity going on there. And yes, I am aware of Rambus' infamous and unflattering past. I wouldn't work with Rambus of old.

1 upvote
Horshack
By Horshack (Feb 28, 2013)

@Eric, I agree that Rambus is not a traditional patent troll but what they did with JEDEC was worse IMO. Rambus stuck a knife into the heart of the tech industry, striking at its most vulnerable point where companies put aside their individual profit motives to collaborate on standards for the benefit of the whole industry, which should translate into profits for all unless a wayward participant decides to corrupt the process for personal gain like Rambus did.

Perhaps Rambus has turned over a new leaf as you imply, yet I still see them involved in legal pursuits over those original JEDEC patents. I personally could never trust them again.

1 upvote
Wilmark
By Wilmark (Mar 4, 2013)

@Horshack, I bet he's being paid per word or per response. Lol, we don't live in a courtroom. I recall the details very well. Intel should have made sure that they never did any business again. These kinds of companies hurt consumers all over by preventing progress and overall cooperation, at least for the purpose of standards. If it weren't for AMD and VIA we would be paying loads more for memory today.

0 upvotes
Nigel_L
By Nigel_L (Feb 27, 2013)

Reminds me somewhat of a forum post I made a few years ago...

See http://forums.dpreview.com/forums/post/18895569

Regards, Nigel

2 upvotes
Eric Fossum
By Eric Fossum (Feb 27, 2013)

indeed Nigel. Ahead of your time, but I was ahead of you by "a bit" as were a few others.

1 upvote
Nigel_L
By Nigel_L (Feb 27, 2013)

Hi Eric, you are right - I also see some related comments from Roland Karlsson. Hopefully these ideas will translate into real products sometime soon.

Regards, Nigel

1 upvote
Timmbits
By Timmbits (Feb 28, 2013)

If this is indeed the same idea, then you have just invalidated Rambus' patent.
A patent is contingent on the invention not already being public knowledge or having been published in the public eye. A competitor could use your write-up to challenge the patent and not have to pay licensing fees.
This would essentially become a race of first-to-market to gain an advantage, and legal battles between those with the deepest pockets.

2 upvotes
CyberAngel
By CyberAngel (Feb 28, 2013)

Nigel_L: even though I'm preparing to start some studies and thus can't afford much, I'm willing to support YOUR idea.
I hope everyone here supports Nigel_L.
Why?
Anything against the worst patent troll in history!
Nigel_L
http://forums.dpreview.com/forums/post/18895569

0 upvotes
Eric Fossum
By Eric Fossum (Feb 27, 2013)

I made some comments on this in the news forum yesterday including a reference to technical work they published. Bottom line, I am a big fan of binary pixels and oversampling (spatially and temporally) and believe this is where things are headed even for large sensor cameras.

I see there is some wrong information being circulated in this comments section so readers, beware!

4 upvotes
Sdaniella
By Sdaniella (Feb 27, 2013)

at the moment a single reset will introduce delays

if one needed more than one reset, that makes it ineffective

it's better to have pixels capable of pixel-level control (Canon has this already patented) to handle different ISO 'choices' pre-selected by the shooter, which allows for multi-ISO selection according to multiple levels of light (a zone system of ISOs!!!) where no 'resetting delays' are introduced or involved.

one could then have a 'final image simulation' exposure chosen based on more than one ISO setting, each aimed at a different light level seen by the sensor (natural-DR ExpSim LV)

if not multi-ISO capability, at least dual small/large-type pixel-pair sensors could be dedicated to handling both ends of the light extremes (bright vs dark) instantly, with no resets and no delays

fujifilm already has dual-type pixels but they only work at a single ISO level chosen by the shooter (and fujifilm doesn't have ExpSim LV at all anyway, making multi-ISO impractical)

sdyue

0 upvotes
tonywong
By tonywong (Feb 27, 2013)

If the reset delay is a known value, you can calculate the amount lost during the reset from the fill rate before (and after) it, and approximate the 'real' value with reasonable accuracy.
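
A small sketch of that correction, assuming the reset dead time is known; all numbers are illustrative, not measured sensor characteristics.

# Estimate the charge missed while the pixel was blind during its resets,
# using the observed fill rate. Illustrative values only.
exposure_ms   = 10.0
reset_dead_ms = 0.01          # assumed dead time per reset
full_well_e   = 1000
resets        = 3
residual_e    = 420           # charge left in the well at readout

measured_e  = resets * full_well_e + residual_e
live_ms     = exposure_ms - resets * reset_dead_ms
fill_rate   = measured_e / live_ms                       # e- per ms while collecting
corrected_e = measured_e + fill_rate * resets * reset_dead_ms

print(f"measured {measured_e} e-, corrected {corrected_e:.1f} e-")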

0 upvotes
TudorJenkins
By TudorJenkins (Feb 27, 2013)

That assumes a still image. Quite often one wants to capture motion effects and this approach will not allow that

0 upvotes
tonywong
By tonywong (Feb 28, 2013)

Even for motion effects it would work as long as the refresh rate was high enough.

0 upvotes
Sdaniella
By Sdaniella (Feb 27, 2013)

this is exactly what i've been talking about...!!!

dual-type-pixels (binning), but using a smaller pixel for brighter light and larger pixel for lower light

meaning a 23Mp image is made from 46Mp sensor with 'dual-(small-big)-pixel-pairs'

sdyue

0 upvotes
JRFlorendo
By JRFlorendo (Feb 27, 2013)

If that's the case, Rambus just infringed on the Fuji S5 Pro's SuperCCD technology. I'm not sure though; it seems like the same scheme.

1 upvote
Peter G
By Peter G (Feb 27, 2013)

No, that isn't what this is. You are describing the Fuji dual-sensor strategy they have been using for years.

This is not two sensors. It is one sensor, and simply a digital bit of storage to tell when the sensor "rolled over".

0 upvotes
MarkInSF
By MarkInSF (Feb 27, 2013)

Huh? No, it doesn't. Several companies have worked on your idea. What Rambus is proposing is also not a new idea, but they have some specific ideas on implementing it. Their pixels empty when they fill up, record that info, then start filling again. Each pixel works alone.

0 upvotes
chrisnfolsom
By chrisnfolsom (Feb 28, 2013)

You don't have to bin - since the problem is sensitivity and saturation you could have staggered triggers - preset the pixels with triggers of various amounts - say 4 bits - so that when you expose you could possibly stop saturation of certain pixels by having them expose later in the cycle than others. You get into lots of trouble, but at least with one exposure you could somewhat easily have more valid information for each pixel.

0 upvotes
Peiasdf
By Peiasdf (Feb 27, 2013)

Sadly the company is Rambus, so this technology will likely be very expensive and replaced by something better and cheaper 3 months after release, but Rambus will spend the next 5 years suing everyone to prevent the adoption of the cheaper technology.

15 upvotes
D1N0
By D1N0 (Feb 27, 2013)

Great, more bland pics.

2 upvotes
Timmbits
By Timmbits (Feb 28, 2013)

you can always tell the camera to put in more contrast or whatever you like in the settings... but the other way around isn't possible as we all know.

0 upvotes
D1N0
By D1N0 (Feb 28, 2013)

try shooting RAW

1 upvote
Myari
By Myari (Feb 28, 2013)

D1N0 is known to post idiotic comments like these. Yes, we are talking about RAW already, genius! The topic is sensor itself.

3 upvotes
D1N0
By D1N0 (Feb 28, 2013)

Myari you must be an idiot. Calling me an idiot is not a very wise thing to do you know.

0 upvotes
Funduro
By Funduro (Feb 27, 2013)

This new technology can/might be a game changer. Its "HDR" abilities will create some great-looking images in a PnS or smartphone.

0 upvotes
Charles C Lloyd
By Charles C Lloyd (Feb 27, 2013)

Good to see some actual innovation in the sensor arena. Far too much of digital photography hitherto has been in replacing film, but the opportunities in digital extend well beyond that. This is a great idea and I see no reason why they couldn't reset the pixels more than once -- just keep track of the number of resets and add enough bits to count the resets. Two bits gives you four resets, that should do it.

2 upvotes
bobbarber
By bobbarber (Feb 27, 2013)

Would more than one reset be necessary? I'm trying to think of a situation.

0 upvotes
peevee1
By peevee1 (Feb 27, 2013)

1 bit to count resets = 1 EV expansion of DR. 4 bits (16 resets) = 4EV. Then current 10EV P&S sensors will match FF.
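
Worked out explicitly in a small Python sketch (with the commenter's 10 EV baseline and an illustrative full well): an n-bit counter can record up to 2^n - 1 resets, i.e. 2^n full wells in total, which is roughly n extra stops.

import math

full_well = 1000
base_dr_ev = 10                                  # assumed P&S baseline from the comment

for bits in (1, 2, 4):
    max_signal = (2 ** bits) * full_well - 1     # (2^n - 1) full wells plus the residual
    extra_ev = math.log2((max_signal + 1) / full_well)
    print(f"{bits}-bit counter: ~+{extra_ev:.0f} EV -> ~{base_dr_ev + extra_ev:.0f} EV total")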

0 upvotes
LightBug
By LightBug (Feb 27, 2013)

Watch out camera industry, lawsuits are coming!

4 upvotes
Sdaniella
By Sdaniella (Feb 27, 2013)

actually the idea of pixels handling more than one level of light (or more than one sensitivity) has been around a while, and has certainly already been patented by camera manufacturers

even pixel-level control of exposures via multi-ISO capability, too (several years ago)

so their patent may simply be a variational workaround to avoid other patent conflicts. if not, they're going to get lawsuits too...

no one is going to use an idea if it isn't practical for mfrs

sdyue

0 upvotes
JordanAT
By JordanAT (Feb 27, 2013)

I'm not sure this is so fabulous for most "small sensor" cameras, except that the bulk of the sensors produced are of that type. I generally don't have a driving need for expanding the bright side of the dynamic range in 1/2.3 (or smaller) cameras - what I really need is more sensitivity in dark scenes.

While you could claim that this allows getting more light to the sensor without blowing highlights...yes - sort of. Most exposures are limited by absolute duration (shake/subject movement) and not the fear of clipping on highlights.

1 upvote
Timmbits
By Timmbits (Feb 28, 2013)

good point.
we get blown highlights as a result of increasing sensitivity to better distinguish the darker hues.
another approach may be to implement variable sensitivity across the sensor for a given image - this could achieve the same result, and could be implemented in firmware/software today.

0 upvotes
chrisnfolsom
By chrisnfolsom (Feb 28, 2013)

We have the BSI process - is there much more room to increase the sensitivity of silicon? I saw a demonstration of a "nano" (way overused term) sensor that was supposed to be much more sensitive, but have not seen anything lately... Other than using a lens to gather more light, it seems like we have hit a limit.

0 upvotes
kshorter
By kshorter (Feb 28, 2013)

Expanding the highlights equals expanding the shadows. It's really about how many tones fit into your total dynamic range. Add more range at the top and then re-bias the whole scale so the middle of your dynamic range is a mid tone, and then you have more range on both ends. Then, because you've effectively pulled the dark regions up, it comes down to how good your signal-to-noise is.

0 upvotes
lost_in_utah
By lost_in_utah (Feb 27, 2013)

More like US Patent Troll.

4 upvotes
Sdaniella
By Sdaniella (Feb 27, 2013)

agree

but that's the nature of patents... that is, every variation is 'legit'...

sdyue

0 upvotes
forpetessake
By forpetessake (Feb 27, 2013)

"but that's the nature of patents..."
Not at all, the idea of patents was to prevent unfair competition by stealing somebody's ideas. Patent trolls are basically in a business of planting minefields hoping somebody steps on them. The former is constructive the latter is destructive.

3 upvotes
Timmbits
By Timmbits (Feb 28, 2013)

@SteveDYue, re: "every variation is legit"
please read up on patent law, more specifically the doctrine of equivalents.
someone who can afford to sue will quickly blow you out of the water if you commercialize a work-around.

0 upvotes
AbrasiveReducer
By AbrasiveReducer (Feb 27, 2013)

If this can hold the highlights without flattening everything (HDR Smokevision effect) it would be great and a shame to waste on cellphones.

2 upvotes
Timmbits
By Timmbits (Feb 28, 2013)

as has been mentioned it can be used in any size sensor

0 upvotes
chrisnfolsom
By chrisnfolsom (Feb 28, 2013)

It's amazing what drives innovation, but you can see the "reverse" usage of products into high-performance products - portable chips used in servers, 2.5" hard drives being used in servers - if it were not for the explosion of portable computers those devices would not have been developed. I hope that pushing small-sensor "quality" will give great rewards to all of us larger-format users - I am just annoyed at not being able to watch my children's shows, or look at the Grand Canyon, because a sea of iPhones, iPads and such are being held up in my way by people ruining my moment to get a crappy copy of their moment... really, an iPad/iPhone at 50 ft? What in the hell can you see?

0 upvotes
plasnu
By plasnu (Feb 27, 2013)

Is this something like floating point?

1 upvote
forpetessake
By forpetessake (Feb 27, 2013)

This is essentially the same idea as multiple exposures using an electronic shutter. For example, if you take 4 normal exposures and merge them into a single image, you get 2 times better SNR (and dynamic range), effectively pushing ISO 4 times lower. You can do it today with cameras like the Sony NEX, except the shutter is not electronic, it's mechanical, so there is a problem with moving subjects.
On the subject of dynamic range: displays and prints have a far more limited dynamic range than modern sensors. In order to display higher dynamic range you need to compress it, and the more you compress, the less natural the image looks. Until displays with much better dynamic range are built, increasing the dynamic range of the sensor has little advantage.
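
A quick numeric check of the 4-exposure claim above, with an arbitrary per-frame photon count and shot noise only:

import numpy as np

# Summing four exposures quadruples the signal while shot noise only doubles,
# so the SNR improves by about 2x.
rng = np.random.default_rng(1)
photons_per_frame = 400
trials = 100_000

single = rng.poisson(photons_per_frame, trials)
merged = rng.poisson(photons_per_frame, (4, trials)).sum(axis=0)

for name, x in (("1 exposure ", single), ("4 exposures", merged)):
    print(f"{name}: mean {x.mean():7.1f}  std {x.std():5.1f}  SNR {x.mean() / x.std():4.1f}")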

2 upvotes
Peter G
By Peter G (Feb 27, 2013)

No, it isn't like that at all.

It will just take one normal-length exposure. Only the pixels that would formerly have been blown out will capture additional info, but it will still be during the regular exposure time.

2 upvotes
Sdaniella
By Sdaniella (Feb 27, 2013)

taking multiple exposures defeats the whole point

key is doing it in a single take

but for me, sensors capable of pre-setting multiple ISOs for different pixels according to light levels make more sense

sdyue

1 upvote
forpetessake
By forpetessake (Feb 27, 2013)

'It will just take one normal length exposure.' -- it is and it isn't. It's done by reducing ISO of the sensor (compared to traditional implementation), after that it's normal, but exactly the same thing happens with multiple exposures, except all pixels are reset, not just those that otherwise would be saturated.

'key is doing it in a single take' -- it's de facto an electronic shutter; what is one take? and who cares?

1 upvote
Revenant
By Revenant (Feb 27, 2013)

At the pixel level it really is multiple exposures, the resetting of the pixel acting as an electronic shutter. But at the sensor level it's just one continuous exposure, since not all pixels are reset at the same time.

0 upvotes
chrisnfolsom
By chrisnfolsom (Feb 28, 2013)

I always thought using a shutter like the TI DLP chips to limit the light to certain pixels would be great, although I think there are better electronic ways to do it, and you STILL have the issue of needing MORE sensitivity to either create shorter exposures or increase sensitivity in low light. You can't "make" light, so until you can sense it better you are just juggling the same problems around.

0 upvotes
Peter G
By Peter G (Feb 27, 2013)

Essentially a counter and reset for the pixel bucket.

Since they call it binary, I will assume that for now, the counter is essentially just one bit.

You can probably do this with just a few transistors, that will trip automatically when the pixel bucket hits full. It sets one bit cleans the bucket, and start collecting again.

One of those obvious in hindsight ideas that should really work out well.

I am just sad that patent troll Rambus thought of it first.

4 upvotes
Sdaniella
By Sdaniella (Feb 27, 2013)

actually, they've only thought of a variation of it, not exactly the first.

there are other ways to do this, before they even thought of it 'first'.

and nothing wrong with 'patent variation' as this is the way patents work

sdyue

0 upvotes
Roland Karlsson
By Roland Karlsson (Feb 27, 2013)

I see no technical explanation of how it works. Anyone know? Or have a pointer?

0 upvotes
Eric Fossum
By Eric Fossum (Feb 27, 2013)

Vogelsang, T.; Stork, D.G.; , "High-dynamic-range binary pixel processing using non-destructive reads and variable oversampling and thresholds," Sensors, 2012 IEEE , vol., no., pp.1-4, 28-31 Oct. 2012
which builds on work by Vetterli et al at EPFL (gigavision camera), and which, to some degree, is related to my work.

1 upvote
Roland Karlsson
By Roland Karlsson (Feb 28, 2013)

Thanx

0 upvotes
Karroly
By Karroly (Feb 27, 2013)

I may be wrong, but I think the idea behind this can be explained as follows:
A photosite can be seen as a bucket that is filled up with electrons when exposed to light. Overexposure occurs when the bucket overflows.
But filling the bucket is not instantaneous. It looks to me like Rambus brings a new technology that makes it possible to monitor the bucket level. It is then possible to empty ("reset") the bucket (and memorize that it was filled up once, and maybe more than once) and restart filling it until the shutter closes.
The final electrical level corresponding to the total amount of light received by the photosite is then the sum of as many full buckets as necessary plus the last, partially filled bucket.
Highly sensitive photosites are quickly saturated. But with this new technology, saturation is no longer a problem.
So the advantages are in both low-light capability and dynamic range.

1 upvote
Clear as Crystal
By Clear as Crystal (Feb 27, 2013)

Yes, that's a good analogy for how a pixel works. One thing that has just struck me, though, is that twice the number of possible charge levels would mean twice the information that needs to be stored, meaning twice the file size.

0 upvotes
Peter G
By Peter G (Feb 27, 2013)

Twice the charge levels is actually only one more bit of storage per pixel, and since file bit depths are often already deeper than the actual dynamic range, no real file format change is needed.

But files will likely be a little bit less compressible because they will contain a bit more data.

1 upvote
Clear as Crystal
By Clear as Crystal (Feb 27, 2013)

Well spotted, I stand corrected.

0 upvotes
Clear as Crystal
By Clear as Crystal (Feb 27, 2013)

Sounds like a great idea. The only problem I can see is if the time to reset the pixel is significant compared to the exposure time. In that case the pixel wouldn't gain any extra charge during the reset, which would leave a plateau in the signal before it increases again, giving a lower value than it really should.
Nothing says it needs to stop at one reset either. If this works consistently it could be a really impressive next step for sensors.

3 upvotes
Peter G
By Peter G (Feb 27, 2013)

Reset time is an issue, but I suspect it isn't significant. You could also apply a small correction factor to help with that anyway.

You could do it more than once, but it increases the circuit complexity per pixel for what are likely quickly diminishing returns, except in extreme HDR photography.

0 upvotes
Clear as Crystal
By Clear as Crystal (Feb 27, 2013)

The way I see it, resetting the pixel would work well as long as it has the chance to reset and start gathering more electrons. As you say, in that case you just add a correction factor; the problem would be if the reset time were significant and the pixel was reset but hadn't started the new collection yet. In that case you can't add a correction, since you don't know how much to add.

Perhaps an alternative is to record the time needed to reach full, then use that to give an estimated value for the full exposure time. Rambus, if you're listening, feel free to make me an offer for using that idea :)
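
A tiny sketch of that time-to-fill idea, assuming the light level stays constant over the exposure; the values are illustrative, not from any real sensor.

# Record when a bright pixel first fills, then extrapolate to the whole exposure.
exposure_ms     = 10.0
full_well_e     = 1000
time_to_fill_ms = 0.8        # this pixel saturated 0.8 ms into the exposure

estimated_e = full_well_e * (exposure_ms / time_to_fill_ms)
print(f"estimated unclipped signal: ~{estimated_e:.0f} e-")
# Works only if the scene doesn't change during the exposure, which is the
# weakness of any purely time-based reconstruction.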

1 upvote
Sdaniella
By Sdaniella (Feb 27, 2013)

actually their patent relies on a single type of pixel rather than dedicated two-type pixel pairs

any resetting is inefficient compared to a 'single setting the first time' that depends on light levels, as with dedicated dual-pair-type pixels; that means a new dual-(small-large)-pair-type pixel sensor is required instead (which is no more difficult to manufacture than a single pixel that needs to 'reset')

in the dual-pair-type pixels i'm thinking of:
when exposure starts, brightly lit areas are instantly handled by the smaller pixels and poorly lit areas are instantly handled by the larger pixels, so there is no need for any 'resetting' over time, as both happen at the beginning of the exposure without delay

if anyone is thinking of this, Canon is most likely already doing it, but deciding whether to fully release it or not (they've very likely been testing it for a while, as may others, like Fujifilm)

sdyue

0 upvotes
MarkInSF
By MarkInSF (Feb 27, 2013)

Two different kinds of pixels introduce spatial problems. A single pixel is better. Whether it empties and refills or just records the time to fill does have major effects on longer exposures where there may be movement or changes in light during the exposure. Think of flash, and the decisions already made about whether to fire the flash at the beginning or end of the exposure. Neither would work right if all you record is the time for a pixel to fill.

0 upvotes
dengx
By dengx (Feb 28, 2013)

Steve - Fujifilm has had this going for years.
Earlier in the SuperCCD SR sensors and now in the EXR sensors.

They no longer produce large-sensor cameras with such sensors, only the compacts, for various reasons though.

Two types of SuperCCD SR:
http://www.dpreview.com/news/2003/1/21/fujisuperccdsr
http://www.dpreview.com/reviews/fujifilms5pro/

EXR explained:
http://www.dpreview.com/reviews/fujifilmf200exr/2

What Rambus proposes is just a variation that uses only one pixel and resets its state.
I imagine that it will have problems different from those of the Fujifilm approach.

0 upvotes
samhain
By samhain (Feb 27, 2013)

...Are they publicly traded?

0 upvotes
Mescalamba
By Mescalamba (Feb 27, 2013)

It's quite interesting, but given Rambus's history of "patents" (they are great patent trolls) I wouldn't put much faith in it.

Of course, in theory it should work..

5 upvotes
shaocaholica
By shaocaholica (Feb 27, 2013)

Resetting a photosite mid-exposure is still 2 exposures.

0 upvotes
Rachotilko
By Rachotilko (Feb 27, 2013)

But not all of them at the same time! That's the trick...

- in a conventional sensor, there is one readout time

- in Fuji EXR, there are two different readouts for two groups of the sensor's pixels

- in this Rambus tech, each pixel is reset when it needs it. A much more efficient use of the sensor area than the Fuji approach

2 upvotes
Sdaniella
By Sdaniella (Feb 27, 2013)

agreed, and the hardware to do the resetting (which introduces delay) is just as complicated as having two different pixels (fujifilm)

the hardware for 'one resettable pixel' is like having two different pixels anyway, but introduces resetting-delay inefficiencies

sdyue

1 upvote
MarkInSF
By MarkInSF (Feb 27, 2013)

More complex, but far more useful. A pixel could reset precisely when it needs to. Possibly several times, if you let it.

0 upvotes
Tazz93
By Tazz93 (Feb 27, 2013)

So it sounds like they finally found a way to do "native" HDR blending/variable pixel exposure on the sensor. I've often wondered what kinds of challenges there were to that; hopefully someone will explain.

0 upvotes
OniMirage
By OniMirage (Feb 27, 2013)

Wow this is going to be fantastic and I hope all sensors go this route.

0 upvotes