Skin colour in 24MP DX cameras

Started Jun 14, 2013 | Discussions thread
(unknown member) Contributing Member • Posts: 650
Good Question! Among many.

mistermejia wrote:

windsprite wrote:

FTH wrote:

With CMOS, I need to first isolate every different NEF series depending on the lighting conditions, make a preset for each, and batch process before fine-tuning the images. CCD has its issues too, but if Nikon were using modern CCD sensors, they would offer something unique for wedding and portrait photographers. A 24MP full-frame CCD (with video mode and no rolling-shutter issue) would be awesome, but this will never happen.

According to this poster, any differences in color rendering are not due to CCD vs. CMOS. A short quote:

There's no such thing as "CMOS color" or "CCD color". CMOS and CCD both use silicon photodiodes, and they have identical monochrome responses. The "color" comes from the organic filters that are screened onto the naturally monochrome chip, and those are the same regardless of whether the chip is CCD or CMOS.

There are differences in the organic filters used by different manufacturers, but those different filters could be applied just as easily to either a CCD or CMOS sensor.
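To make the quoted point concrete, here's a toy sketch (my own illustration, not anything from Nikon): a pixel's recorded "color" is just the scene spectrum weighted by the CFA filter's transmission and the silicon's monochrome response. The readout technology never enters that product. All the spectral curves below are made-up illustrative numbers, not real sensor data.

```python
import numpy as np

# Toy model: recorded color = sum over wavelength of
#   (scene spectrum) x (CFA filter transmission) x (silicon response).
# CCD vs. CMOS appears nowhere in this equation -- only the filter
# and the (shared) silicon photodiode response do.

wavelengths = np.linspace(400, 700, 31)  # visible band, nm

def gaussian(center, width):
    # simple bell curve standing in for a spectral response
    return np.exp(-((wavelengths - center) ** 2) / (2 * width ** 2))

# Broad monochrome silicon response -- identical for CCD and CMOS chips.
silicon_response = gaussian(600, 150)

# Hypothetical organic CFA passbands (illustrative centers/widths).
cfa = {
    "red": gaussian(610, 40),
    "green": gaussian(540, 40),
    "blue": gaussian(465, 40),
}

def pixel_response(scene_spectrum, channel):
    """Charge collected under one CFA channel, independent of readout type."""
    return float((scene_spectrum * cfa[channel] * silicon_response).sum())

# A warm, broad stand-in for a "skin tone" spectrum (purely illustrative).
skin_like = gaussian(590, 120)
rgb = {ch: pixel_response(skin_like, ch) for ch in cfa}
print(rgb)  # same numbers whether the chip reads out as CCD or CMOS
```

Swap in a different `cfa` dictionary and the rendered color shifts; swap the readout scheme and nothing changes, which is exactly the quoted poster's argument.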


So if it is not the sensor itself, could it be so difficult or expensive for Nikon to put in a different filter to improve this issue, since these filters can "easily" be applied to any CMOS sensor?

Has Nikon (or Toshiba) already provided a different filter on the D7100 for better-looking tannish skin? Apparently the Nikon V1's skin tones look better than the D7000's. Why is that?

We need to talk with a Nikon engineer!

As I understand it, "CCD" and "CMOS" actually refer not to the photo-sensing capacitor or photodiode of each pixel that actually translates received light into a charge state, but rather, to the underlying technology by which that charge state is converted to a voltage quantity that can be remembered or processed.

With a CCD sensor, after you've snapped your photograph and the sensor has been exposed, the collected charges for each pixel "shift" down rows and columns to a single amplification circuit for the row (or column), which converts them, one by one, into voltages that can be stored or processed. Each pixel gets "dumped" this way. ("CCD" = "Charge Coupled Device," which describes the "shift" of charges from pixel "bucket" to "bucket.")

By contrast, in a CMOS or "active pixel" sensor, a distinct amplification circuit is part of each pixel, coupled directly to the sensing photodiode or capacitor. So the voltage conversion happens per pixel, right at the pixel, and image data can be stored or processed right away, even on the same integrated circuit as the sensor itself. ("CMOS" = "Complementary Metal Oxide Semiconductor," which refers to the design style of the circuitry, in which pairs of transistors are balanced to perform logic and gating functions.)
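The two readout schemes above can be sketched in a few lines (my own toy model, with made-up gain values): the CCD path marches every charge through one shared amplifier in sequence, while the CMOS path converts each pixel through its own amplifier in parallel. In the ideal case, both recover exactly the same image.

```python
import numpy as np

# A tiny grid of collected charges standing in for an exposed sensor.
charges = np.arange(12, dtype=float).reshape(3, 4)

def ccd_readout(charges, gain=2.0):
    """CCD: charges shift bucket-to-bucket into a serial stream, and ONE
    shared output amplifier converts them, one at a time, to voltages."""
    serial_stream = []
    for row in charges:                          # vertical shift, one row at a time
        for charge in row:                       # horizontal shift through the register
            serial_stream.append(gain * charge)  # the single shared amplifier
    return np.array(serial_stream).reshape(charges.shape)

def cmos_readout(charges, gain=2.0):
    """CMOS/active pixel: each pixel has its OWN amplifier, so conversion
    happens in parallel with no charge shifting."""
    per_pixel_gain = np.full_like(charges, gain)  # ideally identical at every pixel
    return per_pixel_gain * charges               # one-shot, parallel conversion

print(np.allclose(ccd_readout(charges), cmos_readout(charges)))  # True
```

In practice the interesting differences (per-pixel gain variation in CMOS, smear and slow readout in CCD) live in the non-idealities this sketch leaves out, not in color rendering.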

Just looking at the raw architecture, it's not entirely clear (to me, anyway) why images from a CCD sensor would look different from those from a CMOS sensor. The CMOS advantage of being able to integrate photosensing, reading, storing, and processing functions on a single silicon chip makes plenty of sense from a production and design standpoint: smaller, lighter, faster, more efficient, etc.

So, Joe Nikon Engineer, if you're out there, we'd love to hear from you!
