Why don't camera reviewers measure the "sensor"?

saltydogstudios

It's always weird to me when camera reviewers say "we can't report on the sensor because Adobe hasn't included this camera yet" and then say "but it's the same sensor that's in x camera".

But all they end up measuring is dynamic range and high ISO noise performance.

Specifically, they don't measure how a frequency of light gets translated into the RAW file.

I know the equipment needed to do this well is expensive - a light source that can transmit a single frequency of light so you can measure the response at the sensel level (each red or green or blue pixel on the sensor).

But there are other ways - a diffraction grating can create a rainbow, or colored filters with known characteristics that only pass certain frequencies of light; Wratten "color separation" filters in particular.

I know the prevailing consensus is that RAW files are relatively unbiased, that camera mismatches at the CFA -> RAW conversion process can be reduced via post processing, and that indeed Adobe and other RAW conversion software attempt to do this by measuring each camera and creating various color profiles.

But I would be genuinely interested in seeing how each camera handles color separation on the CFA -> RAW level.

My limited understanding here is that - say - Nikon and Leica cameras allow the "red" sensels to pick up "blue" frequency light, and Canon cameras have cleaner separation of red and blue. I'm sure each camera manufacturer has a reason for this - taking the light that exists in the world and turning into an RGB image (JPG usually) that we can view is a complex process.

And that camera manufacturers purposefully weaken the CFA because it'll give a better high ISO noise performance - more photons getting through the CFA means less noise - but at the expense (to some extent) of color separation.
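
To put rough numbers on that tradeoff, here's a back-of-the-envelope sketch - my own illustration with made-up transmission figures, assuming photon shot noise dominates:

```python
# Toy illustration (invented numbers, not measurements): with shot noise,
# SNR scales as the square root of the photons that make it past the CFA dye.
import math

photons_at_filter = 10_000                 # photons hitting one sensel, made up
for transmission in (0.4, 0.6, 0.8):       # hypothetical CFA transmission factors
    n = photons_at_filter * transmission
    print(f"transmission {transmission:.0%}: SNR ~ {math.sqrt(n):.0f}")
# A "weaker" (more transmissive) filter buys SNR, but its wider passband
# overlaps more with the neighboring channels, costing color separation.
```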

And camera manufacturers "cook" the RAW files - either to achieve the colors they want or to reduce variation between batches, since creating the CFA is a chemical process and there is some potential for batch-to-batch variation.

I know in the real world - very few people care about this, and that there isn't much we can do as the end user with this information, but exploring this could lead more people to care about it, and we can start drawing correlations between pigments in the CFA and the resulting images we get. Again - even if the prevailing wisdom is that this particular thing doesn't matter much.

Maybe with the rising popularity of monochrome-only cameras we can open up the discussion of how the CFA -> RAW process happens. Until we get these measurements, perhaps the whole idea is a bit theoretical, but I suspect that if a thorough review of cameras were made, we'd find real-world implications - for, say, ETTR exposure, where we try to maximize the amount of information we gather at the sensor level - and real differences between cameras in the philosophy of color that each manufacturer imbues at the sensor level.

I'm sure I'll get a lot of responses that these differences are minimal and that anyone who shoots RAW can get nearly identical results from different cameras with "proper" post processing, but that's the exact sentiment I suspect could be unwound with lots of testing and measurement.

If anyone does know of someone who does this sort of measurement, I'd love to hear about it.
 
<snip for brevity>
Thanks for the well thought out questions!

Sorry that I don't have answers, but I did want to jump in to express appreciation for opening this topic. I do believe you are right that the design choices manufacturers make about how images get processed in their hardware pipeline after the sensor DO affect the RAW file they output, and more than is admitted or discussed.

I will be following with interest!
 
Does this site give you something related to what you're looking for:

https://www.photoreview.com.au/

Example review with colour accuracy information here:

https://www.photoreview.com.au/revi...rrorless-cameras-m43/om-1-test-results/#TESTS

(You may need to click on the TESTS tab just below the title "OM-1 Test Results").
This is close, and very much seems geared towards practical applications - since anyone serious about color has some sort of test card that they can use to calibrate cameras. They do convert the files to RGB images prior to the test - a process that will evolve with software updates and depend on the specific image processing pipeline.

Ideally, I'm looking for something along the lines of what I've done here


Basically creating pigments - reflective (lit from the front) or transmissive (lit from behind) - that are designed to produce a known spectrum of light. A color checker does this, but isn't designed for color separation - producing "red" with no "blue". If the pigment could pass only a narrow band of frequencies while blocking the rest, it could work.

A diffraction grating - a sort of prism that produces a rainbow - could also be used, as I believe those colors would be close to pure wavelengths of light. Then you could take the RAW file and ask: at x nanometers of wavelength, how strongly does each of the red, green, or blue sensels respond?
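
In code terms, what I have in mind is something like the sketch below - the file name, the column-to-wavelength calibration points, and the assumption that the rainbow runs horizontally across the frame are all mine:

```python
# Sketch of the single-image approach: map sensor columns to wavelengths,
# then average each CFA channel's sensels in coarse column bins.
import numpy as np
import rawpy  # pip install rawpy

raw = rawpy.imread("rainbow.NEF")        # hypothetical capture of the grating's spectrum
img = raw.raw_image_visible.astype(float) - np.mean(raw.black_level_per_channel)
cfa = raw.raw_colors_visible             # 0=R, 1=G, 2=B, 3=second green on a Bayer sensor

bin_w = 16                               # columns per bin, so every bin holds all CFA colors
n_bins = img.shape[1] // bin_w
centers = (np.arange(n_bins) + 0.5) * bin_w
# Hypothetical calibration: column 400 sits at 450nm, column 3600 at 650nm.
wavelength = np.interp(centers, [400, 3600], [450.0, 650.0])

for name, idx in (("R", 0), ("G", 1), ("B", 2)):
    resp = [img[:, b * bin_w:(b + 1) * bin_w][cfa[:, b * bin_w:(b + 1) * bin_w] == idx].mean()
            for b in range(n_bins)]
    print(f"{name} sensels peak near {wavelength[int(np.argmax(resp))]:.0f} nm")
```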

I tried to make this work for myself, but the resulting images were too variable to make meaningful comparisons - I'd likely have to 3d print some tools to get something that produced a consistent output, and I don't make money reviewing cameras so - I gave up after a few hours of playing.

A color checker or similar calibration tool could work - how much does the RAW file's "blue" get triggered by the "red" swatch - but since the red swatch is meant to contain some green and blue, you'd only be able to compare cameras against each other, not accurately measure the actual response curve of each pigment on the CFA across the spectrum.

I dunno - maybe the fact that the equipment to do this is so rarefied, and that nobody seems to do it, means nobody cares.
 
<snip for brevity>
Thanks for the well thought out questions!

Sorry that I don't have answers, but I did want to jump in to express appreciation for opening this topic. I do believe you are right that the design choices manufacturers make about how images get processed in their hardware pipeline after the sensor DO affect the RAW file they output, and more than is admitted or discussed.

I will be following with interest!
Thanks! This has been an obsession of mine for years. It seems to be the one mystery at the heart of digital photography that nobody wants to talk about.

Phase One is one of the few companies I've come across to address this directly with their Trichromatic sensor - which is far too expensive for me to get my hands on.

My suspicion is - Nikon purposefully allows the blue sensel to respond to red light (I got the text reversed in my post) to produce more even, pleasing skin tones - the blue-sensitive cones in the eye "see" some red-spectrum light. While Canon separates them more to produce more accurate sky colors or something similar. Honestly I'm not sure of anyone's motivations here other than to produce colors that they prefer - I'm not sure if anyone is prioritizing skies over skin or vice versa.

I suspect this is also why the Fuji X-Trans1 sensor is different from their X-Trans2, and that there's a strong color filter array in their GFX cameras - meaning little crossover between the frequencies the pigments transmit.

I remember maybe over a decade ago, there was an article where an Adobe engineer said that it was obvious certain camera manufacturers "cook" their RAW files - a small multiplication to the values coming off of the sensors that showed up as posterization in the resulting RAW file. E.g. you'd never see a red value of "101" only "100" or "102" because of the multiplication happening.

This led to a tiny drop in the amount of information (not all values are represented) but produced colors more characteristic to that camera manufacturer.
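
If anyone wants to hunt for this themselves, here's a rough sketch of the idea - the file name is made up, and the inference that evenly spaced histogram gaps imply a pre-scale is my own assumption:

```python
# Histogram the raw DNs from one channel and check whether some values never occur.
import numpy as np
import rawpy  # pip install rawpy

raw = rawpy.imread("test_shot.NEF")                          # hypothetical file name
red = raw.raw_image_visible[raw.raw_colors_visible == 0]     # red sensels only
hist = np.bincount(red.ravel(), minlength=2 ** 14)           # assume up to 14-bit DNs

lo, hi = np.percentile(red, [5, 95]).astype(int)             # stick to well-populated DNs
missing = [dn for dn in range(lo, hi) if hist[dn] == 0]
print(f"{len(missing)} DN values between {lo} and {hi} never occur")
# Plenty of pixels in this range plus evenly spaced "holes" in the histogram
# would be consistent with the multiplicative rescale described above.
```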



I guess the thing that frustrates me is - I don't know why any camera manufacturer makes their choices, and I don't know how the resulting image is affected, and I don't know why nobody talks about it when it could be a competitive advantage.

All I know is - I prefer Nikon for natural light portraits, Canon for studio, and Olympus for sunsets, and I can't tell if it's my own personal bias or if this is part of the camera manufacturer's intent.
 
Take a look at DxO, which measures the sensor directly, including at least some of the measurements you're looking for.

 
My suspicion is - Nikon purposefully allows the blue sensel to respond to red light (I got the text reversed in my post) to produce more even, pleasing skin tones - the blue-sensitive cones in the eye "see" some red-spectrum light.
My understanding is that color crosstalk is an inherent issue of the sensel, and that the Bayer filtration and microlens layers on top, and even BSI vs FSI can mitigate or exacerbate this.

Nikon's Bayer filtration has changed considerably over the years (moving towards less filtration) and across models (the D# models had a different RB balance than the D### and D#### models). But Nikon also does what is called white balance preconditioning, which is a modification of the R and B values prior to writing the DNs that are in the file.
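
(Loosely, and with invented gain values, the preconditioning amounts to something like this happening before the DNs reach the card:)

```python
# Illustration only - the per-channel gains are made up, not real Nikon values.
# The idea: R and B sensel values get a small gain applied before the DNs are written,
# so part of the white balance is already baked into the raw file.
R_GAIN, B_GAIN = 1.12, 1.04

def precondition(dn, gain, max_dn=16383):
    """Scale a sensel value and clip to a 14-bit ceiling, as written to the raw file."""
    return min(round(dn * gain), max_dn)

print(precondition(1000, R_GAIN))   # -> 1120: the file records this, not the original 1000
```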

Every camera maker has their own "secret sauce" in terms of how they manage color, and it is generally a layered approach (filtration, DN adjustment, spectral handling of the SoC, etc.). They all have different "targets" (and sometimes the targets move, as in Nikon's case). They generally won't talk about this except in generalities, because they all believe that they are doing the old Kodak/Fujifilm thing of perceptual differentiation (mostly just amplified marketing messages, though).

Moreover, raw converters come into play, too - both the demosaic and what you do with its output. I don't know if it's still true, but for a long time Adobe was doing two "known light, known source" chart tests with cameras, one in tungsten light, one in natural light, and making some sort of assumption that color response was linear both between and outside those lighting conditions (it might not be).
 
My suspicion is - Nikon purposefully allows the blue sensel to respond to red light (I got the text reversed in my post) to produce more even, pleasing skin tones - the blue-sensitive cones in the eye "see" some red-spectrum light.
My understanding is that color crosstalk is an inherent issue of the sensel, and that the Bayer filtration and microlens layers on top, and even BSI vs FSI can mitigate or exacerbate this.
I thought sensor design was getting better and better at this. Microlenses and well architecture direct more of the photons down the photo well, reducing crosstalk and photons getting lost on the sides of the well. I'm sure it's something that needs to be constantly refined as other parts of the sensor design change.
Nikon's Bayer filtration has changed considerably over the years (moving towards less filtration) and across models (the D# models had a different RB balance than the D### and D#### models). But Nikon also does what is called white balance preconditioning, which is a modification of the R and B values prior to writing the DNs that are in the file.
Okay I did a bunch of reading based on some of the terminology you used and the sense I'm getting is that there's a vast chasm of difference between shooting a color checker and actually taking sensor measurements, especially if we take white balance into account.




In essence - you can't isolate a part of the image processing pipeline and measure it discretely since the whole pipeline is necessary to create the resulting image and each step of the way assumptions are made about what the target is, and how to work with the underlying data.

Yet - I still wish more camera reviewers would try.

And then there are the "hue twists" where camera manufacturer (and likely Adobe/Capture One) color profiles shift hue along with luminosity which complicate things even more.

Source: http://sodium.nyc/blog/2019/12/camera-calibration-can-it-eliminate-differences-betweencameras (yes it's my own blog, the original source for the data is credited in the blog)

Every camera maker has their own "secret sauce" in terms of how they manage color, and it is generally a layered approach (filtration, DN adjustment, spectral handling of the SoC, etc.). They all have different "targets" (and sometimes the targets move, as in Nikon's case). They generally won't talk about this except in generalities, because they all believe that they are doing the old Kodak/Fujifilm thing of perceptual differentiation (mostly just amplified marketing messages, though).
Agreed. What I'm wondering is why this isn't talked about more in this & other photography forums. Maybe in the geeky corners of this forum, but people get so territorial about what they believe they know it tends to devolve into arguments.

What annoys me about these arguments is that they usually end up blaming the photographer - "if you can't get good results from the raw, it's because you lack skill, not because the camera makes it difficult for you to get the look you want".

What's SoC - I found definitions for most of the other acronyms you use but not that one.
Moreover, raw converters come into play, too - both the demosaic and what you do with its output. I don't know if it's still true, but for a long time Adobe was doing two "known light, known source" chart tests with cameras, one in tungsten light, one in natural light, and making some sort of assumption that color response was linear both between and outside those lighting conditions (it might not be).
Interesting. I thought Adobe put more work into it than that to create their "Adobe Standard" and "Adobe Color" profiles.

Are you sure you're not referring to the "dual illuminant" profiles you can create from Color Checkers? I used them quite a bit for a while - creating various profiles and flipping back and forth between them in Adobe Camera Raw until I found one that I liked as a starting point for editing.

Often I found that I preferred - say - Nikon's Neutral profile and would just start from the JPG, remove some blemishes and call it a day. Which - according to one of the links above - does a reasonably good job of not deviating too much from a color checker.

One of the things I disliked about the "if you calibrate your camera, the differences between manufacturers disappears" argument is that a color checker is just a few semi random dots in a huge color space, and it says nothing about how much the values in between were being pushed and pulled in order to achieve the end results. Are they linear or non-linear transformations? And do those linear or non-linear transformations actually make the colors in between "more accurate"?
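
To make concrete what I mean by "a few semi random dots": chart-based calibration usually boils down to a least-squares 3x3 matrix fit to the patch readings, roughly like the sketch below (all numbers are random stand-ins, not measurements from any camera).

```python
# Fit a 3x3 correction matrix from 24 patch readings - the fit is only constrained
# at those 24 points and says nothing about how colors in between get pushed around.
import numpy as np

rng = np.random.default_rng(0)
target_rgb = rng.random((24, 3))                       # "true" chart values (stand-ins)
crosstalk = np.array([[0.90, 0.10, 0.00],
                      [0.05, 0.85, 0.10],
                      [0.00, 0.15, 0.85]])             # a made-up camera response
camera_rgb = target_rgb @ crosstalk + 0.01 * rng.standard_normal((24, 3))

M, *_ = np.linalg.lstsq(camera_rgb, target_rgb, rcond=None)   # least-squares 3x3 matrix
worst = np.abs(camera_rgb @ M - target_rgb).max()
print(f"worst patch error after calibration: {worst:.4f}")
# A single linear matrix can't bend the in-between colors independently of the patches;
# that takes a LUT-style profile, which is also where "hue twists" come in.
```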

For example, I noticed that between the Adobe, Camera, and Color Checker profiles, the white balance shifts - which theoretically shouldn't happen, but makes sense with a linearly scaled color calibration: move enough colors and the center shifts.

Not to mention how much the CFA and "cooked" RAW files respond to these transformations. If the crossover between green and red / green and blue has shifted due to stronger or weaker CFAs, how does that affect the scaling of data between points on a colorchecker?

So tl;dr - what I'm proposing is somewhat simplistic, and there's not a ton of interest except for folks who've crossed over the vast chasm of knowledge, but by the time you get to that point the simplistic measurements I'm referring to are too simplistic.

--
"no one should have a camera that can't play Candy Crush Saga."
Ye olde instagram: https://www.instagram.com/sodiumstudio/ (will probably still be around after April 10th)
 
I thought sensor design was getting better and better at this.
Yes, over time it has. The original DSLR image sensors needed very telecentric light hitting the sensor surface. Then they added a microlens layer, which can do some redirection. Then we switched to BSI, which means the light is captured at the "top" of the sensor, not the bottom, deep inside a tunnel. But BSI also added crosstalk spillage, so some sensors now have "posts" that stick up beyond the top of the BSI layer. In smartphones especially, we now have trenches between photosites so that electrons can't simply migrate to the adjacent cell. The list goes on...
Okay I did a bunch of reading based on some of the terminology you used and the sense I'm getting is that there's a vast chasm of difference between shooting a color checker and actually taking sensor measurements, especially if we take white balance into account.
That's correct. This is why folk like Iliah Borg and I have been asking for spectral response information from the camera makers for over 20 years now, and why others spend the time and money to do at least basic spectral analysis themselves. Then you also have the issue with Nikon of white balance pre-conditioning, where there's an electronic adjustment to R and B values prior to writing the DNs into the raw file.
In essence - you can't isolate a part of the image processing pipeline and measure it discretely since the whole pipeline is necessary to create the resulting image and each step of the way assumptions are made about what the target is, and how to work with the underlying data.
I bolded one of your words. We can simplify a bit by taking the imaging ASIC out of the equation and just using the raw data. I know that Iliah and Alex spend a lot of time analyzing this, and good reviewers will spend some time using their RawDigger software to at least get an impression of what's happening in the DNs the camera created.
Yet - I still wish more camera reviewers would try.
Costly, time consuming, few would benefit from the results. Those that would benefit will tend to do their own analysis using the tools that are available.
And then there are the "hue twists" where camera manufacturer (and likely Adobe/Capture One) color profiles shift hue along with luminosity which complicate things even more.
Hue twists date back to film. Technically, I don't know of a film stock that was "neutral" when it came to color. At Backpacker, we standardized on Fujifilm Provia because it had the fewest problems that the Rodale color team had to try to deal with. Some films, Kodachrome and Velvia come to mind, had big swings of color information.

The problem when running an organization that needs "good color" across a lot of different images is that when the image sources all come from different places, using different film/digital, you can't run an image from X next to Y without doing something about the differences. Nat Geo, SI, and Backpacker (under my management) were absolutely anal about this. Others, not so much. The most anal tended to be the old mail-order catalog firms, because if the color in the catalog didn't match exactly what you received in your order, the number of returns went up, sometimes way up.
Agreed. What I'm wondering is why this isn't talked about more in this & other photography forums. Maybe in the geeky corners of this forum, but people get so territorial about what they believe they know it tends to devolve into arguments.
Before the "closure" of dpreview (;~), the Photographic Science and Technology forum was the place to discuss this stuff.
What's SoC
System on a Chip. An older acronym would be ASIC (Application Specific Integrated Circuit). The change reflects that the latest "imaging engines" in cameras are really full systems, much like the smartphones and Apple Silicon now use. EXPEED7 includes four ARM cores, GPU cores, memory, I/O circuitry, and dedicated IP engines (e.g. intoPIX's technology).
Interesting. I thought Adobe put more work into it than that to create their "Adobe Standard" and "Adobe Color" profiles.
Are you sure you're not referring to the "dual illuminant" profiles you can create from Color Checkers?
Same thing. In one case the converter company is doing it for you, in the other case you have to do it ;~).

There's more to converter profiles than just the color data curves that are applied from the source data. Converters do all kinds of things behind the scenes that you don't know about, or don't have details of. The demosaic engine intersects with the color data intersects with the underlying color model you're using intersects with the Color Space intersects with...well, you get the idea.
Often I found that I preferred - say - Nikon's Neutral profile
Nikon's Neutral Picture Control is indeed much what its name suggests. The problem is that the "average viewer" prefers more contrast and saturation, and then you might want to flip colors a bit to account for the prevalence of color blindness.
One of the things I disliked about the "if you calibrate your camera, the differences between manufacturers disappears" argument is that a color checker is just a few semi random dots
Well, there are different color checking charts out there. The problem is that the more data points you want to use in your "model," the more complex (and slower) the math becomes. Do you want to be Six Sigma in color?
in a huge color space, and it says nothing about how much the values in between were being pushed and pulled in order to achieve the end results. Are they linear or non-linear transformations?
This gets to the color model that the raw converter uses, which like the spectral characteristics of the sensor, no one discloses.
 
<snip for brevity>
A serious optical measurement lab should be fully remote operated in a room filled with an inert gas to 1 atmosphere of pressure.

You're saying someone should measure this and decide the fate of whole corporations...

Unless this reviewer owns such a lab, they really can't be trusted to make those kinds of pronouncements.
 
You make a lot of assumptions in that long post, many either wrong or questionable. You seem to be under the wrong impression that the CFA must somehow separate the three colors, whatever that means. That is not what it does. Frequency overlapping is part of the game; your eyes are doing it right now. Then you say that with the right post-processing, all CFAs will produce the same results. Well, they do not. Either decades of research have failed to do what should have been easy, or it is not true.
 
You just haven't found the right things to read yet...

Measuring the sensor isn't such a hidden thing, it's done all the time with the right equipment. What you're after is the "spectral sensitivity function" data, a list of measurements in each of the three so-called "channels" for the wavelengths from about 380nm to 730nm, the human visual range.

For a given camera, the standard way to do it is to collect a series of raw files exposed to narrow-band light at a sequence of wavelengths, usually spaced 5 or 10nm apart. The instrument delivering the light is called a monochromator, which uses either a prism or a diffraction grating to split a broadband light source, passing the resulting rainbow through a slit to pick out the selected wavelength. Expensive they are, well north of $1000US even from surplus sources. You also need a spectrometer to measure the light energy to correct the inevitable bias, and a means to get the light to the sensor, usually something like an integrating sphere or a fiber optic jig. Ideally, the light is presented directly to the sensor without the influence of a lens.
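
In rough code, reducing such a sweep to SSF curves looks something like this - the file names, the step spacing, and the normalization against the spectrometer reading are assumptions for illustration:

```python
# Sketch: one raw file per wavelength step, each channel's mean raw value
# normalized by the source energy the spectrometer reported at that step.
import numpy as np
import rawpy  # pip install rawpy

wavelengths = list(range(400, 730, 10))
energy = np.loadtxt("spectrometer_energy.txt")   # hypothetical: one power reading per step

ssf = {"R": [], "G": [], "B": []}
for i, wl in enumerate(wavelengths):
    raw = rawpy.imread(f"sweep_{wl}nm.NEF")      # hypothetical file naming scheme
    img = raw.raw_image_visible.astype(float) - np.mean(raw.black_level_per_channel)
    cfa = raw.raw_colors_visible
    for name, idx in (("R", 0), ("G", 1), ("B", 2)):
        ssf[name].append(img[cfa == idx].mean() / energy[i])

peak = max(max(curve) for curve in ssf.values())
for name, curve in ssf.items():
    ssf[name] = [round(v / peak, 3) for v in curve]   # strongest channel peaks at 1.0
print(ssf)
```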

I wanted to measure my three cameras, but didn't want to construct an optical lab to do it. After doing a lot of research on various projects, I settled on constructing a spectroscope device using a tungsten-halogen lamp, a diffraction grating, and a collimating slit (cardboard, really sophisticated...) all mounted in a wooden box. With that, take a single image of the rainbow and separate the frequencies with some software I wrote. You can read about it here:

https://discuss.pixls.us/t/the-quest-for-good-color-4-the-diffraction-grating-shootout/19984

It's a four-part series, and this link is to the last installment, which has links to the previous three and also contains the specifics of the box I built.

Now, to your specific question, there is a tool that does "color separation" evaluation. It's called dcamprof, the command line precursor to the Lumariver camera profile software. It does a lot of camera profiling things, but the operation of interest here is 'test-profile' which has a mode that uses a camera's SSF data to generate a heat map of the camera's separation performance. Read about it here:

http://rawtherapee.com/mirror/dcamprof/dcamprof.html#ssf_csep

dcamprof will also make camera profiles from spectral sensitivity data, where such a profile can incorporate a LUT for the color transform. Such a profile handles the transform of extreme colors to the rendition gamuts far better than a matrix profile. You can also make profiles for different color temperatures from that single dataset - no pesky target shooting. And you can use specialized training data to make the profile more responsive in particular colors, e.g., skin tones, using something like the Lippman 2000 skin tone dataset for training the LUT.

You had to ask... :D
 
Specifically, they don't measure how a frequency of light gets translated into the RAW file.

I know the equipment needed to do this well is expensive
It's not. Less than the price of a high-end FF camera.

An answer to your question would be the same as the answer to this one: why even the simple studio test shots are botched.

--
http://www.libraw.org/
 
One of the things I disliked about the "if you calibrate your camera, the differences between manufacturers disappears" argument is that a color checker is just a few semi random dots in a huge color space
Camera calibration isn't based on profiles. Camera calibration isn't based on a ColorChecker. I don't know of any camera-making company that calibrates cameras using any sort of colour target. It's much easier, too, to ensure stability while measuring spectral response directly than to ensure stability in a studio setting - there are so many variables to watch for in a studio.
 
<snip for brevity>
I know the equipment needed to do this well is expensive - a light source that can transmit a single frequency of light so you can measure the response at the sensel level (each red or green or blue pixel on the sensor).
What they need is a

[meme image]

My limited understanding here is that - say - Nikon and Leica cameras allow the "red" sensels to pick up "blue" frequency light,
Same as human eyes, I believe.
and Canon cameras have cleaner separation of red and blue. I'm sure each camera manufacturer has a reason for this - taking the light that exists in the world and turning into an RGB image (JPG usually) that we can view is a complex process.
Sorry about the meme. Couldn't help it 😀
 