"Color science" vs RAW

Why do you call yourself a cricket? Is it because you are quoting me ("just as you said earlier", right above), even without understanding what it is that I wrote?
Then why resort to personal attack? I've already given you the next challenge, grasshopper...

Overcome the IQ3 trichromatic CFA sensor with measurement against a non-trichromatic CFA and the battle opens anew.

Until then, sayonara!

fPrime

--
Half of my heart is a shotgun wedding to a bride with a paper ring,
And half of my heart is the part of a man who's never truly loved anything.
 
Why do you call yourself a cricket? Is it because you are quoting me ("just as you said earlier", right above), even without understanding what it is that I wrote?
Then why resort to personal attack?
Ha, your logic fails you at each step of the way.
I've already given you the next challenge

Overcome the IQ3 trichromatic CFA sensor with measurement against a non-trichromatic CFA
I don't see your own data to challenge it. Not to mention your language is moot to the point it doesn't make any sense.

You can up-vote your own posts all you want, and invite your friends to do the same; it's expected that you don't understand that these questions are not decided by "CMOS-y" voting :)
 
looser CFA's for higher resolution and better high ISO performance.
You are making a mistake that was debunked 15+ years ago. "Looser" achieve(s) nothing :)
Semantics,
Learn what semantics is.
Yawn.
Of course "looser" or "less strict" CFAs achieve something. Are you willing to debunk yourself?
You don't understand a word I wrote.

I was debunking the idea that "looser" gets them anywhere.
Oh I understand you all too well. "Looser" or "less strict" buys them high ISO performance and higher pixel density just as you said earlier.
Your "CMOS-y colors", on the other hand :)

I'm happy to debunk myself when I'm proven wrong or learned better.
But you are so old and set now that you refuse to learn.
You, on the other hand, never learn.
No, I have learned exactly because I am open to science and data. It doesn't hurt that my CFA viewpoint is backed up by Phase One, a camera manufacturer:

https://www.dpreview.com/forums/post/60658069

Who backs up your dated viewpoints? Crickets. Until you get hold of an IQ3 trichromatic and debunk its color fidelity you will never win this argument.

fPrime
 
Bit of a detour there....Would it be impolite to ask for a little more clarification from the experts?

What percentage of the necessary physical and mathematical color operations are baked in at the raw stage of a file's life? Is it 50 percent? 10? 90?

And are the color twists and gamma tweaks and the rest just in the form of tags? Or has actual math been performed that can't be undone?
 
looser CFA's for higher resolution and better high ISO performance.
You are making a mistake that was debunked 15+ years ago. "Looser" achieve(s) nothing :)
Why not provide us with a reference, to make yourself useful to the thread, instead of directing a series of sneers at another member? It reeks of personality disorder more than expertise.
 
Bit of a detour there....Would it be impolite to ask for a little more clarification from the experts?
My expertise is limited but I will try.
What percentage of the necessary physical and mathematical color operations are baked in at the raw stage of a file's life? Is it 50 percent? 10? 90?
The question makes no sense. You cannot measure the errors in percentage in this case.
And are the color twists and gamma tweaks and the rest just in the form of tags? Or has actual math been performed that can't be undone?
If we are talking about accurate capture relative to our color vision (vs. pleasant colors), you would want the sensor to capture the same 3D space spanned by our vision (the Luther-Ives condition). Next, you want those three filters to be "well separated" and have wide enough spread, roughly speaking, so that the color transformations do not amplify noise too much. Finally, there is the question of the DR - not quite what the DXO fans talk about - but DR per channel; and I guess sensitivity for the R and the B ones compared to G (but I know nothing about the engineering side).

Since no CFA/sensor does this well enough, the next question is how you optimize the sensor/CFA to get good results given the natural constraints. There is no clear answer to that because it is a matter of taste. You may want to build a metric based on the human color sensitivity to hue variations (which is quite complex, I understand). Or, you may want to define a metric based on our tolerance to hue errors - not whether we see them or not but how much we hate them. For example, we might be more tolerant to visible changes of the sky color than to skin ones. Martians, on the other hand, might be more critical of hue variations of blue than of pink, which would be their sky color. You have to take the conversion into account - the end result has a limited bit depth, which imposes constraints (problems with reds, for example, are very common). If our monitors and the whole pipeline change, we may want to revisit those constraints, etc.
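To make the noise-amplification point above a bit more concrete, here is a minimal sketch in Python. The 3x3 correction matrix is invented for illustration (it is not measured from any real camera); the point is only that strongly overlapping filters force large off-diagonal terms, and those terms set the per-channel noise gain.

```python
import numpy as np

# Hypothetical camera-RGB -> sRGB color correction matrix.  Strongly
# overlapping ("weak") CFA dyes force large off-diagonal terms.
M = np.array([
    [ 1.9, -0.7, -0.2],
    [-0.5,  1.8, -0.3],
    [ 0.1, -0.9,  1.8],
])

# For uncorrelated, equal-variance noise in the raw channels, the noise
# standard deviation of each output channel scales with the Euclidean
# norm of the corresponding matrix row.
noise_gain = np.sqrt((M ** 2).sum(axis=1))
print("per-channel noise gain:", noise_gain)

# An identity matrix (perfectly separated channels) would give a gain of
# exactly 1.0 per channel; the farther the rows are from unit vectors,
# the more the transform magnifies chroma noise.
```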
 
What percentage of the necessary physical and mathematical color operations are baked in at the raw stage of a file's life?
Some companies, most notably Nikon, perform per-channel calibration. It is often called white balance preconditioning, and it is a simplified form (only the diagonal) of the calibration matrix described by Adobe in the DNG specification under the name "CameraCalibration", page 34.

Nikon's calculation of coefficients is <target camera white balance coefficients for closest Preset> / <reference camera preset white balance coefficients for closest Preset>
Is it 50 percent? 10? 90?
Close to 0 percent for ordinary raw files, unless you mean spectral sensitivity of the sensor sandwich.
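For illustration only, here is a small sketch of what such a diagonal-only (per-channel) calibration looks like numerically. The preset coefficients and raw values below are invented, not taken from any Nikon body; the division mirrors the target/reference formula above.

```python
import numpy as np

# Hypothetical white balance preset coefficients (R, G, B multipliers).
reference_preset_wb = np.array([2.10, 1.00, 1.45])  # reference body, same preset
target_preset_wb    = np.array([2.04, 1.00, 1.52])  # this particular body

# Diagonal-only calibration, i.e. the DNG CameraCalibration matrix
# reduced to its diagonal, per the formula quoted above.
calibration = np.diag(target_preset_wb / reference_preset_wb)

raw_rgb = np.array([812.0, 1030.0, 660.0])  # one raw triplet, for illustration
print(calibration @ raw_rgb)                # per-channel preconditioned values
```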

--
http://www.libraw.org/
 
Phase One is about as reliable a source of information as Mr. Donald Trump.

But, if you like digging BS, please enjoy!
[attached image]


:-)



--

http://therefractedlight.blogspot.com
 
What percentage of the necessary physical and mathematical color operations are baked in at the raw stage of a file's life? Is it 50 percent? 10? 90?

And are the color twists and gamma tweaks and the rest just in the form of tags? Or has actual math been performed that can't be undone?
An ideal raw file would have image data that consists of digital numbers that are simply proportional to the voltages present in the pixels with some degree of accuracy, and so all of the fun math is done by the raw processor.

From what I’ve read (and by no means am I an expert) it would seem that raw files produced by most enthusiast and pro cameras approximately follow this model. There are some shenanigans going on, like cameras that subtract out a supposed noise floor, or do noise reduction, or lossy data compression, and I’m sure that there are other questionable or laudable practices which others can mention. But none of these (as far as I can tell) stray so far away from the ideal that you can’t do classical raw color processing on them.

Suppose you had a camera that is practically ideal, and you do the bare minimum of processing: you’d still need to do dozens of arithmetic and logical operations on each and every one of the millions of pixels in an image. So even with some processing done in the raw file, most of the arithmetic is still left to the raw processor.
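To give a rough feel for that arithmetic, here is a toy sketch of the per-pixel math a raw converter performs under the ideal "digital numbers proportional to voltage" model described above. Every constant is made up, demosaicking is skipped, and real converters do far more; it is only meant to show where the work happens.

```python
import numpy as np

def develop(raw, black=512.0, white=16383.0,
            wb=(2.0, 1.0, 1.5),
            cam_to_srgb=np.array([[ 1.7, -0.5, -0.2],
                                  [-0.3,  1.6, -0.3],
                                  [ 0.0, -0.6,  1.6]])):
    """raw: H x W x 3 linear camera RGB (pretend demosaicking is already done)."""
    x = (raw - black) / (white - black)           # black subtraction + normalization
    x = np.clip(x, 0.0, 1.0)
    x = x * np.asarray(wb)                        # white balance
    x = np.einsum('ij,hwj->hwi', cam_to_srgb, x)  # camera RGB -> linear sRGB
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308,               # sRGB gamma encoding
                    12.92 * x,
                    1.055 * x ** (1 / 2.4) - 0.055)

frame = np.random.randint(512, 16383, size=(4, 6, 3)).astype(float)
print(develop(frame).shape)
```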
 
I've already given you the next challenge

Overcome the IQ3 trichromatic CFA sensor with measurement against a non-trichromatic CFA
I don't see your own data to challenge it.
Why should I? Phase One has basically dismissed all of your notions about profiling away the problems induced by weak CFAs as pure bunk. I'm aligned with Phase One's published point of view.

You want to contest that? Then you challenge it.

Until then this is the stark reality.

Team Phase One & fPrime: 1

Team PST and Iliah Borg: 0
Not to mention your language is moot to the point it doesn't make any sense.
Like this very sentence above makes any sense to anyone except you?

And still your PST buddies upvote anything you write, sigh.
You can up-vote your own posts all you want, and invite your friends to do the same; it's expected that you don't understand that these questions are not decided by "CMOS-y" voting :)
Sounds like you're just angry that whenever we cross swords on DPR these days I now routinely get more upvotes than you... this forum being the only exception. :-D

fPrime
 
No, I have learned exactly because I am open to science and data. It doesn't hurt that my CFA viewpoint is backed up by Phase One, a camera manufacturer:

https://www.dpreview.com/forums/post/60658069

Who backs up your dated viewpoints? Crickets. Until you get hold of an IQ3 trichromatic and debunk its color fidelity you will never win this argument.

fPrime
Hi,

Phase One is about as reliable a source of information as Mr. Donald Trump.
Hi Erik,

I don't think we should make this discussion political by drawing comparisons to Donald Trump, the democratically elected 45th President of the United States of America. You can tell that Iliah is already doing his level best not to answer the questions put before him. Trump only takes us further off track.

That being said, I don't agree with your suggestion that Phase One isn't a reliable source of information. Please, what evidence do you have that they aren't credible?

Phase One, after all, are not exactly an industry lightweight.
  • They are a long-established medium format camera manufacturer. As such they have design expertise with sensors, CFAs, and hardware integration.
  • They offer an industry-leading program called Capture One Pro (C1 Pro). As such they have design expertise with RAW conversion, color profiling, and image editing.
  • They have created custom camera profiles for virtually all professional and consumer cameras. As such they have insider's expertise on how good (or bad) the color performance is from Canon, Nikon, Sony, Pentax, Fuji, and Panasonic... and how it has shifted over time.
Does the expertise or breadth of knowledge of any individual PST member approach that of Phase One? Hardly. Your personal expertise is in the nuclear power field and as a hobby you might have had experience with profiling and comparing a handful of cameras, but a hundred of them like P1? No. Iliah is a software engineer who might know how to assemble some code to study RAW metadata, but has he ever designed a CFA or built a camera with one like P1 has? No. Can you see why any rational person would have to conclude that Phase One's manufacturer explanations credibly outweigh the explanations of any one armchair prognosticator here on DPR?

If your bias against them is purely that all they say should be suspect simply because they make a trichromatic camera and promote its color advantages, then that's a weak case IMHO. They could have made their marketing case purely by comparing their trichromatic camera against their standard camera and been finished with it. But no, they took the time to show how and why CFAs for all digital cameras had been weakened over time and why custom profiling is fallacious beyond the calibrated test lighting and test colors. I have also argued the same here on DPR for the better part of six years using SMI data, empirical image evidence, and industry references.

So, like I said to Iliah, for now the battle is over and he has lost. However, I have in fairness suggested a way that he or anyone else might try to challenge Phase One's determination on the matter:

1. Measure a CFA spectrogram for the IQ3 trichromatic and show that in reality P1 doesn't actually have a trichromatic CFA installed, and/or...

2. Test the IQ3 trichromatic camera against a standard camera with a set of subjects that typically invoke metameric failure and show that in reality there's no difference in the color performance between either.

Do either or both of these successfully and I'll be more than happy to eat crow on P1's credibility. :-D

fPrime
 
I do not disagree in general with most of what you said but I want to comment on those suggestions:
1. Measure a CFA spectrogram for the IQ3 trichromatic and show that in reality P1 doesn't actually have a trichromatic CFA installed, and/or...
If somebody has it, I would be curious to see it in comparison with a popular camera, done by the same tester. We have to be careful though - eyeballing the curves may show differences (there are always some differences) but evaluating their significance is not an easy task.
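One way to go beyond eyeballing, as a sketch: compute a single summary figure such as the Vora value, which measures how much of the subspace spanned by the CIE color matching functions a set of channel sensitivities covers (1.0 would mean the Luther-Ives condition is met). The Gaussian curves below are placeholders, not measured data; a real comparison needs measured curves on a common wavelength grid.

```python
import numpy as np

def projector(S):
    # Orthogonal projector onto the column space of S (wavelengths x channels).
    return S @ np.linalg.pinv(S)

def vora_value(camera, cmf):
    # trace(P_camera P_cmf) / number of channels; 1.0 = same subspace.
    return np.trace(projector(camera) @ projector(cmf)) / cmf.shape[1]

wl = np.linspace(400, 700, 31)
# Placeholder Gaussians standing in for the CIE CMFs and a camera's CFA curves.
cmf    = np.stack([np.exp(-((wl - c) / 40.0) ** 2) for c in (600, 550, 450)], axis=1)
camera = np.stack([np.exp(-((wl - c) / 55.0) ** 2) for c in (610, 540, 460)], axis=1)
print(vora_value(camera, cmf))
```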
2. Test the IQ3 trichromatic camera against a standard camera with a set of subjects that typically invoke metameric failure and show that in reality there's no difference in the color performance between either.
Again, one has to be cautious here: most randomly chosen cameras would have differences, but then the visual results depend on the processing as well. Metameric failure, IMO, is everywhere, maybe subtle. My ultimate test would be long-term use (I still do not know enough about my 8-month-old camera) but since this is hard to do, comparisons would be interesting to see.
Do either or both of these successfully and I'll be more than happy to eat crow on P1's credibility. :-D
 
SigZero wrote: That was also my point of view. I really doubt that there are any serious RAW data manipulations done in the camera. I know that Nikon is doing some kind of digital scaling of selected channels, but it is difficult to call it any color manipulation.
As long as the raw manipulations are linear (as in Nikon's WB preconditioning) and image information is not clipped out of the data, it makes zero difference to the quality of the displayed image.
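A tiny sketch of that point, with invented numbers: a linear per-channel scale is exactly undone downstream, so nothing is lost unless the scaling pushes a channel into the clipping ceiling.

```python
import numpy as np

scale = np.array([1.06, 1.00, 0.94])      # per-channel preconditioning factors
raw = np.array([9000.0, 2300.0, 1500.0])  # original raw values (illustrative)
ceiling = 16383.0                         # 14-bit clipping point

stored = np.minimum(raw * scale, ceiling)  # what the camera would write
recovered = stored / scale                 # what the converter reconstructs
assert np.allclose(recovered, raw)         # nothing lost: no value hit the ceiling

aggressive = np.minimum(raw * 2.0, ceiling)  # 9000 * 2 = 18000 -> clipped to 16383
# Dividing by 2.0 now gives 8191.5, not 9000: clipped information is gone.
```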

Jack
 
fPrime wrote: ... 2. Test the IQ3 trichromatic camera against a standard camera with a set of subjects that typically invoke metameric failure and show that in reality there's no difference in the color performance between either.
If there is a difference in the CFAs, as is implied by your thought experiment, then it's guaranteed that there will be differences in the color performance of the cameras. The question is, which will produce the better color? And once we define the context and what "better color" means, we might have a quantitative answer for that particular setup.
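As a sketch of what such a quantitative answer might look like: render the same test patch with each camera, measure the patch, and score each rendering with a color difference metric. The XYZ triplets below are invented; a real comparison needs measured data and a perceptual metric agreed on in advance (CIE76 is used here only because it is simple).

```python
import numpy as np

D65 = np.array([0.95047, 1.0, 1.08883])  # reference white

def xyz_to_lab(xyz, white=D65):
    t = xyz / white
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    return np.array([116 * f[1] - 16,          # L*
                     500 * (f[0] - f[1]),      # a*
                     200 * (f[1] - f[2])])     # b*

def delta_e76(xyz1, xyz2):
    return np.linalg.norm(xyz_to_lab(xyz1) - xyz_to_lab(xyz2))

reference = np.array([0.31, 0.33, 0.29])  # measured patch (hypothetical)
camera_a  = np.array([0.32, 0.33, 0.28])  # camera A's rendering
camera_b  = np.array([0.30, 0.35, 0.31])  # camera B's rendering
print(delta_e76(reference, camera_a), delta_e76(reference, camera_b))
```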

Jack
 
