Printing: JPG vs. TIFF

JL,

I started a new thread based on an expanded comparison. The
results bear out what I did here. Someone who has very little
knowledge of photography was able to look at the 3 photographs and
rank them in order of "looking better". The order was: TIFF
16-bit, JPG q.12, and finally JPG q.10.
Yes, I saw that. Did I reply there yet? If not, I will. I was going to point out that including TIFF8, and JPEG converted back to TIFF 16 before printing, would help narrow down where the difference is coming from.
I'm actually quite interested in knowing how many people can tell a
top-quality JPEG from TIFF. Probably a few will, if the image and
print are good enough, but I'll bet it's a lot more subtle than
what CJ is seeing with his printer, which sounds like a more
obvious effect.
The differences are subtle - highlights, shadows and saturation
primarily, but they are noticeable.
See, that's why I'm confused. There shouldn't be a difference there due to JPEG. I think it's your color management or something special from Epson, which would be tested by converting the JPEG to TIFF16 before going through Qimage and printing.
As for my workflow, it has now changed. I don't even bother with
JPG q.10 now. I'm doing my RAW processing and making 2 saves -
TIFF 16-bit and a JPG q.12. Why the JPG q.12? Simply because
PBase accepts only JPG files. If PBase could handle TIFF 16-bit, I
wouldn't even mess with the JPG q.12. From there I go to Qimage,
crop and print. If something really needs PS, I'll go there
reluctantly.
To use TIFF16 for pbase would be a bit absurd even if they accepted it, because it would take you about 10 times as long to upload and take each of us 10 times as long to view, with no demonstrable on-screen difference; in fact, we would all view it at 8 bits per channel anyway.

You realize, I hope, that when pbase makes a "large" from your original, they use a severe JPEG quality setting, like 6 or thereabouts. It's the biggest problem with viewing images there, I think. Better originals don't help much.
Why do I use PS as little as possible? Simply because it's too
easy to get very immersed in it, and then the time goes down the
tubes. I just don't like the time that can get frittered away
there - a personal thing, lack of self-control there I guess.
I agree. If I have a specific need, I'll use it, otherwise I stay out.

j
 
JL,

I'm always willing to learn something, but...
JL,

I started a new thread based on an expanded comparison. The
results bear out what I did here. Someone who has very little
knowledge of photography was able to look at the 3 photographs and
rank them in order of "looking better". The order was: TIFF
16-bit, JPG q.12, and finally JPG q.10.
Yes, I saw that. Did I reply there yet? If not, I will. I was
going to point out that including TIFF8, and JPEG converted back to
TIFF 16 before printing, would help narrow down where the
difference is coming from.
...this just baffles me. I don't understand it at all. Why would it help to narrow down where the difference is coming from? Wouldn't converting from JPG back to TIFF 16-bit cause some loss? And what JPG quality are you referring to? Color me confused.

CJ
--
http://www.pbase.com/cjmax/galleries

If you work with your hands, head and your heart, you are an artist. Louis Nizer
 
How else is contrast generated if not by different color pixels?
S, you've never grasped that a 3D color space can be conceptualized
as having a luma dimension and two chroma dimensions, and that luma
resolution is something different from, and perceptually more
important than, chroma resolution.
Nope, I don't buy that for a second. First of all, there is no
such thing as luminance resolution from a single color sensor in a
3 color primary color space. Luminance absolutely requires full
additive R+G+B performance just like chrominance, so to say that
green-only can provide luminance resolution is totally wrong. For
example, if you have red and blue photons in an area, green
provides no luminance information at all, so to say that it is
more important than full color contrast is incorrect. One can
insist it's right, but it doesn't make it right. And it is very
easy to prove that the eye can distinguish full color contrasts:
The spectral sensitivity curve for an accurate luma channel is a little broader than a typical green sensor curve. Yes, you need to detect more than green to get accurate luma. But the luma detail that provides sharpness information and resolution doesn't have to be accurate, and is typically taken from just green. It's not ideal, but it does work to provide that luma detail commonly seen in prints.

Bayer cameras see that test pattern as not having any luma detail in the red/blue area. That's a deficiency of Bayer on red/blue contrast, not a general refutation of the fact that Bayer sensors get their luma detail from the green channel.
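
For concreteness, here is a minimal sketch using the standard Rec. 601 luma weights, showing both why green dominates luma and why a pure red/blue pattern carries so little luma contrast:

    import numpy as np

    # Rec. 601 luma weights: green carries almost 60% of luma, which is
    # why a green channel is a workable stand-in for luma detail.
    W = np.array([0.299, 0.587, 0.114])  # R, G, B

    def luma(rgb):
        # rgb values in 0..1
        return rgb @ W

    print(luma(np.array([1.0, 0.0, 0.0])))  # pure red:   0.299
    print(luma(np.array([0.0, 0.0, 1.0])))  # pure blue:  0.114
    print(luma(np.array([0.0, 1.0, 0.0])))  # pure green: 0.587
    # A red-vs-blue chart has a luma step of only ~0.19, versus 1.0 for
    # black-vs-white, and the green channel sees almost none of it.
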
The truth is the Bayer pattern is driven by necessity, not by
preference, since there is no other possible way to fill a 2x2
iterative grid using a 3 primary color model without doubling one
of them.
Thank you oh arbiter of truth for setting us all straight. Here we were believing what Dr. Bayer said for all these years, and now Dr. Galveston has finally debunked it!
Green might very well be the most logical color to
double, and having too much green might even provide more accuracy
in some cases, but it might provide next to nothing in other cases,
even within the same image. But in all cases, it comes with a
negative opportunity cost when compared to a design that samples
every primary color in the model at every sampling point.
Sure, that's why we like X3. But given that they can't measure all 3 colors at every location, Bayer takes advantage of perception and measures more luma resolution than chroma resolution, as everyone but you understands.
Why do you think Bayer manufacturers insist on using only B&W
resolution patterns to measure their full color cameras' resolution?
It's traditional, and it's what they're optimized for, but I'm not sure they've ever had to confront a choice to do otherwise.
These relationships have been well known
since the early days of color television. If you would take the
trouble to understand the concepts of luma and chroma and
resolution, most of your nonsense could be put behind you.
You're being taken for a ride. Sorry. Common sense prevails in
this and most things. To say it's better to sample only one color
in a 3 primary model at some points than all 3 of them at all
sampling points is obviously incorrect.
which is why nobody said that. It's not clear what you're trying to refute. I've always made it clear that I prefer X3 to Bayer, and that would go double or triple if they were compared with same number of "points".
I agree with Foveon, with
all of the 3-CCD video camera makers, and I think even Bayer mosaic
designers would agree that you've fallen for their marketing hype,
if they were being honest, and I'm sure they are honest outside of
their job pushing easy to build designs on the public.
I remain puzzled about what hype I've fallen for or where they've been dishonest. I think it's you falling for your own hogwash. Most of us are pretty honest and objective about the relative advantages of different approaches at different pixel counts, etc. Your insistence that Bayer inflates their numbers by a factor of 4 for marketing purposes is just silly.

j
 
I started a new thread based on an expanded comparison. The
results bear out what I did here. Someone who has very little
knowledge of photography was able to look at the 3 photographs and
rank them in order of "looking better". The order was: TIFF
16-bit, JPG q.12, and finally JPG q.10.
Yes, I saw that. Did I reply there yet? If not, I will. I was
going to point out that including TIFF8, and JPEG converted back to
TIFF 16 before printing, would help narrow down where the
difference is coming from.
...this just baffles me. I don't understand it at all.
It seems I have that effect on people sometimes.
Why would it
help to narrow down where the difference is coming from?
Well, it wouldn't help you, because you have a solution you're happy with. Some of us like to resolve cognitive dissonance, and since what we thought we knew about top-quality JPEG and 8 versus 16 bits has been called into question, we're looking for ways to understand what's behind your results.
Wouldn't
converting from JPG back to TIFF 16-bit cause some loss? And what
JPG quality are you referring to? Color me confused.
Converting from JPEG back to TIFF 16 should cause no degradation. It just allows you to duplicate your printing path with a TIFF 16 that has the JPEG degradations in it, to see if there is still a difference in the print, which would reveal whether the difference you see is due to your 16-bit color management path or something, as opposed to the presumed JPEG degradation.

Comparing TIFF 8 would also be key. Does it look like the TIFF 16? Or like the JPEG? Or somewhere in between? Converting TIFF 8 back to TIFF 16 and printing would be another useful step if the TIFF 8 is distinguishable from the TIFF 16.

I would do this only for JPEG 12 until the issue is settled, since the difference from JPEG 12 to JPEG 10 is clearly just some JPEG quantization noise that causes a little loss of sharpness and possibly false detail, which is likely to be visible to careful observers on some images. What I really want to know is what's behind things like saturation differences, which are not explainable by JPEG properties as I know them, especially given the failure to detect any effect in other controlled tests.

In summary, 3 quality levels: TIFF 16, TIFF 8, JPEG 12, all straight out of SPP; printing two ways: straight, versus converted to TIFF 16 first; should tell us a lot.
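
For anyone who wants to script those variants rather than save them all by hand, here is a minimal sketch of the idea in Python (numpy, Pillow, tifffile; the filenames are hypothetical, and Pillow's quality=95 is only a rough stand-in for Photoshop's q.12):

    import numpy as np
    import tifffile
    from PIL import Image

    # Start from the 16-bit output of the RAW converter.
    img16 = tifffile.imread("from_spp.tif")      # uint16, shape (H, W, 3)

    tifffile.imwrite("v1_tiff16.tif", img16)     # variant 1: TIFF 16

    img8 = (img16 >> 8).astype(np.uint8)         # keep the top 8 bits
    tifffile.imwrite("v2_tiff8.tif", img8)       # variant 2: TIFF 8

    Image.fromarray(img8).save("v3.jpg", quality=95)  # variant 3: JPEG

    # The round-trip step: re-save the JPEG as TIFF 16, so the print
    # path sees identical format and bit depth and only the pixel data
    # (with its JPEG degradations) differs.
    back = np.asarray(Image.open("v3.jpg"), dtype=np.uint16) << 8
    tifffile.imwrite("v3_jpeg_as_tiff16.tif", back)
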

j
 
SigmaSD9 wrote:
The spectral sensitivity curve for an accurate luma channel is a
little broader than a typical green sensor curve. Yes, you need to
detect more than green to get accurate luma. But the luma detail
that provides sharpness information and resolution doesn't have to
be accurate, and is typically taken from just green. It's not
ideal, but it does work to provide that luma detail commonly seen
in prints.
So how do you explain this:

1Ds:
http://www.outbackphoto.com/artofraw/raw_05/crop_1ds_0000_1328_C1.jpg

SD9:
http://www.outbackphoto.com/artofraw/raw_05/crop2_sd9_0000_00200.jpg

Clearly there is a humanly noticeable resolution difference, and only red and blue sensors are doing the resolving, so all of this is bunk. I'm not blaming you for the wrong info, it's all about selling on a huge scale.
Bayer cameras see that test pattern as not having any luma detail
in the red/blue area. That's a deficiency of Bayer on red/blue
contrast, not a general refutation of the fact that Bayer sensors
get their luma detail from the green channel.
In the case above, the green channel is essentially flat lined. Yet, there is resolution to be had, and people can see it.
The truth is the Bayer pattern is driven by necessity, not by
preference, since there is no other possible way to fill a 2x2
iterative grid using a 3 primary color model without doubling one
of them.
Thank you oh arbiter of truth for setting us all straight. Here we
were believing what Dr. Bayer said for all these years, and now Dr.
Galveston has finally debunked it!
Me and Prof Mead, who kills Bayer on credentials, and every engineer who ever believed in a 3-CCD/CMOS/prism design over using a monochrome sensor.

The greater point is, if you can see a difference in the red and blue charts above, then the Bayer clearly deprived the human eye of resolution it could've resolved. There is no ambiguity about that, it is right there to witness first hand--and that 1Ds Bayer image actually employed about 7% more sensors overall.
Sure, that's why we like X3. But given that they can't measure all
3 colors at every location, Bayer takes advantage of perception
and measures more luma resolution than chroma resolution, as
everyone but you understands.
I understand it well, thanks, and thus Bayer has made the best of a bad situation. That is very different from being optimal as people are led to believe for the purpose of dollar extraction. An imbalance in a zero sum game is a drawback, but as you agree, they have no choice.
Why do you think Bayer manufacturers insist on using only B&W
resolution patterns to measure their full color cameras' resolution?
It's traditional, and it's what they're optimized for, but I'm not
sure they've ever had to confront a choice to do otherwise.
Foveon distributed the color chart linked above with their SD9 samples. B&W charts were traditionally used for measuring lens resolution, when the sensor was controlled by using the same film. But full color resolution charts have traditionally been used to evaluate color sensor detail, for example when evaluating space-borne color imaging. The insistence on using only B&W patterns to test full color imagers is a phenomenon limited to those who would like to see monochrome sensors do particularly well.
which is why nobody said that. It's not clear what you're trying
to refute. I've always made it clear that I prefer X3 to Bayer,
and that would go double or triple if they were compared with same
number of "points".
You don't need that many points to debunk the double-green myth (more like "legendary sales pitch"). The old SD9 has a lot more green sensors in absolute terms than the 6MP Bayers. Yet in a B&W test, they perform about the same.

Why? Because the lines are black, and black isn't a color so much as a lack of light; any and all partial-spectrum sensors get the "color" special case accurate--no reading--so a Bayer color imbalance doesn't hurt a bit. It isn't that green sites help more when sensing black lines, it's that no green (or red, or blue) is involved--every sensor reads it the same. The Foveon senses full color at every point, like an idiot, in the special case of no light.

Yes, there is white too, so it's not actually the best possible case for Bayer. That would be lens cap on, which is the only time 100% of the recorded pixels in a Bayer image are "color accurate."
I agree with Foveon, with
all of the 3-CCD video camera makers, and I think even Bayer mosaic
designers would agree that you've fallen for their marketing hype,
if they were being honest, and I'm sure they are honest outside of
their job pushing easy to build designs on the public.
I remain puzzled about what hype I've fallen for or where they've
been dishonest. I think it's you falling for your own hogwash.
Me, and Foveon, and all the 3-CCD, 3-CMOS, and spectrum splitters. Oh, and NASA too; they use 3-sensor balanced color designs on their most advanced stuff.
Most of us are pretty honest and objective about the relative
advantages of different approaches at different pixel counts, etc.
Your insistence that Bayer inflates their numbers by a factor of 4
for marketing purposes is just silly.
Surely you don't believe that a monochrome sensor is capable of resolving full color imagery with the same resolution that it can sense B&W. Do you?
 
Don't forget that all these images are 8-bit, including the 16-bit samples. The screen is only 24-bit color (8-bits/channel in TIF/JPEG lingo), and so are home computer printer drivers. Even 32-bit color modes have 24-bit palettes.

In theory, there is no difference between an 8 and 16 bit at this level, essentially every 4096 palette possibilities are merged to single colors. 16-bit color processing is more accurate to arrive at the colors ultimately used, but the same image portrayed in 16 or 8-bit should look no different on home media. I suppose different drivers could generate small differences as choices are made and colors are merged, but that could be for better or for worse.
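
A tiny illustration of that merging, assuming the driver simply truncates to the top 8 bits (real drivers may round or dither, but the collapse is the same):

    import numpy as np

    # Two 16-bit channel values that differ only in the low byte...
    a = np.uint16(0x8000)
    b = np.uint16(0x80FF)

    # ...become indistinguishable once truncated to 8 bits:
    print(a >> 8, b >> 8)   # prints: 128 128
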
 
SigmaSD9 wrote:
The spectral sensitivity curve for an accurate luma channel is a
little broader than a typical green sensor curve. Yes, you need to
detect more than green to get accurate luma. But the luma detail
that provides sharpness information and resolution doesn't have to
be accurate, and is typically taken from just green. It's not
ideal, but it does work to provide that luma detail commonly seen
in prints.
So how do you explain this:
1Ds:
http://www.outbackphoto.com/artofraw/raw_05/crop_1ds_0000_1328_C1.jpg
SD9:
http://www.outbackphoto.com/artofraw/raw_05/crop2_sd9_0000_00200.jpg
I believe that the relative resolutions of red and blue in those photos are consistent with all that's been said by me already. I'm sorry you have trouble understanding me, but I don't know how else I can try to put it that will help.
Clearly there is a humanly noticeable resolution difference, and
only red and blue sensors are doing the resolving, so all of this is
bunk. I'm not blaming you for the wrong info, it's all about
selling on a huge scale.
Feel free to blame me, but I don't see the problem. Yes there is a humanly noticeable resolution difference, which is made worse by the relative lack of luma contrast in the original.
Bayer cameras see that test pattern as not having any luma detail
in the red/blue area. That's a deficiency of Bayer on red/blue
contrast, not a general refutation of the fact that Bayer sensors
get their luma detail from the green channel.
In the case above, the green channel is essentially flat lined.
Yet, there is resolution to be had, and people can see it.
Yes, chroma resolution mostly, but little or no luma contrast to help.
The truth is the Bayer pattern is driven by necessity, not by
preference, since there is no other possible way to fill a 2x2
iterative grid using a 3 primary color model without doubling one
of them.
Thank you oh arbiter of truth for setting us all straight. Here we
were believing what Dr. Bayer said for all these years, and now Dr.
Galveston has finally debunked it!
Me and Prof Mead, who kills Bayer on credentials, and every
engineer who ever believed in a 3-CCD/CMOS/prism design over using
a monochrome sensor.
Speaking as an engineer who believes in 3-CCD etc I would have to disagree. There are good and valid reasons why Bayer's pattern has prevailed within the domain of single-chip single-layer color sensors, and it has a lot to do with luma/chroma separation.

I've read most of what Dr. Mead has written, and I must have missed where he agreed with your whacky point of view.
The greater point is, if you can see a difference in the red and
blue charts above, then the Bayer clearly deprived the human eye
of resolution it could've resolved. There is no ambiguity about
that, it is right there to witness first hand--and that 1Ds Bayer
image actually employed about 7% more sensors overall.
Yes, I agree, the Bayer sensor loses some, especially in fine red/blue contrast.

Please don't use my agreement on this point to claim that you're right or that I agree with you, as you so often try to do.
Sure, that's why we like X3. But given that they can't measure all
3 colors at every location, Bayer takes advantage of perception
and measures more luma resolution than chroma resolution, as
everyone but you understands.
I understand it well, thanks, and thus Bayer has made the best of a
bad situation. That is very different from being optimal as people
are led to believe for the purpose of dollar extraction. An
imbalance in a zero sum game is a drawback, but as you agree, they
have no choice.
There you go saying I agree, while spouting nonsense about a "zero sum game" again. It should be clear to anyone who understands it that higher sampling of green to get higher-res luma relative to chroma is a relative advantage for the Bayer approach, given that they would need to reduce luma resolution to get equal numbers of Red, Green, and Blue.

... much removal of nonsense ...
Surely you don't believe that a monochrome sensor is capable of
resolving full color imagery with the same resolution that it can
sense B&W. Do you?
Depending on how you meant the question, the answer is obviously yes or obviously no. Yes a monochrome sensor is capable of just as much resolution when looking at a color scene as when looking at a black-and-white test chart. No, a monochrome sensor is not capable of resolving color into color images, unless you make it into a color sensor by adding a filter mosaic, in which case the resolution is necessarily reduced; note that the number of megapixels is not reduced, just the resolution.

What is your point in trying to suggest that maybe I believe something stupid that you made up?

j
 
I believe that the relative resolutions of red and blue in those
photos are consistent with all that's been said by me already. I'm
sorry you have trouble understanding me, but I don't know how else
I can try to put it that will help.
You can try to explain why we can see a difference when the Bayer can't.
Feel free to blame me, but I don't see the problem. Yes there is a
humanly noticeable resolution difference, which is made worse by
the relative lack of luma contrast in the original.
Yet, we can resolve proportionately more lines than the Bayer.
In the case above, the green channel is essentially flat lined.
Yet, there is resolution to be had, and people can see it.
Yes, chroma resolution mostly, but little or no luma contrast to help.
Again, we can see more than the Bayer is capable of showing in that area, given a balanced sensor.
The truth is the Bayer pattern is driven by necessity, not by
preference, since there is no other possible way to fill a 2x2
iterative grid using a 3 primary color model without doubling one
of them.
Thank you oh arbiter of truth for setting us all straight. Here we
were believing what Dr. Bayer said for all these years, and now Dr.
Galveston has finally debunked it!
Me and Prof Mead, who kills Bayer on credentials, and every
engineer who ever believed in a 3-CCD/CMOS/prism design over using
a monochrome sensor.
Speaking as an engineer who believes in 3-CCD etc I would have to
disagree. There are good and valid reasons why Bayer's pattern has
prevailed within the domain of single-chip single-layer color
sensors, and it has a lot to do with luma/chroma separation.
There is no choice but to double a primary in a single-layer design that will be iteratively demosaiced (i.e. cheap). Green is probably the best one to double given you have to degrade the sensor, which is why Bayer patterns prevail in low-end single layer/chip sensors.
I've read most of what Dr. Mead has written, and I must have missed
where he agreed with your whacky point of view.
Allow me to enlighten you:

"The more I learned about human vision," he says, "the more it was clear that what these mosaic sensors were doing was introducing artifacts into the image. It was one of those things that becomes so massively annoying that after a while you think you ought to go do something about it. It was clear that the way image sensors worked was brain-dead. I talked to a lot of people, and nobody got it. So I finally said: 'That's not a problem. That's an opportunity.'"

"It's easy to have a complicated idea,it's very, very hard to have a simple idea."

"It's a hack [discussing Bayer mosiac sensors], they have to do all this guesswork to figure out what they threw away. They end up with a lot of data, but two-thirds of it is made up. We end up with the same amount of data, except ours is real."

-- Carver Mead
There you go saying I agree, while spouting nonsense about a "zero
sum game" again.
Surely you agree that the sampling rates on a sensor's surface are zero sum. What you gain in green results in a loss in two other channels.
It should be clear to anyone who understands it
that higher sampling of green to get higher-res luma relative to
chroma is a relative advantage for the Bayer approach, given that
they would need to reduce luma resolution to get equal numbers of
Red, Green, and Blue.
Then why doesn't a 3.43MP Foveon beat the bejeezus out of a 6MP Bayer in a B&W test? The Foveon samples green at a much higher rate.
... much removal of nonsense ...
Surely you don't believe that a monochrome sensor is capable of
resolving full color imagery with the same resolution that it can
sense B&W. Do you?
Depending on how you meant the question, the answer is obviously
yes or obviously no. Yes a monochrome sensor is capable of just as
much resolution when looking at a color scene as when looking at a
black-and-white test chart. No, a monochrome sensor is not capable
of resolving color into color images, unless you make it into a
color sensor by adding a filter mosaic, in which case the
resolution is necessarily reduced; note that the number of
megapixels is not reduced, just the resolution.
It depends on whether you want a color pixel or a monochrome pixel. Full color MPs are reduced by a factor of 3 to 4--depending on how much weight you give to a pure monochrome sensor's ability to resolve full color with no mosaic, which is akin to the leftover green (or whatever color you chose to double out of necessity).

Simply put, it takes 10.2M sensors to produce a 3.4MP image, not one less.
 
Just Looking wrote:
You can try to explain why we can see a difference when the Bayer
can't.
I'm tired of trying.
I've read most of what Dr. Mead has written, and I must have missed
where he agreed with your whacky point of view.
Allow me to enlighten you:

"The more I learned about human vision," he says, "the more it was
clear that what these mosaic sensors were doing was introducing
artifacts into the image. It was one of those things that becomes
so massively annoying that after a while you think you ought to go
do something about it. It was clear that the way image sensors
worked was brain-dead. I talked to a lot of people, and nobody got
it. So I finally said: 'That's not a problem. That's an
opportunity.'"

"It's easy to have a complicated idea,it's very, very hard to have
a simple idea."

"It's a hack [discussing Bayer mosiac sensors], they have to do all
this guesswork to figure out what they threw away. They end up with
a lot of data, but two-thirds of it is made up. We end up with the
same amount of data, except ours is real."

-- Carver Mead
Yes, I've read those, and I pretty much agree with his viewpoint. But nowhere does he support your notion that the Bayer sensor gets less luma resolution than would be expected based on the number of green sensors.
There you go saying I agree, while spouting nonsense about a "zero
sum game" again.
Surely you agree that the sampling rates on a sensor's surface are
zero sum. What you gain in green results in a loss in two other
channels.
Yes I agree that for a total number of pixels, red + green + blue as they're always counted, using more of them for green means you have fewer for red and blue.

I do not agree that using 1/3 for each color would be the only possible optimum. For single-chip single-layer color sensors, using 50% for green is better, as is well known and understood by most technical practitioners of digital photography.
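
The 50/25/25 split follows directly from the repeating 2x2 tile; a quick sketch (the RGGB ordering shown is just one common variant):

    import numpy as np

    tile = np.array([["R", "G"],
                     ["G", "B"]])       # one Bayer tile
    mosaic = np.tile(tile, (4, 4))      # an 8x8 patch of the pattern

    for c in "RGB":
        print(c, (mosaic == c).mean())  # R 0.25, G 0.5, B 0.25
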
It should be clear to anyone who understands it
that higher sampling of green to get higher-res luma relative to
chroma is a relative advantage for the Bayer approach, given that
they would need to reduce luma resolution to get equal numbers of
Red, Green, and Blue.
Then why doesn't a 3.43MP Foveon beat the bejeezus out of a 6MP
Bayer in a B&W test? The Foveon samples green at a much higher
rate.
The 3.43 million green sensors of the Foveon compared to the 3 Million of the 6 MP Bayer (or 3.05 to 3.15 million for typical actual cameras near 6 MP) is not enough higher to beat the Jeebus out of it. The rate is only a few percent higher, so it only wins in luma resolution by a little, or not at all, depending on how you test it. It wins in chroma resolution by a lot more, and that's the tradeoff...
... much removal of nonsense ...
Surely you don't believe that a monochrome sensor is capable of
resolving full color imagery with the same resolution that it can
sense B&W. Do you?
Depending on how you meant the question, the answer is obviously
yes or obviously no. Yes a monochrome sensor is capable of just as
much resolution when looking at a color scene as when looking at a
black-and-white test chart. No, a monochrome sensor is not capable
of resolving color into color images, unless you make it into a
color sensor by adding a filter mosaic, in which case the
resolution is necessarily reduced; note that the number of
megapixels is not reduced, just the resolution.
It depends on whether you want a color pixel or a monochrome pixel. Full
color MPs are reduced by a factor of 3 to 4--depending on how much
weight you give to a pure monochrome sensor's ability to resolve
full color with no mosaic, which is akin to the leftover green (or
whatever color you chose to double out of necessity).
You're really confusing things when you mix megapixels with resolution, and start applying fudge factors to numbers that are intended as simple counts. MPs are not reduced by color filter mosaics. What's reduced is resolution, and if done well the luma resolution won't be reduced as much as the chroma resolution.

This confusion of resolution with megapixels is widespread, not unique to you, but you've had it explained enough times that by now you should be able to see why it's a confusion that should be avoided, not propagated.
Simply put, it takes 10.2M sensors to produce a 3.4MP image, not
one less.
3 MP cameras do it with a whole lot fewer pixels. Not nearly as good images, I'll grant.

j
 
Don't forget that all these images are 8-bit, including the 16-bit
samples. The screen is only 24-bit color (8-bits/channel in
TIF/JPEG lingo), and so are home computer printer drivers. Even
32-bit color modes have 24-bit palettes.
That is one possibility. Your statement that printer drivers are 8-bits/channel is a hypothesis, and not a bad one, but one that we can test by the proposed experiment.
In theory, there is no difference between an 8 and 16 bit at this
level, essentially every 4096 palette possibilities are merged to
single colors.
OK, that theory has a little unexplained numerology in it, but I'll grant you that for some sufficiently large number in place of your 4096, 16-bit data converted through an 8-bit driver would look just like 8-bit data.

But what if there's a 16-bit-capable color management engine that can produce better data in 8b printer space? Isn't that an alternative sensible conjecture? I have no idea which is true.
16-bit color processing is more accurate to arrive
at the colors ultimately used, but the same image portrayed in 16
or 8-bit should look no different on home media. I suppose
different drivers could generate small differences as choices are
made and colors are merged, but that could be for better or for
worse.
Yes, it would seem to be about like the small differences you would expect by going through a high-quality JPEG encode/decode. Small, but possibly visible under certain circumstances. It's hard to say if such differences are actually visible until we control for things like different color management paths, though.

j
 
Don't forget that all these images are 8-bit, including the 16-bit
samples. The screen is only 24-bit color (8-bits/channel in
TIF/JPEG lingo), and so are home computer printer drivers. Even
32-bit color modes have 24-bit palettes.
That is one possibility. Your statement that printer drivers are
8-bits/channel is a hypothesis, and not a bad one, but one that we
can test by the proposed experiment.
In theory, there is no difference between an 8 and 16 bit at this
level, essentially every 4096 palette possibilities are merged to
single colors.
OK, that theory has a little unexplained numerology in it, but I'll
grant you that for some sufficiently large number in place of your
4096, 16-bit data converted through an 8-bit driver would look
just like 8-bit data.
Right. Sorry. The unexplained part is probably the 4096:1 ratio. That is because even though a 48-bit color format is used (16-bit TIF, now 2 bytes per channel vs 1 byte per channel in the 8-bit variety), the camera's color data is really only 36-bit color, 12-bits per channel.

So it is actually 36-bit to 24-bit color that results in the 4096:1 palette merging, not 48-bit to 24-bit of the formats proper.
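
Worked out explicitly, the ratio checks out:

    # 12 bits/channel in camera, 8 bits/channel out:
    per_channel = 2**12 // 2**8    # 16 input levels per output level
    per_color = per_channel**3     # 16^3 = 4096
    print(per_color)               # 4096 distinct 36-bit colors land
                                   # on each single 24-bit color
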
But what if there's a 16-bit-capable color management engine that
can produce better data in 8b printer space? Isn't that an
alternative sensible conjecture? I have no idea which is true.
16-bit color processing is more accurate to arrive
at the colors ultimately used, but the same image portrayed in 16
or 8-bit should look no different on home media. I suppose
different drivers could generate small differences as choices are
made and colors are merged, but that could be for better or for
worse.
Yes, it would seem to be about like the small differences you would
expect by going through a high-quality JPEG encode/decode. Small,
but possibly visible under certain circumstances. It's hard to say
if such differences are actually visible until we control for
things like different color management paths, though.
Right. Which leads us to the biggest point about all of this, which is (get ready): that the small, though I maintain noticeable, difference in saving any given image in TIF vs JPEG, is not a sound basis for accepting out of camera JPEGs as only being "that" much different.

Why? Because if you allow the camera to discard 4095/4096ths of its original color palette, you can't get "there" if you choose to manipulate the image. Plus, in that event, there is no such thing as JPEG, only a double-JPEG, best case.

In the end, the most efficient and best way to go remains 36-bit RAW. 16-bit TIFs are needlessly copious, while 8-bit formats have already truncated 99.9% of the color palette. Even an 8-bit TIF is not a "lossless format" in that sense, it is big time lossy.
 
Don't forget that all these images are 8-bit, including the 16-bit
samples. The screen is only 24-bit color (8-bits/channel in
TIF/JPEG lingo), and so are home computer printer drivers. Even
32-bit color modes have 24-bit palettes.
That is one possibility. Your statement that printer drivers are
8-bits/channel is a hypothesis, and not a bad one, but one that we
can test by the proposed experiment.
In theory, there is no difference between an 8 and 16 bit at this
level, essentially every 4096 palette possibilities are merged to
single colors.
OK, that theory has a little unexplained numerology in it, but I'll
grant you that for some sufficiently large number in place of your
4096, 16-bit data converted through an 8-bit driver would look
just like 8-bit data.
Right. Sorry. The unexplained part is probably the 4096:1 ratio.
That is because even though a 48-bit color format is used (16-bit
TIF, now 2 bytes per channel vs 1 byte per channel in the 8-bit
variety), the camera's color data is really only 36-bit color,
12-bits per channel.

So it is actually 36-bit to 24-bit color that results in the 4096:1
palette merging, not 48-bit to 24-bit of the formats proper.
But what if there's a 16-bit-capable color management engine that
can produce better data in 8b printer space? Isn't that an
alternative sensible conjecture? I have no idea which is true.
16-bit color processing is more accurate to arrive
at the colors ultimately used, but the same image portrayed in 16
or 8-bit should look no different on home media. I suppose
different drivers could generate small differences as choices are
made and colors are merged, but that could be for better or for
worse.
Yes, it would seem to be about like the small differences you would
expect by going through a high-quality JPEG encode/decode. Small,
but possibly visible under certain circumstances. It's hard to say
if such differences are actually visible until we control for
things like different color management paths, though.
Right. Which leads us to the biggest point about all of this,
which is (get ready): that the small, though I maintain noticeable,
difference in saving any given image in TIF vs JPEG, is not a sound
basis for accepting out of camera JPEGs as only being "that" much
different.

Why? Because if you allow the camera to discard 4095/4096ths of
its original color palette, you can't get "there" if you choose to
manipulate the image. Plus, in that event, there is no such thing
as JPEG, only a double-JPEG, best case.

In the end, the most efficient and best way to go remains 36-bit
RAW. 16-bit TIFs are needlessly copious, while 8-bit formats have
already truncated 99.9% of the color palette. Even an 8-bit TIF is
not a "lossless format" in that sense, it is big time lossy.
By some freak accident, I agree with everything you just said. You've made it very clear why RAW is much better than JPEG or TIFF as a camera format.

But what did that have to do with the issues at hand, of whether TIFF has an advantage over JPEG, or TIFF 16 has an advantage over TIFF 8, for printing an already-adjusted image? My conjecture: nothing at all.

j
 
Just Looking wrote:
You can try to explain why we can see a difference when the Bayer
can't.
I'm tired of trying.
And I'm tired of viewing it. Fact is, there is plenty of resolving that could be done in the red and blue channels if the sensor is balanced, and the 10.3MP Foveon vs 11MP 1Ds proves that: the SD9 clobbers the 1Ds in red/blue resolution, and it does so in a way humans can plainly see.
I've read most of what Dr. Mead has written, and I must have
missed
where he agreed with your whacky point of view.
Allow me to enlighten you:

"The more I learned about human vision," he says, "the more it was
clear that what these mosaic sensors were doing was introducing
artifacts into the image. It was one of those things that becomes
so massively annoying that after a while you think you ought to go
do something about it. It was clear that the way image sensors
worked was brain-dead. I talked to a lot of people, and nobody got
it. So I finally said: 'That's not a problem. That's an
opportunity.'"

"It's easy to have a complicated idea,it's very, very hard to have
a simple idea."

"It's a hack [discussing Bayer mosiac sensors], they have to do all
this guesswork to figure out what they threw away. They end up with
a lot of data, but two-thirds of it is made up. We end up with the
same amount of data, except ours is real."

-- Carver Mead
Yes, I've read those, and I pretty much agree with his viewpoint.
You mean his "whacky" viewpoint, don't you?
But nowhere does he support your notion that the Bayer sensor gets
less luma resolution than would be expected based on the number of
green sensors.
Green alone is not "luminance" in a color image. Luminance requires the R+G+B channels, just like chrominance. Green sensors might provide some partial luminance info in some areas of some color images, or they might not provide anything much at all. Same goes for the R and B components of luminance, which are also not luminance.
There you go saying I agree, while spouting nonsense about a "zero
sum game" again.
Surely you agree that the sampling rates on a sensor's surface are
zero sum. What you gain in green results in a loss in two other
channels.
Yes I agree that for a total number of pixels, red + green + blue
as they're always counted, using more of them for green means you
have fewer for red and blue.
Ok great, it wasn't "nonsense" after all.
I do not agree that using 1/3 for each color would be the only
possible optimum. For single-chip single-layer color sensors,
using 50% for green is better, as is well known and understood by
most technical practitioners of digital photography.
For single-chip sensors (which choose to employ iterative demosaicing to save money) no one is saying the Bayer pattern isn't the best one. That is a far cry from saying it is better than a balanced three exposure/CCD/CMOS/layer/prism/whatever approach.
Then why doesn't a 3.43MP Foveon beat the bejeezus out of a 6MP
Bayer in a B&W test? The Foveon samples green at a much higher
rate.
The 3.43 million green sensors of the Foveon compared to the 3
Million of the 6 MP Bayer (or 3.05 to 3.15 million for typical
actual cameras near 6 MP) is not enough higher to beat the Jeebus
out of it. The rate is only a few percent higher, so it only wins
in luma resolution by a little, or not at all, depending on how you
test it. It wins in chroma resolution by a lot more, and that's
the tradeoff...
So a little win plus a big win = ?

The point is, it doesn't win, even with lots more green (120%) and tons more blue (230%) and red (230%).

Why?

Because the test pattern is B&W, which tests neither luminance nor chrominance resolution; both of those characteristics of pixels require knowing all three RGB components. It tests the special case of B&W resolution, meaning that all the monochrome sensors contribute fully at every sampled location.

The test is meaningless, unless you are comparing like designs, in which case it is only pointless.
It depends on whether you want a color pixel or a monochrome pixel. Full
color MPs are reduced by a factor of 3 to 4--depending on how much
weight you give to a pure monochrome sensor's ability to resolve
full color with no mosaic, which is akin to the leftover green (or
whatever color you chose to double out of necessity).
You're really confusing things when you mix megapixels with
resolution,
If true, that's only because Bayer MP claims are based on monochrome performance, not color performance. Unfortunate, to be sure.
This confusion of resolution with megapixels is widespread, not
unique to you, but you've had it explained enough times that by now
you should be able to see why it's a confusion that should be
avoided, not propagated.
Simply put, it takes 10.2M sensors to produce a 3.4MP image, not
one less.
3 MP cameras do it with a whole lot fewer pixels. Not nearly as
good images, I'll grant.
What they do is interpolate fewer MPs up to 3M recorded pixels. In a 3 primary color model, each pixel is defined as requiring three color components. That means a 3.4MP image can only be generated by 10.2M sensors, not one less, or those 3.4M pixels have been interpolatively upscaled to some degree.

I hope we both can agree that digital upscaling and quantifying optical resolution are two things that have nothing in common?
 
Don't forget that all these images are 8-bit, including the 16-bit
samples. The screen is only 24-bit color (8-bits/channel in
TIF/JPEG lingo), and so are home computer printer drivers. Even
32-bit color modes have 24-bit palettes.
That is one possibility. Your statement that printer drivers are
8-bits/channel is a hypothesis, and not a bad one, but one that we
can test by the proposed experiment.
In theory, there is no difference between an 8 and 16 bit at this
level, essentially every 4096 palette possibilities are merged to
single colors.
OK, that theory has a little unexplained numerology in it, but I'll
grant you that for some sufficiently large number in place of your
4096, 16-bit data converted through an 8-bit driver would look
just like 8-bit data.
Right. Sorry. The unexplained part is probably the 4096:1 ratio.
That is because even though a 48-bit color format is used (16-bit
TIF, now 2 bytes per channel vs 1 byte per channel in the 8-bit
variety), the camera's color data is really only 36-bit color,
12-bits per channel.

So it is actually 36-bit to 24-bit color that results in the 4096:1
palette merging, not 48-bit to 24-bit of the formats proper.
But what if there's a 16-bit-capable color management engine that
can produce better data in 8b printer space? Isn't that an
alternative sensible conjecture? I have no idea which is true.
16-bit color processing is more accurate to arrive
at the colors ultimately used, but the same image portrayed in 16
or 8-bit should look no different on home media. I suppose
different drivers could generate small differences as choices are
made and colors are merged, but that could be for better or for
worse.
Yes, it would seem to be about like the small differences you would
expect by going through a high-quality JPEG encode/decode. Small,
but possibly visible under certain circumstances. It's hard to say
if such differences are actually visible until we control for
things like different color management paths, though.
Right. Which leads us to the biggest point about all of this,
which is (get ready): that the small, though I maintain noticeable,
difference in saving any given image in TIF vs JPEG, is not a sound
basis for accepting out of camera JPEGs as only being "that" much
different.

Why? Because if you allow the camera to discard 4095/4096ths of
its original color palette, you can't get "there" if you choose to
manipulate the image. Plus, in that event, there is no such thing
as JPEG, only a double-JPEG, best case.

In the end, the most efficient and best way to go remains 36-bit
RAW. 16-bit TIFs are needlessly copious, while 8-bit formats have
already truncated 99.9% of the color palette. Even an 8-bit TIF is
not a "lossless format" in that sense, it is big time lossy.
By some freak accident, I agree with everything you just said.
You've made it very clear why RAW is much better than JPEG or TIFF
as a camera format.
Perish the thought.
But what did that have to do with the issues at hand, of whether
TIFF has an advantage over JPEG, or TIFF 16 has an advantage over
TIFF 8, for printing an already-adjusted image? My conjecture:
nothing at all.
Well, I think at the heart of all these discussions is the notion that out of camera JPEG isn't "that" bad because the difference when experimenting with an already finalized image is only "that" much. But out of camera JPEG is bad, and one of the main reasons is that JPEG has become a very useful format which is pretty decent. Ironically, starting out with a JPEG largely removes the possibility of winding up with a high quality JPEG (or TIF).
 
