Raw vs JPEG, part 2

would there be a visible difference in the final print between exposing for JPEG and exposing for RAW?
Take your camera, shoot, process, print, scan, and post. Don't expect somebody to do your homework.
What a sensible idea; actually testing something yourself (with proper steps) to make your own, educated decisions!

Having done this myself, I can say the answer is an absolute and huge yes: there is a visible difference. The homework took mere minutes to conduct. Exposure (photography) 101.
 
A basic assumption in this thread has been that one needs to absolutely maximize certain aspects of image quality.
GIGO:Garbage In Garbage Out!

Why not maximize the image data quality? You CAN see a difference!
Assume your camera is set to RAW+JPEG at ISO 100. You take two images. One is exposed with the intent of producing the best possible RAW file. The other is exposed with the intent of producing the best possible out-of-camera JPEG.
The in-camera JPEG is either overexposed because you wish to ideally expose the raw, or the raw is underexposed because you ideally exposed the JPEG. It's simple to test (why doesn't everyone take the five minutes to do so?).
Will there be a visible difference between the resulting two images?
Yes, and you can figure this out on your own by spending less time asking and more time actually doing the tests yourself!
Knowing how big a visible difference it makes is an important factor in knowing how important it is to properly expose the RAW, and when you should do so.
Knowing by testing before making statements is an ideal first step in answering your questions. DO try it.
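For anyone who would rather see numbers before taking either side's word for it, here is a minimal sketch of the underlying idea. It assumes a simplified sensor model (photon shot noise plus read noise) and a hypothetical one-stop ETTR headroom; real sensors, meters, and raw converters differ, so treat it as an illustration, not a measurement.

```python
import numpy as np

rng = np.random.default_rng(0)

FULL_WELL = 4000.0   # hypothetical full-well capacity, in electrons
READ_NOISE = 3.0     # hypothetical read noise, in electrons RMS

def capture(scene, exposure_scale):
    """Simulate a linear sensor capture: photon shot noise + read noise, clipped at full well."""
    signal = scene * exposure_scale * FULL_WELL
    shot = rng.poisson(signal).astype(float)          # photon shot noise
    read = rng.normal(0.0, READ_NOISE, scene.shape)   # read noise
    return np.clip(shot + read, 0.0, FULL_WELL)

def snr(x):
    return x.mean() / x.std()

# A deep-shadow patch sitting at about 1% of scene white.
shadow = np.full(100_000, 0.01)

# "Exposed for the JPEG": scene white lands at half of full well.
jpeg_exposed = capture(shadow, exposure_scale=0.5)

# ETTR: one stop more exposure at capture, pulled back one stop in raw processing.
ettr_exposed = capture(shadow, exposure_scale=1.0) / 2.0

print(f"shadow SNR, exposed for JPEG:      {snr(jpeg_exposed):.1f}")
print(f"shadow SNR, ETTR then pulled back: {snr(ettr_exposed):.1f}")
```

Under these assumptions the ETTR capture shows visibly higher shadow SNR after normalization, which is the whole point of exposing for the raw.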
 
I don't think anyone can tell the difference. I believe photography is more about the subject matter of what's happening in the picture than about whether it's technically perfect. So I think that a batch of 100 random images, 50 JPEGs and 50 raws, all processed and resized for web viewing at 4K, would result in no one guessing which is which to within 10.

If raw is so completely overwhelmingly better, then why is JPEG on every camera ever made including the flagship models?
If it is not, why the RAW option in the first place?
All I hear about are personal examples, which isn't fair at all. I could easily post-process a JPEG to look superior to a raw. I could purposefully take a picture with a wide dynamic range and prove raw is superior. Real pictures aren't usually like that.
They are actually more challenging than that. Blown highlights are far more common than challenging DR scenes. Poor WB is another example.

Are you kidding, or just trolling? I am not following this thread carefully; am I missing some context?
 
robgendreau wrote: That's probably because what's happening is that the camera shoots a raw, records it, and processes a JPEG from that RAW.
Of course that's what happens. All cameras shoot this raw file. You can decide if you want to keep it or toss it.

The JPEG always comes from the in-camera processing of the initial raw data. There's no other way to produce that JPEG!

As for exposure, it's simple (ETTR and the idea of ideal exposure for raw is really, really old).
With whatever settings are in place for JPEG, the camera's computer assumed you wanted that rather washed-out, bright shot that you set up because you were using ETTR to get the best possible RAW.
The data is simply vastly different. Treating a JPEG and a raw the same is like assuming an ISO 100 transparency film and an ISO 400 color neg should be treated the same. They should not. Anyone testing these differences (in film or digital) can see this with about 5 minutes of testing.
What I guess I'm saying is that since the image always starts out as RAW it's really a processing question, not an exposure question.
Exactly! And the data is different (one is linearly encoded and needs rendering).
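To see what "linearly encoded and needs rendering" means in practice, here is a tiny sketch of my own (a simplification, not any camera's actual pipeline) applying the standard sRGB transfer curve to linear values:

```python
def srgb_encode(linear):
    """Standard sRGB transfer function: linear value in [0, 1] -> encoded value in [0, 1]."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

# Raw data stays linear; a converter (or the camera) applies a curve like this,
# plus white balance, a tone curve, color rendering, sharpening, and so on.
for linear in (0.01, 0.18, 0.50, 1.00):
    print(f"linear {linear:.2f} -> sRGB-encoded {srgb_encode(linear):.3f}")
```

Note how a 1% linear value lands around 10% after encoding and 18% middle gray lands around 46%: the rendering redistributes the linear data heavily toward the shadows and midtones.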
I'm also a bit curious if there's a camera (or camera phone) that perhaps always shoots ETTR, even if the user is unaware of it, and never even sees the washed out image, as in a P&S with no EV controls.
Exposure Compensation anyone?

 
I also see evidence daily on these forums that most cameras are capable of producing straight out of the camera jpegs that blow away the raw processing abilities of many who choose to do it themselves after the fact.
You are living in a dream world.

Pure and simple.
A world where a lot of people don't have a clue (or creative idea) how to render their raws, perhaps, or use inferior raw processing software they have no idea how to control.
 
...

If raw is so completely overwhelmingly better, then why is JPEG on every camera ever made including the flagship models?

...
The short answer is that JPEG is an easier workflow, and many people are happy with the results.
Here we go again. Are people happy because they actually properly tested both options, or are people who didn't test this, who are just somewhat lazy and ignorant, simply happy that they are happy? Big difference.

IF someone does proper testing of the two and makes their own workflow decisions, fine. But someone who didn't test the differences, didn't provide any examples of the differences, and just tells the reader "be happy, it's good enough" should probably be ignored, no? I mean, without a lick of testing or examples to provide, why should we listen to what they propose? It could be made up or misunderstood, like so many DPR urban legends about digital imaging and color management.
There are many situations where the results from shooting JPEG are more than "good enough".
How would you know if you didn't try the other process? Oh, speculate.
The long answer:

I think you are confusing the concept of a "better tool" with "better results".
The two should be connected at the hip.
JPEG shooters are quite capable of producing excellent quality results.
Could they do better? In many cases, especially with guidance from those who've done some testing, yes: much better quality, and more flexible image data from which to produce an image.
RAW shooters have more flexibility in post processing.
And that doesn't ever constitute better results (than JPEG shooters?).
This gives them a wider range of options when processing the file.
And that doesn't ever constitute better results (than JPEG shooters?).
If the in-camera JPEG processing is a good match for the scene being photographed, then there may not be an advantage to shooting RAW for that image.
It's not a good match unless you can colorimetrically measure the scene and what you produced from the rendering. You can't. So don't go there.
If there is something in the image that the in-camera JPEG processing won't handle well, then there can be a tremendous advantage to shooting RAW.
And that doesn't ever constitute better results (than JPEG shooters?).
Thus the assertion that RAW is "better" is not incompatible with the assertion that one can get excellent results from JPEG.
Who said you can't get excellent results from raw? The only way to KNOW you can do better is to properly capture the JPEG, then the raw, the latter being properly exposed.

You can get an inferior raw if you also capture JPEG the same way; that's a fact. You can get a boatload more DR from the raw than the camera JPEG! You can get a wider bit depth and color gamut. Those are the facts of the data!
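Just on the bit-depth point, the raw container simply has far more tonal levels to work with than an 8-bit JPEG. A quick back-of-the-envelope sketch (assuming typical 12- or 14-bit raw files; your camera's actual depth may differ):

```python
# Container limits only; not a claim about usable dynamic range.
for name, bits in (("8-bit JPEG", 8), ("12-bit raw", 12), ("14-bit raw", 14)):
    print(f"{name}: {2 ** bits} tonal levels")

# Because raw data is linear, each stop down from clipping holds half the
# levels of the stop above it. For a 14-bit raw:
total = 2 ** 14
for stop in range(1, 6):
    print(f"stop {stop} down from clipping: ~{total // 2 ** stop} raw levels")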
 
A basic assumption in this thread has been that one needs to absolutely maximize certain aspects of image quality.
GIGO:Garbage In Garbage Out!
Yes. Start with bad assumptions, and you will get bad conclusions.
Why not maximize the image data quality? You CAN see a difference!
Maximizing image quality is not the same as making a visible difference.

Clearly one won't see a difference in every situation. Assuming one is going to process the RAW file anyway, there are going to be some situations where the resulting print looks the same whether or not the RAW was exposed for RAW or exposed for JPEG.

Consider the difference between a 50 megapixel camera and a 20 megapixel camera (assume the pixel count is the limiting factor on image quality).

A 20 megapixel camera will yield about 456 ppi for an 8x12" print.

A 50 megapixel camera will yield about 721 ppi for an 8x12" print.

While the 50 megapixel camera clearly produces a higher resolution image file, with more details, it is not clear the difference will be visible (or detectable) in an 8x12 print made on a 300 ppi printer.
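The ppi figures above come from simple arithmetic; here is a small sketch of that calculation (assuming 3:2 sensor proportions, which is what those numbers imply, with the long edge against the 12-inch dimension):

```python
import math

def ppi_on_long_edge(megapixels, long_edge_inches, aspect=(3, 2)):
    """Pixels per inch along the long edge of a print for a given sensor resolution."""
    w_ratio, h_ratio = aspect
    pixels = megapixels * 1_000_000
    # width * height = pixels, with width/height = w_ratio/h_ratio
    height_px = math.sqrt(pixels * h_ratio / w_ratio)
    width_px = height_px * w_ratio / h_ratio
    return width_px / long_edge_inches

for mp in (20, 50):
    print(f'{mp} MP on an 8x12" print: ~{ppi_on_long_edge(mp, 12):.0f} ppi')
```

This reproduces the roughly 456 ppi and 721 ppi figures quoted above.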

.

Obviously, not every situation calls for 50 megapixels, and not every situation calls for getting the absolute maximum SNR out of the sensor.

I think it's useful to understand when it's important to expose for RAW and when there won't be a noticeable difference in the result if the RAW was exposed for JPEG.

.

Of course, I would be fascinated to see how you propose proving that there is a noticeably visible difference in every possible shooting situation. Note that this is very different from showing that there are some situations where the difference will be visible.
 
... The in-camera JPEG is either overexposed because you wish to ideally expose the raw, or the raw is underexposed because you ideally exposed the JPEG.
Although you don't address it, this brings up the question of why cameras are designed in this fashion.

If ETTR is the strategy that provides the best quality, why don't cameras do this by default? If the camera knows that the RAW has been exposed in this way, the camera can certainly adjust for that when it creates the in-camera JPEG.

As it is clearly possible to design a camera that uses ETTR and can correctly produce a JPEG from that RAW, why don't all cameras work this way?

Clearly there must be a reason.
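For what it's worth, the compensation described above is conceptually trivial. Here is a toy sketch of my own (an illustration, not any manufacturer's actual pipeline) of a rendering stage that knows the ETTR push and simply divides it back out before applying its output curve:

```python
def srgb_encode(linear):
    """Standard sRGB transfer function, used here as a stand-in for a camera tone curve."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def render(linear_raw, ettr_stops=0.0):
    """Toy JPEG rendering: undo a known ETTR push, then apply the output curve."""
    compensated = min(linear_raw / (2 ** ettr_stops), 1.0)
    return srgb_encode(compensated)

# A midtone metered conventionally vs. the same midtone pushed +1 stop at capture:
metered = render(0.18)                  # conventional exposure
ettr    = render(0.36, ettr_stops=1.0)  # +1 stop ETTR, compensated in rendering
print(f"metered render: {metered:.3f}, ETTR-aware render: {ettr:.3f}")  # identical brightness
```

Under this toy model the two renders come out at the same brightness, so the "why don't cameras do this?" question is about design choices and metering reliability, not about whether the compensation is possible.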

On a side note: is this functionality (ETTR and corresponding JPEG production) available with third-party firmware (e.g., Magic Lantern)? If not, would this be a good feature to add to Magic Lantern?


 
A basic assumption in this thread has been that one needs to absolutely maximize certain aspects of image quality.
GIGO:Garbage In Garbage Out!
Yes. Start with bad assumptions, and you will get bad conclusions.
Why not maximize the image data quality? You CAN see a difference!
Maximizing image quality is not the same as making a visible difference.
You CAN see a difference! I can see a difference.

Here is another illustration:

http://schewephoto.com/ETTR/
 
...

The in-camera JPEG is either overexposed because you wish to ideally expose the raw, or the raw is underexposed because you ideally exposed the JPEG.
Although you don't address it, this brings up the question of why cameras are designed in this fashion.

If ETTR is the strategy that provides the best quality, why don't cameras do this by default?
It does, for raw! You just have to understand the basics of CFAs, image capture, raw data and its encoding, and, more importantly, how to properly expose for it.

When you buy film, there's no assumption that you're a rube with zero idea of how to ideally expose for that media. You test. At least some of us did so with film in the old days, just as we do with digital.
If the camera knows that the RAW has been exposed in this way, the camera can certainly adjust for that when it creates the in-camera JPEG.
The PHOTOGRAPHER knows or should know. The camera always produces raw data. Always.
As it is clearly possible to design a camera that uses ETTR and can correctly produce a JPEG from that RAW, why don't all cameras work this way?
ETTR is an old term for optimal exposure, for raw. Nothing more. If you treat the JPEG and raw identically in terms of exposure, you're being as silly as treating ISO 100 transparency film and ISO 400 color neg film the same way out of ignorance.
Clearly there must be a reason.
For some who don't understand the basics of photography and exposure? There is: too little education and testing.

Set for JPEG alone, ignore what's really happening under the hood.

--
Andrew Rodney
Author: Color Management for Photographers
The Digital Dog
http://www.digitaldog.net
 
...
Here is another illustration:

http://schewephoto.com/ETTR/
No one denies that there are images where ETTR can help. In the example you linked ETTR helped reduce shadow noise in a scene that had high dynamic range.

However, this tells us nothing about the visibility of the difference for more mainstream images.
 
But most photographers prefer to shoot raw
Do you have evidence to prove this?
and do it afterwards because they can do a better job on a large screen computer with a more powerful processor.
I meant to say most photographers on DPR. And my survey in another thread confirms this.

My belief is that, in the general population of camera users, JPEG users are in the majority, but the smaller number of raw users are more passionate about raw and photography in general.
 
...

The in-camera JPEG is either overexposed because you wish to ideally expose the raw, or the raw is underexposed because you ideally exposed the JPEG.
Although you don't address it, this brings up the question of why cameras are designed in this fashion.

If ETTR is the strategy that provides the best quality, why don't cameras do this by default?
It does, for raw! You just have to understand the basics of CFAs, image capture, raw data and its encoding, and, more importantly, how to properly expose for it.

When you buy film, there's no assumption that you're a rube with zero idea of how to ideally expose for that media. You test. At least some of us did so with film in the old days, just as we do with digital.
I'm sorry, I don't see how your answer addresses the question.

If I understand your position, it is that one should use a different exposure strategy when one wants to maximize the quality of the information in the RAW file, as opposed to when one wants to maximize the quality of an in-camera JPEG.

The reason is that the in-camera JPEG processing will produce an image that is "too bright" if the RAW file is properly exposed.

My question was why are modern cameras designed in this fashion? If the best quality is always obtained by ETTR, why doesn't the in-camera meter do this by default? The in-camera JPEG processing would know about this, and take it into account.

If this is the best strategy in the general case, why aren't cameras designed this way?

 
...

Who said you can't get excellent results from raw? The only way to KNOW you can do better is to properly capture the JPEG, then the raw, the latter being properly exposed.

You can get an inferior raw if you also capture JPEG the same way; that's a fact. You can get a boatload more DR from the raw than the camera JPEG! You can get a wider bit depth and color gamut. Those are the facts of the data!

...
Your position is based on the premise that you know what people want, even when they have other ideas.

Obviously, if the sole criterion for judging a situation is whether or not you like it, then the only possible result is whatever matches your personal taste.

.

On the other hand, if we accept the possibility that some photographers are in different situations, and some photographers place different values on the various aspects of quality, we may find that there are no universal solutions. The solution that's best for one photographer may not be best for another.

An obvious example would be resolution. It's easy to demonstrate that higher resolutions allow images with more detail. Therefore a 50 megapixel camera is by definition capable of creating higher resolution images than a 20 megapixel camera. If resolution were our only concern, then everyone should have a 50 megapixel camera. The reality is that a 20 megapixel camera is more than enough for many situations (and even overkill for some). In many contexts the final print from a 20 megapixel camera will look identical to the final print from a 50 megapixel camera. If you are in one of these situations, it may make a lot of sense to deal with the smaller 20 megapixel files. Your workflow will be faster, you will use less storage, and your prints will be visually identical. From a scientific standpoint, your files would be higher quality with more resolution; however, you may be in a situation where 20 megapixels is more than "good enough".

.

Bringing this back to ETTR, the question is not what you prefer, nor is it whether or not ETTR offers advantages in some situations. The important question is "what are the situations where ETTR will make a noticeable difference in the final print?"
 
...

The in-camera JPEG is either overexposed because you wish to ideally expose the raw, or the raw is underexposed because you ideally exposed the JPEG.
Although you don't address it, this brings up the question of why cameras are designed in this fashion.

If ETTR is the strategy that provides the best quality, why don't cameras do this by default?
It does, for raw! You just have to understand the basics of CFAs, image capture, raw data and its encoding, and, more importantly, how to properly expose for it.

When you buy film, there's no assumption that you're a rube with zero idea of how to ideally expose for that media. You test. At least some of us did so with film in the old days, just as we do with digital.
I'm sorry, I don't see how your answer addresses the question.

If I understand your position, it is that one should use a different exposure strategy when one wants to maximize the quality of the information in the RAW file, as opposed to when one wants to maximize the quality of an in-camera JPEG.

The reason is that the in-camera JPEG processing will produce an image that is "too bright" if the RAW file is properly exposed.

My question was why are modern cameras designed in this fashion? If the best quality is always obtained by ETTR, why doesn't the in-camera meter do this by default? The in-camera JPEG processing would know about this, and take it into account.

If this is the best strategy in the general case, why aren't cameras designed this way?
Good question!
 
