What can and can't be done with m4/3 RAW files

And I might also point out that the gloominati on this forum don't appear to be able to do much more than criticise. With criticism comes the responsibility to properly inform, not just to suggest that something isn't right or could be done better with something else. Show exactly what can be done and how; put your money where your mouth is, and don't just keyboard-spray.

And remember, we're talking about m4/3, not any other brand of camera gear.
 
…when you do the RAW conversion.

I am not sure whether you should regard all the 43/m43 sensors as the same, though. Was this still a Kodak sensor, or had Olympus moved on to Panasonic sensors by that time?

Another thing -- just as there have been improvements to RAW converters, let's not forget the improvements in in-camera processing to JPEG. Olympus was ahead of Panasonic in this respect, and understandably so -- they had been tutored at Kodak's knee, and whatever other failings Kodak had, they sure knew how to make a sensor and to convert its output in-camera to JPEG. I never used the E-x cameras, but I was familiar with Kodak's good JPEG work in the Kodak P880 camera.

BUT -- Olympus (and Panasonic) have moved on from there and do an even better job now.

Am I rationalizing like mad here to avoid your conclusion that I could make my pix look better if I shot RAW? JUST POSSIBLY!!!

But I believe I am doing more -- I believe today's OOC JPEGs are a lot better than they used to be and on balance, I can safely keep shooting JPEG, doing a bit of tweaking in PP and suffering the occasional loss of a pic because of mistakes in exposure, etc.
 
 
I am not sure whether you should regard all the 43/m43 sensors as the same, though. Was this still a Kodak sensor, or had Olympus moved on to Panasonic sensors by that time?
I'm really referring to 4/3 and m4/3 in the sense of on-going development. Each one is an iteration or development of a previous design. We've gone from CCD, to Live-MOS, to whatever it is now.
I agree with a couple of commentators that, since you put it in the title and repeated several times that you want to restrict discussion to Micro Four Thirds, you need to start the thread over again using examples from m43 cameras.

No way, NO WAY, can it be said that a CCD sensor from an E Series Oly was developed into a CMOS sensor (as used in all m43 cameras), or that the attributes you demonstrate for the former constitute a discussion with relevance to CMOS.

Still, it's a nice demonstration of PP for posting in the Olympus SLR Talk forum.

cheers
 
There was a recent debate about what you can and can't extract from highlights with a m4/3 sensor in another thread and how 'other' sensors are much better and more forgiving.
Your first mistake. Highlight recovery isn't just a sensor-based thing.
in my view, 4/3 and m4/3 sensors are the same, simply different generations.
Second (massive) mistake.
So here are the two photographs,
Third mistake. Anyone with any experience in looking at digital images can clearly see that the example posted shows only marginal 'overexposure' in the sky (if any at all). There is still colour and tonality visible in much of the clouded area in the JPG, so it's a given that the RAW file will have more headroom available and that much or all of any clipping present (and there isn't much in that example) can be recovered.

What is needed is a more difficult example and even then, it really needs the same image taken with different cameras to see how the performance compares.
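The headroom point above can be illustrated numerically. Here is a minimal numpy sketch (not from anyone in the thread; the tone curve, bit depth and pixel values are illustrative assumptions) showing how a bright sky that is entirely clipped in an 8-bit JPEG can still be almost entirely below saturation in the 12-bit raw data:

```python
import numpy as np

# Simulate a 12-bit sensor readout of a bright sky: values near, but
# mostly below, the sensor's saturation point of 4095.
rng = np.random.default_rng(0)
raw = rng.integers(3500, 4096, size=10_000)

# A typical in-camera JPEG tone curve pushes these values past the
# 8-bit ceiling, so everything clips to 255 and the gradation the
# sensor actually recorded is thrown away.
jpeg = np.clip(raw / 4095.0 * 300, 0, 255).astype(np.uint8)

jpeg_clipped = np.mean(jpeg == 255)  # fraction blown in the JPEG
raw_clipped = np.mean(raw == 4095)   # fraction truly saturated on-sensor
```

The point is only that clipping in the JPEG preview says little about clipping in the raw file, which is why a converter can often pull tone and colour back out of an 'overexposed' sky.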
Your mistake is in assuming the OP intended his image to be the ultimate test of dynamic range. He didn't. He was simply posting an example of what's possible. I think his processing result is lovely and impressive.

Nobody here really gives a ... darn ... whether you think some other format offers better highlight recovery.
the first as it came out in the RAW processor without any processing and the second with adjustments:

c56d92d5fe0e4896bbd37654256b04d3.jpg

f202e570593b40968bed86932431fc0e.jpg


--
The way to make a friend is to act like one.
www.jacquescornell.photography
 
No way, NO WAY, can it be said that a CCD sensor from an E Series Oly was developed into a CMOS sensor (as used in all m43 cameras), or that the attributes you demonstrate for the former constitute a discussion with relevance to CMOS.
You are misreading my intent. The m4/3 'system' developed from the 4/3 system that started with a CCD sensor of the same dimensions and then moved to other sensor designs. When I said iteration of development, I did not mean that a CCD sensor was converted into a Live-MOS sensor (if that's what you are implying).

I'm including 4/3 sensors in this discussion because they're relevant to the history of m4/3. We have long had, and will no doubt endure for a long time to come, the naysayers who derided 4/3 and will continue to deride m4/3 because it's not something else. So I wish to show how even the older technology can stand up quite well, and thus the newer technology should do even better.

If you don't study history, you're bound to repeat it, as has oft been said. So looking back at E-1, E-3 etc files is educational (especially since there are many people still using these cameras and others from the era) in terms of showing what can be extracted from rather ordinary looking files. They will likely have a fair repository of images from these cameras, as do I, and it's informative to learn what can be produced from such antique files.

This is where I intimated about criticism without education. It's all well and good to criticise something, but if you don't, won't or can't go beyond simply criticising, then you might as well put up and shut up. I'm actually trying to demonstrate what I've been saying (walking the walk, not just talking the talk), so that others may hopefully learn and experiment on their own with their own files.
 
There was a recent debate in another thread about what you can and can't extract from highlights with a m4/3 sensor, and how 'other' sensors are much better and more forgiving. The discussion comparing sensor qualities really wasn't germane to that thread, but that's typical for most forums.

So here are the two photographs, the first as it came out in the RAW processor without any processing and the second with adjustments:

c56d92d5fe0e4896bbd37654256b04d3.jpg

f202e570593b40968bed86932431fc0e.jpg

I even surprised myself. And hopefully we don't start seeing the thread filled up with examples from other brands, but that's probably wishful thinking.

--
Thoughts, Musings, Ideas and Images from South Gippsland
http://australianimage.com.au/wordpress/
Thanks for starting this thread. It is a positive example of using post processing to retrieve detail and to generally improve a picture that seems of little interest. Good job. Good discussion.

The improved picture has detail that I don't see in the original picture. The shapes (hills) in the background are much more obvious in the processed image. I don't see the rays of light coming downward in the original image at all.

A question: What did you see when you took the picture? Something like the original image or like the processed one?

--
some of our photos
 
Ray,

I'm not, in any way, interested in any of the arguments (discussions!) but I would very much like to know what software was used and what adjustments were made to turn a rather mundane picture into a brilliant image?

Peter Del
 
 
No way, NO WAY, can it be said that a CCD sensor from an E Series Oly was developed into a CMOS sensor (as used in all m43 cameras), or that the attributes you demonstrate for the former constitute a discussion with relevance to CMOS.
You are misreading my intent. The m4/3 'system' developed from the 4/3 system that started with a CCD sensor of the same dimensions and then moved to other sensor designs. When I said iteration of development, I did not mean that a CCD sensor was converted into a Live-MOS sensor (if that's what you are implying).
It's a replacement technology, and a CCD example tells us nothing about what can be done with CMOS.
I'm including 4/3 sensors in this discussion because it's relevant in terms of the history of m4/3. We have long had and invariably will endure for a long time to come, all the naysayers who derided 4/3 and will continue to deride m4/3 because it's not something else. So I wish to show how even older technology can stand up quite well, and so the newer technology should do even better.
Because CCD and CMOS have inherent pros and cons, there are some things that an old CCD might do better than a more recent CMOS, so the lesson is not appropriate.
 
This is where I intimated about criticism without education. It's all well and good to criticise something, but if you don't, won't or can't go beyond simply criticising, then you might as well put up and shut up. I'm actually trying to demonstrate what I've been saying (walking the walk, not just talking the talk), so that others may hopefully learn and experiment on their own with their own files.
I like this aspect of where you are coming from, so let's see a nice m43 example, i.e. relevance.

P.S. I would typically get exactly the same results in LR in about 5 seconds, with a quick pull-down gradation filter on the sky and then a touch of vibrance and contrast dialled in. I don't think the result is software-specific, so I agree with you not to turn the thread into a what-software-you-used debate. But you started that aspect of it, unnecessarily, in post #1.
 
Ray,

I'm not, in any way, interested in any of the arguments (discussions!) but I would very much like to know what software was used and what adjustments were made to turn a rather mundane picture into a brilliant image?

Peter Del
I used Capture One Pro 8 for the RAW editing. It was actually surprisingly easy to achieve, entirely in the RAW software.

First off, in the background setting, I adjusted clarity and contrast slightly, as well as highlight (a positive adjustment). Then I created a layer and used the gradient mask to further adjust brightness, highlight, clarity and contrast in the sky until I obtained a pleasing balance between detail, colour and tones. The final adjustment was another gradient mask for the bottom half, more or less repeating what I did for the top half (obviously with different adjustments) until I liked the appearance of the foreground.

It's so easy to do this in Capture One. Some say you can do all of this in Lightroom, but after years of using Lightroom I still haven't mastered those secret sauces it supposedly contains, much as Photoshop is mostly black magic to me. I'm basically a very simple photographer, so I prefer what doesn't require a master's degree to operate.

Without this becoming a debate about RAW software: I did just try the Graduated Filter in Lightroom, and it couldn't match the results of Capture One. Maybe it could be taken further in Photoshop with layers etc., but that's way beyond what I can or want to do. I should also add that I've been using Capture One for less than a month, and I'm surprising myself as to how easy it is for me to use.
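For readers without Capture One or Lightroom, the graduated-sky step described above can be approximated in code. This is a minimal numpy sketch of a vertical graduated exposure adjustment (the function name, gain values and synthetic frame are my own illustrative assumptions, not the OP's actual recipe):

```python
import numpy as np

def graduated_filter(img, top_gain=0.6, feather=0.5):
    """Darken the top of the frame and blend smoothly to no adjustment,
    like a software graduated filter pulled down over the sky.

    img: float array in [0, 1], shape (H, W) or (H, W, 3).
    top_gain: exposure multiplier applied at the very top row.
    feather: fraction of the image height over which the gain
             blends back to 1.0 (no change).
    """
    h = img.shape[0]
    y = np.linspace(0.0, 1.0, h)                 # 0 at top, 1 at bottom
    blend = np.clip(y / feather, 0.0, 1.0)       # 0 = full effect, 1 = none
    gain = top_gain + (1.0 - top_gain) * blend   # per-row multiplier
    gain = gain.reshape(-1, *([1] * (img.ndim - 1)))
    return np.clip(img * gain, 0.0, 1.0)

# Synthetic frame: bright "sky" in the top half, darker "ground" below.
frame = np.vstack([np.full((50, 80), 0.95), np.full((50, 80), 0.30)])
out = graduated_filter(frame, top_gain=0.6, feather=0.5)
```

Real converters apply this per channel after demosaicing and usually combine it with local contrast, but the principle is the same: a smooth positional mask that scales exposure.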

--
Thoughts, Musings, Ideas and Images from South Gippsland
http://australianimage.com.au/wordpress/
 
Thanks for starting this thread. It is a positive example of using post processing to retrieve detail and to generally improve a picture that seems of little interest. Good job. Good discussion.

The improved picture has detail that I don't see in the original picture. The shapes (hills) in the background are much more obvious in the processed image. I don't see the rays of light coming downward in the original image at all.

A question: What did you see when you took the picture? Something like the original image or like the processed one?
 
Well that's turned out to be a more difficult task than I thought. I wanted to put up some examples from my E-M1, which I've had since the start of 2014. But I've struggled to find similar examples to the previous ones, as the E-M1 seems to cope far better with those sorts of scenes.

Anyway, I found one shot that comes close, shot early morning:



68391d6c7d34453b9442d90af9e626c0.jpg



7224b60ed9fb475492d59626d624e472.jpg

I may have to deliberately stuff up some shots in order to provide examples. I'm not sure that's the idea of modern sensors.

--
Thoughts, Musings, Ideas and Images from South Gippsland
 
That's better, thanks. I know what you mean: I went looking on my work PC (limited selection) but I too found it difficult to find choice subject matter.

Maybe it's the meters. Maybe they should be taking some of the credit for progress. Anyway, interesting.
 
There was a recent debate about what you can and can't extract from highlights with a m4/3 sensor in another thread and how 'other' sensors are much better and more forgiving. The discussion comparing sensor qualities really wasn't germane to that thread, but that's typical for most forums.
As I read the thread you are referring to (from my phone), it sounded like a few ill-informed non-photographer Sigma bashers had their say (they know who they are). I don't think anyone honestly believes mFT RAW files cannot be pushed quite nicely when needed; that was never the argument. What I do know, however, is that I can hardly remember the last time I had to push D800 files (for highlights or shadows), or Merrill files, for exactly the proof or reason the debate was about. There can be such a scene, no debate about it. But it is the quality of the edges, the detail, and the overall transparency of the image that I am after, in every frame I open.
I was just working on another of my blog posts (I usually start with the photographs and then begin to compile the story), and one of the photos I'm going to use is a perfect example of highlight recovery. Now the photograph was taken with an E-3 in 2008, but in my view, 4/3 and m4/3 sensors are the same, simply different generations.
They are of the same size, quite different in what they deliver.
I've used this photograph years ago in another capacity, but using my current RAW software (which shall not be named lest it draw the ire of competitors' fanboys/girls) I discovered how much I could extract from the RAW file. Now this is not about RAW converters, so I hope that we don't descend into a rabid battle on that front. It's about what you can extract from a m4/3 RAW file.
This actually looks like the Nik (or whatever it is now) Detail Extractor filter, likely used on a TIFF file. So what? It does not work on all scenes and files, and you had better be very careful with it if you want your images to look realistic and natural.
So here are the two photographs, the first as it came out in the RAW processor without any processing and the second with adjustments:

I even surprised myself. And hopefully we don't start seeing the thread filled up with examples from other brands, but that's probably wishful thinking.
No kidding. Here is a shot from a 2011 camera phone (one of many); the image is from 2012: white snow highlights pushed down, deep forest shadows pulled up. The E-3 was 10MP; the Galaxy-2 is only 8MP. It proves nothing, but at least it is realistic. Don't you agree?



9076582947_d3c8c2787c_o.jpg






--
- sergey
 
No way, NO WAY, can it be said that a CCD sensor from an E Series Oly was developed into a CMOS sensor (as used in all m43 cameras), or that the attributes you demonstrate for the former constitute a discussion with relevance to CMOS.
You are misreading my intent. The m4/3 'system' developed from the 4/3 system that started with a CCD sensor of the same dimensions and then moved to other sensor designs. When I said iteration of development, I did not mean that a CCD sensor was converted into a Live-MOS sensor (if that's what you are implying).
It's a replacement technology, and a CCD example tells us nothing about what can be done with CMOS.
I'm including 4/3 sensors in this discussion because it's relevant in terms of the history of m4/3. We have long had and invariably will endure for a long time to come, all the naysayers who derided 4/3 and will continue to deride m4/3 because it's not something else. So I wish to show how even older technology can stand up quite well, and so the newer technology should do even better.
Because CCD and CMOS have inherent pros and cons, there are some things that an old CCD might do better than a more recent CMOS, so the lesson is not appropriate.

--
Arg
Now you're being utterly pedantic and missing the point of this thread entirely. The idea represented in this article could have been written just for you, and a few others: http://www.dailymaverick.co.za/opin...belong-in-public-policy-debates/#.Vbyvt88w-Uk.
Oz, this thread is out of control.

I love your photograph(s), and your demonstration is useful. Call it Highlight Recovery 101 with pleasing landscape photographs. Great.

But now you've lost your way and you're sounding super word-salady. There's your last post, here, in which you've now pedantically accused someone else of being pedantic. Which was funny, but in that awkward-watching-someone-trainwreck-themselves sort of way.

And then there's your talk of big things--studying history and legacies and being doomed to repeating the past--and WTF are you even talking about? It's a camera sensor, not the second world war.

If you post propositions, people are going to agree and disagree. That's how free and open discussions work. I hate to sound like your detractors, here, but you gotta work on handling constructive criticism a little better. A lot of the people who've commented on your proposition have had good, useful points. They all deserve to be heard.

And in this particular branch of the thread -- your spat with TN Arg -- you're prioritizing defensiveness over making sense. One corner of your mouth is saying that you never confused CCD and CMOS technologies while the other corner insists on lumping them together for the good of history or . . . something? Word salad.

How 'bout you just say, "Hey TN, good catch. That's a great point. Thanks!" You don't have to be 100% right about everything all the time. DPReview forum isn't a "game" you're supposed to "win."

I think it might be time for you to simmer down a little bit.
 
There was a recent debate about what you can and can't extract from highlights with a m4/3 sensor in another thread and how 'other' sensors are much better and more forgiving. The discussion comparing sensor qualities really wasn't germane to that thread, but that's typical for most forums.

I was just working on another of my blog posts (I usually start with the photographs and then begin to compile the story), and one of the photos I'm going to use is a perfect example of highlight recovery. Now the photograph was taken with an E-3 in 2008, but in my view, 4/3 and m4/3 sensors are the same, simply different generations.

I've used this photograph years ago in another capacity, but using my current RAW software (which shall not be named lest it draw the ire of competitors' fanboys/girls) I discovered how much I could extract from the RAW file. Now this is not about RAW converters, so I hope that we don't descend into a rabid battle on that front. It's about what you can extract from a m4/3 RAW file.

So here are the two photographs, the first as it came out in the RAW processor without any processing and the second with adjustments:

c56d92d5fe0e4896bbd37654256b04d3.jpg

f202e570593b40968bed86932431fc0e.jpg

I even surprised myself. And hopefully we don't start seeing the thread filled up with examples from other brands, but that's probably wishful thinking.

--
Thoughts, Musings, Ideas and Images from South Gippsland
http://australianimage.com.au/wordpress/
Which RAW converter? I very much like the result and tire of the bickering in the thread, but would like to know... And it's a nice image still; I like it.
 
The issue is that people can be very dismissive of what m4/3 can deliver, always citing things like lack of DR etc. And this is always demonstrated by using images from another format. There's no point in doing so unless one can produce the same scene, taken at the same time with both formats.

I'm attempting to demonstrate to m4/3 users, not to those who use other brands, what they can get from their cameras if their immediate results don't look all that good. Many new users especially can have the perception that when a shot doesn't work out, it's the camera that's at fault, because people keep saying m4/3 isn't good enough.

Phone cameras as well can produce excellent results in the right conditions and they are getting better all the time. In fact, there's probably more effort going into camera phone technology than any other camera technology at the moment. But that's not the point.

--
Thoughts, Musings, Ideas and Images from South Gippsland
http://australianimage.com.au/wordpress/
 