Will the 5D4 & 1D X II really get new sensors?

Yes and no. It all (in my opinion) depends on what taste you have in developing the final picture. For landscape photography, which I do a lot of, there is no picture that would ever be a final picture with such a simple approach. Again, to avoid misunderstandings, I'm not trying to talk down this approach or any other way of processing that people like. I'm just saying that for a lot of photography there is (in my opinion) not much use for a flat (or close to flat) conversion that will make it to a final picture.

I bracket my shots and choose the exposure that does not have essential highlights clipped. There may be blacks clipped with all sliders in the zero position, which in some cases indicates that an HDR merge is needed, but in most cases it is not. What I do is adjust exposure, highlights, shadows, whites and blacks initially to match the look I like as closely as possible. I use the tone curve to fine-tune contrast and seldom use the contrast slider. If only part of the picture has shadows that need adjustment, that is done with graduated filters, radial filters and brushes, which may overlap to get what I want. Clarity and saturation/vibrance are often not adjusted at all. White balance is often adjusted globally or locally. If I have several similar compositions, I will copy the edits from one picture to the others and fine-tune, and then choose which one(s) I find the best.

That is roughly my approach, and I can edit most pictures within a couple of minutes. I'm not trying to say that the profile does not matter; I choose the one that works well with my taste and approach. So my advice is to practice the adjustments in Lightroom so it becomes quick to arrive at the look you prefer, and to learn which slider movements get you to that look.
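As a rough illustration of the bracket-selection step (a sketch only, not anyone's actual workflow), here is how picking the brightest exposure without clipped raw highlights could be checked outside Lightroom. It assumes Python with the rawpy library; the file names and the 0.1% threshold are made up for illustration.

```python
# Sketch: pick the brightest bracketed frame whose raw data is not meaningfully
# clipped. Assumes the rawpy library is installed; file names and the 0.1%
# threshold are illustrative only.
import rawpy
import numpy as np

def clipped_fraction(path, margin=0.99):
    """Fraction of raw photosites at or above ~99% of the camera's white level."""
    with rawpy.imread(path) as raw:
        data = raw.raw_image_visible.astype(np.float64)
        return float(np.mean(data >= margin * raw.white_level))

bracket = ["IMG_0001.CR2", "IMG_0002.CR2", "IMG_0003.CR2"]    # darkest to brightest
usable = [f for f in bracket if clipped_fraction(f) < 0.001]  # under 0.1% clipped
print("Brightest usable frame:", usable[-1] if usable else "none - consider an HDR merge")
```

In practice the same judgement is usually made by eye on the brackets, but the idea is the same: keep the most exposure you can without losing essential highlights.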
 
If this indicates share by unit volume, then it says absolutely nothing about R&D. If it is by the amount of money earned from selling image sensors, then it starts to say a little more...
It is share by value. The reason Canon even gets onto the chart is that by and large its sensors are large, high value items.

--
Bob.
“The picture is good or not from the moment it was caught in the camera.”
Henri Cartier-Bresson.
Can be... Now, does sensor R&D money come only from sensor sales at Canon, or is it also paid for by whole cameras, lenses and other devices? Do we know what percentage of this money goes to R&D across those companies? I see this graph having close to no correlation with the future. It is a very static and very small piece of data from which to evaluate a company and conclude anything.
That doesn't make a whole load of difference.
It CAN.
Business economics dictates that if you're not making so many sensors, you can't put as much R&D spend into them; otherwise you end up with your sensors costing a lot more than the competition's.
To a point, and even within the disagreement there is one part to agree with. Yes, Canon might have more expensive sensors, because most of theirs are APS-C and FF sensors. Sony has cheap sensors because most of theirs are smaller camera sensors; those need to be cheaper because of their size and purpose, and they can be cheaper because of less material used, better yields per wafer, etc. So it doesn't bring us to any usable conclusion.

There is only so much that you can cross-subsidise different parts of your business, in the end, they all have to pay. And Canon's sensors are likely more expensive in any case, because they are making them on 200mm wafers while Sony is using 300mm.
True. I don't deny that those sensors are more expensive. But Canon probably makes these with inventory that is already well paid back.
That's fewer sensors for the same amount of processing - and Sony is likely putting more through its fab lines, so less capital cost per sensor. In the end, for this kind of product, economies of scale always win.
Yes. They're probably set to win this way. But it doesn't mean that Canon's R&D lake is dry and that they will lose another generation after the new one.

It is not always R&D and economies of scale that make the difference. Common, freely available knowledge doesn't let many companies die just like that, and on the other side, patents can do that to any company in a long-term war.

I still believe there is no usable data for the end customer for the next four to five years - no forecast that lets enthusiasts know what the next step of each company will be.
--
Bob.
“The picture is good or not from the moment it was caught in the camera.”
Henri Cartier-Bresson.
 
Hans Kruse wrote:

Yes and no. It all (in my opinion) depends on what taste you have in developing the final picture. For landscape photography, which I do a lot of, there is no picture that would ever be a final picture with such a simple approach.
Final picture? Simple approach?

I'm talking about setting up how images are imported into Lightroom BEFORE you start to work on them.

My initial question was to find out if there actually IS a clipping problem in LR related to the 5Ds(r) cameras, or if it simply is related to people not calibrating their import profile…

Take care
 
 
Hans Kruse wrote:

In my opinion you cannot determine if there is clipping this way. You need to use a program that analyses the RAW file, such as RawDigger. Lightroom has automatic highlight recovery always turned on (PV 2012), so you cannot determine if there is clipping.
Whether the recovery is automatic or not is secondary to me—as long as the data is there. The looks of the imported file can also be tweaked, like I mentioned earlier.

Now, it would be a problem if 5Ds files show recoverable highlights in Canon's own software, but are clipped (and unrecoverable) in Lightroom. This is what I was wondering: is the actual underlying data not there in Lightroom for these two models?
For normal import into Lightroom you can do what you wrote but I'm not really seeing the value in this, at least for a lot of photography.
The value is that you can reduce the sliders you have to move on each image. Set your lens corrections, sharpening values and other tweaks that you often perform so that the software does it for you directly on import.

For people who push and pull each slider on every image—no matter what—it's of little use of course.
 
Hans Kruse wrote:
In my opinion you cannot determine if there is clipping this way. You need to use a program that analyses the RAW file, such as RawDigger. Lightroom has automatic highlight recovery always turned on (PV 2012), so you cannot determine if there is clipping.
Whether the recovery is automatic or not is secondary to me—as long as the data is there. The looks of the imported file can also be tweaked, like I mentioned earlier.
But that's the issue with the automatic highlight recovery: you don't know if the data is there. When the clipping indicators turn on, it does not mean there is clipping in the RAW file. There can be significant clipping in the RAW file and no clipping indicators in Lightroom. A change of white balance alone can turn the clipping indicators on or off.
Now, it would be a problem if 5Ds files show recoverable highlights in Canon's own software, but are clipped (and unrecoverable) in Lightroom. This is what I was wondering: is the actual underlying data not there in Lightroom for these two models?
I do not use DPP and have only checked a few times. Same with Capture One. But it is unlikely that Lightroom will show clipping and DPP not. DPP does not (to my knowledge) have automatic highlight recovery. Highlight recovery is not related to the highlights slider, btw. It's an algorithm that determines whether data can be reconstructed in areas where there is clipping of data in the RAW file. There is no way to influence this algorithm, which only works in PV 2012.
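For anyone who wants to check this independently of Lightroom's indicators, here is a rough, RawDigger-style sketch in Python with the rawpy library (the file name and the 99%-of-white-level threshold are just examples). It counts photosites near the sensor's white level per CFA channel, before any white balance or highlight recovery is applied, which is why it can disagree with Lightroom's on-screen clipping warnings.

```python
# Sketch: report raw clipping per CFA channel, before white balance or any
# highlight recovery. Assumes the rawpy library; file name and threshold are
# examples only.
import rawpy
import numpy as np

with rawpy.imread("IMG_0001.CR2") as raw:
    data = raw.raw_image_visible
    colors = raw.raw_colors_visible      # CFA channel index of each photosite
    names = raw.color_desc.decode()      # e.g. "RGBG" for a Bayer sensor
    threshold = 0.99 * raw.white_level
    for ch, name in enumerate(names):
        frac = np.mean(data[colors == ch] >= threshold)
        print(f"{name}: {frac:.4%} of photosites at or near the white level")
```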
For normal import into Lightroom you can do what you wrote but I'm not really seeing the value in this, at least for a lot of photography.
The value is that you can reduce the sliders you have to move on each image. Set your lens corrections, sharpening values and other tweaks that you often perform so that the software does it for you directly on import.
You made it very clear that a preset like you mentioned was not a final edit or near it. If it works for you, that's fine. You can make lots of presets, but presets are not really that great, since you can only put absolute values in them.
For people who push and pull each slider on every image—no matter what—it's of little use of course.
I have a few presets that I use, but the most useful approach, imho, is to edit one picture and copy the settings to similar pictures and then adjust. But that's getting away from what I understand is your real point: profiles and what role they play. Profiles do not clip per se, although in theory they could.
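As an aside on the copy-settings step: if Lightroom is set to write XMP sidecar files, the basic tone sliders are stored as crs: attributes in the sidecar, so in principle copying them between similar shots could even be scripted. A rough sketch (Python, hypothetical file names, and only a handful of the crs: fields) might look like the following; inside Lightroom itself the normal Copy/Paste or Sync Settings commands do the same job more conveniently.

```python
# Sketch: copy a few basic tone settings from one Lightroom XMP sidecar to
# another. Assumes Lightroom is writing XMP sidecars and that the usual Camera
# Raw (crs:) attribute names are present; file names and the field list are
# examples only.
import re

FIELDS = ["crs:Exposure2012", "crs:Contrast2012", "crs:Highlights2012",
          "crs:Shadows2012", "crs:Whites2012", "crs:Blacks2012"]

def copy_settings(src_xmp, dst_xmp):
    src = open(src_xmp, encoding="utf-8").read()
    dst = open(dst_xmp, encoding="utf-8").read()
    for field in FIELDS:
        m = re.search(field + r'="([^"]*)"', src)
        if m:  # overwrite the same attribute in the destination sidecar
            dst = re.sub(field + r'="[^"]*"', field + '="' + m.group(1) + '"', dst)
    with open(dst_xmp, "w", encoding="utf-8") as f:
        f.write(dst)

copy_settings("IMG_0001.xmp", "IMG_0002.xmp")
```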
 
The penalty of Canon not having the volume to put sufficient R&D funding into its sensors is just playing out. They are 10 years behind Sony with column ADC.
Hi Bobn2: I realize you have far more knowledge of sensor fabs than I do. That being said, while it may be true that Canon started 10 years behind Sony, it does not mean they are actually 10 years behind at present. Canon does not need to reinvent the wheel, so it is possible Canon's column ADC will catch up to the current Sony chips within a couple of years at most (and Sony will make some incremental updates in the meantime). Within a few years column ADC will be very mature tech and there may be little difference between manufacturers at that time. Then again, I could be completely out in left field on this :-)

Of course somebody else will eventually come up with another breakthrough in another area of sensor fab and reset the bar even higher :-) Cheers.
The point is that Canon people tend to concentrate on ownership of the sensor fab. It's not a big issue. Most image sensor producers use foundry fabs, the reason being that until you get as big as Sony, there is not enough business to keep a fab line busy enough to justify keeping it up to date.
Depends on the size of the fab - Canon needed two plants to manufacture 6 million sensors per year. Since they still manufacture around 5.8 million sensors per year, going by initial volumes, they are keeping both plants busy.

Which of course leads to another question - how did Canon, at one time, produce 10 million a year?

They of course may have improved/updated their equipment without notifying DPReview.
Using a foundry, your sensors are mixed in with other business, and you can choose an up-to-date fab line when you need it. Canon's problem has been that they are locked into two fab lines, and only one has small enough geometry to do complex circuitry like column ADCs. They couldn't just abandon the old one, because that would have meant writing off the capital.
The capital would have been written off years ago. Not to mention it would simply have been a replacement of equipment that was near EOL anyway. You don't have to "abandon" a fab because the equipment is no longer in use - you CAN upgrade the equipment. It's costly, but how much the cost scales depends on the wafers per month. Considering that most of the equipment would have come from Canon themselves, what really would have been the cash cost?

The first plant was built, I believe, 10 years ago; the second, 7 years ago.
This new generation of Canon sensors probably means that they have moved onto the newer 180nm fab line for their large sensors, and will finally retire the old 500nm line.
One small note though - the 250MP prototype was done on 130nm. It's entirely possible that Canon simply shut down and replaced the 180nm line altogether.

That acquisition cost would have been pocket change in Canon's R&D expenditures.

They probably moved to smaller geometries with DPAF, since the QE from that generation onwards jumped and the noise characteristics of the sensors changed. For starters, there's an extra switch for DPAF, plus wiring, not to mention the pixel itself is cut in half, and even with that the QE still jumped over the older 18MP sensor.

It also explains why they "sat" at 18MP for so long; since then, Canon has come out with different sensors for the 70D, 7DII, 80D and T6 in basically 2.5 years. Canon is rapidly changing its sensors now, like it used to.

as far as "catching up" it's usually easier to catch up then it is to break new ground, but it's a matter of patent portfolio. it's not a matter of canon needing years to catch up if they managed to secure the patents necessary to develop a suitable sensor. Some of those patents cleared very late last year.

Also, the differences now between the sensors are all within the realm of diminishing returns. The difference between an 80D and a latest-tech A6300 is negligible; at the level of pushing you have to do to see the difference, you have other problems cropping up - loss of microcontrast, resolution, clarity, color casting, etc.
 
 
What does the new 80D sensor suggest? Some thought this sensor would be a precursor.

RAW here:

https://app.box.com/s/yy0jd0m8p8ycc5fdxyh0rrdsl24cx3k4

--
Once you've done fifty, everything else is iffy.
I've been wondering if perhaps Nikon made a licensing deal with Sony back in 2010 to get access to the column ADC technology. The D7000 (Sept 2010) was the first Nikon to have clean shadows, and no one other than Sony and Nikon had that for a few years, so maybe it was an exclusivity thing. Is it just a coincidence that 5 years later, Nikon have released a camera that doesn't seem to benefit from Sony technology? Was it a 5-year deal that specifically sought to stop Canon using such technology? Seems a bit odd that at the same time Nikon appears to have stopped using column ADCs, Canon appears to have started.

Anyway, IF that's the case, I would expect Canon sensors to be much improved in terms of low ISO shadow noise, and since that looks like what has happened with the 80D and 1Dx II, yes, they (and the 5DIV) really have got new sensors.
Nikon D7000 vs Canon 80D. Is this a simple coincidence only?





[attached image]



--
Cenk Ogurtani
facebook.com/CenkOgurtaniWildlifePhotography
 
It's hard to go through the whole thread and see what I missed. Even though this wasn't directed toward me, the person who asked how to use the tone curve instead of the contrast slider asked a very important question, one which can advance his photography.

The tone curve does produce the contrast; the luminosity curve is what all of the sliders are based on, except for the ones that involve color. For those, you go into the individual Red, Green, and Blue curves. If you crank the contrast slider, it adds contrast to the image for you, but not as precisely as you can get it if you do it yourself in the curve. The curve covers your shadows, mid shadows, higher shadows, the low part of the midtones, the midtones, the higher part of your midtones, and the same with the highlights.

As far as the colors, all of the colors are in the RGB curves. Learning this will help your photography greatly.

What happened with me is that I had to learn how to be a colorist and color grade video. In video we are used to working with curves and color wheels. For some reason photography software has not adopted the scopes that video programs have, such as the RGB parade or the luminance scope, where you can see how it works.

If you don't want to do video but have Premiere Pro, or download a free 30-day trial, go to the color tab, import your photo and bring up all four of the scopes. Color correct and grade your image with the curves. The color wheels are basically curves as well; they are like the pivot points in the curves.

Once you learn the curves well, you've learned how to work with color well. I don't touch the sliders in Lightroom or any raw processor; I go right to the curves. I don't use the split tone sliders; I do it myself in the RGB curves.

The RGB curves do everything except for hue, saturation, luminance and a few more things. For example, if you use CFL bulbs and you notice your image is green, the green is not oversaturated, but most photographers think it is and pull the greens out, which is wrong. You want to even the greens out with the reds and the blues. Understandably, if you've never done video this is not obvious, as you can't see a scope showing it to you - none that I know of, anyway. The colors in the histogram are not sufficient the way a scope is.

Too much green in an image means that you want to even it out by pulling the green back in whichever area of the curve it's too high in. Once it is even, the image will be correct, and then you stylize to taste. But most photographers, if they see a green image, will pull the green saturation down.

Learning the curves is so important and I believe you get good at them more quickly if you are willing to edit video.

Adobe SpeedGrade CC has a 30-day trial, and DaVinci Resolve is completely free; most consider it more powerful than SpeedGrade. However, I use both, and SpeedGrade does some things that Resolve can't do so easily.

I also believe that learning Photoshop, where you can see the inputs and outputs when working with color, will help more than Lightroom alone.
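To make the curve idea a bit more concrete, here is a rough sketch in Python/NumPy (not any particular editor's implementation) of the two moves described above: a gentle S-shaped tone curve for contrast, and pulling the green channel's curve down in the midtones to even out a green cast rather than desaturating the greens. The control points are invented examples.

```python
# Sketch of curve-based editing, independent of any particular editor. Works on
# an 8-bit RGB image held as a NumPy array; the control points are made-up
# examples of a mild S-curve and a small green-channel pull-down.
import numpy as np

def apply_curve(channel, xs, ys):
    """Map 0-255 pixel values through a curve defined by control points."""
    lut = np.interp(np.arange(256), xs, ys)   # 256-entry lookup table
    return lut[channel].astype(np.uint8)

img = np.random.randint(0, 256, (4, 6, 3), dtype=np.uint8)  # stand-in image

# 1) Contrast via an S-shaped tone curve applied to all three channels:
#    darken the shadows slightly, lift the highlights slightly.
s_curve = ([0, 64, 128, 192, 255], [0, 52, 128, 204, 255])
contrasty = np.stack([apply_curve(img[..., c], *s_curve) for c in range(3)], axis=-1)

# 2) Even out a green cast by pulling the green curve down in the midtones,
#    rather than pulling the green saturation down.
green_fix = ([0, 128, 255], [0, 116, 255])
balanced = img.copy()
balanced[..., 1] = apply_curve(img[..., 1], *green_fix)
```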
 
But that's the issue with the automatic highlight recovery…
Hi, Hans. I feel like we're treading water. I haven't asked about highlight recovery and I didn't bring it up. The only thing I did was suggest a method of tuning the default import processing (develop settings) to be less contrasty—because it wasn't clear to me (and still isn't) whether the issue was the LOOK of the imported file being annoying (non-issue), OR data getting lost simply by using Lightroom (big issue).
You made it very clear that a preset like you mentioned was not a final edit or near it. If it works for you, that's fine. You can make lots of presets, but presets are not really that great, since you can only put absolute values in them.
I don't think I've ever used a preset in Lightroom (found in the left panel), other than to test them once or twice out of curiosity.

I was talking about tuning the develop settings, and that is not the same thing to me. In my opinion everyone should go through them, check that everything is set to their preferred defaults, and then save them for that camera model. This is to make sure that even though each file might need further work, at least the starting point has been optimised.
But that's getting away from what I understand is your real point…
Yes, I was wondering about data corruption, or alternatively whether Lightroom throws away data that was there to begin with (for this specific camera model).
 
The penalty of Canon not having the volume to put sufficient R&D funding into its sensors is just playing out. They are 10 years behind Sony with column ADC.
Hi Bobn2: I realize you have far more knowledge of sensor fabs than I do. That being said, while it may be true that Canon started 10 years behind Sony, it does not mean they are actually 10 years behind at present. Canon does not need to reinvent the wheel, so it is possible Canon's column ADC will catch up to the current Sony chips within a couple of years at most (and Sony will make some incremental updates in the meantime). Within a few years column ADC will be very mature tech and there may be little difference between manufacturers at that time. Then again, I could be completely out in left field on this :-)

Of course somebody else will eventually come up with another breakthrough in another area of sensor fab and reset the bar even higher :-) Cheers.
The point is that Canon people tend to concentrate on ownership of the sensor fab. It's not a big issue. Most image sensor producers use foundry fabs, the reason being that until you get as big as Sony, there is not enough business to keep a fab line busy enough to justify keeping it up to date.
Depends on the size of the fab - Canon needed two plants to manufacture 6 million sensors per year. Since they still manufacture around 5.8 million sensors per year, going by initial volumes, they are keeping both plants busy.
That is not clear. The gist of the press release for the newer fab was that it was to make sub-APS-C sensors, which at the time it was getting from Sony.
Which of course leads to another question - how did Canon, at one time, produce 10 million a year?
Quite simply, it had two fabs running: the 500nm one for APS-C and bigger, and the 180nm one for the in-house 1/1.7" sensors it was making (it was probably selling several million of those; they went into several high-volume cameras). Now it's buying the 1/1.7" sensors from Sony again (and some of those products have gone to 1"), and moving APS-C and above to the 180nm line, so the 500nm line is phasing out.
They of course may have improved/updated their equipment without notifying DPReview.
They have never informed DPReview; when they open a new line, they put out a press release and write about it in their annual report. You'd wonder why they'd want to keep quiet about major investments, particularly in their annual report.
Using a foundry, your sensors are mixed in with other business, and you can choose an up-to-date fab line when you need it. Canon's problem has been that they are locked into two fab lines, and only one has small enough geometry to do complex circuitry like column ADCs. They couldn't just abandon the old one, because that would have meant writing off the capital.
The capital would have been written off years ago.
That depends, doesn't it? A fab line might cost $1bn for a cheap one. Let's assume the Canon ones weren't new, so a used price of $500M. If they amortise $10 of capital cost per sensor, that's 50M sensors they have to make to pay for the fab line, which is about 10 years.
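Just to spell the arithmetic out (these are only the assumed figures above plus the rough volumes mentioned earlier in the thread, not real Canon numbers):

```python
# Back-of-envelope amortisation using only the assumed figures above.
used_fab_cost = 500_000_000      # assumed used price of the fab line, in $
capital_per_sensor = 10          # assumed capital cost amortised per sensor, in $
sensors_per_year = 5_000_000     # roughly the volumes mentioned earlier

sensors_to_pay_off = used_fab_cost / capital_per_sensor   # 50 million sensors
years_to_pay_off = sensors_to_pay_off / sensors_per_year  # about 10 years
print(sensors_to_pay_off, years_to_pay_off)
```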
Not to mention it would simply have been a replacement of equipment that was near EOL anyway. You don't have to "abandon" a fab because the equipment is no longer in use - you CAN upgrade the equipment. It's costly, but how much the cost scales depends on the wafers per month. Considering that most of the equipment would have come from Canon themselves, what really would have been the cash cost?

The first plant was built, I believe, 10 years ago; the second, 7 years ago.
That would fit with the figures above. The first line has paid for itself; it can go. The second, maybe not (because you can't really add $10 to the cost of a 1/1.7" sensor without making it completely uncompetitive).
This new generation of Canon sensors probably means that they have moved onto the newer 180nm fab line for their large sensors, and will finally retire the old 500nm line.
One small note though - the 250MP prototype was done on 130nm. It's entirely possible that Canon simply shut down and replaced the 180nm line altogether.
That would be very strange. More likely that the 250MP was made in a foundry, maybe an extension of the contracts Canon already have to do part of their wafer processing (the BEOL) at foundries.
That acquisition cost would have been pocket change in Canon's R&D expenditures.
Not really. It is a big enough investment that it would have had to appear in their annual report to investors.
They probably moved to smaller geometries with DPAF, since the QE from that generation onwards jumped and the noise characteristics of the sensors changed. For starters, there's an extra switch for DPAF, plus wiring, not to mention the pixel itself is cut in half, and even with that the QE still jumped over the older 18MP sensor.
Speculation, we'll see with the next Chipworks report.
It also explains why they "sat" at 18MP for so long; since then, Canon has come out with different sensors for the 70D, 7DII, 80D and T6 in basically 2.5 years. Canon is rapidly changing its sensors now, like it used to.
I think the step to 180nm would explain all that. Finally they decided to go to Sony for the new 1/1.7" sensors, retire the old line and use the newer one for big sensors. Fits what's known better than the idea that they have an unannounced new 130nm fab line. Moreover, even 180nm was right at the limit of i-line steppers, and Canon does not have an immersion product. So, if Canon really does have its own 130nm facility, it will have to be equipped with Nikon or ASML kit (possibly that would be why they didn't want to publicise it :-) )
as far as "catching up" it's usually easier to catch up then it is to break new ground, but it's a matter of patent portfolio. it's not a matter of canon needing years to catch up if they managed to secure the patents necessary to develop a suitable sensor. Some of those patents cleared very late last year.
Personally, I don't think it has much to do with patents. Much of the performance has to do with fine-tuning over time rather than patents. But sure, yes, Canon won't take 10 years from where they are now to get to where Sony is now. But when they get there, Sony will have moved on. They are already making BSI FF sensors, and I have a suspicion that what's in the D500 is a Sony APS-C stacked sensor.
Also, the differences now between the sensors are all within the realm of diminishing returns. The difference between an 80D and a latest-tech A6300 is negligible; at the level of pushing you have to do to see the difference, you have other problems cropping up - loss of microcontrast, resolution, clarity, color casting, etc.
It's moving to a different sphere with stacked sensors and BSI. That's more about fast lens performance and full frame video read-out rates, global shutters and other facilities.

--
Bob.
“The picture is good or not from the moment it was caught in the camera.”
Henri Cartier-Bresson.
 
But that's the issue with the automatic highlight recovery…
Hi, Hans. I feel like we're treading water. I haven't asked about highlight recovery and I didn't bring it up. The only thing I did was suggest a method of tuning the default import processing (develop settings) to be less contrasty—because it wasn't clear to me (and still isn't) whether the issue was the LOOK of the imported file being annoying (non-issue), OR data getting lost simply by using Lightroom (big issue).
I just tried to explain to you what I felt was in your questions. Highlight recovery is an important part of that. It was not clear to me that you were aware of that. But anyway that's the issue of trying to explain something :)
You made it very clear that a preset like you mentioned was not a final edit or near it. If it works for you, that's fine. You can make lots of presets, but presets are not really that great, since you can only put absolute values in them.
I don't think I've ever used a preset in Lightroom (found in the left panel), other than to test them once or twice out of curiosity.

I was talking about tuning the develop settings, and that is not the same thing to me. In my opinion everyone should go through them, check that everything is set to their preferred defaults, and then save them for that camera model. This is to make sure that even though each file might need further work, at least the starting point has been optimised.
I was just trying to be helpful, but if you didn't get any help from this that's fine.
But that's getting away from what I understand is your real point…
Yes, I was wondering about data corruption, or alternatively whether Lightroom throws away data that was there to begin with (for this specific camera model).
It's not throwing away data. Unless you consider potentially less-than-optimal algorithms in Lightroom ;)
--
Kind regards,
Hans Kruse
Home Page -- http://www.hanskrusephotography.com , http://500px.com/hanskrusephotography, http://www.hanskrusephotography.zenfolio.com
Workshops -- http://www.hanskrusephotography.com/Hans-Kruse-Photo-Workshops/Workshops
Facebook Photography http://www.facebook.com/HansKrusePhotography
Workshop Newsletter signup http://eepurl.com/bA0Pj
 
