Competition from Nikon

wishful thinking

The image is dreadful as it is. Scaling the sensor up while keeping
the same pixel pitch and then scaling the image down doesn't make it
any better. You still end up with a dreadful-looking image.

--
Michael Salzlechner
http://www.PalmsWestPhoto.com
You can try it yourself by using the samples on this site. I did with some of the ISO 1600 samples. The Fuji sensor is 7.8x4.1 mm, the FF sensor is 36x24 mm. Crop a corresponding piece of a full FF-image and compare to a full un-cropped image from the F10. Print them both at the same size and see which you prefer.
BTW, it's not wishful thinking since I don't own a Fuji.
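The crop-and-compare test described above can be sketched numerically. The sensor dimensions come from the post; the full-frame image resolution below is a made-up example, not a real camera's.

```python
# Sketch of the crop comparison above: cut a piece out of a full-frame
# image whose share of the frame matches the small sensor's size, then
# view both at the same print size. Sensor dimensions are from the post;
# the 4992 x 3328 px image size is a hypothetical example.
fuji_w_mm, fuji_h_mm = 7.8, 4.1   # Fuji F10 sensor
ff_w_mm, ff_h_mm = 36.0, 24.0     # full-frame sensor

crop_frac_w = fuji_w_mm / ff_w_mm  # ~0.217 of the frame width
crop_frac_h = fuji_h_mm / ff_h_mm  # ~0.171 of the frame height

ff_px_w, ff_px_h = 4992, 3328      # hypothetical FF image size
crop_px_w = round(ff_px_w * crop_frac_w)
crop_px_h = round(ff_px_h * crop_frac_h)
print(f"crop {crop_px_w} x {crop_px_h} px from the FF image")
```

Printing that small crop and the full small-sensor image at the same size is what makes the comparison fair.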
--
http://www.pbase.com/interactive
 
There are occasionally significant new approaches to sensor design.
Foveon X3 is one. A/D conversion at each pixel could be another.
Though I would not expect the latter to improve fundamental noise
performance so much as to improve highlight handling and dynamic
range. There could certainly be some large leaps, but I'd hardly
count on it.
If noise is low (say, at low ISOs), the 12-bit digitization may reduce DR somewhat from the limit of the cells themselves. 14 or 16 bit digitization would alleviate this "problem" (if it exists).
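A rough back-of-envelope illustrating that point. The full-well and read-noise figures here are assumptions chosen for illustration, not measurements of any particular sensor.

```python
import math

# At low ISO, a sensor's dynamic range is roughly
# log2(full_well / read_noise) stops, while an ideal N-bit ADC can
# encode at most N stops. Both figures below are assumed values.
full_well_e = 50_000   # electrons at saturation (assumed)
read_noise_e = 5.0     # electrons RMS at low ISO (assumed)

sensor_dr_stops = math.log2(full_well_e / read_noise_e)  # ~13.3
for bits in (12, 14, 16):
    usable = min(sensor_dr_stops, bits)  # ADC caps DR at 1 stop/bit
    print(f"{bits}-bit ADC: usable DR = {usable:.1f} stops "
          f"(sensor alone = {sensor_dr_stops:.1f})")
```

With these numbers the 12-bit converter, not the cells, sets the limit; 14 or 16 bits would pass the sensor's full range through.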

--
Lee Jay
(see profile for equipment)
 
There are occasionally significant new approaches to sensor design.
Foveon X3 is one. A/D conversion at each pixel could be another.
Though I would not expect the latter to improve fundamental noise
performance so much as to improve highlight handling and dynamic
range. There could certainly be some large leaps, but I'd hardly
count on it.
If noise is low (say, at low ISOs), the 12-bit digitization may
reduce DR somewhat from the limit of the cells themselves. 14 or
16 bit digitization would alleviate this "problem" (if it exists).

--
Lee Jay
(see profile for equipment)
That makes sense, thanks.
--
Regards,
DaveMart
Please see profile for equipment
 
You might get a small improvement in noise floor, but it is already dominated by other factors, not the A/D so it won't be much of a win. Think of per pixel A/D as allowing logarithmic response per pixel. (There are other ways to do this too.) This is useful for making highlights blow out gradually rather than clipping hard. There is a lot less improvement in the shadows as they are already pushing up against physical limits of light emission and sensing.
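The "blow out gradually rather than clipping hard" idea can be illustrated with a toy transfer curve. The smooth saturating response below is a simple Reinhard-style curve chosen for illustration only; it is not any real sensor's transfer function.

```python
# Toy comparison: a linear sensor clips hard at full well, while a
# smoothly saturating response compresses highlights so they roll off
# gradually. The 12-bit clip point is an assumed value.
FULL_WELL = 4095.0

def linear_clip(signal):
    # Hard clip: everything above full well maps to the same code
    return min(signal, FULL_WELL)

def soft_saturate(signal):
    # Approaches FULL_WELL asymptotically; highlights stay distinct
    return FULL_WELL * signal / (signal + FULL_WELL)

for s in (2000.0, 4095.0, 8190.0, 40950.0):
    print(s, linear_clip(s), round(soft_saturate(s), 1))
```

Above full well, the linear response returns the same code for every input, while the soft curve still separates a 2x overexposure from a 10x one.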

One can also use the per-pixel A/D to not use bits for data that is effectively just noise. But this is an efficiency improvement, not a lowering of image noise. Claiming it improves noise performance would be like claiming low noise for an image that has flushed the shadows to black. If one combines that with exposing to the right, one will get very little visible noise, but at the cost of lowered sensitivity.

Another way to look at it is to think of Fuji's HR technology. Part of the pixel area is being used for a less sensitive sensing element. So the sensitivity of the chip is theoretically a little lower, or you get more noise, take your pick. But the highlight handling is (theoretically) much improved.

People make a big deal of 12 bit vs 14 or 16 bit A/D, but unless the sensor itself has enough dynamic range, it makes little difference. (E.g. the Leica DMR crowd going on about how Canon is practicing cost cutting by not putting more A/D resolution in the 1DsMkII. The DMR uses a CCD that might benefit from a couple extra bits. The entire sensor design etc. has to be able to deliver the bits.)

-Z-
 
There are some approaches to high dynamic range sensing that work
via pixel level A/D conversion, e.g.:

http://isl.stanford.edu/groups/elgamal/abbas_publications/C070.pdf

Note that this approach requires multiple exposures, but the
sum of the exposure times will be within a factor of 2 of the
longest exposure.
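The "within a factor of 2" claim is what you get from a geometric series of exposures. The series layout below is my reading of the claim, not taken from the linked paper.

```python
# If the per-pixel scheme takes exposures T, T/2, T/4, ..., the total
# time is T * (1 + 1/2 + 1/4 + ...), which stays below 2T no matter
# how many exposures are taken.
T = 1.0
exposures = [T / 2**k for k in range(12)]
total = sum(exposures)
print(total)  # approaches 2*T but never reaches it
```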

--
Ron Parr
Digital Photography FAQ:
http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
Don't think that is what Z is talking about here, Ron - that is pretty specialised.
--
Regards,
DaveMart
Please see profile for equipment
 
Don't think that is what Z is talking about here, Ron - that is
pretty specialised.
Actually, the paper referenced by Ron was one of the things I was thinking of in terms of proof that you can build sophisticated conversion into the pixel structure. There is some question of what to do with the technology, but it is an example of a fundamental change in the way sensors are built that might have a significant effect on end user perceived image quality.

Whether the chip is doing multiple exposures internally or not doesn't have to be made explicit to the user so long as the total exposure time is as expected. In fact there is informed speculation that Canon already does this for long exposures on various cameras (D60/10D/20D IIRC).

-Z-
 
You might get a small improvement in noise floor, but it is already
dominated by other factors, not the A/D so it won't be much of a
win. Think of per pixel A/D as allowing logarithmic response per
pixel. (There are other ways to do this too.) This is useful for
making highlights blow out gradually rather than clipping hard.
There is a lot less improvement in the shadows as they are already
pushing up against physical limits of light emission and sensing.

One can also use the per-pixel A/D to not use bits for data that is
effectively just noise. But this is an efficiency improvement, not
a lowering of image noise. Claiming it improves noise performance
would be like claiming low noise for an image that has flushed the
shadows to black. If one combines that with exposing to the right,
one will get very little visible noise, but at the cost of lowered
sensitivity.

Another way to look at it is to think of Fuji's HR technology. Part
of the pixel area is being used for a less sensitive sensing
element. So the sensitivity of the chip is theoretically a little
lower, or you get more noise, take your pick. But the highlight
handling is (theoretically) much improved.

People make a big deal of 12 bit vs 14 or 16 bit A/D, but unless
the sensor itself has enough dynamic range, it makes little
difference. (E.g. the Leica DMR crowd going on about how Canon is
practicing cost cutting by not putting more A/D resolution in the
1DsMkII. The DMR uses a CCD that might benefit from a couple extra
bits. The entire sensor design etc. has to be able to deliver the
bits.)

-Z-
Sounds to me like we are approaching the GIGO limits of the pixels, highlights excepted - and Canon also think so - hence the push for larger chips.

(mounts hobbyhorse) That is why Canon is going to put a 1.3x sensor in the 20D replacement; otherwise we will only see small incremental improvements on the 20D. And the D200 looks formidable enough that Canon are going to need something better - they aren't about to start putting 45-point AF in a non-1-series camera or something.
--
Regards,
DaveMart
Please see profile for equipment
 
You need a strong reason to give up pixel area, so I suspect it
will remain specialized for a while.
It would be interesting to know what sort of lithography various manufacturers are using to manufacture sensors. My guess is that it trails the leading edge for certain other types of chips by say two generations. Adding more complexity to the per pixel circuit makes more sense as lithography gets finer. Though at any given time, it implies a more expensive sensor than one using a simpler circuit.

-Z-
 
I will study the link in detail; hopefully that means we wouldn't have to make the same sacrifices and accept the same complications as Fuji have with the S3.
Don't think that is what Z is talking about here, Ron - that is
pretty specialised.
Actually, the paper referenced by Ron was one of the things I was
thinking of in terms of proof that you can build sophisticated
conversion into the pixel structure. There is some question of what
to do with the technology, but it is an example of a fundamental
change in the way sensors are built that might have a significant
effect on end user perceived image quality.

Whether the chip is doing multiple exposures internally or not
doesn't have to be made explicit to the user so long as the total
exposure time is as expected. In fact there is informed speculation
that Canon already does this for long exposures on various cameras
(D60/10D/20D IIRC).

-Z-
--
Regards,
DaveMart
Please see profile for equipment
 
People make a big deal of 12 bit vs 14 or 16 bit A/D, but unless
the sensor itself has enough dynamic range, it makes little
difference. (E.g. the Leica DMR crowd going on about how Canon is
practicing cost cutting by not putting more A/D resolution in the
1DsMkII. The DMR uses a CCD that might benefit from a couple extra
bits. The entire sensor design etc. has to be able to deliver the
bits.)
There should be little doubt that Canon's DSLRs can benefit from better A/D conversion. First, the well capacity of the pixels is likely over 30K electrons. Second, the fact that analog amplification through ISO boosting looks better than pushing in RAW conversion proves that there is extra detail that is missed by the 12 bit A/D converter.
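The well-capacity argument above can be checked with simple arithmetic. The >30K electron figure is from the post; the read-noise value below is an assumption for illustration.

```python
# If the well holds ~30,000 electrons and a 12-bit ADC spreads that
# over 4096 codes, each code step is ~7.3 electrons. If low-ISO read
# noise is below one step, the ADC quantization, not the sensor, sets
# the floor, and extra bits would recover real detail.
full_well_e = 30_000          # cited in the post
adc_levels_12bit = 2 ** 12    # 4096 codes
e_per_dn = full_well_e / adc_levels_12bit
print(f"{e_per_dn:.1f} electrons per 12-bit code")  # ~7.3

read_noise_e = 4.0            # assumed low-ISO read noise
print("ADC-limited" if read_noise_e < e_per_dn else "sensor-limited")
```

This is also consistent with the observation that analog ISO gain, applied before the converter, outperforms pushing in RAW conversion.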

--
Ron Parr
Digital Photography FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
You need a strong reason to give up pixel area, so I suspect it
will remain specialized for a while.
It would be interesting to know what sort of lithography various
manufacturers are using to manufacture sensors. My guess is that it
trails the leading edge for certain other types of chips by say two
generations. Adding more complexity to the per pixel circuit makes
more sense as lithography gets finer. Though at any given time, it
implies a more expensive sensor than one using a simpler circuit.
We know a little:

Foveon X3 (all?): 0.18 micron
Canon D30 & 1Ds: 0.35 micron

--
Ron Parr
Digital Photography FAQ: http://www.cs.duke.edu/~parr/photography/faq.html
Gallery: http://www.pbase.com/parr/
 
I have been using specialized astronomy CCDs for about a decade. I
can tell you that a lot of work has gone into working on noise
reduction there--and most of that technology flows down to
"consumer grade" sensors. ...
I'm aware of astronomy CCDs - I kinda follow astronomy - but I didn't
realize that that market was large enough to attract significant
development funds. On the other hand, those CCDs have been pretty
expensive up to now....
 
Sensor noise is like lens design, not like software or
microprocessor design. ....
I was comparing it to the evolution of magnetic storage,
which to me seemed to be a bit closer, with the problems
of distinguishing signal from background as the signal becomes
weaker with increasing areal density.
 
I think you just need to concentrate on what cameras are actually
producing, rather than making hopeful analogies with other
unrelated technologies. As Chuck Westfall said in this interview:
I don't have the foggiest idea why you have this notion that you have
some sort of right or mandate to determine what I should concentrate
on, or what I am "hoping" for.
 
Agreed. It doesn't really matter which is better (Nikon or Canon) as long as the product quality keeps improving :) If one of them were driven out of the market, that would be big, big, big bad news :)
 
I was comparing it to the evolution of magnetic storage,
which to me seemed to be a bit closer, with the problems
of distinguishing signal from background as the signal becomes
weaker with increasing areal density.
I misread your phrase "software, processors, and circuitry required to handle it" as generalizing beyond magnetic storage rather than indicating the scope of disciplines required to address increasing storage density. My apologies.

-Z-
 
I'm aware of astronomy CCDs - I kinda follow astronomy - but I didn't
realize that that market was large enough to attract significant
development funds.
http://www.fairchildimaging.com/main/ccd_area_595.htm -- 8cm x 8cm, 85 megapixels. For serious stuff, one would make a mosaic using more than one.
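A rough pixel-pitch check for that chip, assuming a square pixel grid (the exact array layout isn't given here):

```python
import math

# 85 megapixels on an 8 cm x 8 cm die: roughly sqrt(85e6) pixels per
# side, so the pitch works out to around 8.7 microns.
side_mm = 80.0
pixels = 85e6
px_per_side = math.sqrt(pixels)          # ~9,220 pixels per side
pitch_um = side_mm * 1000 / px_per_side  # pitch in microns
print(f"~{pitch_um:.1f} um pixel pitch")
```

That is a larger pitch than most DSLR sensors of the day, which fits the astronomy emphasis on sensitivity over resolution per unit area.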

Astronomy applications are definitely pushing the boundaries in a number of ways. Quantum efficiency among them. E.g. 90% QE using back illuminated CCDs (monochrome of course). That on CCDs with very high fill factor. I'd expect the same chips are used for military applications.

-Z-
 
You can try it yourself by using the samples on this site. I did
with some of the ISO 1600 samples. The Fuji sensor is 7.8x4.1 mm,
the FF sensor is 36x24 mm. Crop a corresponding piece of a full
FF-image and compare to a full un-cropped image from the F10. Print
them both at the same size and see which you prefer.
BTW, it's not wishful thinking since I don't own a Fuji.
That makes sense to me at least... and I did own a F10 until it died 2 months ago in a downpour.

Chasseur d'Images tested both the Fuji S9500 and the new APS-sized Sony, and they say that the image quality at ISO 800 is not that far apart; surprisingly close, even, considering how much smaller the Fuji sensor is.

Regards,
Bernard
 
