This is NOT a troll

And has great d.o.f. for any given angle of view.
If I am not mistaken (and I easily could be), I think that the depth of field is greatly affected by the sensor size. IIRC, the depth of field for a smaller sensor will be much greater (a wider range "in focus") than the depth of field for a bigger sensor at the same f-stop and angle of view.
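
To put rough numbers on that, here is a minimal sketch using the standard thin-lens depth-of-field formulas. The 50mm/31mm pairing, f/2, the 3 m subject distance, and the circle-of-confusion values are illustrative assumptions, not specifications of any camera mentioned here.

# Rough depth-of-field comparison: full frame vs. a 1.6x-crop sensor at the
# same f-stop and angle of view, using standard thin-lens approximations.
# The circle-of-confusion numbers are common rules of thumb, nothing more.

def dof(focal_mm, f_number, subject_mm, coc_mm):
    """Return (near, far) limits of acceptable focus in millimetres."""
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if hyperfocal <= subject_mm:
        far = float("inf")          # everything beyond the near limit stays sharp
    else:
        far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

subject = 3000.0                    # subject at 3 m
# Full frame: 50 mm lens, CoC ~0.030 mm.  A 1.6x crop (D30-sized) needs a
# ~31 mm lens for the same angle of view, and the CoC shrinks by the crop factor.
for label, focal, coc in [("full frame, 50mm", 50.0, 0.030),
                          ("1.6x crop, 31mm", 31.25, 0.030 / 1.6)]:
    near, far = dof(focal, 2.0, subject, coc)
    print(f"{label:17s} at f/2: in focus from {near / 1000:.2f} m to {far / 1000:.2f} m")
# The crop-sensor combination gives a noticeably deeper zone of focus,
# exactly the effect described above.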

This has the apparent effect of making some shots sharper -- but I would greatly miss the creative effect of the more limited depth of field. I can make it sharper if I have the choice by stopping the lens down. If I need to, on my D30 I can increase the ISO sensitivity to keep the same shutter speed. But if, at the widest opening, I can NOT get the background to blur nicely when I want it no matter what I do, I see that as a HUGE negative factor.

Yes, having a more limited depth of field is harder to work with at times, but the results, when done well, are worth the effort.

-lee-
 
Already in the Canon lineup we have three different sizes for SLRs: full frame 35mm for film, the 1.6x D30 and the 1.3x 1D.
I feel compelled to nit pick: there is another size, the APS film frame, which is just slightly different from the D30 (I think bigger). It is used on (at least) the EOS-IX and EOS-IX lite.
You think three different sets of lenses is an advantage?? I don't think so!
If you only owned one of those cameras it might be (lighter, maybe cheaper, though the lack of economies of scale might actually make them more costly), but probably not.

Minolta invested a lot in making a whole line of lenses for their APS SLR, and it doesn't seem to have done much for them. At least Canon only lost the R&D from the two APS cameras (actually they recycled a bit of it into the D30, which I curse them for every time the D30 fails to AF lock on my dog, even with the 550EX's assist...).
 
I doubt they make these devices in the latest and greatest fabs. While cameras may seem like high volume, it is very small volume relative to the mainstream semiconductor industry.

The process has to deal with the photodiode, which I would imagine causes some processing issues. Any process difference from "standard" and nobody will let them near their shiny new semiconductor fab.

Then there are the analog circuits. You have to remember that these are huge analog designs. Most analog ICs have only a tiny part of the die being analog circuitry.

Karl
Joo
The cost of a device goes up with area and down with yield. Yield
is a function of manufacturing quality and area. As the area goes
up, the probability of a good die goes down. To a first
approximation, the yield of a "perfect" die goes down with the
cube of the linear dimension.
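
For illustration, here is a textbook first-order way to see the area effect: a simple Poisson yield model, Y = exp(-D x A), with a made-up defect density. It is only a sketch, not necessarily the same rule of thumb Karl is using.

# Illustrative only: Poisson yield model with an assumed defect density.
import math

defects_per_cm2 = 0.5                       # assumed, for illustration

def perfect_die_yield(width_mm, height_mm, d0=defects_per_cm2):
    area_cm2 = (width_mm * height_mm) / 100.0
    return math.exp(-d0 * area_cm2)

for name, w, h in [("2/3 inch sensor", 8.8, 6.6),
                   ("D30-size (1.6x crop)", 22.7, 15.1),
                   ("full-frame 35mm", 36.0, 24.0)]:
    y = perfect_die_yield(w, h)
    print(f"{name:22s} {w * h:6.0f} mm^2  ->  ~{y:6.1%} of dice completely defect-free")
# The full-frame die is roughly 15x the area of the 2/3 inch die, and its
# yield of completely defect-free parts collapses accordingly, which is why
# mapping out a few dead pixels (discussed below) matters so much for cost.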

As Andrew points out, the yield would be a lot higher IF we
accepted some dead pixels that get mapped/processed out (there are
several well-known techniques for doing this). If a defect causes
an "open circuit" then there is usually not much harm. But if the
defect is a short, then it will probably be catastrophic. Then
there are "high current" (near-short) defects, which will probably
also be fatal. What I don't know these days is the percentage of
"short" versus "open" defects.

A big problem is that these are ANALOG (linear) devices. A digital
die, like most ICs today, can tolerate a good bit of current
leakage in one transistor and still work. In fact you can even
have some shorts and the device will work. But on an ANALOG die, a
current leak is much more likely to be fatal (the current leak
affects its neighbors more dramatically). I imagine that there are
design techniques that might be used to better isolate neighboring
cells in a CMOS design, but this is difficult to do in a CCD design
(with its serial charge movement).

Assuming the "dead" pixels did not affect their neighbors, we then
would get into criteria for a "bad die." LCD flat panel makers
have had these criteria for years (though they are going away as
quality goes up). For example, we would probably not accept bad
pixels that are clumped together.

It is correct that if the resolution of the sensors were high
enough, we could certainly tolerate a few bad, spaced-apart pixels,
but I don't know (either way) whether, given the types of defects
that occur, it would improve the yield of an "acceptable" die that much.

Karl
It's true that a silicon wafer costs a certain, fixed amount to
produce - a cost which depends on the process and not on the size
of the individual devices which will eventually be cut from it. So
the cost of a sensor will always scale with its area.

However, whereas on an LCD display, individual duff pixels are
highly annoying and unacceptable, that's not true of an image
sensor. As long as you know where they are (which can be easily
determined), you can map them out. The processing electronics on
the sensor (or possibly, but less likely, the camera) can ignore
the value read from that pixel and interpolate it from the
surrounding pixels instead. On an image with millions of pixels,
you'd never notice that a few were 'made up'.
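
As a rough sketch of what that mapping-out could look like, here is one way to do it with numpy/scipy. The calibration threshold, the array names, and the 3x3 median repair are illustrative assumptions, not how any particular camera actually does it.

# Sketch: find bad pixels once from a flat (uniformly lit) calibration frame,
# remember where they are, then replace only those pixels with the median of
# their 3x3 neighbourhood on every shot.  Names and threshold are invented.
import numpy as np
from scipy.ndimage import median_filter

def build_defect_map(flat_frame, tolerance=0.5):
    """Flag pixels that respond very differently from the rest under uniform light."""
    typical = np.median(flat_frame)
    return np.abs(flat_frame - typical) > tolerance * typical

def conceal_defects(raw_frame, defect_map):
    """Replace only the flagged pixels with the median of their neighbourhood."""
    return np.where(defect_map, median_filter(raw_frame, size=3), raw_frame)

# Toy demonstration: a flat grey scene with 50 stuck-dark pixels.
rng = np.random.default_rng(0)
scene = np.full((480, 640), 200.0)
rows, cols = rng.integers(0, 480, 50), rng.integers(0, 640, 50)
flat = scene.copy()
flat[rows, cols] = 0.0          # the dead pixels show up in the calibration shot
shot = scene.copy()
shot[rows, cols] = 0.0          # and in every ordinary picture
dmap = build_defect_map(flat)
print("flagged pixels:", int(dmap.sum()))
print("worst residual error:", np.abs(conceal_defects(shot, dmap) - scene).max())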

This interpolation of dead pixels is standard practice on the small
sensors used as Web cams - there's no reason why it shouldn't be
just as useful on high quality equipment.

Andy.
I think one important factor is the cost of fabricating the silicon
in the 35mm size. I really want that size sensor, but semiconductor
companies cut their costs by shrinking down the size of a chip. In
addition, there is some chance that any particular square
millimeter of the chip will have a flaw that prevents it from
working, so the larger the chip the lower the production yield.
Since you're trying to build fairly large features on the chip
compared to microprocessor transistors, this is somewhat mitigated,
but it is still a factor.

Most importantly, the larger the sensor, the fewer sensors will fit
on a given (say, 8 inch) wafer. And if the costs for producing such
a wafer are essentially fixed, the larger sensor will always cost
proportionally more than the smaller sensor.
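
Some back-of-the-envelope numbers for an 8 inch wafer: the gross-die estimate below is a common approximation, and the $2000 wafer cost is just a placeholder, so only the ratios mean anything.

# How many sensors fit on a 200 mm (8 inch) wafer, and the relative cost per
# die before any yield loss.  The wafer cost is an arbitrary placeholder.
import math

def gross_die(wafer_mm, die_w_mm, die_h_mm):
    """Common gross-die-per-wafer estimate: area term minus an edge-loss term."""
    area = die_w_mm * die_h_mm
    radius = wafer_mm / 2.0
    return int(math.pi * radius ** 2 / area - math.pi * wafer_mm / math.sqrt(2 * area))

wafer_cost = 2000.0                       # assumed fixed cost per processed wafer
for name, w, h in [("D30-size, 22.7 x 15.1 mm", 22.7, 15.1),
                   ("full frame, 36 x 24 mm", 36.0, 24.0)]:
    n = gross_die(200.0, w, h)
    print(f"{name:26s} ~{n:3d} die per wafer  ->  ~${wafer_cost / n:.0f} each before yield loss")
# The full-frame part already costs several times more per die, and the yield
# loss discussed in the other posts widens the gap further.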

All of this is to say we'll have a very long wait to have a 35mm
full-frame sensor camera priced like a Canon Rebel.
 
I'll bet that in 12-18 months, one of the major companies will be demonstrating a full-frame sensor at trade shows.
Pentax already did, about 12 months ago. They gave up trying to get it to market late last month though.

Forget 35mm, we should shoot for the full 8 inch circular wafer, crush medium format, give large format a run for its money. :-) Lens prices might be a bit of a problem, and carrying the camera around...
 
Don't expect USM, IS, or fast, fixed aperture zoom lenses for quite a while.
I think they might have USM-like lenses. Sigma does them (HSM, silent with full-time manual) for Canon and Nikon lens mounts; as long as the new mount supports having the focus motor in the lens, it should be doable for the new system.

I won't expect IS, since none of the players have a patent on their own version of that technology. I'm not sure about fast, fixed-aperture lenses; I guess that depends on what market they are targeting. Clearly they know how to make them, so if they are looking for pros they may well do it. If they are looking for hobbyists then they may not (at least not for a while).
 
I doubt you will get slammed for that... there are some great articles on the subject. I wouldn't go as far as saying that digital is better than film, period; however, if you are talking 35mm, you have an excellent point.

Have you seen this?
http://www.luminous-landscape.com/d30_vs_film.htm
One reason for the growing migration to digital (I'll probably get
slammed for this ) is the growing realization that digital
imaging is better than film.
 
The creative process that takes place while looking through the viewfinder of a true SLR was lost when holding the 990 in front of me, squinting at the LCD. LCDs do not have the resolution or dynamic range needed to show the nuances of the image coming from the lens on a 35mm camera.
The original poster didn't want to replace the normal viewfinder (indeed, they were talking about the EVF only working with the mirror locked up). One would use this somewhere the normal viewfinder is hard to use: macro photography, some street photography... or crowds where you have to hold the camera over your head.

Using it all the time would cause all sorts of problems, like the sensor not working as well in low light as my eyes...
This is why no serious photoshop user uses an LCD monitor. Even the 19 inch variety can't compare to a standard monitor for sharpness, color saturation, dynamic range - all those things that would be necessary for an EVF to be equivalent to looking through the lens.
Um, what? LCDs are much sharper than CRTs. Really. You can get better color saturation, but the dynamic range is more limited (at least currently). LCDs also shift less than CRTs over time. Good ones are just amazingly more costly than CRTs, though.
So if a 19 inch LCD can't do it, how can a 2 inch LCD?
Not needed. An EVF on an SLR is for when you can't use the normal viewfinder, so any view is better than none. Even the after-the-fact view of the D30 is better than no view at all on (say) the ELAN.
You're just adding another level of technological complexity that is only going to drive the price higher and not accurately represent the image coming from the lens.
Yes. And it is only sometimes useful. When you need it, though, you do need it.
Let me guess, you come from the P&S world?
Didn't we all at some point? :-)
 
I definitely don't want an EVF on my future digital SLR - the
pentaprism that lets me see exactly what the lens sees, with no
electronic interpretation, is all I want. Do you want your glasses
to show you an electronic view of the world around you? Of course
not, you want to see everything exactly as it is.
Don't make me laugh! The whole point of using a DIGITAL camera is to achieve an "electronic interpretation" of the world around you! Why not use some of those electronics to make sure the "interpretation" is as accurate as possible?

The pentaprism isn't capturing the image for you. The CMOS or CCD is.

JCDoss
 
Which is exactly why I said that I would think they would want to use the largest manufacturing process, i.e. oldest fabs that produce the largest gates. Larger gates are older tech.

Joo
The process has to deal with the photodiode, which I would imagine
causes some processing issues. Any process difference from
"standard" and nobody will let them near their shiny new
semiconductor fab.

Then there are the analog circuits. You have to remember that these
are huge analog designs. Most analog ICs have only a tiny part of
the die being analog circuitry.

Karl
 
The oldest fabs also don't have as good a defect density and have larger particles. Generally, going to older fabs is not the way to make bigger die with good yields.

Karl
Which is exactly why I said that I would think they would want to
use the largest manufacturing process, i.e. oldest fabs that
produce the largest gates. Larger gates are older tech.

Joo
 
Doug,

I don't think you know what is possible with "in the eye" viewfinders. You seem to be limiting your thinking to direct-view LCDs like those on laptops, the back of cameras, and small camcorder viewfinders. Direct-view and LCD-on-glass displays DO have severe resolution and color limitations, but there are other technologies being developed.

With optics in front of a high resolution display, you can make a very high resolution display that appears to be any size you want (as a practical limit with human vision, one is limited to about a 17 to 19 inch monitor viewed at 16 inches -- beyond that, your eyes start panning to see the whole image). A technology I am very familiar with is LCD on Silicon (LCOS), which can provide very high resolution in-the-eye displays. In addition, you could have "critical focus areas" that are magnified for judging focus.
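
As a quick sanity check on that 17 to 19 inch figure, assuming the usual rough limit of about one arc-minute for visual acuity:

# How wide a field does a 19 inch monitor at 16 inches subtend, and how many
# pixels would a virtual display need along its diagonal to look that sharp?
# Assumes a ~1 arc-minute acuity limit; purely a back-of-the-envelope check.
import math

diagonal_in, distance_in = 19.0, 16.0
field_deg = 2 * math.degrees(math.atan((diagonal_in / 2) / distance_in))
pixels_needed = field_deg * 60                  # one pixel per arc-minute
print(f"apparent field: ~{field_deg:.0f} degrees across the diagonal")
print(f"pixels to match the eye: ~{pixels_needed:.0f} along the diagonal")
# Roughly 60 degrees and a few thousand pixels along the diagonal, which is
# why a high resolution microdisplay plus magnifying optics is needed to make
# an eyepiece feel like a full-size monitor.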

Direct-view backlit LCD panels do have severe problems with color. LCOS panels lit by LEDs can produce a greater dynamic range with better color fidelity. The Digimage5/7 uses a low resolution LCOS display made by Displaytech. You can expect in the not too distant future to see much higher resolution LCOS displays. With these displays it will be possible to give a good idea of what the final image output will be. These high resolution viewfinders will probably cost less than $30 to manufacture in high volume. While this adds cost, you can eliminate the optics and complexities of the mirror and pentaprism. Removing the mirror means that one can move the optics closer to the sensor, which is good for designing wide angle lenses.

I fully expect that in a few years the EVF will be the norm, even in higher end equipment. Yes, there will still be some holdouts, and there will still be a few people shooting film. But the advantages of the EVF will eventually win out as the cost of the technology comes down.

Karl
Your argument is a trap. Displaying only what the sensor sees is
not going to push manufacturers to produce better sensors.

There is no reasonably priced technology available that will allow
an EVF to display EXACTLY what the lens sees - you only get a crude
representation. I don't want to see a thumbnail of the image from
the lens displayed on some tiny LCD screen. This is one reason
I trashed my Nikon 990 in favor of the D30. The creative process that
takes place while looking through the viewfinder of a true SLR was
lost when holding the 990 in front of me, squinting at the LCD.
LCDs do not have the resolution or dynamic range needed to show the
nuances of the image coming from the lens on a 35mm camera. This is
why no serious photoshop user uses an LCD monitor. Even the 19 inch
variety can't compare to a standard monitor for sharpness, color
saturation, dynamic range - all those things that would be
necessary for an EVF to be equivalent to looking through the lens.
So if a 19 inch LCD can't do it, how can a 2 inch LCD?

You're just adding another level of technological complexity that
is only going to drive the price higher and not accurately
represent the image coming from the lens.

Let me guess, you come from the P&S world?
 
Don't make me laugh! The whole point of using a DIGITAL camera is
to achieve an "electronic interpretation" of the world around you!
Why not use some of those electronics to make sure the
"interpretation" is as accurate as possible?

The pentaprism isn't capturing the image for you. The CMOS or CCD is.

JCDoss
Actually, the whole point of a camera, regardless of the medium involved, is to capture MY interpretation of the image, whether I want to capture it as close to reality as possible or tweak the image in some way. Whether it is captured with a sensor or film will only dictate how the image is processed.

But I still want to see the scene EXACTLY as it appears before me, w/o any jerky, jumpy LCD screen interpreting it for me.

Mark
 
But I still want to see the scene EXACTLY as it appears before me,
w/o any jerky, jumpy LCD screen interpreting it for me.
Well, the idea I put forth earlier was a camera with both optical and electronic viewfinders in place. I agree that in principle the optical viewfinder is the way to go... except in a handful of circumstances. It's at these times when having a "backup plan" would be helpful. And this will be especially true when the technology has matured, and the "jerky, jumpy" images are relegated to the past.

JCDoss
 
I've just tried an interesting experiment to determine the effect of a number of dead pixels on image quality that I'd like to share.

Start by loading a high quality image into Photoshop. Create a new layer above the background and flood it with white. Select this layer and use Filter> Noise> Add Noise to add some uniform, monochromatic noise to the layer.

Now use Image> Adjust> Threshold to leave a mostly white layer with some number of (probably isolated - I'll get to this later) black pixels. Use the magic wand to select them, then Select> Inverse to leave just the black pixels selected.

Copy the black pixels to a new layer, re-select them using the same method as before, then make the mostly white layer invisible. You may want to use 'hide edges' to make the display clearer. You should now see the original image with black specks all over it, which represent simulated dead pixels.

Now, remembering that only the dead pixels are selected, flatten the image and use Filter> Noise> Dust & Scratches, with a threshold of zero and a radius of 1. This causes each of the dead pixels to be replaced by the average of its neighbours. In my experiment, with (probably) a few thousand dead pixels, it was virtually impossible to tell the difference between this image and the original.

As a further test, I then tried copying the black-dots-on-white-background layer to a new image, and enlarging it by 50% before copying it back. This gives groups of clustered dead pixels, and ultimately shows that these are much harder to deal with - you may want to play with the 'radius' parameter in the Dust & Scratches filter to achieve the best result. Nevertheless, a few clumps of dead pixels are hard to spot unless you know where they are and deliberately zoom in on them. (I'm inclined to agree that many such clumps of dead pixels might well render the sensor unusable.)
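
The same experiment can be repeated outside Photoshop; here is a small numpy sketch of it. The synthetic image, the pixel counts, and the neighbour-average repair are stand-ins for the layers and the Dust & Scratches filter, not exactly what Photoshop does internally.

# Knock out pixels, repair each one from the mean of its live neighbours, and
# compare scattered versus clustered defects.  Purely illustrative.
import numpy as np

def repair(image, dead):
    out = image.copy()
    h, w = image.shape
    for y, x in zip(*np.nonzero(dead)):
        ys = slice(max(y - 1, 0), min(y + 2, h))
        xs = slice(max(x - 1, 0), min(x + 2, w))
        good = ~dead[ys, xs]                    # neighbours that are not dead
        if good.any():
            out[y, x] = image[ys, xs][good].mean()
    return out

rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:480, 0:640]
img = 128 + 60 * np.sin(xx / 40.0) + 40 * np.cos(yy / 55.0)   # smooth stand-in "photo"

scattered = np.zeros(img.shape, bool)                         # ~2000 isolated dead pixels
scattered[rng.integers(0, 480, 2000), rng.integers(0, 640, 2000)] = True
clustered = np.zeros(img.shape, bool)                         # similar count, in 3x3 clumps
for y, x in zip(rng.integers(1, 479, 220), rng.integers(1, 639, 220)):
    clustered[y - 1:y + 2, x - 1:x + 2] = True

for name, dead in [("scattered", scattered), ("clustered", clustered)]:
    err = np.abs(repair(np.where(dead, 0.0, img), dead) - img)[dead]
    print(f"{name}: mean repair error {err.mean():.2f} grey levels")
# Isolated dead pixels repair almost invisibly; 3x3 clumps leave pixels with
# no live neighbours and much larger errors, matching the finding above that
# clustered defects are the hard case.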

I note, incidentally, that in the full review of the D30 on this site, strange 'dot' artefacts appear on the D30's resolution chart - maybe that's due to there being a few dud pixels on the review sample that can normally be averaged out? The resolution chart would be particularly unforgiving, of course. I've seen a couple of similar artefacts on pictures I've taken myself too.

Andy.
Andrew, you are about half right too.

The cost of a device goes up with area and down with yield. Yield
is a function of manufacturing quality and area. As the area goes
up, the probability of a good die goes down. To a first
approximation, the yield of a "perfect" die goes down with the
cube of the linear dimension.

As Andrew points out, the yield would be a lot higher IF we
accepted some dead pixels that get mapped/processed out (there are
several well-known techniques for doing this).
It is correct that if the resolution of the sensors were high
enough, we could certainly tolerate a few bad, spaced-apart pixels,
but I don't know (either way) whether, given the types of defects
that occur, it would improve the yield of an "acceptable" die that much.
 
