D-SLR Magnification Fallacy

If there is only one defect dot on the silicon disk, how many
imager chips are likely to be defective? One out of nine large
imagers (11 % defective) and one out of 25 small imagers (4%
defective).
At what stage of processing/manufacture is the "defect dot" discovered?

A. Find dot on disk, DON'T USE that section of disk for further "slicing" etc. (save process-expense)

B. Find "defect dot" on imager AFTER all sliced-"imagers" are (expensively) processed,...discard bad imager -- absorb "wasted" costs into sale price of remaining "good" imagers.

Don't have a clue about such things, ..just trying to understand.

Larry
 
Mastrianni,

In an effort to acknowledge the level-of-civility USUALLY maintained in the exchanges-of-often-differing-opinions that occur in these threads, I just want to say that I (and I am sure many others) would not have chosen the "utter rot" characterization of your thoughts.

That kind of denigration adds nothing to the validity/lack-of-validity of the ideas presented, or the position taken.

I'm sure many of us say things at times, in the "heat" of the moment, that, were we to reflect on it a little, we would say differently, or not at all.

It's almost surprising how lengthy and "nit-picky" we can be, pursuing some of these "exploratory" conversations, and still manage a modicum of patience with one-another.

Hats-off to all who make the attempt! :-)

Larry
For a particular lens:
The DOF characteristics remain the same.
The minimum focusing distance remains the same.
All intrinsic characteristics (MTF, barrel distortion, etc.) remain
the same.
The only things that change are:
Field of View
Magnification (because minimum focusing distance REMAINS THE SAME)

I know it seems confusing because people continue to refer to a
50mm as an 80mm, but it is only an 80mm in terms of FIELD OF VIEW!

Whew. Hope this helps. Phil is correct in referring to it as a
cropping factor.
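As a rough sketch of the "cropping factor" idea (a thin-lens pinhole approximation; the 1.6x figure and the 36mm frame width are the usual 35mm-format numbers):

```python
import math

def fov_deg(focal_mm, sensor_dim_mm):
    """Angle of view across one sensor dimension (thin-lens pinhole model)."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_mm)))

full_frame = fov_deg(50, 36.0)        # 50mm lens, 36mm-wide 35mm frame
cropped    = fov_deg(50, 36.0 / 1.6)  # same 50mm lens, 1.6x-crop sensor
like_80mm  = fov_deg(80, 36.0)        # 80mm lens on the full 35mm frame

print(round(full_frame, 1), round(cropped, 1), round(like_80mm, 1))
# the 50mm on the crop body matches the 80mm's field of view exactly,
# while the lens itself (and its magnification at any distance) is unchanged
```

The 50mm stays a 50mm; only the field of view matches the 80mm.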

Everyone continues to harp on how bad this is for wide angle. But
it really can be turned to an advantage. For instance, if you want
to do a wide angle near/far shot (main subject is very close to
lens, with panoramic view of background), you can actually fill up
more of the frame with near subject because your minimum focusing
distance DID NOT CHANGE. Creative opportunities abound. I don't
know anyone who thinks medium format is restrictive (lens wise),
because an 80mm on that format really behaves differently than an
80mm on a 35mm camera. You must just think differently for that
particular tool, and take advantage of each one's optical
idiosyncrasies.

Matti,
The above was my original response, before we veered off topic.
Additionally, I responded to the questioning of my methods and
imaging philosophy. It is not necessary for you to agree with those
philosophies, and everyone who would like to continue to apply
every iota of theory, valid as it may be, to every shot they take,
is more than welcome in my book to do so. They will hear no
argument from me. Some are of the bent that no portrait can be
taken without using a medium tele. I happen to disagree. Some
believe you cannot do landscape without ultra wide angle, or large
format. I disagree. You will note in the last paragraph that I
acknowledge the differences in format size, but choose to take
advantage of it as such, and disregard any supposed limitations.
But then, that's my skewed philosophy. As another poster said to
me, "that's utter rot". So be it.
Sincerely
Mastrianni
 
Things are getting interesting, again :) I didn't quite understand
what you are trying to say and mean with this "telephoto
compression". And like you said: "moving the camera at the same
time". This is again the key point: changing the distance changes
the perspective, nothing else.
OK, a movie camera on a track, moving and zooming at the same time, can leave the subject exactly the same size in the frame. What happens? In one direction the background rushes forward and threatens to crash into the subject. The subject does not move, the perspective on the subject does not change, the angle of view does not change, yet what happens to the background?

Reverse the procedure to wide angle and a vista opens out behind the subject; the perspective on the subject does not change, yet the background drops away and a vast amount of information is included within the frame. The subject has neither changed perspective nor size in the frame in either example, so just what happens?
 
That only works if the photo is perfectly squared. If there is any angle, you'll end up digging a hole in the ground to get the same perspective.
People should note that in the top set of picture it shows how the
perspective is the SAME with the crops enlarged to the same area
shown below each picture.

In the second set of pictures, the foreground subject is kept the
same size via MOVING the camera and changing the focal length.

Karl
Take a look at Ansel Adams' Photographic Techniques Book I, pages 84-85.
Here, it is quite clear that if you zoom in or out at the SAME
POSITION the crop will be identical throughout the image.

If you CHANGE POSITION YOU CHANGE THE IMAGE, the subject may fill
the same space, but the perspective, the way the subjects relate to
each other in the image WILL BE DIFFERENT.

But don't take my word for it, take Ansel's:



So please, people, take a close look at what you "know". Yes, it may
seem that you "get closer", but you don't; you just get a smaller
picture.

Don't think you can go to your local zoo with a 300mm lens and be
able to "see" what a 450mm does. The animals will be just as far
away through the viewfinder.

Good luck. I know it's tough getting this all sorted out, but don't
distress, as many pros don't have it "down" completely yet either.

johnny
 
That's nuts. The base material is so cheap I have problems believing Nikon is saving us all money by using less silicon, one of the most abundant elements on the planet.
Cut silicon disk into, let's say, nine large area imager chips.

And now cut the same size silicon disk into 25 small area imager
chips.

If there is only one defect dot on the silicon disk, how many
imager chips are likely to be defective? One out of nine large
imagers (11 % defective) and one out of 25 small imagers (4%
defective).

If four imager chips on both discs are affected, 44% vs. 16% of
the chips are wasted.

Combine the defect rate with the total output of the single
silicon disk, nine against 25, and the yield from production is
highly different, and so is the cost of a single imager.
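The arithmetic above can be reproduced in a few lines; the wafer cost here is a made-up round number, since only the ratios matter:

```python
wafer_cost = 1000.0  # hypothetical fixed cost to process one wafer
defects = 1          # one defect dot, assumed to land on a single chip

for chips in (9, 25):
    good = chips - defects
    defect_rate = defects / chips
    print(f"{chips} chips: {defect_rate:.0%} defective, "
          f"${wafer_cost / good:.2f} per good imager")
# 9 chips:  11% defective, $125.00 per good imager
# 25 chips:  4% defective,  $41.67 per good imager
```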

Cheers,
Matti J.
Like I said before, the industry made a mistake by not
standardizing early. We all get to pay for it now by buying 15mm
lenses that act like 22mm lenses. I remember just 10 years ago that a 15mm
was a rental lens for most people.
I'm with you. My knowledge of such things is so limited that I tend
to take a simplistic view.

Think "tile-floor". Bigger floor? Add more tiles!

What's the big deal? (Must be one, ...but I sure don't know what it
is :-)
Larry
Anyone able to explain why digital sensors must be so small? Not
just the factory spiel, but some reason why you can't cover a 35mm
frame with pixels.
 
Johnny,

I hate to say it, but the example you give doesn't illustrate the effect. You need a perspective that can't be replicated by sliding back or forth. You need to shoot up or down at something. If the camera is square on all three planes to the ground, you can get the same photo with nearly any lens and cropping (assuming the neg will hold up).

In that situation Karl is correct.

Shoot a building closely and upward with a wide angle, and then try to replicate that with a normal lens by digging a hole at an angle to the building; then you are correct.
 
Hi, Karl,

See response to Chris, below.

bradley phillip
 
"Hole digging" now!
I love it! :-)

Just goes to show you that no matter how "dead" some think these thread-horses are, someone will STILL think of a new way to make a point!

Larry
 
Karl, Chris,
Depth of field is a big can of worms and it's difficult to get
consensus. There are many sources of confusion because it makes a
difference whether you talk about the aperture size relative to the
lens (i.e. "f-numbers"), or absolute aperture, and the size you
enlarge the print by. People have done the arithmetic before in
these forums, but in the simple case you are correct.
We are in agreement with the definitions of terms. It is a complicated subject, but the root of the disagreement doesn't lie here.
2) To get exactly the same "telephoto compression", or let me put
it more precisely-- to get exactly the same perspective between
objects at various distances in your image, you'd have to stand in
different places with the two different focal lengths.
This is wrong. Perspective has nothing to do with the focal length
of the viewing device. It is a function of where the viewer stands,
and nothing else.
Here is where I think the crux of our difference of opinion is.

If I am wrong about this (certainly possible), then I agree with both of you; perspective is simply where one stands, and so-called "telephoto compression" will exist to the same degree in every image, regardless of focal length.

But my understanding is that focal length does, in fact, alter perspective-- the apparent relative sizes of objects in the scene.

I'm interested in learning the truth, so I'm going to look it up to see and let you know what I find.

bradley phillip
 
Bigger is more difficult and more expensive. Smaller has lower defect rates (higher yield) and permits amortizing wafer cost over more sellable units. Smaller is easier to package (bond), mount, quality test, and easier to sell in OEM volume (increasing supply) or by outsourcing manufacturing to a vendor with low labor costs.

In the computer industry, smaller is better. In the imaging industry, bigger is better. Because imager size is a critical physical property, the imaging electronics industry follows a different cost evolution than the computer industry.

The computer industry too makes big chips -- Intel makes Xeons, Sun makes Ultra Enterprise CPUs. Take a look at the prices of top-tier CPUs sometime and you'll think the D30 is a bargain.
The limited understanding of micro-electronics I have is: smaller
is more expensive and harder to do. I understand that each pixel
well must capture detailed enough info to approximate film. But
that only makes me wonder why not 8 million instead of 5 to get the
right size?
 
If there is only one defect dot on the silicon disk, how many
imager chips are likely to be defective? One out of nine large
imagers (11 % defective) and one out of 25 small imagers (4%
defective).
At what stage of processing/manufacture is the "defect dot"
discovered?

A. Find dot on disk, DON'T USE that section of disk for further
"slicing" etc. (save process-expense)
It doesn't work like that. The cost is in setting up the machinery for the run, producing the highly pure and polished wafers (disks), etc. in the first place. Assuming the disk is going to go through the machinery, there's pretty much no financial penalty in making a duff chip on part of it. The cost is per wafer, and the more working chips you can get from a wafer the better. You can't really skip making selected dice on a given wafer, because that would involve setting up the run for each wafer individually, which would be ridiculously expensive.

I believe they're typically tested before the wafer is sawn up anyway.
B. Find "defect dot" on imager AFTER all sliced-"imagers" are
(expensively) processed,...discard bad imager -- absorb "wasted"
costs into sale price of remaining "good" imagers.
This is what happens - the bad dice are discarded, but the expense in processing has nothing to do with sawing up the wafers or testing the devices. It's pretty much all up front. There is nothing you can really do to mitigate this once the run is ready to go.
 
That's nuts. The base material is so cheap I have problems
believing Nikon is saving us all money by using less silicon, one
of the most abundant elements on the planet.
It has nothing to do with the cost of the base material. That is, as you point out, essentially free. It is expensive to make big chips because the yield is low, as Matti pointed out.

Semiconductor fabrication plants are amongst the most expensive things to build on the planet. They cost billions of dollars, and if a company gets it wrong they go bust. The machinery inside them is precise to a degree which you wouldn't believe. The devices produced are the most finely engineered things in existence, and they have to be accurate to small numbers of nanometres. Setting this machinery up for a production run costs a fortune too (not least in the downtime of the fab).

Now the company which built the fab has to recoup those billions of dollars, and the only way it can do this is through the sale of the devices it produces. The cost per wafer is essentially fixed. If you want to make a big, low-yield imager chip which produces, say, ten good devices per wafer, you are going to pay ten times as much per device as for a small chip with a yield of a hundred good devices per wafer.
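To put numbers on that point: a common first-order yield model treats defects as Poisson-distributed, so the chance a die is defect-free falls exponentially with its area. All figures below (wafer cost, defect density, dice per wafer) are illustrative assumptions, not real fab data:

```python
import math

wafer_cost = 5000.0   # hypothetical, fixed per processed wafer
defect_density = 0.5  # defects per square cm (illustrative)

def cost_per_good_die(die_area_cm2, dice_per_wafer):
    # First-order Poisson yield model: P(die has zero defects) = exp(-D * A)
    die_yield = math.exp(-defect_density * die_area_cm2)
    return wafer_cost / (dice_per_wafer * die_yield)

small = cost_per_good_die(0.5, 100)  # small die: many per wafer, high yield
large = cost_per_good_die(5.0, 10)   # 10x the area: few per wafer, low yield
print(round(large / small, 1))       # → 94.9, far worse than the naive 10x
```

The big die costs almost 100x as much per good unit, not 10x, because the lower yield compounds with the lower die count.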
 
No, sorry-- by empirically, I mean I'm not providing a proof-- I'm only looking at a specific case (50mm, in this case) and drawing generalizations from that.

The angles were figured out mathematically, of course. ;)

bradley phillip
 
Earlier in this thread, I stated that comparing an image from a lens on a D30 and an image from a lens with 1.6x the focal length on a 35mm camera would yield images with the following characteristics:

1) D30 image would have same field of view
2) D30 image would have more depth of focus
3) D30 image would have a different perspective (less telephoto compression)

Chris (and Karl) pointed out that #3 was not true; I researched it and found that they are correct.

The D30 image would have the same telephoto compression. (I agreed earlier that there is really no such thing as "telephoto compression"; merely relative sizes of (and apparent distance between) objects in a scene, aka perspective, combined with magnification).

It turns out that, as Chris said, as long as you don't change your (actual) viewpoint (i.e. move), you'll get exactly the same perspective regardless of the focal length you use. Different focal lengths will magnify the scene to differing degrees, but the perspective remains exactly the same.
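A pinhole-model sketch makes this concrete: the projected size of an object is (focal length x object height / distance), so the ratio of two objects' image sizes cancels the focal length entirely. The scene numbers below are purely illustrative:

```python
def image_height_mm(focal_mm, height_m, distance_m):
    """Pinhole-camera image height on the film/sensor."""
    return focal_mm * height_m / distance_m

person = (1.8, 3.0)   # (height in m, distance in m) -- illustrative
tower = (10.0, 50.0)

for f in (50, 80, 300):
    ratio = image_height_mm(f, *person) / image_height_mm(f, *tower)
    print(f"{f}mm: size ratio = {ratio:.2f}")
# the ratio is 3.00 at every focal length: perspective depends only on position
```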

So the long and short of it is, I was wrong about #3: you'll get the same perspective regardless of focal length.

bradley phillip

-----
Background:

If any of you are still interested (and judging by the length of this thread, there may be one or two... ;), here is an excerpt from a link I found:

Those 35mm camera lenses that range from about 85mm to 135mm are good for shooting pictures of people. They allow you to shoot from about 6 feet away and still fill the frame with the subject’s face. Six feet from the subject is a good working distance. It is not too close for comfort, and it is not so far away that intimacy is lost. Telephoto compression is the apparent compression of perspective. A telephoto lens does not compress perspective; it only appears that way! Remember, perspective does not depend on the lens being used, but on the position of the camera.

So then, how does a telephoto lens produce the effect of compressed perspective? Several factors are involved:

A telephoto lens is used from farther away to obtain the same size image that would be produced by a shorter lens at a closer distance. The more distant camera position produces a flatter perspective. But, because the long lens magnifies the subject, it still produces a normal size image. Thus the scene looks flatter than expected.

The distance from which the print is viewed also has an effect. An X-times enlargement should be viewed from X times the focal length of the lens used to make the picture in order for the perspective to appear natural. Therefore, a 6X enlargement of a negative shot with a 50mm lens should be viewed from 6 x 50mm = 300mm, or 12 inches, while a picture made with a 500mm telephoto lens and enlarged 12 times should be viewed from 20 feet (12 x 500mm = 6000mm; 6000 x 0.04 = 240 inches; 240 / 12 = 20 feet). (Note: To convert millimeters to inches, multiply the known millimeters by 0.04.)
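The excerpt's viewing-distance rule is easy to check in a few lines. (Note that the excerpt's 0.04 mm-to-inch factor is a rounding of 1/25.4, which is why it gets exactly 20 feet where the exact conversion gives about 19.7.)

```python
MM_PER_INCH = 25.4  # exact; the excerpt's 0.04 factor is the rounded inverse

def viewing_distance_inches(focal_mm, enlargement):
    """Distance for natural-looking perspective: enlargement x focal length."""
    return enlargement * focal_mm / MM_PER_INCH

print(round(viewing_distance_inches(50, 6)))            # → 12 (inches)
print(round(viewing_distance_inches(500, 12) / 12, 1))  # → 19.7 (feet)
```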

You can find out more about focal length, perspective and magnification at:

http://www.sweethaven.com/academic/lessons/021500/110/default.asp?unNum=5&lesNum=6

and

http://www.sweethaven.com/academic/lessons/021500/110/default.asp?unNum=1&lesNum=8

Thanks, Chris, I learned something.
 
OK, a movie camera on a track, moving and zooming at the same time,
can leave the subject exactly the same size in the frame. What
happens? In one direction the background rushes forward and
threatens to crash into the subject. The subject does not move, the
perspective on the subject does not change, the angle of view does
not change, yet what happens to the background?
1. The angle of view changes: as we move closer to or away from the subject, it's the zooming we are doing to keep magnification and framing the same, and zooming means a change in FOV.
2. The perspective changes as we move closer or away. Perspective is about the ratio of distances between the observer and the various subjects: although the distance between subject and background remains the same, the distance between camera and subject changes, and this alters the perspective. This alteration in perspective causes the "odd" behaviour of the background, and it is called perspective, not telephoto compression.

The same happens if subjects in the scene move or if the camera moves. Take a zoom lens on your camera and try this yourself: first zoom without moving, then zoom and move at the same time to keep magnification the same. You'll see that it isn't focal length that alters perspective, but the moving.

Severi
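Severi's zoom-and-move experiment can be sketched numerically with a simple pinhole model: keep the subject's image size fixed by zooming while backing up, and watch the background's image size change. The subject/background sizes and distances below are illustrative:

```python
def image_height_mm(f_mm, height_m, dist_m):
    """Pinhole-camera image height on the film/sensor."""
    return f_mm * height_m / dist_m

SUBJ_H, BG_H = 1.8, 10.0  # illustrative: a person, a building behind them
BG_GAP = 20.0             # background stays 20m behind the subject

for subj_d in (2.0, 5.0, 10.0):
    f = 50.0 * subj_d / 2.0  # zoom so the subject stays the size it was at 2m/50mm
    subj = image_height_mm(f, SUBJ_H, subj_d)
    bg = image_height_mm(f, BG_H, subj_d + BG_GAP)
    print(f"dist {subj_d:>4}m  f={f:>5.0f}mm  subject {subj:.1f}  background {bg:.1f}")
# subject size stays constant; the background grows as we back up and zoom in,
# which is exactly the "telephoto compression" look -- caused by moving, not by f
```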
 
Reverse the procedure to wide angle and a vista opens out behind
the subject, the perspective on the subject does not change yet the
background drops away
Er, the background dropping away is a change in perspective.
 
I'm sorry, but I don't think you know what you are talking about. Light moves in straight lines (except to the effect that space is warped by gravity per Einstein). Because of this, perspective is purely a result of the distance between the camera and the objects in the scene.

Karl
 
No, sorry-- by empirically, I mean I'm not providing a proof-- I'm
only looking at a specific case (50mm, in this case) and drawing
generalizations from that.

The angles were figured out mathematically, of course. ;)
Oh, that's really easy to explain then. You probably used an "ideal" lens to simplify the calculation - light rays shoot out from one point in the center, and the angle is arctan(y/x). If you do this with a real life lens, you end up with (among other things) pincushion distortion. The simplest way to think of it is if this lens focuses light at a distance X from the center of the lens, the focal plane is not really a plane, it's the inside of a sphere. If you curled up the edges of the film to lie on this sphere, it would intercept light at an angle further off-center than when the film is lying flat. The effect becomes more pronounced the further off-center the angle is.

Since camera film, CCDs, and CMOS chips are flat, real-life lenses have lots of optical elements to correct for pincushion distortion (the tendency for angles to become exaggerated near the corners, which are furthest from the center). That's part of the reason why good lenses are so damned expensive. Once you correct for the pincushion distortion, the 1.6 f.l. multiplier should result in exactly a 1.6x smaller field of view.
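The flat-plane vs. curled-film picture above can be quantified: with an ideal pinhole, a ray at angle theta off-axis lands at f*tan(theta) on a flat focal plane, versus f*theta if the "film" lay on the sphere. The ratio shows how the stretch grows toward the corners:

```python
import math

f = 15.0  # mm; a short focal length, where the corner stretch is largest
for deg in (10, 30, 50):
    theta = math.radians(deg)
    flat = f * math.tan(theta)  # image height on a flat focal plane
    curved = f * theta          # height if the film lay on the sphere
    print(f"{deg} deg off-axis: flat/curved = {flat / curved:.3f}")
# the stretch ratio grows from ~1.01 at 10 deg to ~1.37 at 50 deg
```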
 
Yes I used a simplified idealized lens.

I'm certainly not up to doing the math on a 15-element 13-group highly corrected optic, so I thought I'd look into how what you've said plays out on a real-world lens...

The only lenses Canon makes that are close to 1.6x different in focal length are the 85mm and the 135mm. (1.589x).

Canon 85mm, FOV: 28.5 degrees
x1.6 = 136mm, w/FOV: 17.8 degrees
x1.589 = 135mm, w/FOV: 17.94 degrees

Canon 135mm, FOV: 18 degrees
/1.6 = 84.4mm, w/FOV: 28.8 degrees
/1.589 = 85mm, w/FOV: 28.59 degrees

That's tantalizingly close-- I'm willing to bet the discrepancies are due to marketing roundoff in the Canon specifications more than anything else.
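For what it's worth, the published numbers can be checked against the diagonal field-of-view formula, assuming Canon quotes the FOV across the 43.3mm diagonal of the 35mm frame:

```python
import math

FILM_DIAGONAL_MM = math.hypot(36, 24)  # 35mm frame diagonal, ~43.27mm

def diagonal_fov_deg(focal_mm):
    """Diagonal angle of view, thin-lens pinhole approximation."""
    return math.degrees(2 * math.atan(FILM_DIAGONAL_MM / (2 * focal_mm)))

print(round(diagonal_fov_deg(85), 1), round(diagonal_fov_deg(135), 1))
# → 28.6 18.2 -- within rounding of Canon's published 28.5 and 18 degrees
```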

Thanks for the insight, John! (You also answered a mystery about flat vs. spherical focal planes).

bradley phillip
 
