A suggestion to Olympus

I agree with all of this except for camera design. The Coolpix 900 series showed one example of how digital allowed a unique camera design ... there is a lot more innovation possible, especially at the pro and prosumer end.

The 35mm brick evolved to hold a film canister; now it survives sort of like some feature of Darwin's finches that has been adapted to other ends. A top-down rethink would likely result in some (daring) new ideas in body features and design.

To me this is most evident in the limited use of interchangeable viewfinders. It really makes no sense to be stuck with one choice ... a mirror or a prism, when so much more is possible.
 
OK, look at it this way. The image shifting prism on a Nikon
70-200mm AF-S VR is a virtual 120mm away from the sensor. It's
about 30mm in diameter. A rotation of just 0.48 degrees will shift
the image 1mm on the sensor. That's a movement of 0.125mm at the
edge of the disc, and (if I'm getting the moments of inertia right,
in my head) about the same force as moving the sensor 0.06mm, assuming
the mass of sensor and prism are equal.
Well, even if the movement is small, that does not change the fact that this IS lens is HEAVY. In my school book, F = m*a, and the acceleration needed to compensate for the movements is still there, even if the movements are small!
If you're reaching some "cruising velocity", you've obviously not
reached the design limit of the vibration reduction system.
A camera shake will consist of a period of acceleration and may also include a period of constant angular velocity. I feel that you are making up your own physics now to defend your original statements :-)
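As a sanity check on the geometry quoted above, this short script reproduces the 0.48-degree rotation and 0.125mm edge travel, using the 120mm distance and 30mm diameter from the post (small-angle arc-length approximation; the force comparison depends on masses not given here):

```python
import math

# Figures quoted above: a stabilizing prism about 30mm in diameter,
# sitting a virtual 120mm from the sensor.
distance_mm = 120.0
prism_radius_mm = 15.0  # half the 30mm diameter

# Rotation needed to shift the image 1mm on the sensor.
shift_mm = 1.0
angle_rad = math.atan(shift_mm / distance_mm)
angle_deg = math.degrees(angle_rad)

# Travel at the edge of the prism for that rotation (arc length).
edge_travel_mm = prism_radius_mm * angle_rad

print(f"rotation: {angle_deg:.2f} degrees")     # about 0.48
print(f"edge travel: {edge_travel_mm:.3f} mm")  # about 0.125
```

So both numbers in the post check out; the open question in this exchange is only about the masses being accelerated, not the kinematics.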

Geir Ove
 
and I'll maintain the position that exit pupil is not a "real world" problem on 1.5x and 1.6x crop Nikon, Canon, Pentax, or Minolta cameras.
I have no reason to disagree; a lot of the Olympus talk seems to be comparing to full 35mm frame format as the alternative. (I only took the 19º figure since it was mentioned in the post I was responding to). Beyond a certain degree of vignetting ("corner shading"), correction in software could increase noise problems, but I see no evidence that any "near APS" format gets to that point.

One lens might be an exception on visible corner shading, and with a camera with no correction software provided: the Canon 18-55 EF-S, with its approach of pushing the back elements closer to the focal plane than any other lens, in order to get down to 18mm "on the cheap". Do you have any data on its incidence angles and the resulting degree of corner shading?
 
Well, even if the movement is small, that does not change the fact
that this IS lens is HEAVY
But how heavy is the small portion of the lens that needs to be moved? IS, VR etc. certainly do not wiggle the big front element! Joe has mentioned Olympus patents on a similar technology in which the movement is done on a very thin, light "meniscus" lens.

I have a question: would it be feasible to do IS/VR with extra lens elements mounted in the camera body, between the lens mount and the mirror? Is it possible even with "flat" elements that do not significantly change the focal length, so that this could operate with pre-existing lenses?

P. S. That Olympus technology is not yet deployed in any product, but let us see what the mooted Digital Zuiko 125-250mm lens looks like, or the other two telephoto zooms in the Zuiko Digital lens roadmap for 2004-2005.
 
I don't know the weight of the lens they move, and that was my question to Joseph. However, glass is very heavy and easily outweighs a CCD chip.

Geir Ove
Well, even if the movement is small, that does not change the fact
that this IS lens is HEAVY
But how heavy is the small portion of the lens that needs to be
moved? IS, VR etc. certainly do not wiggle the big front element!
Joe has mentioned Olympus patents on a similar technology in which
the movement is done on a very thin, light "meniscus" lens.

I have a question: would it be feasible to do IS/VR with extra lens
elements mounted in the camera body, between the lens mount and the
mirror? Is it possible even with "flat" elements that do not
significantly change the focal length, so that this could operate
with pre-existing lenses?

P. S. That Olympus technology is not yet deployed in any product,
but let us see what the mooted Digital Zuiko 125-250mm lens looks
like, or the other two telephoto zooms in the Zuiko Digital lens
roadmap for 2004-2005.
 
I don't know the weight of the lens they move, and that was my
question to Joseph. However, glass is very heavy, and easily
outweighs a ccd chip.
On top of the CCD chip is a layer of glass, then two layers of lithium niobate crystal (about twice as dense as glass), and on some there's another layer of glass.

The CCD moves on a small printed circuit board, which usually contains the amplifier chips, and provides a landing spot for the flex circuit that connects it to the rest of the camera.

It's surprisingly heavy, and definitely in the same neighborhood as the prisms used for the lens based systems.

--
A cyberstalker told me not to post anymore...
So I'm posting even more!

Ciao!

Joe

http://www.swissarmyfork.com
 
Well, even if the movement is small, that does not change the fact
that this IS lens is HEAVY
But how heavy is the small portion of the lens that needs to be
moved? IS, VR etc. certainly do not wiggle the big front element!
Joe has mentioned Olympus patents on a similar technology in which
the movement is done on a very thin, light "meniscus" lens.

I have a question: would it be feasible to do IS/VR with extra lens
elements mounted in the camera body, between the lens mount and the
mirror? Is it possible even with "flat" elements that do not
significantly change the focal length, so that this could operate
with pre-existing lenses?
If you're steering light around with moving prisms, you want to be as far from the sensor as possible. The closer you get to the sensor, the larger an angle you need to move to get a particular change. If you needed to shift light 1mm on the sensor, and you're 120mm away from it, you only need to divert the light 1/2 degree. If you're 1mm away from the sensor, you'd have to divert the light 45 degrees, a 90x larger motion of your steering prism.
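That trade-off fits in one small function. A sketch (exact angles via atan, so the ratio comes out near 94x rather than the rounded "1/2 degree" and 90x above):

```python
import math

def steering_angle_deg(shift_mm, distance_mm):
    """Angle the light must be diverted to move the image
    shift_mm on a sensor distance_mm away."""
    return math.degrees(math.atan(shift_mm / distance_mm))

far = steering_angle_deg(1.0, 120.0)   # about 0.48 degrees at 120mm
near = steering_angle_deg(1.0, 1.0)    # 45 degrees at 1mm
print(f"ratio: {near / far:.0f}x")     # about 94x
```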
P. S. That Olympus technology is not yet deployed in any product,
but let us see what the mooted Digital Zuiko 125-250mm lens looks
like, or the other two telephoto zooms in the Zuiko Digital lens
roadmap for 2004-2005.
Should be interesting. A 3:1, fast f2.8 with stabilization would be a real boon to the E system.

--
Joe
 
To me this is most evident in the limited use of interchangeable
viewfinders. It really makes no sense to be stuck with one choice
... a mirror or a prism, when so much more is possible.
I'm certainly not against innovation. But I question whether there currently is much demand for these things. Most professionals are very comfortable with how DSLR cameras are held and how they work. I'm not sure what kind of actual improvements a new design would bring. I don't see it having much to do with the legacy of film; it's more about a convenient way to hold the camera, grip the lens, and place your eye.

For instance, Rollei completely redesigned the 35mm camera some years ago with a camera that virtually nobody remembers or bought. I think the models were called the 2002 and 3003. It was sort of like a small Hasselblad with interchangeable backs (including Polaroid) and a unique viewfinder system that allowed you to view from the top or the rear. A beautiful design, but who bought it?

Don't forget that some camera designs are so pure, versatile and elegant that they last a long time. Any current 4x5 view camera would seem pretty familiar to a photographer from 1865! People stuck with Hasselblads for a long time. Many of the "innovative" cameras came and went, especially on the consumer level: 126 Instamatic cameras, 110 designs and disc cameras, to mention a few. All were intended to replace 35mm cameras for most consumers, yet 35mm designs adapted, got smaller and more convenient, which allowed them to appeal to most consumers. Even the APS efforts have not done well. It's not as if manufacturers haven't been trying to change things.

As for viewfinders: years ago, it was common for professional 35mm cameras to have replaceable viewfinders (Exakta, Praktica, Nikon F series, Topcon Super D, Miranda, Minolta XK, Canon F1 and others). But the demand must not have been there, because very few manufacturers offer this feature any more.

Current electronic viewfinders are not close to what you'd get from an optical finder. And all of the prosumer cameras that allow for live viewing trade off the speed of DSLRs for convenience (DSLRs don't have to switch modes between viewing and shooting), so pros can't always use this type of camera for fast-moving or really spontaneous work. (I should know; I have an A2 and a 1Ds.) For instance, there is no way one could shoot sports with a camera like a Dimage A2. It just is not responsive enough, and the blackout time between pictures would be too disruptive to allow you to follow action.

I could see some manufacturer making a small video camera that can attach to the viewfinder and allow for remote viewing. But how much demand is there for it other than in special applications? The articulating LCDs that are on many digital cameras are not something that most pros are interested in, although I occasionally find them handy for low-level shooting. (Most DSLRs have the option of a 90-degree angle finder as an accessory.)

I'm convinced that cameras will evolve in all different ways, but I don't think that evolution is driven by anger or frustration with current designs. More likely, as technology advances, that technology will be incorporated in new designs that will test the marketplace.

Personally, I think the Nikon Coolpix split design was a failure not a breakthrough. Nikon gave up on this some time ago.

The current group of 8 megapixel prosumer cameras show a lot of evolution in digital technology and camera design. I'm sure it will continue.

--
Alan Goldstein

http://www.goldsteinphoto.com
 
Well, even if the movement is small, that does not change the fact
that this IS lens is HEAVY
But how heavy is the small portion of the lens that needs to be
moved? IS, VR etc. certainly do not wiggle the big front element!
Joe has mentioned Olympus patents on a similar technology in which
the movement is done on a very thin, light "meniscus" lens.

I have a question: would it be feasible to do IS/VR with extra lens
elements mounted in the camera body, between the lens mount and the
mirror? Is it possible even with "flat" elements that do not
significantly change the focal length, so that this could operate
with pre-existing lenses?
If you're steering light around with moving prisms, you want to be
as far from the sensor as possible. The closer you get to the
sensor, the larger an angle you need to move to get a particular
change. If you needed to shift light 1mm on the sensor, and you're
120mm away from it, you only need to divert the light 1/2 degree.
If you're 1mm away from the sensor, you'd have to divert the light
45 degrees, a 90x larger motion of your steering prism.
P. S. That Olympus technology is not yet deployed in any product,
but let us see what the mooted Digital Zuiko 125-250mm lens looks
like, or the other two telephoto zooms in the Zuiko Digital lens
roadmap for 2004-2005.
Should be interesting. A 3:1, fast f2.8 with stabilization would be
a real boon to the E system.

--
Joe
Joe

You seem to be good at searching for patents. See if you can find anything on the most elegant solution of all, in my opinion: simply perform the "shifting" of the image in software. I understand that how feasible it is will depend on the integration time for each pixel, etc. It will come at a cost, of course - nothing is for free - however, it certainly would be an elegant solution, perhaps even combined with some of the other approaches. The total exposure value for each pixel would be spread around the neighbouring pixels and would finally be reconstituted as one "whole exposure" later on. There are also the issues of the Bayer colour grid, etc.

Anyway, elegant in principle.

ColesKing
 
Joe

You seem to be good at searching for patents. See if you can find
anything on the most elegant solution of all in my opinion - simply
perform the "shifting" of the image in software. I understand that
how feasible it is will depend on the integration time for each pixel,
etc. - it will come at a cost of course - nothing is for free -
however it certainly would be an elegant solution, perhaps even
combined with some of the other approaches. The total exposure
value for each pixel would be spread around the neighbouring pixels
and would finally be reconstituted as one "whole exposure" later
on. There are the issues of the Bayer colour grid also etc etc.

Anyway, elegant in principle.
Yes, the most elegant, especially because the math for it is ready -- synthetic aperture radars, smeared and blurred picture reconstruction, etc.

BR
Alex
 
Synthetic aperture radars have nothing to do with image reconstruction. They merely focus (and move) their "beam" by altering the phase response of individual receivers in the antenna. Same thing exists for audio - you can use a couple dozen microphones arranged in a grid to make a spy microphone that will allow you to zero in on someone else's conversation a hundred yards away. I don't see how something like this can be applied to blurry images.
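The phased-array mechanism described here (delay each receiver so signals from a chosen direction add coherently) looks like this in miniature. This is a toy delay-and-sum beamformer, not any real system's code; the element count, spacing and sample rate are made up for illustration:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
SPACING = 0.05           # element spacing in metres (illustrative)
N_ELEMENTS = 8

def steering_delays(angle_deg):
    """Per-element arrival delay (seconds) of a plane wave coming
    from angle_deg off broadside, for a uniform line array."""
    d = SPACING * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND
    return [i * d for i in range(N_ELEMENTS)]

def delay_and_sum(signals, fs, angle_deg):
    """Align each element's signal by its steering delay (rounded
    to whole samples) and sum: signals arriving from angle_deg add
    coherently, everything else tends to cancel."""
    delays = steering_delays(angle_deg)
    n = len(signals[0])
    out = [0.0] * n
    for sig, delay in zip(signals, delays):
        shift = round(delay * fs)
        for t in range(n):
            if 0 <= t + shift < n:
                out[t] += sig[t + shift]
    return out
```

Whether this helps with blur is exactly the point of contention: beamforming steers what is captured, it does not recover information already smeared into an exposure.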

Deblurring algorithms won't always work either, because they don't produce a high quality picture. Usually the picture looks a lot less blurry, but the artifacts kill the usability of it completely.

There's one immutable law in all this. If information is lost, it is lost forever. You can't do anything to recover it.
 
I have a question; would it be feasable to do IS/VR with extra lens
elements mounted in the camera body, between the lens mount and the
mirror?
If you're steering light around with moving prisms, you want to be
as far from the sensor as possible. The closer you get to the
sensor, the larger an angle you need to move to get a particular
change.
As an alternative providing backward compatibility with lenses, the trade-off I am curious about would be something like a thin lens in the body about 30mm (not 1mm) from the sensor, versus moving the sensor. I can see that it would require about four times as much movement as a moving element out in the lens 120mm from the focal plane, but still wonder how it would compete with the moving-sensor approach, given the potential advantages of working with non-IS/VR lenses and the lower cost and weight of not duplicating the mechanism in each lens.
... the mooted Digital Zuiko 125-250mm lens looks like
Should be interesting. A 3:1, fast f2.8 with stabilization would be a real boon to the E system.
And 125-250 is actually 2:1! Even if those numbers (from an alleged Olympus flyer in Japanese only) are not exact, the road-map suggests a similarly narrow zoom range for the longest proposed ZD zoom lens. That, plus the fact that all information from Olympus about future Zuiko Digital zoom lenses indicates zoom ranges of at most 3x, should make you happy!

Maybe the initial 4x zoom approach is primarily to cover as much territory as possible with the small initial range of lenses, and for the "mid-level amateur" sector's balance of convenience and total price against quality. After all, one 4:1 lens is worth twice as much as a 2:1 lens and 33% more than a 2.8:1 lens by the crude measure of focal lengths covered.
 
Personally, I think the Nikon Coolpix split design was a failure
not a breakthrough. Nikon gave up on this some time ago.
On the other hand, Sony continues to do a similar thing with its 828; pivoting between the lens/sensor part and the LCD/EVF part.

However, I do generally agree that a lot of the "wish list items" we read about ignore the facts that (a) Many of these ideas have been tried before and have mostly been abandoned. (b) Successful, competitive camera companies are not run by idiots who consistently overlook ideas that are obvious to numerous people in the internet forums, nor by an evil conspiracy that is out to deny us such great innovations; so it seems extremely likely to me that most such unfulfilled wishes have been considered by camera makers and judged not technically or economically viable, at least at present.
 
I agree with your comments. The manufacturers are not dummies, and they have a long history of camera designs to draw from. Most photographers don't have any problem finding cameras that serve their needs. I doubt that anyone could come up with a "new" design that would be such a breakthrough that photographers would flock to it.
On the other hand, Sony continues to do a similar thing with its
828; pivoting between the lens/sensor part and the LCD/EVF part.
Don't you wonder why? I guess they are trying to maintain differentiation of their camera design. But it is much easier and generally more beneficial to make the screen articulated rather than split the camera. I'll grant that the split body lets you see most of the controls as well as the LCD from low and high angles. But a fully articulated screen can be used in the vertical orientation for odd-angle shooting. The Sony 828 split body is pretty useless for low-level vertical pictures. So why would it be considered a successful design? I don't see much advantage to it. The A2 has a tilting EVF and a tilting LCD but also falls down when used for verticals.

The perfect EVF would be one like on the A2 (it tilts) but also give it the ability to rotate for verticals. (My Rollei 6000 series 6x6 cameras had prism viewfinders that rotated 360 degrees for odd angle shooting.) The fully articulating LCD is already out there on many cameras so nirvana has been achieved in that department.

Why not allow the LCD screen or an EVF to pop out and connect via a cord or wireless system? Then the viewfinder could be anywhere. This should be easy to implement and might be handy at times. Or allow the lens and sensor to snap out from the body and work tethered.

But it's not like life as we know it on this planet will change just because a viewfinder makes it a little easier to occasionally shoot at odd angles. Well maybe it will keep your knees from getting dirty...

If you really want to give designers some ideas try to think of features or designs that can be added to cameras that will actually allow one to create more unique or better pictures.

If you look at what Canon has done with the 1Ds and the 1D MarkII, you'll see the type of steady progress that pros are looking for. Yes it is a traditional design, but the resolution is high and the cameras can work quickly (fast AF and high frame rate.) Additionally they really are taming the noise problem at high ISOs. A lot of pros may never need anything better than a 1D MarkII.

I for one am extremely happy with the 1Ds and could use it for the rest of my career if need be. But if its replacement model has higher resolution, a larger buffer and less noise at high ISOs, I think that would about do it for my wish list. Eventually, all that and more will be in a smaller, lighter, inexpensive package. Major progress is happening right now before our eyes. Just be thankful the manufacturers are so on the ball. You can bet a lot of people at Canon are already designing the 1D MarkIII.

--
Alan Goldstein
 
This whole discussion about design and engineering merits sounds to me like the views of people who are concerned about engineering but have not spent much time actually shooting with stabilized lenses.

I'll put aside the design difficulties, and I don't care about the cost or manufacturing convenience of having one system in the body vs. a system in each lens.

The advantage of the system in the body is that it will not disrupt one's view through the camera. Have you ever shot with an IS lens while hanging out of a moving helicopter? The way it locks on and unlocks can jerk the image and make you so dizzy that you feel like you are falling out. A friend of mine who is a top aerial shooter turns off his IS lenses and still uses a Kenyon gyro stabilizer on the camera. I am used to my IS lenses in most applications, but for me and certainly others, there are times when just looking through the camera for a long time can get us dizzy or be distracting. Just standing on a ladder and shooting a building can make me unsteady.

So go for the in-body system. This will be pretty tough for full-frame DSLRs (they'll have to make additional room in the body and use a bigger shutter), and it will slightly throw off framing accuracy between what the ground glass sees and what the sensor sees. But we can live with that for fast-moving work, which usually can't be framed that critically anyway.

As for consumer cameras and prosumer models with EVFs, it's a no-brainer.

If manufacturers can't come up with a practical way to stabilize large sensors in DSLR bodies, then we'll just have to live with the current method. But there isn't much debate about which is the better solution from a working photographer's perspective.

I don't really care about how much dynamic force is required to move that much mass.

Alan Goldstein

http://www.goldsteinphoto.com
 
The math for this is most certainly ready. There are techniques available to find whether movement on a frame is due to local pixel movement or the whole frame movement.

Back to what I said before. Think of it this way. Take some number of closely spaced frames in time and then use them to extrapolate/interpolate to some point in time where all the frame movement will have been corrected for - i.e. holding the camera perfectly still.
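A crude sketch of that idea: take several short, sharp frames, estimate each frame's global shift relative to the first, undo it, and sum. Everything here is illustrative (brute-force integer-offset correlation on plain Python lists); a real implementation would have to handle sub-pixel motion, the Bayer grid, and noise:

```python
def best_shift(ref, frame, max_shift=3):
    """Integer (dy, dx) that best aligns frame to ref, found by
    exhaustive cross-correlation over small offsets."""
    h, w = len(ref), len(ref[0])
    best, best_score = (0, 0), None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for y in range(h):
                for x in range(w):
                    sy, sx = y + dy, x + dx
                    if 0 <= sy < h and 0 <= sx < w:
                        score += ref[y][x] * frame[sy][sx]
            if best_score is None or score > best_score:
                best, best_score = (dy, dx), score
    return best

def stack(frames):
    """Align every frame to the first and sum, reconstituting one
    'whole exposure' from many short ones."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0.0] * w for _ in range(h)]
    for f in frames:
        dy, dx = best_shift(frames[0], f)
        for y in range(h):
            for x in range(w):
                sy, sx = y + dy, x + dx
                if 0 <= sy < h and 0 <= sx < w:
                    out[y][x] += f[sy][sx]
    return out
```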

ColesKing
 
Back to what I said before. Think of it this way. Take some number
of closely spaced frames in time and then use them to
extrapolate/interpolate to some point in time where all the frame
movement will have been corrected for - i.e. holding the camera
perfectly still.
Quite possible, just not at our current level of technology. Sensors and processors need to get a bit faster.

Right now, a really fast sensor and processor, say the one in the 1D mk2, can handle about 68 million pixels/second. Now, if you were using that for stabilized 4mp images (4mp is useful for a lot of things) you could get about 15/second. So the very most motion blur is what you'd have in 1/15 second exposures; anything bigger than that would be in the next frame, and offset to minimize its impact on the stack of frames.

If we were just 20x faster, we'd be operating at 300 frames/sec, 1/300 sec exposures, and meeting the handheld criteria for a 300mm telephoto.

I say 20x faster pretty casually, because I'm old. ;)

In Moore's law generations, 20x faster is only 6 years.

If you make it 350x faster than a good year 2003 camera, you can hit 24mp at 1000 frames/sec, which ought to pretty much wrap it up for the tripod industry. That's 13 years, and bye bye Gitzo, Manfrotto, Bogen....

And, just imagine what 25 years will bring. My current computer is about 10,000x faster than my first one from 1976 (1800 MHz vs. 0.7 MHz, 32 bit vs. 8 bit).
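The arithmetic above holds up; here is a quick sketch, taking the 68 Mpixel/s figure and an 18-month Moore's-law doubling period as the assumptions:

```python
import math

PIX_RATE = 68e6       # pixels/second, approx. 1D mk2 class (from the post)
DOUBLING_YEARS = 1.5  # one Moore's-law generation

fps_4mp = PIX_RATE / 4e6                         # about 17 frames/sec at 4mp
years_to_20x = DOUBLING_YEARS * math.log2(20)    # about 6.5 years
fps_24mp_350x = 350 * PIX_RATE / 24e6            # about 990 frames/sec at 24mp
years_to_350x = DOUBLING_YEARS * math.log2(350)  # about 12.7 years

print(fps_4mp, years_to_20x, fps_24mp_350x, years_to_350x)
```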

--
Joe
 
Back to what I said before. Think of it this way. Take some number
of closely spaced frames in time and then use them to
extrapolate/interpolate to some point in time where all the frame
movement will have been corrected for - i.e. holding the camera
perfectly still.
Quite possible, just not at our current level of technology.
Sensors and processors need to get a bit faster.

Right now, a really fast sensor and processor, say the one in the
1D mk2, can handle about 68 million pixels/second. Now, if you were
using that for stabilized 4mp images (4mp is useful for a lot of
things) you could get about 15/second. So the very most motion blur
is what you'd have in 1/15 second exposures; anything bigger than
that would be in the next frame, and offset to minimize its impact
on the stack of frames.

If we were just 20x faster, we'd be operating at 300 frames/sec,
1/300 sec exposures, and meeting the handheld criteria for a 300mm
telephoto.

I say 20x faster pretty casually, because I'm old. ;)

In Moore's law generations, 20x faster is only 6 years.

If you make it 350x faster than a good year 2003 camera, you can
hit 24mp at 1000 frames/sec, which ought to pretty much wrap it up
for the tripod industry. That's 13 years, and bye bye Gitzo,
Manfrotto, Bogen....

And, just imagine what 25 years will bring. My current computer is
about 10,000x faster than my first one from 1976 (1800 MHz vs. 0.7
MHz, 32 bit vs. 8 bit).

--
Joe
One technique that should solve the bandwidth problem somewhat would be to accumulate light in the pixel up to the point where the camera moves (the inertial frame, which isn't inertial any more), and then put in a marker there. Any additional light that then accumulates actually belongs to the neighbouring pixel, and so on. So one photosite would have exposures stacked in it actually belonging to different parts of the picture. Later on they are separated and coherently assembled. The bandwidth issue disappears. The problem is the marker. I know too little about these things, but I suspect a CMOS-type sensor, or a derivative of those in current use (with some inherent on-chip logic and, unfortunately, noise), will do better at this.

See if you can search for some patents on this. As you pointed out in your reply, technology advances quite fast, and people posting here actually have access to stuff which is quite old (in the open media). The ability to implement this might be closer than we think.

In one of the German photo magazines (FotoMagazin), one negative point of the "in-lens" stabilization systems - that they introduce vignetting of the image - has been discussed in the most recent issue.

Olympus/Kodak will know what to do.

ColesKing
 
