WARNING - technical thread - how are the red dots really made?

Started Sep 7, 2012 | Discussions thread
Arky Regular Member • Posts: 433
Re: WARNING - technical thread - how are the red dots really made?

Roland Karlsson wrote:

Why is the grid image so large?

This is very mysterious. The grid image is MUCH larger than the sensor grid itself. How come a greatly enlarged image of the grid is focused on the grid? We have just accepted this with a shrug, more or less, in our earlier discussions. There were some suggestions that the micro lenses could spread the light rays. In another forum there was a suggestion that it was diffraction in the grid. But diffraction should result in rainbow-colored stuff - if the light is not monochromatic.

Actually, I believe the DP1S sensor can somewhat image itself at 3 different magnifications.
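Roland's objection to the diffraction explanation can be put in numbers. A quick sketch using the grating equation d·sin(θ) = m·λ, with an assumed microlens pitch of about 7.8 µm (the real DP1s pitch may differ): each visible wavelength diffracts to a noticeably different angle, which is why a grid formed purely by diffraction should smear into rainbow colors under white light.

```python
import math

# Assumed microlens pitch for a Foveon-type sensor (~7.8 um);
# this is an illustrative value, not a measured DP1s figure.
pitch_m = 7.8e-6

def diffraction_angle_deg(wavelength_m, order=1):
    """First-order diffraction angle from the grating equation d*sin(theta) = m*lambda."""
    return math.degrees(math.asin(order * wavelength_m / pitch_m))

# Different wavelengths leave the grating at different angles,
# so non-monochromatic light would produce colored, not uniform, fringes.
for name, wl in [("blue", 450e-9), ("green", 550e-9), ("red", 650e-9)]:
    print(f"{name}: {diffraction_angle_deg(wl):.2f} deg")
```

The roughly 1.5° spread between blue and red first-order angles is large compared with the dot spacing seen in the test shots, which argues against diffraction alone producing a uniformly red grid.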

Why is the grid image so sharp?

This is also a mystery. Light is hitting the sensor from a rather large aperture. The light reflected from the grid should then spread over the cone angle of the aperture. This should not produce sharp dots unless the IR filter is very near the sensor. But then - how can the image be enlarged?

Luck of the draw. The sharpness of the spots and grid is sensitive to small focus adjustments.

As you point out, the reflected image is enlarged. I don't see reflections from the flat IR filter being the cause.
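The flat-filter argument above can be checked with rough numbers. A light cone at f-number N diverges at roughly 1/N, so a sensor-filter-sensor round trip across a gap d smears a point by about 2·d/N. The gap and pixel pitch below are assumptions, not measured DP1s values:

```python
# Rough blur-disc size for light bouncing between the sensor and a flat
# IR filter a distance d away, with the lens working at f-number N.
# The reflected cone diverges at roughly 1/N, so a round trip over a gap
# of d smears a point by about 2*d/N.

def ghost_blur_mm(filter_gap_mm, f_number):
    """Approximate diameter of the blur disc from a sensor-filter-sensor bounce."""
    return 2.0 * filter_gap_mm / f_number

pixel_pitch_mm = 0.0078  # assumed ~7.8 um pitch, illustrative only
blur = ghost_blur_mm(filter_gap_mm=1.0, f_number=4.0)
print(f"blur = {blur:.2f} mm = {blur / pixel_pitch_mm:.0f} pixels")
```

Even a 1 mm gap at f/4 gives a blur of around half a millimetre, tens of pixels wide, so a flat filter at any ordinary distance cannot produce the sharp dots seen in the images.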

Why is the grid image always red, in all cameras?

Why the grid is red in the DP cameras we know. But why is it also red in all the other cameras where it exists?

I think it is a result of light passing through the IR filter multiple times.
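One way a multi-pass path through the IR filter could redden a ghost: a dielectric IR-cut filter ("hot mirror") reflects the long wavelengths it does not transmit, so any light bouncing between sensor and filter gets weighted toward the red end of its cut-off. A toy model with an assumed logistic transmittance edge at 680 nm (the real filter curve is not known here):

```python
import math

def hot_mirror_transmittance(wl_nm, cut_nm=680.0, steep=0.1):
    """Idealized IR-cut filter transmittance: near 1 in the visible,
    falling to 0 past the cut-off edge. Purely illustrative numbers."""
    return 1.0 / (1.0 + math.exp(steep * (wl_nm - cut_nm)))

def ghost_reflectance(wl_nm):
    """A dielectric hot mirror reflects what it does not transmit, so a
    sensor-filter-sensor bounce is weighted by R = 1 - T."""
    return 1.0 - hot_mirror_transmittance(wl_nm)

for name, wl in [("blue 450nm", 450), ("green 550nm", 550), ("red 660nm", 660)]:
    print(f"{name}: ghost weight R = {ghost_reflectance(wl):.6f}")
```

In this model the deep-red weight is orders of magnitude above blue and green, so any ghost built from filter reflections comes out red regardless of the scene - which would also explain why the artifact is red across different camera brands.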

Why do the camera manufacturers accept such a misfeature?

Yet another mystery. Why? Is it impossible to fix when you have a large sensor in a compact camera?

I think it's largely due to lens design and could be avoided.

  • Does anyone know the exact mechanism for the red grid?

This is the main question. The red grid is not magic. So - someone designing digital cameras knows the exact answer to the red grid, and can even simulate it in his programs. So - there exists an answer. It would be extremely interesting to know that answer.

The grid and spot size are controlled by the focus movement, at least on my DP1S. At near infinity, the spacing of the pixels and the magnification ratio align to generate the magnified large red pattern. Moving further away from infinity, the pattern blurs out and then refocuses into a sharper image that is obviously the sensor surface. Moving the focus still closer, the sensor-surface image blurs out and then a very fine grid pattern appears.

The lens design must have a spherical element surface with a reflection that just happens to (almost) refocus the image of the sensor surface back onto the sensor itself at a high magnification. My testing suggests that at least 3 lens surfaces (in the DP1S) can reflect a somewhat focused image of the sensor back upon itself at varying magnifications. At least that's my theory.
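The magnified-reflection idea can be sketched with the spherical mirror equation, treating one lens surface as a weak mirror: 1/s + 1/s' = 2/R, with lateral magnification m = -s'/s. The distances below are hypothetical, not measured DP1S geometry; they just show that a sensor sitting inside the surface's mirror focal length (R/2) yields a strongly magnified virtual image, which the rest of the lens could then relay back onto the sensor.

```python
def reflected_image(s_mm, r_mm):
    """Treat a single spherical lens surface as a weak mirror
    (1/s + 1/s' = 2/R) and return (image distance s', lateral
    magnification m) for the sensor's reflection.
    A negative s' means a virtual image behind the surface."""
    s_prime = 1.0 / (2.0 / r_mm - 1.0 / s_mm)
    return s_prime, -s_prime / s_mm

# Hypothetical numbers (not measured DP1S data): sensor 30 mm from a
# rear surface with a 65 mm radius of curvature, i.e. mirror focal
# length 32.5 mm, so the sensor sits just inside the focal length.
s_prime, m = reflected_image(30.0, 65.0)
print(s_prime, m)  # -390.0 13.0 : a virtual image magnified 13x
```

With several surfaces of different curvatures and spacings, each bounce path would produce its own magnification, consistent with the three distinct grid scales seen while racking focus.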

Here's a DP1S shot, imaging a home-made 3 color laser collimator with the lens set to a mid focal position.

I posted a set of DP1S test images with 3 lens focus distances and 3 laser colors in this gallery.

I'm interested in any conclusions you can draw from examining these images of course.
