How can I relate pixel size to field of view?

mister sunshine
Hello,

I am trying to learn of a mathematical way to compare differing photographic systems in a way that considers the field of view or magnification on a per pixel basis.

The reason I am asking about this is that I am hoping to be able to compare digiscope photographic equipment with traditional DSLR setups.

I am currently making photographs of birds with a Canon 5DSR and a Canon EF 600mm f/4L IS II and the 1.4x Teleconverter, and I have recently purchased a Kowa 883 spotting scope with a 25x-60x zoom eyepiece.

I estimate that the 600mm with 1.4x optics on a full frame system equates to approximately 16x magnification.

I also own a Canon 7DmkII camera body. This camera is my first "crop sensor" camera, as I have previously only owned 35mm film cameras and full frame digital cameras. Owning the 5DSR and 7DmkII at the same time has let me consider the implications of the term "crop sensor". These two cameras have almost identically sized pixels, and so, when using the same lens, the photographs made by the 7DmkII do indeed seem cropped compared to those made by the 5DSR.

This has led me to understand that the often-mentioned "1.6x" characteristic, and the idea that APS-C sensors have an increased telephoto effect when compared to full frame cameras, is only applicable if the APS-C camera has a higher density of smaller pixels on its sensor.

This realization has caused me to have an interest in comparing my DSLR capabilities with the capabilities made available with the possible acquisition of digiscoping accessories for my Kowa 883 scope.

The Micro Four Thirds Panasonic Lumix GH4 is frequently recommended as a good match to the Kowa spotting scope, and there are accessories that make the use of such a camera easy. I'd like to start by making some comparisons based on that system.

I have been able to compare another photographer's pictures, made with the Kowa 883, 20x-60x eyepiece set at 60x, and a Panasonic Lumix GH4 with a 20mm f/1.7 lens, against a photo I made of the same subject from the same viewing location using my DSLR. This gave me some idea of the difference in effective "magnification".

Now I would like to learn how to mathematically understand the differences.

It occurs to me that learning the effective field of view that is represented by each pixel may be the most specific way to compare different digital imaging systems.

I can start with some basic ideas:

Canon 5DSR sensor: 36mm x 24mm = 864mm²

Canon 5DSR: 8688 pixels x 5792 pixels = 50,320,896 pixels

Canon 5DSR pixel pitch: 36mm / 8688 = 0.00414mm (4.14µm)

Panasonic GH4 sensor: 17.3mm x 13mm = 224.9mm²

Panasonic GH4: 4608 pixels x 3456 pixels = 15,925,248 pixels

Panasonic GH4 pixel pitch: 17.3mm / 4608 = 0.00375mm (3.75µm)

I am writing to ask for help and guidance in furthering an understanding.

What other information do I need to gather and how do I use it to arrive at an understanding?

Thank you.
 
Please rigorously define what you mean by "field of view" and "magnification"
 
I don't really want to use the term "magnification" and I am not prepared to rigorously define it. If I was asking about macro photography I would have an answer to offer but in this context I do not.

The only reason I mentioned the term is that spotting scopes and binoculars are described with "magnification" factors.

With respect to your question, I am asking about a context of a sensor's "pixel". In other words I am not referring to the actual size of a subject compared to some final display size of that subject. Nor am I referring to the relationship between the actual size of a subject and its actual size as projected on the camera's sensor.

With regard to the term "field of view", I do not know of any definition other than the literal one. I anticipate that field of view is described as an angle, either horizontally, vertically, or diagonally. Perhaps I should use the term angle of view?

I am asking about relating the field of view that is represented by each pixel because it seems as if this is a way to bring differing systems into the same context.

Thank you.
 
To a first degree of approximation, the lens and sensor can be modeled by geometry: the system can simply be modeled by a triangle. The base of the triangle is the width of the sensor, while the triangle's height is the focal length. Some simple trigonometry can give you the angle of view, which is equal to the top angle. The angle of view of each pixel can be had by dividing the total angle of view by the number of pixels across the width of the sensor.

Now this makes a number of assumptions which will probably be good enough for your use: the focal length is defined at infinity focus and may likely be different if you are focusing closely; also this assumes that the lens projects a rectilinear image, where straight lines in real life are projected as straight lines on the sensor. Also, this takes no account of the optical aberrations of the lenses nor of diffraction, which will limit the detail you can extract from the image.
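
As a minimal sketch of that triangle model, in Python (assuming infinity focus and a rectilinear lens, per the caveats above):

```python
import math

def pixel_angle_deg(sensor_width_mm, focal_length_mm, pixels_across):
    """Horizontal angle of view from the triangle model, divided by pixel count."""
    aov = 2 * math.degrees(math.atan(sensor_width_mm / (2 * focal_length_mm)))
    return aov / pixels_across

# Canon 5DSR (36 mm wide, 8688 pixels) with the 840 mm combination:
print(pixel_angle_deg(36, 840, 8688))  # ~2.83e-4 degrees per pixel
```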
 
Does this thought process seem valid?:

Canon 5DSR full frame with 840mm lens (600mm and 1.4x teleconverter)

using an online Angular Field of View calculator I get these figures:

FOV (horizontal) (degrees): 2.5

FOV (vertical) (degrees): 1.6

FOV (diagonal) (degrees): 3

The Kowa scope is specified to equate "60x magnification" to a field of view that equals 1.32 degrees for the diameter of its circular view.

Without accounting for vignetting effects, I imagine that 1.32° can be compared to the diagonal FOV of the rectilinear framing in the Panasonic GH4.

The Panasonic GH4 uses a 4:3 aspect ratio, so it is especially easy to use a 3:4:5 triangle ratio to calculate the horizontal and vertical angles.

3/5 x 1.32° = 0.792° vertical

4/5 x 1.32° = 1.056° horizontal

The 5DSR example has 8688 pixels to describe 2.5° horizontal

The GH4 example has 4608 pixels to describe 1.056° horizontal

The 5DSR example describes 2.8775e-4 horizontal degrees per pixel

The GH4 example describes 2.2917e-4 horizontal degrees per pixel
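
As a quick sanity check, here is the same arithmetic as a short Python sketch, using the figures above:

```python
# Horizontal degrees per pixel, from the angles worked out above:
per_pixel_5dsr = 2.5 / 8688    # ~2.8775e-4 deg/pixel (5DSR + 840 mm)
per_pixel_gh4  = 1.056 / 4608  # ~2.2917e-4 deg/pixel (GH4 behind the 60x scope)

print(per_pixel_5dsr / per_pixel_gh4)  # ~1.26x finer sampling for the digiscope
```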

Thank you.
 
Hello,

I am trying to learn of a mathematical way to compare differing photographic systems in a way that considers the field of view or magnification on a per pixel basis.
<snip>
What other information do I need to gather and how do I use it to arrive at an understanding?
What you are probing is essentially plate scale. This website is OK but not fantastic on the matter:


The lens' field of view in degrees maps to size in the image plane according to the formula

ImageHeight = FocalLength * tan(FoV).

The focal length you are given 600*1.4 = 840mm.

The pixels in the 5DsR are 4.14um in size.

4.14um = 840mm * tan(x) --> x = 4.929urad = 2.824×10^-4 degrees.

840mm is the lens' actual focal length. Often photographers refer to "equivalent focal length" - this is not the number you use. If you use the 600mm lens with the 1.4x teleconverter on the GH4, the 840mm stays the same and you use the new linear pixel size.

I will leave working this for the other options to you.
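
A short Python sketch of that plate-scale relation, using the pixel pitches worked out earlier in the thread (the 840 mm-on-GH4 case is just an illustration of the point about actual focal length):

```python
import math

def pixel_subtense_urad(pixel_pitch_mm, focal_length_mm):
    """Angle subtended by one pixel: pitch = f * tan(x), solved for x."""
    return math.atan(pixel_pitch_mm / focal_length_mm) * 1e6  # micro-radians

# 840 mm (600 mm + 1.4x) on the 5DsR, 4.14 um pixels:
print(pixel_subtense_urad(0.00414, 840))  # ~4.93 urad per pixel

# The same 840 mm lens on the GH4, 3.75 um pixels -- the actual focal
# length is unchanged; only the linear pixel size differs:
print(pixel_subtense_urad(0.00375, 840))  # ~4.46 urad per pixel
```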
 
Thank you very much.
Be aware that focal length is stated for infinity focus and usually drops as you focus closer, sometimes dramatically.

Why aren't you simply measuring magnification by photographing something of known size or a ruler? (At an appropriate subject distance given the above statement.)
For example Measuring Magnification

I must admit, I'm not sure what you're trying to accomplish! :-)
 
Hello,

The reason I am not photographing something with a ruler is that I do not own both systems.

I purchased the spotting scope to use as a spotting scope, so I do own a scope, but I do not own any digiscope accessories.

I had a unique opportunity last week when I was in the field and stumbled upon a kind tourist who owned the same model of scope as mine. He had a digiscope set up on it, in a preferred combination that is endorsed by many prominent scopers. We photographed the same subject from the same distance and he shared one of his photos with me. That gave me a rough comparison of how one particular combination of accessories and camera choice compares to my DSLR equipment.

Afterwards, as I read about various choices for digiscope accessories and the cameras which are preferred, I started wishing there was a way to make a mathematical comparison of the "telephoto" effect so I could filter out the misapplied terms.

The digiscope info sites use terms like "3000mm equivalent" and "70x magnification" and all sorts of statements that seem to add to my confusion.

To simplify, I do not want to buy several hundred dollars worth of accessories and another $1500-$2500 worth of camera and lens just to continue to make comparisons.

I have searched and found very little in the way of head to head comparisons made of the same subject, at the same distance, from the same shooting location. In practical terms that is what matters to me.

I think that a few years ago when a DSLR made a 21Mp image and a digiscope set up could also make a 21Mp image that comparisons like "3000mm equivalent" might have been loosely applicable. Now that I am making 50Mp DSLR photos and the sweet spot for digiscoping remains near 20Mp the differences are far less obvious.

Using the one-off opportunity I mentioned above, I compared the photo made by the digiscoper (Kowa 883, 60x eyepiece, 20mm lens, Panasonic GH4) with the photo I made with my DSLR (Canon 5DSR, 600mm with 1.4x extender); it seemed as if the digiscope example provided approximately 1.5x more "telephoto" effect. I consider this a lot less than I anticipated after observing all the enthusiasm for digiscoping.

This made me realize that there must be a good way to anticipate what sort of results you can get with differing systems and differing combinations of gear.

There are several factors such as avoidance of vignetting and a variable regarding how a digiscoper frames the image with the placement of their camera. Those factors tend to increase the difference in "telephoto" effect between the two types of systems beyond the result of the calculations shown above. My calculations suggest a 1.25x difference and my observation suggests a 1.5x difference.

Further considerations include ideas such as limiting the digiscope to a "40x magnification" eyepiece to get the best optical results. I can see that my spotting scope has an excellent view between 25x and 45x zoom setting and then the quality of the view drops as I zoom up to 60x. In practice, it seems that most serious digiscopers avoid the maximum zoom they have available. This consideration causes me to suspect that if I spend $3000 on digiscope gear and make serious first hand comparisons that I will end up thinking that the practical difference in "telephoto effect" between systems is nullified.

If someone introduces a Micro Four Thirds camera with 50Mp before DSLR systems go beyond 50Mp, then a remarkable difference in telephoto capability will be restored. It occurs to me that, while I wait to see what happens, it will be useful to understand what the real, or at least practical, differences are as the technology advances.

Thank you.
 
Hello,

The reason I am asking about this is that I am hoping to be able to compare digiscope photographic equipment with traditional DSLR setups.

I am currently making photographs of birds with a Canon 5DSR and a Canon EF 600mm f/4L IS II and the 1.4x Teleconverter, and I have recently purchased a Kowa 883 spotting scope with a 25x-60x zoom eyepiece.

I estimate that the 600mm with 1.4x optics on a full frame system equates to approximately 16x magnification.
<snip>
The Micro Four Thirds Panasonic Lumix GH4 is frequently recommended as a good match to the Kowa spotting scope, and there are accessories that make the use of such a camera easy. I'd like to start by making some comparisons based on that system.

I have been able to compare another photographer's pictures, made with the Kowa 883, 20x-60x eyepiece set at 60x, and a Panasonic Lumix GH4 with a 20mm f/1.7 lens, against a photo I made of the same subject from the same viewing location using my DSLR. This gave me some idea of the difference in effective "magnification".

Now I would like to learn how to mathematically understand the differences.
<snip>
What other information do I need to gather and how do I use it to arrive at an understanding?
If you are interested in how many pixels you have across a subject, it is probably easier to work with linear pixel resolution than with areas. AiryDiscus has explained the basics.

For your Canon 600 mm f/4 + 1.4x TC, focal length is 840 mm, pixel pitch is 0.00414 mm, so each pixel corresponds to 4.93 micro-radians - or 4.93 mm for a subject at 1000 m.

For the Panasonic GH4 + 20 mm f/1.7, pixel pitch is 0.00375 mm, corresponding to 188 micro-radians. This reduces to 3.13 micro-radians when combined with your 60x telescope.

The Kowa digiscope option puts 1.58 times more linear pixels across a subject - or 2.48x more over the same area. However, as Mark points out, you should also consider the effects of diffraction and lens aberrations.
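
For anyone who wants to reproduce these figures, a small-angle Python sketch (numbers as above):

```python
# Per-pixel angular subtense, small-angle approximation (radians):
canon     = 0.00414 / 840  # 840 mm lens, 4.14 um pixels -> ~4.93e-6 rad
gh4_bare  = 0.00375 / 20   # 20 mm lens, 3.75 um pixels  -> ~1.88e-4 rad
digiscope = gh4_bare / 60  # behind the 60x scope        -> ~3.13e-6 rad

print(canon / digiscope)         # ~1.58x more linear pixels on the subject
print((canon / digiscope) ** 2)  # ~2.48x more pixels over the same area
```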

Diffraction broadens a point source into an image disc roughly equal in diameter to the F-number in microns.

More precisely, in the absence of lens aberrations, the radius of the first dark ring of the Airy disk corresponds to an object field angle of 1.22 x wavelength / LensDiameter.

For the Canon 600 mm f/4 (840 mm f/5.6) at 550 nm we have a diffraction-limited resolution of 4.47 micro-radians.

For the 88 mm Kowa telescope (1200 mm f/13.6 as digiscope) this is 7.62 micro-radians.

Applying a (very) crude root-sum-of-squares combination of pixel size and Rayleigh resolution, we have 7 micro-radians for the Canon combination, and 8 micro-radians for the digiscope solution. The balance could swing in either direction if you take into account lens aberrations and the limitations introduced by pixel pitch and colour filter demosaicing.
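
The same comparison as a rough Python sketch (550 nm; apertures of 150 mm and 88 mm, and the per-pixel subtense figures from above):

```python
import math

def rayleigh_urad(wavelength_nm, aperture_mm):
    """Angular radius of the first Airy dark ring: 1.22 * lambda / D."""
    return 1.22 * (wavelength_nm * 1e-9) / (aperture_mm * 1e-3) * 1e6

canon_diff = rayleigh_urad(550, 150)  # 840 mm f/5.6 -> 150 mm aperture, ~4.47 urad
kowa_diff  = rayleigh_urad(550, 88)   # 88 mm Kowa objective,            ~7.62 urad

# Crude root-sum-of-squares with the per-pixel subtense figures above:
print(math.hypot(4.93, canon_diff))  # ~6.7 urad for the Canon combination
print(math.hypot(3.13, kowa_diff))   # ~8.2 urad for the digiscope
```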

The diffraction limit is fundamental, and dominated by the lens diameter. A 50 Mp m43 camera would give only a marginal improvement - though you could reduce the magnification and increase the field of view for the same subject resolution.

If you restrict the digiscope magnification to 40x at 16 Mp, you will get sharper-looking images, but capture rather less subject detail, as the image will be smaller. At 60x the images will be rather softer, but may respond well to sharpening in post-processing.

The Canon combination is limited somewhat by the sensor resolution. A good quality 2x converter (or another 1.4x converter) should improve the resolution - with some reduction in "pixel-level sharpness" but less risk of aliasing and colour Moire.

If you can live with the size and weight, the larger aperture of the Canon captures roughly 3x more light, potentially offering better SNR or faster shutter speeds.
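
The light-gathering factor follows from the ratio of aperture areas, using the 150 mm and 88 mm apertures above:

```python
print((150 / 88) ** 2)  # ~2.9x more light through the Canon's larger aperture
```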

Hope this helps - life is full of compromises.

--
Alan Robinson
 
... We photographed the same subject from the same distance and he shared one of his photos with me. ...
Therefore, using those two photographs, you have a way of determining the relative magnification of the two systems.


--
Bill ( Your trusted source for independent sensor data at http://www.photonstophotos.net )
 
The underlined section is fundamentally incorrect. The cropping effect has nothing to do with pixel density. The sensor in an APS sized camera is physically smaller than that of a full frame camera, thus giving a narrower field of view or "telephoto" effect. Nothing more and nothing less.
This has led me to understand that the often-mentioned "1.6x" characteristic, and the idea that APS-C sensors have an increased telephoto effect when compared to full frame cameras, is only applicable if the APS-C camera has a higher density of smaller pixels on its sensor.
 
I invite you to re-read what you quoted.

It may be helpful if you also review the paragraph that preceded the passage which you quoted.
 
I read it several times, but could not understand how you could end up writing the underlined statement. I invite you to read what I wrote down because the concept is really simple.
 
One thing that has to be considered when tele-photographing small objects that do not fill the frame - and mister sunshine has correctly addressed this, albeit in a more general way - is that, whatever the size of the sensor, for a given lens (or a given focal length) at a fixed distance, the subject occupies only a portion of the total frame, and its projected size is always the same. Never mind "equivalent focal length": the setup I described gives an image of the subject (not the whole scene) with the same linear dimension (as pointed out above), whether you are using a full frame sensor, an APS-C one or a MFT sensor.

So, apart from optical considerations (lens quality, or how well focussed), and apart from exposure considerations or noise, what matters is how many pixels your subject spans in the frame, because you are going to crop it, even if only a bit. A bird photographer in these forums coined the term "ppd - pixels per duck" a few years ago to describe this.

BTW, the same can be said, ipsis verbis, about macro photography: just substitute "spider" for "duck".
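
A sketch of the "pixels per duck" idea in Python (the 200 mm bird at 50 m is a made-up example, not a measured case):

```python
import math

def pixels_on_subject(subject_mm, distance_m, pixel_pitch_mm, focal_length_mm):
    """Linear pixels spanned by a subject of known size at a known distance."""
    subject_angle = math.atan(subject_mm * 1e-3 / distance_m)    # radians
    pixel_angle   = math.atan(pixel_pitch_mm / focal_length_mm)  # radians
    return subject_angle / pixel_angle

# A hypothetical 200 mm bird at 50 m with the 840 mm / 5DSR combination:
print(pixels_on_subject(200, 50, 0.00414, 840))  # ~810 pixels per duck
```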
 
