S5 4256x2848 RAW - How?

Started Mar 15, 2007 | Discussions thread
Moshe Vainer Contributing Member • Posts: 819
Answer on why more resolution on fuji

Didn't see your other question, so here's an explanation of why people say there's more resolution in the Fuji than in other 6MP cameras.

It boils down to two things:

1. Resolution is not counted in MP; it is counted in how much information the image holds. Traditionally this has been measured in resolved lines (see the resolution targets in Phil's reviews, or in other reviews). To explain, here are a couple of examples:

a. Different cameras employ anti-alias filters of different strengths (and some have no anti-alias filter at all, at the cost of introducing aliasing, AKA false detail). These filters optically reduce the information arriving at the sensor, so no matter how much the sensor could resolve, detail that is cut before it reaches the sensor will not be present in the image.
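To see why cutting detail before the sensor can be a good thing, here's a toy 1-D sketch of my own (nothing to do with any real camera's filter): a pattern finer than the pixel grid can resolve produces large false detail when sampled raw, but averaging over each pixel's width first (a crude stand-in for an optical anti-alias filter) suppresses it.

```python
import math

def sample(signal_freq, n_pixels, pre_blur_taps=1):
    """Sample cos(2*pi*f*x) on n_pixels pixels over [0, 1).
    pre_blur_taps > 1 averages fine sub-samples across each pixel,
    crudely mimicking an anti-alias filter in front of the sensor."""
    out = []
    for p in range(n_pixels):
        acc = 0.0
        for t in range(pre_blur_taps):
            x = (p + (t + 0.5) / pre_blur_taps) / n_pixels
            acc += math.cos(2 * math.pi * signal_freq * x)
        out.append(acc / pre_blur_taps)
    return out

# A 36-cycle pattern on a 32-pixel sensor is beyond Nyquist (16 cycles),
# so the raw samples contain a large false low-frequency wave (aliasing):
raw = sample(36, 32)
# Pre-blurring across each pixel attenuates that false detail strongly:
filtered = sample(36, 32, pre_blur_taps=16)

print(max(abs(v) for v in raw))       # close to full amplitude: false detail
print(max(abs(v) for v in filtered))  # much smaller: aliasing suppressed
```

The trade-off is exactly the one described above: the blur that kills the false detail also softens real detail near the limit.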

b. Most sensors resolve one colour at each pixel location, and then (admittedly this is a simplification) mesh together neighbouring pixels to get the full colour information. As a result, images from traditional Bayer-pattern cameras are less detailed than what you'd get in, say, a B&W photo if the camera didn't have a Bayer pattern. Foveon sensors, for example (used in Sigma cameras), don't have a Bayer pattern and resolve all three colours at each pixel location, so they have higher per-pixel resolution than Bayer-based cameras (though obviously a 16MP Bayer sensor will usually outresolve a 3MP Foveon sensor).
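A minimal sketch of that idea, assuming a made-up 1-D "GRGRGR" row instead of a real 2-D Bayer mosaic: each photosite records one colour, and the missing colour is estimated by averaging neighbours, so a one-pixel-wide detail gets smeared into its neighbours.

```python
def demosaic_row(samples, pattern):
    """samples[i] is the one measured channel at pixel i;
    pattern[i] is 'G' or 'R'. The missing channel at each pixel is
    filled in as the average of the nearest neighbours (a toy stand-in
    for real demosaicking)."""
    n = len(samples)
    full = []
    for i in range(n):
        measured = samples[i]
        nbrs = [samples[j] for j in (i - 1, i + 1) if 0 <= j < n]
        estimated = sum(nbrs) / len(nbrs)
        if pattern[i] == 'G':
            full.append((estimated, measured))  # (R, G)
        else:
            full.append((measured, estimated))
    return full

# A one-pixel-wide bright red detail (value 1.0 at pixel 3, else 0.0):
row = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
pattern = "GRGRGRG"
full = demosaic_row(row, pattern)
print(full)
```

The spike leaks into the estimated channel of the neighbouring pixels, so the reconstructed detail is wider (softer) than it would be on a sensor that measured every colour at every pixel.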

c. Cameras avoid noise in pictures by applying noise reduction techniques. Once again, generalizing and significantly simplifying, noise reduction eventually comes down to blurring, which means less detail. So if one sensor is inherently very noisy and another is less noisy (at the level of the chip itself), then to reach the same noise level the first has to blur more than the second, resulting in a less detailed image.
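Here's a quick toy demonstration of that trade-off (my own sketch, with blur standing in for noise reduction in general): a 3-tap box blur on a noisy step edge cuts the noise, but it also smears the edge, i.e. removes detail.

```python
import random

random.seed(0)

def box3(v):
    # 3-tap box blur with edge clamping
    return [(v[max(i - 1, 0)] + v[i] + v[min(i + 1, len(v) - 1)]) / 3
            for i in range(len(v))]

edge = [0.0] * 20 + [1.0] * 20               # the "real" detail: a sharp edge
noisy = [x + random.gauss(0, 0.1) for x in edge]
smoothed = box3(noisy)

def flat_rms(v):
    # noise level measured in the flat region well away from the edge
    m = sum(v[:15]) / 15
    return (sum((x - m) ** 2 for x in v[:15]) / 15) ** 0.5

noise_before = flat_rms(noisy)
noise_after = flat_rms(smoothed)
step_before = abs(noisy[20] - noisy[19])     # per-pixel edge contrast
step_after = abs(smoothed[20] - smoothed[19])

print(noise_before, noise_after)  # noise drops...
print(step_before, step_after)    # ...but so does the edge sharpness
```

A noisier chip needs a wider blur to hit the same noise figure, so it loses more of that edge contrast.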

2. The biggest difference in the Fuji sensor, however, comes from the fact that it is oriented differently than a conventional rectangular-grid sensor.
In a rectangular sensor, the pixels sit on a square grid, something like this:

[ ] [ ] [ ] [ ]
[ ] [ ] [ ] [ ]
[ ] [ ] [ ] [ ]
[ ] [ ] [ ] [ ]

If you draw this grid on a piece of paper (call the pixel pitch d), you'll find that adjacent horizontal rows are d apart, but adjacent diagonal rows of pixels are only d/sqrt(2) apart - a factor of sqrt(2), which comes straight from the famous Pythagorean theorem, a^2 + b^2 = c^2.
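You can check that factor with a couple of lines (my own sketch of the paper exercise, pixel pitch d = 1):

```python
import math

d = 1.0  # pixel pitch, arbitrary units

# Adjacent horizontal rows of the grid are simply d apart.
horizontal_row_spacing = d

# Adjacent diagonal rows are lines of slope 1 through (0, 0) and (1, 0);
# their perpendicular separation is d / sqrt(2), which follows from
# a^2 + b^2 = c^2 applied to the unit square's diagonal.
diagonal_row_spacing = d / math.sqrt(2)

print(horizontal_row_spacing / diagonal_row_spacing)  # 1.4142... = sqrt(2)
```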

However, on this planet we have a thing called gravity. Gravity causes natural objects to align along the vertical and horizontal axes rather than along diagonals.

Fuji noticed this strange behaviour of our planet, and decided it is better to optimize the sensor for this phenomenon.

Therefore, they rotated the pixel grid by 45 degrees. I'd have trouble drawing this with ASCII characters, but you can do the exercise on a piece of paper, or look up the Fuji sensor explanation in this site's reviews.

The result is that the rows are now sqrt(2) times closer along the vertical and horizontal axes, but sqrt(2) times more distant along the diagonals.

If your subjects obey the laws of gravity, you gain sqrt(2) times the resolution where it counts most, and lose sqrt(2) times the resolution where it should count less (hence the jaggies on diagonal lines).
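The rotation trick can be simulated directly (again a toy sketch of mine, not Fuji's actual layout): rotate a square grid of pixel centres, project them onto the horizontal axis, and look at how finely the positions sample that axis.

```python
import math

def x_spacings(angle_deg, n=4):
    """Project an n x n grid of unit-pitch pixel centres, rotated by
    angle_deg, onto the horizontal axis; return the spacings between
    the distinct sample positions along that axis."""
    a = math.radians(angle_deg)
    xs = sorted({round(c * math.cos(a) - r * math.sin(a), 9)
                 for r in range(n) for c in range(n)})
    return [round(hi - lo, 6) for lo, hi in zip(xs, xs[1:])]

print(x_spacings(0))   # upright grid: sample columns 1.0 apart
print(x_spacings(45))  # rotated grid: ~0.707107 apart, i.e. 1/sqrt(2)
```

The rotated grid samples the horizontal (and, by symmetry, vertical) axis sqrt(2) times more finely, at the cost of coarser sampling along the diagonals.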

See my other post in this thread on how this is further dealt with in raw conversion.

All the best,
