There are a lot of people who, even after all this time, are still dwelling on the film legacy. Back then, photography jargon was full of meaninglessly subjective nonsense, because no one really understood colour film chemistry.
I don't engage in other hobbies to the same extent that I do with photography, but for non-pros, cameras seem a lot like hi-fi and cars. It's more about bar-room bragging rights than useful differences. Real world performance is subject to uncontrolled variables that are often more significant, such as the driver, the acoustics of a room, or where you got your film processing and printing done.
The medium format look, to me anyway, had far more to do with resolution, or grain per unit area, than anything else. After all, the emulsion was the same, so the only difference between Provia 100F in 35mm, 67 and 4X5 formats was the size of the piece of film.
I think the crucial factor is the degree of enlargement, rather than film or sensor size as such.
If you mean the ratio between the size of the image and the size of the negative, then yes. But we also have to take human acuity and contrast sensitivity into account. At high angular resolution (ie in small prints) pretty much anything looks good.
But if you make three decent sized prints - say 30X40", from 35mm, 67 and 4X5 all using the same film type, and view them at a reasonably close distance, like 20", the 35mm will look awful, the 67 will look OK, and the 4X5 will look great.
The fact that such prints are easily possible with a good APSC sensor at low ISO just shows how far we have come.
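To put rough numbers on the enlargements involved, here is a quick back-of-the-envelope sketch (my figures, using nominal frame sizes, so treat them as approximate):

```python
# Linear enlargement factor needed to reach a 40 inch long edge
# from three nominal film formats (frame sizes are approximate).
formats_mm = {
    "35mm (24x36)": 36,
    "6x7 (56x69)": 69,
    "4x5 (95x120)": 120,
}
print_long_edge_mm = 40 * 25.4  # 40 inches in mm

for name, long_edge_mm in formats_mm.items():
    factor = print_long_edge_mm / long_edge_mm
    print(f"{name}: ~{factor:.0f}x linear enlargement")
```

By the 10-12x rule of thumb mentioned below, 4X5 sits comfortably inside the limit, 67 is somewhat over it, and 35mm is far beyond it, which lines up with the awful/OK/great ranking.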
In film there is an old rule of thumb that says something like: you get a good print if you do not enlarge more than 10-12 times the size of the negative.
About right, except that fine grained ISO100 negatives could be enlarged a lot more than ISO400 negatives.
With digital the dominant rule of thumb is something like: you get a good quality print if you have 254 or 300 pixels per inch to work with.
(both R-o-T, so don't make too much of the exact numbers)
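Putting both rules of thumb side by side in a quick sketch (nominal numbers, purely illustrative, as noted above):

```python
# Film rule of thumb: a "good" print stays within a 10-12x enlargement.
frame_long_edge_in = 36 / 25.4  # 35mm frame, 36 mm long edge, in inches
for factor in (10, 12):
    print(f"{factor}x from 35mm -> ~{frame_long_edge_in * factor:.0f} inch long edge")

# Digital rule of thumb: a "good" print needs roughly 254-300 PPI.
width_px, height_px = 6000, 4000  # a typical 24 MP sensor
for ppi in (254, 300):
    print(f"{ppi} PPI from 24 MP -> ~{width_px / ppi:.0f} x {height_px / ppi:.0f} inches")
```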
Again, noise/grain is a factor. Smaller sensors have more noise...
Q: to what extent is 'enlargement' in relation to the sensor size still an important factor?
...so you could regard each doubling in sensor area as equivalent to halving the ISO of the film in terms of equivalent grain.
But the threshold of grain tolerance is not the same as the threshold of detail detection, so it can all get rather complicated...
Hi,
Having shot film for something like 30 years and scanned thousands of film images, I would say that film was a different animal from digital.
Best regards
Erik
I agree, but I was careful to qualify what I said.
In noise/grain terms, doubling ISO is roughly equivalent to halving the sensor area, or halving the negative size on film. Of course, different film emulsions were hard to compare, so ISO ratings for different films were even less reliable than on digital cameras.
I didn't say the film and digital images will look the same. I can print A2 images from my APSC camera that look better than A4 prints from 35mm, but the main difference is grain, not resolution.
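To put rough numbers on that equivalence (my own illustration, using nominal sensor and film areas; the mapping is only a rule of thumb):

```python
import math

# Each doubling of area is treated as one stop of "grain equivalence",
# i.e. roughly one halving of ISO for comparable grain/noise.
areas_mm2 = {
    "APS-C": 23.5 * 15.6,
    "35mm full frame": 36 * 24,
    "6x7 film": 69 * 56,
    "4x5 film": 120 * 95,
}
reference = areas_mm2["35mm full frame"]

for name, area in areas_mm2.items():
    stops = math.log2(area / reference)
    print(f"{name}: {stops:+.1f} stops vs 35mm")
```

On that reckoning, ISO 400 film in 6x7 would show roughly the grain of ISO ~90 film in 35mm, which is the sense in which the main difference is grain rather than resolution.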
Hi,
My intention was not to be negative. It is just that I feel that film grain and noise on digital cameras are different.
Sorry if I came across as a bit tetchy. I agree they look different. Sufficiently so that 'grain' filters don't add pixel noise, but carefully crafted 'filmic grain'. But AFAIK, the main difference is its variable size (larger than sensor noise and geometrically random).
However, when I scanned negatives, the scanner noise also added to the problem, and of course there is printer dithering on top of that, so in the end the noise is not easy to analyse.
Just to say, I have no issue with larger formats having an advantage.
In a way, that advantage is demonstrable. You can take two of DPReview's studio scene shots and measure SNR, DR, MTF or whatever is of interest.
In other words, it's just physics. It's mostly just resolution vs. noise.
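For instance, SNR on one of those test shots can be estimated crudely from a nominally uniform patch, something like the sketch below (my own illustration; real measurement protocols are more careful about black level and per-channel analysis):

```python
import numpy as np

def patch_snr_db(patch: np.ndarray) -> float:
    """Crude SNR estimate (in dB) for a nominally uniform patch:
    mean signal over standard deviation."""
    patch = patch.astype(np.float64)
    return 20 * np.log10(patch.mean() / patch.std())

# Illustrative use on a synthetic grey patch with Gaussian noise.
rng = np.random.default_rng(0)
patch = rng.normal(loc=1000.0, scale=25.0, size=(200, 200))
print(f"SNR ~ {patch_snr_db(patch):.1f} dB")
```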
Also, it seems that at least Fujifilm makes truly excellent lenses for the GFX cameras, the Hasselblad X-lenses are probably also very good. But, all MFD lenses are probably not created equal.
What confuses me more are claims like better DR in the highlights, which is not a consequence of sensor size but possibly of exposure strategy. Or the frequent claims of better color. Or the old 16-bit claim from Phase One and Hasselblad.
Almost all of that is pretty much contrafactual.
Another small example is that Hasselblad/Phase One proponents would always claim that the leaf shutter yields a huge advantage in certain situations where short shutter speeds are needed in combination with high flash output.
Fujifilm proponents may claim that HSS solves all those problems. But, what HSS does is to prolong the flash burn to roughly the curtain travel time, say 10 ms. If you shoot at 1/1000 s, that means that approximately 90% of the flash output is caught by the shutter blades rather than the sensor.
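The arithmetic behind that 90% figure, assuming the HSS burn lasts for the whole curtain travel (taken here as 10 ms) and the slit gives each point on the sensor only the nominal shutter time of light:

```python
# Fraction of the HSS flash output that actually exposes the sensor,
# assuming the flash burns for the full curtain travel time.
curtain_travel_s = 0.010  # assumed ~10 ms curtain travel
for shutter_s in (1 / 250, 1 / 1000, 1 / 4000):
    usable = min(1.0, shutter_s / curtain_travel_s)
    print(f"1/{round(1 / shutter_s)} s: ~{usable:.0%} of the flash used, "
          f"~{1 - usable:.0%} wasted")
```

A leaf shutter, by contrast, exposes the whole frame at once, so the full flash pulse is usable at any speed the shutter supports.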
Most folks don't need high power flash at sync speeds. But, that doesn't mean that leaf shutters are not beneficial for shooters having specific needs.
True - but leaf shutters limit max shutter speed and can cause vignetting at their maximum. Better for flash photography, arguably less good for landscapes...
Many times it is also claimed that MFD images stand out on screen. That may be due to different magical factors, like the 3D-pop, better color, better DR etc.
But, on screen viewing is pretty limited. To me it seems to have been demonstrated that images processed identically are normally very similar. Color rendition may differ of course, but that would mostly depend on color profiles.
Blind tests generally put the lie to this one, but see below...
I agree. There's a lot of pseudoscience and arm waving magical thinking going on.
Material differences derive mostly from resolution, but this does have a significant effect on colour because of Bayer interpolation. If you zoom in on all the cameras at 100%, they all look much the same, but there are fewer visible colour demosaicing errors as the angular resolution of the data increases.
In other words, we do see improvements in colour resolution if we over-sample the angular pixel resolution with respect to human sensitivity. Resolutions up to 100 pixels/deg should show some improvement in reproduction of details which have low green luminance (eg red/blue details on a neutral background).
100 pixels/degree at a close viewing distance (say 30 cm or 1 ft) is about 480 PPI. For a 24X18 inch image, that's about 100 megapixels.
But this will also show up on a large 4K display at a zoom ratio of about 50%.
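Checking that arithmetic with the same assumptions (100 pixels/degree, 30 cm viewing distance, 24x18 inch print):

```python
import math

pixels_per_degree = 100
viewing_distance_in = 30 / 2.54  # 30 cm in inches

# Width of one degree of view at that distance, then the required PPI.
degree_width_in = viewing_distance_in * math.tan(math.radians(1))
ppi = pixels_per_degree / degree_width_in
print(f"Required resolution: ~{ppi:.0f} PPI")      # roughly 480-490 PPI

width_in, height_in = 24, 18
megapixels = (width_in * ppi) * (height_in * ppi) / 1e6
print(f"24x18 inch print: ~{megapixels:.0f} MP")   # roughly 100 MP
```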
It's also possible (though unprovable) that the larger 33X44 format allows for some sacrifice of QE and SNR in exchange for more selective colour filters. Less aggressive adjustment during the colour transformation would then reduce channel noise.
However, I agree that by the time you account for all the other factors, the influence is likely to be minor.
What does seem to be true is that there is an increasingly conservative metering of midtones and subsequent tone adjustment to improve the overall response curve, again sacrificing SNR but simulating the film response above zone V. In other words, better highlight retention.
This of course is something we can engineer by simply adjusting exposure, but it's easier to do psychologically if we get a better rendition in the viewfinder during composition.
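A minimal sketch of that strategy, purely as an illustration and not any particular camera's processing: meter a stop or so conservatively to protect the highlights, then lift the midtones back with a tone curve so the brightest tones are compressed rather than clipped.

```python
import numpy as np

def protect_highlights(linear: np.ndarray, exposure_comp_stops: float = -1.0,
                       gamma: float = 0.7) -> np.ndarray:
    """Toy highlight-retention pipeline:
    1) simulate conservative metering by scaling the linear values down,
    2) lift midtones back with a simple power-law tone curve."""
    exposed = np.clip(linear * 2.0 ** exposure_comp_stops, 0.0, 1.0)
    return exposed ** gamma

# A linear grey ramp running from shadows to just past clipping.
ramp = np.linspace(0.0, 1.5, 7)
naive = np.clip(ramp, 0.0, 1.0)      # straight exposure: highlights clip at 1.0
curved = protect_highlights(ramp)    # conservative exposure + tone curve
print(np.round(naive, 2))            # the top two values are both clipped to 1.0
print(np.round(curved, 2))           # the top values keep their separation
```

The point is only that the highlight separation survives; the real trade-off, as noted above, is the SNR cost of the lower exposure.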