winnegehetoch
Leading Member
I can with your raw example (not the JPG example, which is where I looked). No. Could RawDigger be used to replace G1 with G2 content?
Just as an experiment!
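As far as I know, RawDigger is an analysis tool rather than a raw editor, so I doubt it can write a modified file back out. But the substitution itself is trivial to sketch. Here is a minimal NumPy example on a tiny synthetic RGGB mosaic (with a real file, something like rawpy's raw.raw_image would supply the mosaic instead; the array names and the 4x4 toy data are mine):

```python
import numpy as np

# Synthetic 4x4 RGGB Bayer mosaic. Each 2x2 tile is laid out as:
#   R  G1
#   G2 B
# so G1 sites sit at (even row, odd col) and G2 at (odd row, even col).
mosaic = np.arange(16, dtype=np.uint16).reshape(4, 4)

swapped = mosaic.copy()
# Overwrite every G1 sample with the G2 sample from the SAME 2x2 tile
# (the pixel diagonally below-left of it).
swapped[0::2, 1::2] = mosaic[1::2, 0::2]

print(mosaic)
print(swapped)
```

If banding survived such a G1→G2 substitution, it would have to come from something other than a difference between the two green channels.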
I thought you couldn't see a problem.
I haven’t downloaded the RAW example. I’m not inspecting an image where the user can’t even be bothered to indicate (a simple rectangle or circle would do) where the “interesting parts” are found.
I had to dramatically expand the size of your samples to see the “lines” at all.
I do not dispute their existence, but I ask myself what you have to do to spot the “imperfection” in real life, if you’re not pixel-peeping.
Let’s take a simple example.
The 6000x4000-pixel final image will roughly demand an 8K monitor, or about a 156% enlargement over fit-to-width on a 4K monitor, for a somewhat “pixel-to-pixel” (if partial) view (and yes, I know monitors most often also use RGB dots of some form or other; if only we could map raw Bayer images to the monitor’s RGB matrix directly… ;-)
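That arithmetic is easy to sanity-check (standard UHD monitor widths assumed):

```python
# Magnification needed for a 1:1 (pixel-for-pixel) view of a
# 6000-pixel-wide image, relative to "fit to width" on each monitor.
image_w = 6000          # image pixels across
uhd4k_w = 3840          # 4K UHD monitor width
uhd8k_w = 7680          # 8K UHD monitor width

# On a 4K monitor, "fit to width" shows the file reduced:
fit_zoom = uhd4k_w / image_w       # ~0.64, i.e. whole width at ~64%
# so a 1:1 view is this much bigger than fit-to-width:
enlargement = image_w / uhd4k_w    # ~1.56x

print(f"4K fit-to-width zoom: {fit_zoom:.0%}")
print(f"1:1 over fit-to-width: {enlargement:.2f}x")
print(f"8K shows the full width at 1:1: {uhd8k_w >= image_w}")
```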
How easy is it to spot the “unenhanced”, specially filtered and “non-enlarged” imperfections in real life at ordinary viewing distances?
I’m not disputing the facts, nor the periodic need to crop/enlarge images for further use under certain conditions. But how large an area of the actual RAW file will actually show visible “phase-detection lines” AFTER conversion for display (e.g. via Lightroom) to 16-bit TIFF, PNG, or the rather limited JPEG (even at 95% “quality”)?
Let’s be honest: the imperfections exist, and your G1 example can illustrate the problem on my iPad Pro 10.5. If I enlarge your sample to nearly screen width and use my preferred viewing distance, it’s borderline visible. Bigger enlargement, or snout to screen: easy! Your sample, good as it is, is hardly a “normal” image designed for public view under normal viewing conditions.
Could Panasonic do better? Certainly.
Can the effect be removed completely? Doubtful. Don’t know, really.
The “lines/banding” exists. Not disputed.
Is it “easily detectable on a monitor”? Yes, but what do you have to do to make it easily detected (how big an enlargement, what other manipulations, how exceptional the capturing conditions, etc.)? Especially for onlookers who do not KNOW about, and are not specifically LOOKING FOR, imperfections of a known nature?
Then the question becomes: is any normal digital mirrorless camera with current on-sensor phase detection good enough, even at several times the 24-megapixel resolution of the S5II cameras? Or, more directly: was the right tool selected for the purpose?
The flaw exists, probably on any mirrorless camera with sensor-based phase detection. The only differences are probably how good the masking is during “development”, and how many “blemishes” are placed on the sensor, and in what pattern. In linear form they are far easier to spot than in “pseudo-random” patterns scattered here, there and everywhere across the sensor (the latter probably far more costly to produce, for all I know).
Is it EASILY detected under “normal” viewing conditions, not knowing that imperfections should exist, and without being directed to the parts where they would be?
That’s my point.
Whether the S5II can meet the quality requirements of the OP, only he can decide. I get the impression that he is not all too happy.
So? Find another and better camera.
Just one example: few if any of the old DSLR cameras had phase-detection pixels in their sensors. They had other problems instead, especially the need to adjust front/back focus on a per-lens basis (and at multiple focal lengths for zooms), which drove some users nuts, while others said “focus not absolutely pin-point perfect every time, but almost always very good”. Add mirror slap, which had to be controlled one way or another. Etc.
I’m not well versed in large-format camera sensor design solutions, so I’m not entering that particular “bees’ nest” without physical protection ;-)
I really value your technical approach of illustrating reality in one form or other. It makes it easier to make up my own mind based on real-life examples, enhanced or not.
Regards
