Ab Latchin
Senior Member
I just finished editing some images for my brother's new album release; the main image was shot on a GFX 50S.
I wasn't the photographer, but between my brother's requests to the photographer and the designer working on the project, his print proof came back as a shock to him, so he asked me to do some editing and prepress for him.
When I got the RAW file, it had been shot at ISO 1250 and was very underexposed. I couldn't say why, as I wasn't at the shoot, but it required a lot of exposure adjustment before the final look was applied.
On my drive home it got me thinking. As some of you know from my last post, I'm considering combining 4/3rds with this mini medium format instead of FF. Per the usual equivalence arguments here, for the same total light gathered there is very little IQ difference between formats of a similar age or technology.
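To put rough numbers on that, here's a little sketch of the equivalence arithmetic. The sensor dimensions are the commonly published approximate values, and the example focal lengths, apertures and ISOs are just illustrative picks, not anyone's actual settings:

```python
import math

# Sensor dimensions in mm (commonly published values; treat as approximate)
FORMATS = {
    "MFT":     (17.3, 13.0),
    "FF":      (36.0, 24.0),
    "GFX 50S": (43.8, 32.9),
}

FF_DIAG = math.hypot(*FORMATS["FF"])  # full-frame diagonal, the reference

def equivalent_settings(fmt, focal_mm, f_number, iso):
    """Scale focal length, f-number and ISO so that field of view,
    depth of field and total light on the sensor all match full frame."""
    crop = FF_DIAG / math.hypot(*FORMATS[fmt])  # crop factor vs FF
    return {
        "crop factor": round(crop, 2),
        "FF-equiv focal": round(focal_mm * crop, 1),
        "FF-equiv f-number": round(f_number * crop, 2),
        "FF-equiv ISO": round(iso * crop**2),
    }

# Illustrative settings only: a GFX shot and an MFT shot that land on
# roughly the same FF-equivalent framing, DOF and total light
print("GFX 50S:", equivalent_settings("GFX 50S", 63, 4.0, 100))
print("MFT:    ", equivalent_settings("MFT", 25, 1.4, 100))
```

Run that and the GFX comes out around a 0.79x crop (so 63mm f/4 behaves like ~50mm f/3.2 at ~ISO 62 in FF terms), while MFT at 25mm f/1.4 lands on ~50mm f/2.8 ISO 400. Same total light, same framing, so per equivalence the files should be very close.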
But as I've been listening to FF users comparing the GFX bodies to their FF gear, they speak about how the larger sensor and greater bit depth produce a better overall file, with tonality, DR and colour being brought up a lot.
Here's where my thinking went, and where I want to hear your thoughts. If IQ pivots entirely on total light gathered rather than sensor size, this must hold between FF and GFX bodies too. If they were to shoot with the same total light, would they still see a difference? Does a 16-bit file produce a better image from the same amount of data (light)?
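To make the question concrete, here's a toy calculation comparing photon shot noise against ADC quantisation noise at 14 vs 16 bits. The full-well figure of 60,000 electrons is an assumption I've picked as plausible for a modern ~5 micron pixel, the ADC is assumed to map linearly onto the full well, and read noise is ignored, so take it as a sketch rather than a measurement:

```python
import math

FULL_WELL = 60_000  # electrons; an assumed, illustrative full-well capacity

def noise_comparison(signal_e, bits):
    """Compare photon shot noise to quantisation noise at a given signal
    level, assuming the ADC range maps linearly onto the full well and
    ignoring read noise."""
    shot_noise = math.sqrt(signal_e)       # photon shot noise (electrons)
    step = FULL_WELL / 2**bits             # electrons per ADC count
    quant_noise = step / math.sqrt(12)     # RMS quantisation noise (electrons)
    return shot_noise, quant_noise

for bits in (14, 16):
    for signal in (100, 1_000, 60_000):
        shot, quant = noise_comparison(signal, bits)
        print(f"{bits}-bit, {signal:>6} e-: shot={shot:7.1f}  quant={quant:5.2f}")
```

Under those assumptions, even at 14 bits the quantisation noise (~1 e- RMS) is an order of magnitude below the shot noise at a deep-shadow signal of 100 electrons (~10 e-), which would suggest the extra two bits change very little if the total light is the same. That's the intuition I want checked.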
Is that making sense?