GFX100S MSRP at $5999 - Is the GFX line becoming a sensible upgrade for X-series owners?

I believe some lenses render depth in images better than others. I also strongly believe it's about the right combination of distance (i.e. focal length), blur and light. Here are several images that I believe were taken with lenses that render depth well:

5ddc58fe59c84956b38ca793bf6da219.jpg

1401da74fff84cb4a74b01984ba2cfcb.jpg

eeee59da79c44c71a75356f753fa707c.jpg
I beg to differ with the samples shown. To me, all of them have a busy background. In the last image, for example, the bokeh balls have defined edges; the same goes for the first.

This "sharpness" or presence of defined virtual objects in the deep background distracts unlike real experienced depth of the world, at least in my view. Probable source is the aperture mechanism of this or most lenses.

Well, in reality, our eyes render sharp backgrounds much better than most lenses under normal light conditions. Just try it yourself: look at your fingertip and side-glance at the background. No bokeh balls there :-). I'd also guess that in "P" mode our eyes sit at f/8 even on overcast days indoors :-).

a) So this bokeh thing is artificial either way.

b) There is a different look in this case, and I'd claim it's a lens thing, not a sensor-size thing, unless proven otherwise :-)

Busy, unevenly bright bokeh disks

Very smooth, evenly bright bokeh disks

As you can see here, the "butt" image offers a much less distracting background, even though the dog image has an out-of-focus background too.

Here is an example showing the difference between the "trashy" Zeiss 16-70 and the cheap 18-105 at similar settings on the same camera. If you pay attention to the background, the 16-70 renders it in a less distracting way, so lens design has a big effect even on the same camera and sensor.

http://luxorphotoart.blogspot.com/2015/05/gear-stuff-zeiss-pop-well-not-in-this.html

@Dodkin:

In my experience, I saw this "MF" look when I did Brenizer shots at f/8 on APS-C. You still have low depth of field, but the bokeh highlights are reduced in size due to the combining of multiple shots into one and the small aperture. See the sun star on the white car's rear window and compare it to the GFX shots above.

[ATTACH alt="F8 Brenizer 4 shots - a different kind of "butt" :-)"]2699858[/ATTACH]
F8 Brenizer 4 shots - a different kind of "butt" :-)

This is different than the typical low DOF look with a wide open prime.

Nifty Fifty - Ignore the CA :-)

--
German/English Nex/A6000-Blog: http://luxorphotoart.blogspot.de/
 

I see your points, but I'm not sure the premises are completely accurate.

- The shots I uploaded were not all taken with the same lens. For example, if you look at shot no. 3, the background is much smoother.

- When it gets dark, turn on the lights in your bathroom and drink water directly from the tap - you will be so close to the tap that what you see will be out of focus. Then tell me whether or not there are specular highlights with outlining. ;-) Our eyes' rendering is much rougher than expected, but I believe our brain has a good image-correction profile for them. :-)
 
Good video.

The movie industry has utilized the improved dimensionality of larger formats for decades, and has spent a lot of money to get that extra lifelike look into its content.

The Revenant is a recent example; The Hateful Eight would be another, 2001, etc.
Actually, there have been several films in recent years that particularly appealed to me with their visual quality. I discovered that all of them were shot on large format film or sensors. 🙂

On the other hand, in photography I wouldn't be able to guess sensor size in a blind test. The difference between APS-C and FF, or FF and Fuji MF, is so small to me in real-world pictures that I find it mostly a matter of resolution.
That may be caused by the fact that most movies shot on film, as well as those shot with the first digital cinema cameras, including the Arri Alexa, were shot in the "Super 35" format, which is a tad smaller than APS-C.

It is also important to keep in mind that a big portion of movies are shot wide on anamorphic lenses, which resolve things quite differently, so the jump in sensor size has definitely had an effect on the overall look.

https://www.provideocoalition.com/quantifying-the-large-format-look/

(The jump in sensor sizes has also allowed the use of some truly magnificent vintage lenses on digital, such as the Super Panavision 70 series, originally designed for the Panavision Ultra 70mm film cameras. You of all people know how much influence lenses have on the final look ;-).)

Now what's also interesting is that most of the so-called large format cameras essentially have FF sensors, while the "biggest and baddest" of all, the Arri Alexa 65 6K, has one large sensor, which is as wide as two FF sensors placed side by side. If that reminds you of something, that's because the Hasselblad XPan / Fuji TX-1 did it some 20 years ago.
Hmm, it may be the lenses then. With truly large format cameras, the lenses can be not only soft in the background but also dead sharp within the focus plane. Maybe that's the key if one is looking for the 'spacious' feel. And also the right choice of focal length, I guess.
The physics of a lens is more or less determined by its focal length; the field of view of a camera is determined by the focal length together with the sensor size. For example, a 35 on APS-C produces the same FOV as a 50 on FF. On 4x5 that FOV is produced by roughly a 180 mm (may not be exact, but I'm too lazy to do the calculation right now, and it is close). But a 180 mm lens has a lot more axial magnification than a 35 mm lens, and the projected image appears different.
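If anyone wants to sanity-check those numbers, here's a minimal sketch of the diagonal crop-factor arithmetic; the sensor dimensions are nominal values I've assumed (the usable 4x5 area especially varies):

```python
# A rough check of equivalent focal lengths across formats, using the
# ratio of sensor diagonals (the usual crop-factor definition).
import math

FORMATS_MM = {
    "APS-C": (23.5, 15.6),
    "FF":    (36.0, 24.0),
    "GFX":   (43.8, 32.9),
    "4x5":   (120.0, 96.0),  # nominal usable film area
}

def diag(fmt):
    w, h = FORMATS_MM[fmt]
    return math.hypot(w, h)

def equivalent_focal(f_mm, src, dst):
    """Focal length on dst giving the same diagonal FOV as f_mm on src."""
    return f_mm * diag(dst) / diag(src)

for dst in ("FF", "GFX", "4x5"):
    print(f"35 mm on APS-C ~ {equivalent_focal(35.0, 'APS-C', dst):.0f} mm on {dst}")
```

With these dimensions it comes out around 190 mm on 4x5, in the same ballpark as the ~180 mm quoted above.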
 
I see your points, but I'm not sure the premises are completely accurate.

- The shots I uploaded were not all taken with the same lens. For example, if you look at shot no. 3, the background is much smoother.

- When it gets dark, turn on the lights in your bathroom and drink water directly from the tap - you will be so close to the tap that what you see will be out of focus. Then tell me whether or not there are specular highlights with outlining. ;-) Our eyes' rendering is much rougher than expected, but I believe our brain has a good image-correction profile for them. :-)
My point was that these distractions are visible in bright outdoor conditions, where our eyes would still be quite comfortable stopping down their aperture. So I think our current toneh :-) mania overdoes how we actually see.

And in my opinion your samples show a different visual quality from the GFX ones because of that. I don't claim to have better shots, but I think the Brenizer example illustrates a bit of what is going on.

If I crop into a low-DOF shot, the bokeh discs get larger. So if I use a wide-open prime lens on a smaller format, this becomes more noticeable. The reverse happens, of course, when going to a larger sensor format. To compensate for the crop, one can move further away from the subject and open the aperture further, but this has non-linear effects on the image.
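To put a number on that, here is a minimal sketch using the thin-lens background-blur approximation; the 50mm f/1.8 at 2 m is an illustrative choice of mine, not one of the shots in this thread:

```python
# Thin-lens estimate of the blur disc (background at infinity) as a
# fraction of frame width: cropping keeps the disc the same physical
# size but shrinks the frame, so the disc grows relative to the frame.
def blur_disc_mm(focal_mm, f_number, subject_dist_mm):
    # d = f^2 / (N * (s - f)) for a background point at infinity,
    # with the lens focused on a subject at distance s
    return focal_mm ** 2 / (f_number * (subject_dist_mm - focal_mm))

disc = blur_disc_mm(50.0, 1.8, 2000.0)  # 50/1.8 wide open, subject at 2 m

for label, frame_w_mm in (("full 36 mm frame", 36.0), ("cropped to 24 mm", 24.0)):
    print(f"{label}: {disc:.2f} mm disc = {100 * disc / frame_w_mm:.1f}% of frame width")
```

Same physical disc, but it goes from about 2% to about 3% of the frame width once you crop, which is the effect described above.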
 
The write-up I gave you in the body of this thread is about as good as you'll find anywhere.

Most internet 'comparisons' have only demonstrated that photographers looking to disprove something can take dimensionally flat images across multiple formats, even with really good equipment - which is brave of them to admit, but there you go - maybe it makes the point for them.

None of them ever takes an MF image with clear depth and dimension and then demonstrates how they could reproduce it on a smaller format.

If you can't/won't acknowledge the difference it's a moot point - you saved yourself the cost of the upgrade.

19936cb41d9e485283e106c7be6b4b82.jpg
Oh, I remember a video from a fabulous YouTube channel called mediadivision - it's video-focused, but I recommend it to everyone; the videos are of incredibly high quality in terms of content. He speaks of dimensionality too, and when it's taken to an extreme as he did, one really can see a little more true-to-life rendering:

One very important fact that is often missing, which Chris points out and the video shows, is that photographs do not obey the intuition of classical Euclidean geometry. When one takes a photograph, one is projecting a volume of three-dimensional space onto a plane; projective geometry is the appropriate mathematics for describing photography. That was true for the photo interpreter analyzing reconnaissance photographs, and it remains true for engineers working in computer vision today.

The camera sits at the "point at infinity" in projective space, and the lens projects the space within its angle of view onto the image plane. If we move the image plane, we change the image. In projective geometry, the concept of distance does not exist the way our intuition perceives it: we cannot measure the true distance of a fixed object from its projection, and the perceived distance changes as we change the orientation of the plane or the point at infinity.

This concept is extremely important in computer vision, for example when one is trying to estimate the actual size of an object.

"Who is taller?" Considering the link below - how tall is the woman in question.

https://dhoiem.web.engr.illinois.ed...2 - Projective Geometry and Camera Models.pdf

Euclidean geometry on the plane is insufficient to answer the question because length is not preserved.

The change of sensor size shown in this video demonstrates that a projective transform relates the images produced by the two sensors. It is obvious that size is not preserved. Our brains have no trouble dealing with this, since they have been trained from birth to perceive the world in projective space. But as computer vision becomes necessary in a more autonomous world where machines have to be able to see, the ability of computers to transform what cameras see into estimates of space, object location and size in a three-dimensional world becomes more and more important.

https://hal.inria.fr/inria-00548361/document

Yes, the world is viewed differently through different lenses. The world is viewed differently as the sensor size changes. The expansion applied to the image from a smaller-format sensor to match that of a larger-format sensor does not preserve distance; that is, distances in the projective image are not preserved. That is what the video shows, and that is exactly what Chris is speaking of.
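A minimal sketch of that length non-preservation, assuming a simple pinhole model and illustrative numbers of my own:

```python
# Pinhole projection: two segments of identical 3D length, at different
# depths, project to very different lengths on the image plane.
def project(x, y, z, focal_mm):
    """Central projection through a pinhole at the origin onto the plane z = focal_mm."""
    return focal_mm * x / z, focal_mm * y / z

f = 50.0  # mm
segments = {"near (2 m away)": 2000.0, "far (8 m away)": 8000.0}

for name, z in segments.items():
    _, y0 = project(0.0, 0.0, z, f)
    _, y1 = project(0.0, 1000.0, z, f)  # a 1 m tall object
    print(f"{name}: 1000 mm in the world -> {abs(y1 - y0):.2f} mm in the image")
```

Both segments are 1000 mm in the world, yet one projects to 25 mm and the other to 6.25 mm; Euclidean length simply does not survive the projection.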

Yes, the medium format camera will view the world differently than the small format. The medium format will give a perception of more depth (not to mention smoother tonal gradations). The large format will expand on that. There is a huge difference between viewing a print from a 6x7 negative and one from a 35 mm negative. There is a reason traditional fine-art landscape photography is the purview of large format cameras, 4x5 and up. An image with the same field of view taken by a 12x20 view camera is vastly different from one taken by a 35 mm camera. The LF image has more depth, while the 35 mm seems flat.

Clyde Butcher, often called the Ansel Adams of the Everglades, is still alive and kicking. To quote Butcher: "I try to use the largest film possible for the particular subject I’m planning to photograph. So, if I have a huge, broad landscape, I use the 12×20 view camera. If I am photographing something like the Ghost Orchid I use a 4×5″ view camera." There is a reason for that.

https://clydebutcher.com

The GFX will provide a different feel for an image. The reason comes from the physics of how an image is produced, and the appropriate mathematics for describing that physics does not match the concept of Euclidean distance in the image plane.

Whether one can see it or not doesn't really matter. Whether one cares or not doesn't really matter. If it doesn't matter enough for one to want to drop the coin on a GFX 100 or a larger-format camera, that's their choice. If one is happy with how their APS-C camera renders the world, no problem. However, if we don't understand the fundamental projective geometry and teach our self-driving cars to work in projective space, then they are going to be running into each other. :-D

BTW, in radar imagery the raw data are equivalent to a "light field": the three-dimensional properties of the imaged sector of space are preserved in the raw radar returns. The image is focused onto an image plane after the fact through signal processing of the returns. The image plane can be changed and the returns refocused onto a new plane; the image can be formed on any arbitrary plane that does not include the pointing vector from the radar to the scene center. However, any two different image planes will produce different-looking versions of the scene, and distances and lengths are not preserved.

That fact makes radar imagery more difficult than optical imagery for a human to interpret.
Can you please help with the following hypothetical.

We build two cameras using sensors cut from the same source, one 33x44mm and one 24x36mm.

We mount the same lens on both cameras, and focus and frame them on a subject such that the sensors are concentric to the subject. The central portion of the larger sensor includes the entirety of the subject covered by the smaller sensor plus a border.

Within the area of overlap, it would seem that there can be no differences between the two images. There is no image expansion and no difference in any angles; one camera is simply cropping the image.

The only potential source of a perceived difference in “look” would be a result of the additional subject captured by the larger sensor.

In the long dimension, the larger sensor is 22% bigger.

Can you please explain the theory as to how this would lead to the phenomenon of easily observed differences between the two images in terms of "3D pop", etc.?

Also, what is the threshold, in terms of sensor-size ratio, where these phenomena emerge?

Thank you.
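For reference, a quick sketch of the size ratios in this hypothetical (nominal dimensions):

```python
# Size ratios between the two hypothetical sensors (mm, nominal).
import math

w1, h1 = 44.0, 33.0   # the larger sensor
w2, h2 = 36.0, 24.0   # the smaller sensor

print(f"long edge: {w1 / w2:.3f}x ({100 * (w1 / w2 - 1):.0f}% bigger)")
print(f"diagonal:  {math.hypot(w1, h1) / math.hypot(w2, h2):.3f}x")
print(f"area:      {(w1 * h1) / (w2 * h2):.2f}x")
```

Note that the diagonal ratio (about 1.27x) is larger than the long-edge ratio (1.22x) because the aspect ratios differ (4:3 vs 3:2).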
It seems that when the difference between sensor sizes is big enough, there is a difference in perspective:

Could you please summarize, using the example I gave?
 
From watching, I think that in summary there is hardly any difference (aside from resolution and noise, as expected). There seems to be a difference in perspective for the very large format, but I'm not sure what benefit, if any, that is supposed to have on the artistic rendering.
 
Hmm, it may be the lenses then. With truly large format cameras, the lenses can be not only soft in the background but also dead sharp within the focus plane. Maybe that's the key if one is looking for the 'spacious' feel. And also the right choice of focal length, I guess.
The physics of a lens is more or less determined by its focal length; the field of view of a camera is determined by the focal length together with the sensor size. For example, a 35 on APS-C produces the same FOV as a 50 on FF. On 4x5 that FOV is produced by roughly a 180 mm (may not be exact, but I'm too lazy to do the calculation right now, and it is close). But a 180 mm lens has a lot more axial magnification than a 35 mm lens, and the projected image appears different.
What are the "physics"?

The look of a picture is determined by the combination of effective focal length, aperture and lens design. As I showed in the example of the 1670 versus 18105, even at the same focal length and aperture, the rendering of the out of focus areas can differ a lot by the different lens design.

Axial magnification? Not really the case here. What happens of course is that the max available aperture of such lenses has lower DOF than those wider lenses the smaller sensor formats use. You can partly compensate by using brighter lenses on the latter, but only to a certain extent. With "real" MF this may be difficult, with the GFX it still is feasible in my view.



118cd64f439d4b62b6ff403fb125e627.jpg



--
German/English Nex/A6000-Blog: http://luxorphotoart.blogspot.de/
 
My point was that these distractions are visible in bright outdoor conditions, where our eyes would still be quite comfortable stopping down their aperture. So I think our current toneh :-) mania overdoes how we actually see.

And in my opinion your samples show a different visual quality from the GFX ones because of that. I don't claim to have better shots, but I think the Brenizer example illustrates a bit of what is going on.

If I crop into a low-DOF shot, the bokeh discs get larger. So if I use a wide-open prime lens on a smaller format, this becomes more noticeable. The reverse happens, of course, when going to a larger sensor format. To compensate for the crop, one can move further away from the subject and open the aperture further, but this has non-linear effects on the image.
I think we're quickly sliding off topic here, into background magnification via FL, bokeh quality, how human eyes work, some bokeh mania (? I don't know what 'toneh' is; I believe you meant bokeh)...

The discussion was about the allegedly better perceived depth in images from the GFX system compared with smaller-format systems (be it APS-C or FF), and, if there is such a difference, what causes it. I'm inclined toward the explanation that it's a multifactorial issue, where the sharpness and contrast of the GFX system may play a role, but there are also the other crucial variables I stated several comments above.

Long story short, I can see what Chris Dodkin shows in his images, but I can't see the same in the examples you posted and explained (with no intention of being mean).
 
It seems that when the difference between sensor sizes is big enough, there is a difference in perspective:

Could you please summarize, using the example I gave?
That's everything relevant I found on this topic; I can't summarize it. Nor am I a defender of any 'MF look'. I think there is an advantage in the sharpness and contrast of the GFX, adding a little bit more of the 'true to life' look to whatever is in focus, but other than that I don't really know.

--
www.instagram.com/michal.placek.fotograf
www.facebook.com/michal.placek.fotograf
500px.com/mikepl500px
 
I think we're quickly sliding off topic here,
I agree. It would be a shame if this thread got locked.
into background magnification via FL, bokeh quality, how human eyes work, some bokeh mania (? I don't know what 'toneh' is; I believe you meant bokeh)...
It is a portmanteau word, which originates from the "comedy" YouTuber Camera Conspiracies. I believe it was originally used in a video where he commented on fellow YouTuber Tony Northrup's use of very shallow depth of field as excessive and labeled the achieved look "Toneh" (a blend of "Tony" and "bokeh").
I'd like to thank all of you who contributed with images, experience and knowledge to this "Medium format look" sub-thread, as it was, at least to me, very very informative.
 
I think we're quickly sliding off topic here, into background magnification via FL, bokeh quality, how human eyes work, some bokeh mania (? I don't know what 'toneh' is; I believe you meant bokeh)...

Toneh :-)

I agree that I can see some differences in the pictures and tried to pin them down on points like out of focus rendering and possible explanations for that.

I still think that lens design has a big impact and that MAY be a factor of lenses for larger format sensors, but I admit I have no idea.

I personally see in Brenizer shots a look similar to that of larger formats; only one was shown here.
 
I think we're quickly sliding off topic here,
I agree. It would be a shame if this thread got locked.
It sure would, it has been interesting so far.
into background magnification via FL, bokeh quality, how human eyes work, some bokeh mania (? I don't know what 'toneh' is; I believe you meant bokeh)...
It is a portmanteau word, which originates from the "comedy" YouTuber Camera Conspiracies. I believe it was originally used in a video where he commented on fellow YouTuber Tony Northrup's use of very shallow depth of field as excessive and labeled the achieved look "Toneh" (a blend of "Tony" and "bokeh").
Ah, OK.
I'd like to thank all of you who contributed with images, experience and knowledge to this "Medium format look" sub-thread, as it was, at least to me, very very informative.
Yes, it has been truly informative indeed. I forgot about one more thing that may have a significant influence on the perceived space and depth in GFX images - the 4:3 ratio.

So far I have found that perceived depth is improved by:

- sharpness and contrast within the depth of field (here the GFX has an advantage)

- height:width image ratio (I think 4:3 works better than 3:2)

- choice of angle of view (I believe standard to moderately wide-angle works best)

- distance to subject (to emphasize the subject)

- choice of aperture value (the right amount of blur - not too little, but not too much)

- appropriate light

- lens rendering (even slightly harsh OOF areas can actually work quite well)

So these are my findings. :-)
 
With the Brenizer method you use multiple images to get to the final image, so you reduce the amount you have to magnify the CoC of each separate frame to get to the final comped shot.

It replicates what you'd get if you used a larger sensor to take it as a single frame: you wouldn't have to magnify the image CoC as much to get to the final shot, compared to shooting it on a small-format sensor or film.

This is why you get the same feeling of depth rendition as larger-format sensors with this method.

Of course you need to understand the relationship of FOV and subject distance to camera and background, as you demonstrated with that red classic car shot, so that you have a subject where the focus roll-off is visible and provides enough cues to the viewer to add to the perception of depth.
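A minimal sketch of that equivalence, under the usual approximation that the stitched frame is K times larger linearly while each source frame keeps its blur; the grid size and overlap are assumptions for illustration:

```python
# Equivalent lens for a Brenizer stitch: the stitched frame is K times
# larger linearly, so the single-frame equivalent is roughly f/K at N/K.
def brenizer_equivalent(focal_mm, f_number, cols, rows, overlap=0.25):
    k_w = 1 + (cols - 1) * (1 - overlap)   # linear growth, width
    k_h = 1 + (rows - 1) * (1 - overlap)   # linear growth, height
    k = min(k_w, k_h)                      # limited by the smaller growth
    return focal_mm / k, f_number / k

eq_f, eq_n = brenizer_equivalent(85.0, 1.8, cols=2, rows=2)
print(f"85/1.8 stitched 2x2 acts roughly like {eq_f:.0f} mm f/{eq_n:.1f}")
```

Which is why a stitched 85/1.8 can feel like a ~50mm f/1.0 on the same sensor - the larger-format look from small-format frames.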

--
Your time is limited, so don't waste it arguing about camera features - go out and capture memories - Oh, and size does matter - shoot MF
 
Would any X-series camera owner consider running the GFX100S next to their APS-C camera/cameras, instead of a top-of-the-line FF?

Hard to say.

I would think the transition from APS-C to MF would be a very low percentage?

As a hobbyist/part time paid shooter I moved to Fuji for their small size and portability. I do mostly family and couple portraits.

I could not see a reason to move to MF unless photography were my career and sole source of income, or I had a robust income to dispose of. I would see myself purchasing an FF Sony over the GFX due to cost.

GFX100S at $7,700 CAD + GF 45mm at $2,300 CAD - so roughly $10K for the upgrade to MF

or

Sony A7 III at $2,600 CAD + Sigma 24-70mm f/2.8 Art at $1,600 CAD - roughly $4K to upgrade to FF

The ~$6K difference is quite a large gap. For the average Fuji consumer who wants to move to a bigger sensor, the FF option is still the more affordable and realistic one.
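(The arithmetic behind those totals, using the post's CAD figures:)

```python
# The totals behind the comparison, using the post's CAD prices.
mf = {"GFX100S": 7700, "GF 45mm": 2300}
ff = {"Sony A7 III": 2600, "Sigma 24-70mm f/2.8 Art": 1600}

mf_total, ff_total = sum(mf.values()), sum(ff.values())
print(f"MF route:   ${mf_total:,} CAD")             # $10,000
print(f"FF route:   ${ff_total:,} CAD")             # $4,200
print(f"difference: ${mf_total - ff_total:,} CAD")  # $5,800, roughly $6K
```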
 
Not really a fair comparison. Price out the A7R IV with a top Sony lens instead of comparing high-end Fujifilm gear against a mid-range Sony and an off-brand lens; all of a sudden the difference isn't as large.
 
I'm coming at this from the perspective of a Fuji APS-C hobbyist getting into an FF setup, which is more realistic than a hobbyist buying an MF setup. The options and pricing in the FF market make it more attractive than MF.

Most of us have already come from FF. The Sony A7 III is a more reasonable unit for a hobbyist to slip into, and it is a fantastic piece of equipment, going by my experience renting one and by friends who shoot it. Obviously the A7R IV is a top-notch unit, but the A7 III is no slouch at all, even in 2021.

Not everyone has GAS for the latest and greatest, and there are some really good deals out there.
 
In fact, you can take an A7R III with some nice Loxia or Voigtländer lenses and still be far from the cost of even basic MF offerings. Put that price difference into "perspective" against what one can or cannot see in normal viewing situations, and the decision becomes quite clear. But then again, there are a good number of people out there spending hundreds of thousands on HiFi equipment because of how it sounds (to them).
 
But then again, there are a good number of people out there spending hundreds of thousands on HiFi equipment because of how it sounds (to them).
Some people spend thousands on sound equipment because they feel it gives a fuller, richer rendering of their beloved classical music. I know one person who spent 25 thousand on the acoustics of his entertainment room, because of the better sound produced. Did I notice? Not on classical music, which I can take or leave, but when he allowed me to play one of my vintage Miles Davis tapes (he has a reel-to-reel recorder and I have some classic Davis tapes) I could tell the difference between his entertainment room and my house. Is it worth a full acoustic remodeling of my house? Not likely.

If one sees no benefit in a format larger than APS-C, then they should probably spend their money elsewhere. However, that does not mean that someone else doesn't see a difference, appreciate the difference, and to them it is worth the cost. Photographic art is just that, art, and there are no absolutes in art, except maybe that one man's art is another man's garbage. I think I heard it said one time that it is in the eye of the beholder.
 
I think I heard it said one time that it is in the eye of the beholder.
So true (and the ear), and if one wishes to indulge oneself in art, be it visual or audio, why not? I have seen some quite superb images from MF, including in this very thread, and from X cameras too.
 
Seeing a difference is not the same as seeing a difference and ascribing it to some set of misspecified or inapt scientific phenomena.

The question is: how does an image with a 22% wider view create discrete 3D and other phenomena that are easily discerned even when the image is dramatically compressed for display online?

and

At what threshold does this phenomenon emerge?

It would be great if you could answer these questions, using the relevant sensors.
 