Is my thinking about equivalence right?

Started 4 months ago | Questions
OP Muster Mark Contributing Member • Posts: 588
Re: Is my thinking about equivalence right?
6

bobn2 wrote:

Muster Mark wrote:

All very good info! My point was just to demonstrate the kind of analysis you need to do to actually compare total light of an exposure and why the equivalence formula (pupil area x sensor area) doesn't actually work in practice.

In practice, it works just as well and no better than exposure. When you calculate exposure, you also don't generally take account of factors such as vignetting and lens transmission.

This is a really good point. Not sure why it didn't occur to me sooner, but yeah, kind of defeats the whole point of my post. Thanks for this! I don't think 90% of the people who commented actually read what I wrote and just wanted to give their 2 cents about equivalence. So thank you!
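
For anyone who wants the bookkeeping spelled out, here is a minimal sketch of the idealized comparison; it ignores vignetting and transmission exactly as noted above, and the 50mm f/2.8 vs 25mm f/1.4 pairing is just an example:

def light(f_number, sensor_area_mm2, t_s, scene=1.0):
    # Idealized model: sensor-plane irradiance scales as 1/N^2 (no vignetting,
    # no transmission loss); "total light" = irradiance x sensor area x time.
    return (scene / f_number ** 2) * sensor_area_mm2 * t_s

ff  = light(2.8, 36.0 * 24.0, 1 / 125)   # FF at f/2.8
mft = light(1.4, 17.3 * 13.0, 1 / 125)   # mFT at f/1.4, same shutter speed
print(mft / ff)  # ~1.04: near-equal total light, even though the mFT
                 # exposure (light per unit area) is 4x (2 stops) higher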


Cheers,
-Ian

Muster Mark's gear list:
Olympus E-3 Olympus Zuiko Digital ED 50mm 1:2.0 Macro Olympus Zuiko Digital ED 12-60mm 1:2.8-4.0 SWD Olympus Zuiko Digital ED 50-200mm 1:2.8-3.5 SWD
Mark Ransom Veteran Member • Posts: 7,670
Re: Is my thinking about equivalence right?
6

SrMi wrote:

I wonder, why do people care about equivalence, and how do they use it?

I am used to FF focal length and FF aperture (DOF) numbers and use the equivalence to select my m43/APS-C/MF lens and aperture. Similar to (still) translating Fahrenheit into Celsius.

I used it to convince myself that I wouldn't be missing out on anything by making a move from APS-C to m4/3.  And so far that has been my experience.
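
The conversion itself is mechanical. A minimal sketch, assuming the usual nominal crop factors (the round numbers here are approximations):

# FF-equivalent focal length (FOV) and f-number (DOF) are the native values
# multiplied by the crop factor.
CROP = {"FF": 1.0, "APS-C": 1.5, "m43": 2.0}

def ff_equivalent(focal_mm, f_number, fmt):
    c = CROP[fmt]
    return focal_mm * c, f_number * c

print(ff_equivalent(25, 1.4, "m43"))    # (50.0, 2.8)
print(ff_equivalent(35, 2.0, "APS-C"))  # (52.5, 3.0)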

Mark Ransom's gear list:
Pentax K-7 Pentax K-01 Olympus E-M5 II Pentax smc DA 15mm F4 ED AL Limited Pentax smc DA 18-135mm F3.5-5.6ED AL [IF] DC WR +6 more
OP Muster Mark Contributing Member • Posts: 588
Re: Is my thinking about equivalence right?
3

All round, an excellent rebuttal.


Cheers,
-Ian

Muster Mark's gear list:
Olympus E-3 Olympus Zuiko Digital ED 50mm 1:2.0 Macro Olympus Zuiko Digital ED 12-60mm 1:2.8-4.0 SWD Olympus Zuiko Digital ED 50-200mm 1:2.8-3.5 SWD
Mark Ransom Veteran Member • Posts: 7,670
Re: Is my thinking about equivalence right?
2

bobn2 wrote:

Muster Mark wrote:

All very good info! My point was just to demonstrate the kind of analysis you need to do to actually compare total light of an exposure and why the equivalence formula (pupil area x sensor area) doesn't actually work in practice.

In practice, it works just as well and no better than exposure. When you calculate exposure, you also don't generally take account of factors such as vignetting and lens transmission.

Lens transmission is automatically compensated by using through-the-lens metering so nobody gives it a second thought.

Mark Ransom's gear list:
Pentax K-7 Pentax K-01 Olympus E-M5 II Pentax smc DA 15mm F4 ED AL Limited Pentax smc DA 18-135mm F3.5-5.6ED AL [IF] DC WR +6 more
Iliah Borg Forum Pro • Posts: 28,755
Re: Is my thinking about equivalence right?
4

Mark Ransom wrote:

bobn2 wrote:

Muster Mark wrote:

All very good info! My point was just to demonstrate the kind of analysis you need to do to actually compare total light of an exposure and why the equivalence formula (pupil area x sensor area) doesn't actually work in practice.

In practice, it works just as well and no better than exposure. When you calculate exposure, you also don't generally take account of factors such as vignetting and lens transmission.

Lens transmission is automatically compensated by using through-the-lens metering so nobody gives it a second thought.

If metering and shooting apertures are the same, yes.

RobBobW Contributing Member • Posts: 957
Re: Is my thinking about equivalence right?
1

Mark Ransom wrote:

RobBobW wrote:

- the argument about total light is pointless as what is important is light density. Yes FF will bring in 4 times the light, but FF also has 4 times the surface area of sensor to illuminate, so it is a wash. Faster lenses bring in more light per unit area, period.

This is simply false. The reason apertures are measured by F-stops is because this equalizes the light density per unit area between lenses of different characteristics. A lens at F/1.2 will produce the same light density, no matter the focal length of the lens or the size of the sensor behind it. This means that a sensor with 4x the area really will collect 4x the light, when measured over the whole image, as long as the F-stops/T-stops of the lenses are the same.

Mark, you just said the exact same thing as I did.  A given unit area of film or sensor does not care how much film or sensor is around it in order to record the amount of light hitting it.  Light intensity/density is what is important.  Otherwise we would be using different exposures with different sized sensors.  A person can use a hand held light meter reading to determine the correct exposure regardless of the film or sensor format being used.  Yes more total light hits the larger piece of film for a given exposure, but that is only because more total light is needed to achieve the required light density per unit area.

The reason this matters is the nature of noise. The majority of noise in today's cameras is from photon shot noise, which is a property of the light itself and not of the lens or sensor or any other camera electronics. The only way to reduce shot noise is to collect more light. Whether you do this with a larger sensor, a larger aperture, or a slower shutter speed is immaterial.

But this is also a function of the size of the individual light sensor pixels.  The more pixels per unit area, be it from sensor size or sensor resolution, the more noise, all other things being equal.

RobBobW's gear list:
Samyang 14mm F2.8 ED AS IF UMC Samyang 24mm F1.4
bobn2 Forum Pro • Posts: 69,811
Re: Is my thinking about equivalence right?
8

Muster Mark wrote:

bobn2 wrote:

Muster Mark wrote:

All very good info! My point was just to demonstrate the kind of analysis you need to do to actually compare total light of an exposure and why the equivalence formula (pupil area x sensor area) doesn't actually work in practice.

In practice, it works just as well and no better than exposure. When you calculate exposure, you also don't generally take account of factors such as vignetting and lens transmission.

This is a really good point. Not sure why it didn't occur to me sooner, but yeah, kind of defeats the whole point of my post. Thanks for this! I don't think 90% of the people who commented actually read what I wrote and just wanted to give their 2 cents about equivalence. So thank you!

That doesn't often happen on these forums, so very well done to you.

I've just been looking at a YouTube video about why arguments never succeed in changing anyone's mind. Thanks for the counterexample.


Is it always wrong
for one to have the hots for
Comrade Kim Yo Jong?

NowHearThis Veteran Member • Posts: 4,229
Re: Is my thinking about equivalence right?
4

Not one time in the past 30 years of shooting have I ever thought about equivalence while actually taking pictures. And my photography has never been negatively impacted. The same thing will hold true for you.

Remember: Equivalence will never help you get your lighting and composition right. Those 2 things matter far more, and getting them right will make a bigger and more positive impact on your photography.


NHT

NowHearThis's gear list:
Panasonic Leica 12-60mm F2.8-4.0 ASPH Olympus E-M1 II Panasonic Lumix G 42.5mm F1.7
Serguei Palto Contributing Member • Posts: 737
Re: Is my thinking about equivalence right?
2

Mark Ransom wrote:

RobBobW wrote:

- the argument about total light is pointless as what is important is light density. Yes FF will bring in 4 times the light, but FF also has 4 times the surface area of sensor to illuminate, so it is a wash. Faster lenses bring in more light per unit area, period.

This is simply false. The reason apertures are measured by F-stops is because this equalizes the light density per unit area between lenses of different characteristics. A lens at F/1.2 will produce the same light density, no matter the focal length of the lens or the size of the sensor behind it. This means that a sensor with 4x the area really will collect 4x the light, when measured over the whole image, as long as the F-stops/T-stops of the lenses are the same.

The reason this matters is the nature of noise. The majority of noise in today's cameras is from photon shot noise, which is a property of the light itself and not of the lens or sensor or any other camera electronics. The only way to reduce shot noise is to collect more light. Whether you do this with a larger sensor, a larger aperture, or a slower shutter speed is immaterial.

RobBobW is right!

What you are saying about F-numbers is correct, but there is no contradiction with what RobBobW is saying: "Faster lenses bring in more light per unit area, period."

I only want to add that, in addition to the light intensity, the pixel size is another important factor.

For example, when we check for noise we usually go to 1:1 view to resolve individual pixels. At that point the total light captured by the sensor is irrelevant: what you see on your display at 1:1 view is typically a small fraction of the whole sensor. If you use lenses of the same focal length and sensors with the same pixel density, then independently of the sensor size you will get the same S/N ratio at the same F-number (of course, the same sensor technology and lens quality are assumed).

We can tell nothing about the S/N ratio if we look just at the value captured by a single pixel, because a single pixel can't form an image. So we need many pixels. But we are also not able to judge S/N if we look at the whole image under conditions where the individual pixels are not resolved. For example, it is well known that the visible S/N depends on the distance from a viewer to a printed photo. The larger the distance, the higher the S/N, because we lose resolution with increasing distance. Human vision itself works as a low-pass filter: a larger distance filters out the higher-frequency components. So with increasing distance we are narrowing the spectral passband.
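
That filtering effect is easy to simulate. A minimal sketch, assuming pure shot noise and crudely modelling greater viewing distance as block averaging:

import numpy as np

rng = np.random.default_rng(1)
frame = rng.poisson(100, size=(1024, 1024)).astype(float)  # noisy uniform frame

def viewed(a, k):
    # Crude "viewing distance" model: k x k averaging = a narrower passband.
    return a.reshape(a.shape[0] // k, k, a.shape[1] // k, k).mean(axis=(1, 3))

for k in (1, 2, 4, 8):
    print(k, round(viewed(frame, k).std(), 2))  # std falls roughly as 1/k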

In other words, the S/N must be defined under conditions where there are no additional filters which can influence the spectral bandwidth of an image. Moreover, in the case of different sensors we must compare the S/N at the same spectral bandwidth, defined explicitly by the sensor properties. The spectral bandwidth is the difference between the highest frequency the sensor can capture and the lowest frequency. In the ideal case the highest frequency is half of the Nyquist frequency, which is defined by the pixel size, while the lowest frequency is defined by the sensor size. When the sensor size is significantly larger than the pixel size, the lowest frequency can be considered zero. Thus the sensor frequency band is defined just by the pixel size, which is the main factor responsible for the true (whole-band) S/N.

In spectral terms, the total light captured by a sensor is just the magnitude of a single spectral component at zero frequency in an image, while an image consists of a huge number of frequency components. Actually, the number of frequency components in the image is equal to the number of pixels in the sensor. For that reason the so-called EQ-"theory", based on total light, i.e. just the magnitude of a single frequency component at zero frequency, is a misleading concept.

In terms of the spectral approach, the FF sensor performs better than m43 only because of the larger pixel size (the same number of pixels is assumed).

If someone wants to learn something about S/N then EQ must be forgotten as a "theory". This concept was created just as a practical tool for quickly recalculating FOV and DOF when someone uses cameras of different formats, and it can't be used for anything else.

All that I am saying is implemented in my iWE raw processing software, which is free.

Best regards, SP

photonut2008 Veteran Member • Posts: 6,428
Re: Is my thinking about equivalence right?
2

rogerstpierre wrote:

I understand equivalency, but I just don't see the purpose if all you got is one system.

It becomes a factor when you crop.


DPR, where gear is king and photography merely a jester

photonut2008's gear list:
Nikon D800 Nikon D500 Nikon AF-S Nikkor 70-200mm f/2.8G ED VR Nikon AF-S DX Nikkor 16-85mm f/3.5-5.6G ED VR Nikon AF-S DX Nikkor 35mm F1.8G +16 more
glassoholic Veteran Member • Posts: 6,282
Re: Is my thinking about equivalence right?
5

NowHearThis wrote:

Not one time in the past 30 years of shooting have I ever thought about equivalence while actually taking pictures. And my photography has never been negatively impacted. The same thing will hold true for you.

Remember: Equivalence will never help you get your lighting and composition right. Those 2 things matter far more, and getting them right will make a bigger and more positive impact on your photography.

Yes, I shot 35mm, 120 6x7, and 4"×5" depending on what was needed. There were huge differences in quality, mainly grain, between the 3 formats. Other than the camera movements my Sinar 4"×5" had, I find the differences between all the formats today to be minimal in comparison. I have made 1.5m wide prints from a good m43 file, and am perfectly happy... I could have only dreamed of that capability from a portable (non-4"×5") film system.

If I download images from the best FF and MF digital sensors today, I marvel at how good they are when counting eyelashes at 200% on a good monitor and with my nose up against the screen. A similar m43 image is still stunning IMHO, but with a few less eyelashes to count clearly. At ISO 3200 and higher (processing RAW with DXO Deep Prime), m43 falls behind more, but if my picture can stand alone on subject matter, I still find it good enough. YMMV.

I have invested in fast glass to make the most I can of my m43 system and its weakness at very high ISO. For me, this rounds my system off as almost totally adequate for my needs. If I research FF, I find that to get the next obvious jump in capability (IQ), I would need to go heavier, bigger, and more expensive (I do not see value in getting f1.8 FF primes, f4.0 FF zooms, and a 24mp FF camera, even though, technically, I would still be getting an uptick in IQ at a similar or cheaper price and similar or less weight, and thus arguably even better value). There are too many other features in my m43 gear that I would have to lose, or I simply don't like the look or handling of those arguably better alternatives, or I would have to keep some m43 and either carry both systems or make hard decisions about what mix-and-match gear to take with me. Not for me. Running Olympus and Panasonic gear, with different batteries, different lens features per system, and different AF strengths, is where the line is drawn for me.

Funny thing is, even when I have fast-for-m43 glass with me for functions/weddings, f1.8 or faster, I often have to stop down a couple of stops to get two or more people in focus... FF would mean a couple of stops more again.

Now if I was well off, retired with time on my hands (plus an inclination/ opportunity to travel and get up very early before sunrise), and had a 30" wide photo quality printer at home (with a room big enough to work with it), a Fuji MF 100mp camera and a cracking super wide angle lens would be a great way to amuse and amaze myself. As I will only likely achieve one of those objectives (by default inevitably), it will remain a dream, but all the power to those that can do that now, today. The very best IQ gear is indeed most delectable.


Addicted To Glass
M43 equivalence: "Twice the fun with half the weight"
"You are a long time dead" -
Credit to whoever said that first and my wife for saying it to me... Make the best you can of every day!

Mark Ransom Veteran Member • Posts: 7,670
Re: Is my thinking about equivalence right?
1

RobBobW wrote:

Mark Ransom wrote:

RobBobW wrote:

- the argument about total light is pointless as what is important is light density. Yes FF will bring in 4 times the light, but FF also has 4 times the surface area of sensor to illuminate, so it is a wash. Faster lenses bring in more light per unit area, period.

This is simply false. The reason apertures are measured by F-stops is because this equalizes the light density per unit area between lenses of different characteristics. A lens at F/1.2 will produce the same light density, no matter the focal length of the lens or the size of the sensor behind it. This means that a sensor with 4x the area really will collect 4x the light, when measured over the whole image, as long as the F-stops/T-stops of the lenses are the same.

Mark, you just said the exact same thing as I did. A given unit area of film or sensor does not care how much film or sensor is around it in order to record the amount of light hitting it. Light intensity/density is what is important. Otherwise we would be using different exposures with different sized sensors. A person can use a hand held light meter reading to determine the correct exposure regardless of the film or sensor format being used. Yes more total light hits the larger piece of film for a given exposure, but that is only because more total light is needed to achieve the required light density per unit area.

I think where you lost me is when you said "it is a wash".  FF lenses don't automatically bring in 4x the light, but the larger sensor is able to capture more of what is provided.  To get equivalence you need to light up the smaller sensor 4x brighter, which requires an F-stop 2 stops lower/faster.
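
Spelling that arithmetic out (a crop factor of exactly 2 is assumed for simplicity):

N_ff = 2.8
N_mft = N_ff / 2                       # 2 stops faster
irradiance_gain = (N_ff / N_mft) ** 2  # 4x the light per unit area
area_ratio = 1 / 4                     # mFT sensor area relative to FF
print(irradiance_gain * area_ratio)    # 1.0 -> the same total light on each sensor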

The reason this matters is the nature of noise. The majority of noise in today's cameras is from photon shot noise, which is a property of the light itself and not of the lens or sensor or any other camera electronics. The only way to reduce shot noise is to collect more light. Whether you do this with a larger sensor, a larger aperture, or a slower shutter speed is immaterial.

But this is also a function of the size of the individual light sensor pixels. The more pixels per unit area, be it from sensor size or sensor resolution, the more noise, all other things being equal.

Did you read the link I provided?  Pixel size matters a lot less than you think it does.  Because in the end noise per pixel doesn't matter as much as noise per unit area of the picture.
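
One way to see this is a quick shot-noise simulation. A minimal sketch, assuming pure Poisson statistics:

import numpy as np

rng = np.random.default_rng(0)
small = rng.poisson(1000, size=(1000, 1000))          # fine-pitch sensor
big = small.reshape(500, 2, 500, 2).sum(axis=(1, 3))  # same area, 2x2-binned pixels

snr = lambda a: a.mean() / a.std()
print(snr(small))  # ~31.6 (sqrt(1000)): noisier per pixel
print(snr(big))    # ~63.2 (sqrt(4000)): same light per unit area, so the
                   # noise per unit area of picture is unchanged by pixel pitch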

Mark Ransom's gear list:
Pentax K-7 Pentax K-01 Olympus E-M5 II Pentax smc DA 15mm F4 ED AL Limited Pentax smc DA 18-135mm F3.5-5.6ED AL [IF] DC WR +6 more
acfo Senior Member • Posts: 1,209
Re: Is my thinking about equivalence right?
1

Muster Mark wrote:

[...]

If there is anything I should be accounting for that I am not (and you know how I might, e.g. t stops) I'd be curious to hear.
--
Cheers,
-Ian

"When I use a word," Humpty Dumpty said, in rather a scornful tone, "it means just what I choose it to mean—neither more nor less." "The question is," said Alice, "whether you can make words mean so many different things."

acfo's gear list:
Olympus PEN-F Olympus M.Zuiko Digital ED 9-18mm F4.0-5.6 Olympus M.Zuiko Digital 45mm F1.8 Olympus M.Zuiko Digital ED 12mm 1:2 Olympus M.Zuiko Digital ED 75mm F1.8 +7 more
RobBobW Contributing Member • Posts: 957
Re: Is my thinking about equivalence right?
1

Mark Ransom wrote:

RobBobW wrote:

Mark Ransom wrote:

RobBobW wrote:

- the argument about total light is pointless as what is important is light density. Yes FF will bring in 4 times the light, but FF also has 4 times the surface area of sensor to illuminate, so it is a wash. Faster lenses bring in more light per unit area, period.

This is simply false. The reason apertures are measured by F-stops is because this equalizes the light density per unit area between lenses of different characteristics. A lens at F/1.2 will produce the same light density, no matter the focal length of the lens or the size of the sensor behind it. This means that a sensor with 4x the area really will collect 4x the light, when measured over the whole image, as long as the F-stops/T-stops of the lenses are the same.

Mark, you just said the exact same thing as I did. A given unit area of film or sensor does not care how much film or sensor is around it in order to record the amount of light hitting it. Light intensity/density is what is important. Otherwise we would be using different exposures with different sized sensors. A person can use a hand held light meter reading to determine the correct exposure regardless of the film or sensor format being used. Yes more total light hits the larger piece of film for a given exposure, but that is only because more total light is needed to achieve the required light density per unit area.

I think where you lost me is when you said "it is a wash". FF lenses don't automatically bring in 4x the light, but the larger sensor is able to capture more of what is provided. To get equivalence you need to light up the smaller sensor 4x brighter, which requires an F-stop 2 stops lower/faster.

Sorry, but that is completely incorrect. If the light intensity per unit area is the same, the exposures will be the same. The total amount of light only comes into play as the numerator of the ratio to the area of the sensor. The author of your linked article states it correctly once, then gets it mixed up later on. What I meant by “it is a wash” is that more light is required to illuminate a full frame sensor to the desired light intensity. The only relevance of total light is in obtaining the desired intensity.
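
To make the hand-held meter point concrete, here is a minimal sketch using the standard reflected-light relation N^2/t = L*S/K (with the common calibration constant K = 12.5); note that sensor format never appears in it:

def shutter_time(L_cd_m2, iso, f_number, K=12.5):
    # Reflected-light metering: N^2 / t = L * S / K, independent of format.
    return f_number ** 2 * K / (L_cd_m2 * iso)

print(shutter_time(L_cd_m2=400, iso=100, f_number=5.6))
# ~1/100 s, whether the camera behind the meter is FF, APS-C or m43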

The reason this matters is the nature of noise. The majority of noise in today's cameras is from photon shot noise, which is a property of the light itself and not of the lens or sensor or any other camera electronics. The only way to reduce shot noise is to collect more light. Whether you do this with a larger sensor, a larger aperture, or a slower shutter speed is immaterial.

But this is also a function of the size of the individual light sensor pixels. The more pixels per unit area, be it from sensor size or sensor resolution, the more noise, all other things being equal.

Did you read the link I provided? Pixel size matters a lot less than you think it does. Because in the end noise per pixel doesn't matter as much as noise per unit area of the picture.

Sorry, I did not realize there was a hot link in your text. I have seen and read it now. The author has some good points, but is confused on a few issues and has several misleading statements resulting from not clearly stating what he meant. So he contradicts himself or states the opposite of what he says later on.

RobBobW's gear list:
Samyang 14mm F2.8 ED AS IF UMC Samyang 24mm F1.4
kalisti Contributing Member • Posts: 618
Re: Is my thinking about equivalence right?

RobBobW wrote:

Mark Ransom wrote:

RobBobW wrote:

Mark Ransom wrote:

RobBobW wrote:

- the argument about total light is pointless as what is important is light density. Yes FF will bring in 4 times the light, but FF also has 4 times the surface area of sensor to illuminate, so it is a wash. Faster lenses bring in more light per unit area, period.

This is simply false. The reason apertures are measured by F-stops is because this equalizes the light density per unit area between lenses of different characteristics. A lens at F/1.2 will produce the same light density, no matter the focal length of the lens or the size of the sensor behind it. This means that a sensor with 4x the area really will collect 4x the light, when measured over the whole image, as long as the F-stops/T-stops of the lenses are the same.

Mark, you just said the exact same thing as I did. A given unit area of film or sensor does not care how much film or sensor is around it in order to record the amount of light hitting it. Light intensity/density is what is important. Otherwise we would be using different exposures with different sized sensors. A person can use a hand held light meter reading to determine the correct exposure regardless of the film or sensor format being used. Yes more total light hits the larger piece of film for a given exposure, but that is only because more total light is needed to achieve the required light density per unit area.

I think where you lost me is when you said "it is a wash". FF lenses don't automatically bring in 4x the light, but the larger sensor is able to capture more of what is provided. To get equivalence you need to light up the smaller sensor 4x brighter, which requires an F-stop 2 stops lower/faster.

Sorry, but that is completely incorrect. If the light intensity per unit area is the same, the exposures will be the same. The total amount of light only comes into play as the numerator of the ratio to the area of the sensor. The author of your linked article states it correctly once, then gets it mixed up later on. What I meant by “it is a wash” is that more light is required to illuminate a full frame sensor to the desired light intensity. The only relevance of total light is in obtaining the desired intensity.

The reason this matters is the nature of noise. The majority of noise in today's cameras is from photon shot noise, which is a property of the light itself and not of the lens or sensor or any other camera electronics. The only way to reduce shot noise is to collect more light. Whether you do this with a larger sensor, a larger aperture, or a slower shutter speed is immaterial.

But this is also a function of the size of the individual light sensor pixels. The more pixels per unit area, be it from sensor size or sensor resolution, the more noise, all other things being equal.

Did you read the link I provided? Pixel size matters a lot less than you think it does. Because in the end noise per pixel doesn't matter as much as noise per unit area of the picture.

Sorry, I did not realize there was a hot link in your text. I have seen and read it now. The author has some good points, but is confused on a few issues and has several misleading statements resulting from not clearly stating what he meant. So he contradicts himself or states the opposite of what he says later on.

I think it's just crossed wires; both are correct. Same exposure is as you said, and Mark said "to get equivalence", which would mean (in my book at least) equal noise(ish), for which you would need a greater intensity for MFT.

kalisti's gear list:
Panasonic Lumix DMC-GX85 Panasonic Lumix DC-G9 Panasonic Lumix G Vario 45-150mm F4-5.6 ASPH Mega OIS Panasonic Lumix G 14mm F2.5 II ASPH Panasonic Leica 12-60mm F2.8-4.0 ASPH +2 more
bobn2 Forum Pro • Posts: 69,811
Re: Is my thinking about equivalence right?
10

Serguei Palto wrote:

Mark Ransom wrote:

RobBobW wrote:

- the argument about total light is pointless as what is important is light density. Yes FF will bring in 4 times the light, but FF also has 4 times the surface area of sensor to illuminate, so it is a wash. Faster lenses bring in more light per unit area, period.

This is simply false. The reason apertures are measured by F-stops is because this equalizes the light density per unit area between lenses of different characteristics. A lens at F/1.2 will produce the same light density, no matter the focal length of the lens or the size of the sensor behind it. This means that a sensor with 4x the area really will collect 4x the light, when measured over the whole image, as long as the F-stops/T-stops of the lenses are the same.

The reason this matters is the nature of noise. The majority of noise in today's cameras is from photon shot noise, which is a property of the light itself and not of the lens or sensor or any other camera electronics. The only way to reduce shot noise is to collect more light. Whether you do this with a larger sensor, a larger aperture, or a slower shutter speed is immaterial.

You appear to have missed your last tutorial class, Mr Palto. I know that some members of your class are impressed by your displays, but the examiners will be less forgiving. They will not have embedded preconceptions and if they conclude that you are bending the physics to fit a 'political' position it won't go so well for you. Unfortunately, there is no opportunity to re-arrange the classes that you've been skipping, so I've added some annotations to your work. I hope that you'll find them helpful.

RobBobW is right!

What you are saying about F-numbers is correct, but there is no contradiction with what RobBobW is saying: "Faster lenses bring in more light per unit area, period."

This is indicative of what I meant by playing to the crowd of your classmates, rather than performing what should be cold, objective science. You have chosen to identify yourself with RobBobW's statement that "Faster lenses bring more light per unit area, period." Whilst it is true that faster lenses bring more light per unit area (of course, dependent on usage of the word 'faster', which is a colloquialism), the addition of 'period' is intended to suggest that there is no more to be discussed after the making of that statement. Of course, there is plenty to be discussed.

I only want to add that, in addition to the light intensity, the pixel size is another important factor.

Factor of what? This compounds the emptiness of RobBobW's statement. He failed to provide the context within which light per unit area is significant, and now you claim to be identifying a 'factor' in the same unstated context.

For example, when we check for noise we usually go to 1:1 view to resolve individual pixels.

I would avoid the use of 'we'. The rhetorical intent is probably to try to identify yourself with experts in the field. There is a reason that science is written in the passive voice, to avoid this sort of specious appeal to authority.

At that point the total light captured by the sensor is irrelevant: what you see on your display at 1:1 view is typically a small fraction of the whole sensor. If you use lenses of the same focal length and sensors with the same pixel density, then independently of the sensor size you will get the same S/N ratio at the same F-number (of course, the same sensor technology and lens quality are assumed).

We can tell nothing about the S/N ratio if we look just at the value captured by a single pixel, because a single pixel can't form an image. So we need many pixels. But we are also not able to judge S/N if we look at the whole image under conditions where the individual pixels are not resolved. For example, it is well known that the visible S/N depends on the distance from a viewer to a printed photo. The larger the distance, the higher the S/N, because we lose resolution with increasing distance. Human vision itself works as a low-pass filter: a larger distance filters out the higher-frequency components. So with increasing distance we are narrowing the spectral passband.

There is a lot of confused language, false propositions and poor reasoning here, mixed in with some correct or partially correct statements. Probably the best approach is to rephrase what you're trying to say.

There is no SNR for a single pixel because SNR is a statistical measurement, with noise being the standard deviation of a set of observations from an expected or mean value. Noise is bandwidth dependent, that is, noise power increases with the bandwidth of the observation. For this reason it is established practice to state the frequency band over which an SNR is measured, and, when two SNRs measured over different frequency bands are to be compared, to normalise them. The precise form of normalisation depends on the purpose of the comparison. I've noticed a tendency in your previous work to fail to understand the importance of context, and to seek to apply textbook formulae in places where they are unhelpful without appropriate normalisation.

In other words, the S/N must be defined under conditions where there are no additional filters which can influence the spectral bandwidth of an image. Moreover, in the case of different sensors we must compare the S/N at the same spectral bandwidth, defined explicitly by the sensor properties. The spectral bandwidth is the difference between the highest frequency the sensor can capture and the lowest frequency. In the ideal case the highest frequency is half of the Nyquist frequency, which is defined by the pixel size, while the lowest frequency is defined by the sensor size. When the sensor size is significantly larger than the pixel size, the lowest frequency can be considered zero. Thus the sensor frequency band is defined just by the pixel size, which is the main factor responsible for the true (whole-band) S/N.

Again, this ignores the whole concept of normalisation. Saying "the S/N must be defined under conditions where there are no additional filters which can influence the spectral bandwidth of an image" does not make it so. How the SNR 'must be defined' depends on what you want to use the SNR for. In the context of photography, most photographers will want to use SNR as an indicator of how noisy their photos will look. Therefore, when adopting SNR as a photographic metric it makes sense to normalise it to conditions which reflect the normal usage. There is space for a discussion about what the adopted normalisation should be, but whatever it is, comparing differently normalised SNRs doesn't make any practical sense.

In spectral terms, the total light captured by a sensor is just the magnitude of a single spectral component at zero frequency in an image, while an image consists of a huge number of frequency components. Actually, the number of frequency components in the image is equal to the number of pixels in the sensor. For that reason the so-called EQ-"theory", based on total light, i.e. just the magnitude of a single frequency component at zero frequency, is a misleading concept.

You should check the statements you make for logical consistency against other broadly accepted concepts. 'Total light' is simply exposure multiplied by sensor area. So everything that you say about 'total light' applies equally to exposure, yet exposure has been the central concept in sensitometry since its inception. The value of exposure as typically used for exposure calculations is simply the integration of the exposure at each element of the image. That exposure does not change depending on the size or number of the elements. You seem to be suggesting that the value for 'total light' would be dependent on the sampling frequency, which is obviously wrong. If your conjecture were correct, the exposure would also be dependent on sampling frequency, and it isn't.
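
This is simple to check numerically. A sketch assuming an idealized sensor where finer sampling merely subdivides the same photon flux:

import numpy as np

rng = np.random.default_rng(2)
coarse = rng.poisson(4000, size=(100, 100))  # 10k pixels, ~4000 photons each
fine = rng.poisson(1000, size=(200, 200))    # 4x the pixels, 1/4 the flux each

print(coarse.sum())  # ~4.0e7
print(fine.sum())    # ~4.0e7: total light, like exposure, does not depend on
                     # the sampling frequency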

In terms of the spectral approach, the FF sensor performs better than m43 only because of the larger pixel size (the same number of pixels is assumed).

The assumption makes the pixel size question irrelevant, since if you assume the same number of pixels, pixel area is proportional to sensor area - and you cannot distinguish whether it is sensor size or pixel size which is responsible. If your theory is that it is pixel size, then it is erroneous, and easily disproven with experimental results. If it were correct, then a micro Four Thirds camera would produce the same apparent noisiness as a FF camera with the same pixel size.

Clearly, your proposition does not predict reality.
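
A sketch of that disproof, with pixel size and per-pixel exposure held fixed, only sensor area varying, and both images compared at a common output size:

import numpy as np

rng = np.random.default_rng(3)
mft = rng.poisson(1000, (500, 500)).astype(float)    # small sensor
ff = rng.poisson(1000, (1000, 1000)).astype(float)   # 4x the area, same pixel size

ff_view = ff.reshape(500, 2, 500, 2).mean(axis=(1, 3))  # scale to common output size
print(mft.std() / mft.mean())          # ~0.032
print(ff_view.std() / ff_view.mean())  # ~0.016: half the visible noise despite
                                       # identical pixels; sensor size, not pixel
                                       # size, is doing the work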

If someone wants to learn something about S/N then EQ must be forgotten as a "theory". This concept was created just as a practical tool for quickly recalculating FOV and DOF when someone uses cameras of different formats, and it can't be used for anything else.

Again, it is sensible to avoid statements about what people should do if they 'want to learn' until you have done the learning yourself. Providing a set of statements which are very clearly incorrect and easily disproven does not give any confidence in how much learning has been accomplished.

All that I am saying is implemented in my iWE raw processing software, which is free.

It may be a good idea to expend more time on your studies and less on hobby activities, at least until you have mastered this part of the curriculum. In any case, given the number of fallacies and false statements in what you have said, telling people that it is 'implemented' in your software probably isn't likely to give them much confidence in the software.


Is it always wrong
for one to have the hots for
Comrade Kim Yo Jong?

bobn2 Forum Pro • Posts: 69,811
Re: Is my thinking about equivalence right?

RobBobW wrote:

What I meant by “it is a wash” is that more light is required to illuminate a full frame sensor to the desired light intensity. The only relevance of total light is in obtaining the desired intensity.

The 'desired intensity' is set by your requirements for image quality (noise), and for any given requirement will be different for full-frame and mFT.


Is it always wrong
for one to have the hots for
Comrade Kim Yo Jong?

RSTP14 Veteran Member • Posts: 5,090
Re: Is my thinking about equivalence right?
2

bobn2 wrote:

If the Olympus has to produce double the resolution of the Sony just to match it in a final image, due to being magnified twice as much, then there's not much of a comparison. You have to compare the Olympus lines with the lines for half the lp/mm on the Sony. On FF 10 lp/mm corresponds to 215 lp/ph, whilst on mFT that final resolution is given by 20 lp/mm, so if we compare the two at the same final resolution, we find that the Sony gives an MTF of 0.93 in the centre and 0.85 at the edge, whilst the Olympus gives 0.77 in the centre and 0.69 at the edge.

You can't talk "magnification" with a digital sensor like you do with a film substrate. On a digital sensor the number of pixels, or data points you may want to call it, defines the size of the image, not the size of the sensor itself. Hence a sensor 1/2 the size with twice the number of pixels will produce an image twice as large when viewed at the same "magnification". This is where I don't understand when magnification is used as a variable to compare DoF between different sensor sizes. It has no relevance in a digital world; rather, the total number of pixels that makes up an image does.


Roger

RSTP14's gear list:
Olympus OM-D E-M10 Olympus M.Zuiko Digital 17mm F1.8 Olympus 12-45mm F4 Pro +4 more
bobn2 Forum Pro • Posts: 69,811
Re: Is my thinking about equivalence right?
12

rogerstpierre wrote:

bobn2 wrote:

If the Olympus has to produce double the resolution of the Sony just to match it in a final image, due to being magnified twice as much, then there's not much of a comparison. You have to compare the Olympus lines with the lines for half the lp/mm on the Sony. On FF 10 lp/mm corresponds to 215 lp/ph, whilst on mFT that final resolution is given by 20 lp/mm, so if we compare the two at the same final resolution, we find that the Sony gives an MTF of 0.93 in the centre and 0.85 at the edge, whilst the Olympus gives 0.77 in the centre and 0.69 at the edge.

You can't talk "magnification" with a digital sensor like you do with a film substrate. On a digital sensor the number of pixels, or data points you may want to call it, defines the size of the image, not the size of the sensor itself. Hence a sensor 1/2 the size with twice the number of pixels will produce an image twice as large when viewed at the same "magnification". This is where I don't understand when magnification is used as a variable to compare DoF between different sensor sizes. It has no relevance in a digital world; rather, the total number of pixels that makes up an image does.

I'm talking about the dimensional magnification from the size of the camera's image frame to the viewing size. Call it 'enlargement', or what you want. Whether the mechanism is chemical or electronic, the image that the lens projects has to be enlarged to the size that you want to view it at. If your image frame is 17.3x13mm it needs to be enlarged twice as much to get any given size of output as it does if it is 36x24mm. Your point has no impact on what I was saying, that the image the lens projects must be enlarged twice as much on mFT as on FF. It also has no impact on DoF, where the important factor is the size of the blur in the viewed image. To take into account the doubled enlargement, DoF calculations for mFT use a CoC that is half the size of that used for FF, resulting in the same size of CoC in the viewed image.
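
A rough numerical check, using the standard hyperfocal-based DoF limits and halving the CoC for mFT as described (the focal lengths, f-numbers and distance are example values only):

def dof_mm(f, N, c, s):
    # f: focal length, c: circle of confusion, s: subject distance, all in mm.
    H = f * f / (N * c) + f               # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)  # near limit of acceptable sharpness
    far = s * (H - f) / (H - s)           # far limit (s < H assumed)
    return far - near

print(dof_mm(50, 4.0, 0.030, 3000))  # FF 50mm f/4, CoC 0.030mm: ~867mm
print(dof_mm(25, 2.0, 0.015, 3000))  # mFT 25mm f/2, CoC 0.015mm: ~875mm, same DoF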


Is it always wrong
for one to have the hots for
Comrade Kim Yo Jong?

RSTP14 Veteran Member • Posts: 5,090
Re: Is my thinking about equivalence right?
2

bobn2 wrote:

rogerstpierre wrote:

bobn2 wrote:

If the Olympus has to produce double the resolution of the Sony just to match it in a final image, due to being magnified twice as much, then there's not much of a comparison. You have to compare the Olympus lines with the lines for half the lp/mm on the Sony. On FF 10 lp/mm corresponds to 215 lp/ph, whilst on mFT that final resolution is given by 20 lp/mm, so if we compare the two at the same final resolution, we find that the Sony gives an MTF of 0.93 in the centre and 0.85 at the edge, whilst the Olympus gives 0.77 in the centre and 0.69 at the edge.

You can't talk "magnification" with a digital sensor like you do with a film substrate. On a digital sensor the number of pixels, or data points you may want to call it, defines the size of the image, not the size of the sensor itself. Hence a sensor 1/2 the size with twice the number of pixels will produce an image twice as large when viewed at the same "magnification". This is where I don't understand when magnification is used as a variable to compare DoF between different sensor sizes. It has no relevance in a digital world; rather, the total number of pixels that makes up an image does.

Call it 'enlargement', or what you want. Whether the mechanism is chemical or electronic, the image that the lens projects has to be enlarged to the size that you want to view it at. If your image frame is 17.3x13mm it needs to be enlarged twice as much to get any given size of output as it does if it is 36x24mm. Your point has no impact on what I was saying, that the image the lens projects must be enlarged twice as much on mFT as on FF. It also has no impact on DoF, where the important factor is the size of the blur in the viewed image. To take into account the doubled enlargement, DoF calculations for mFT use a CoC that is half the size of that used for FF, resulting in the same size of CoC in the viewed image.

You obviously don't understand how a digital image can be viewed. The size of the photosite that makes up the data point has no relevance when it comes to the representation of a single RGB value. 1 pixel is 1 pixel regardless of how big the photosite that captured the data is. A 20mpix image viewed by any means will always be the same size regardless of how large the sensor that captured the image is. The only difference from the sensor's pov will be in the signal to noise ratio: the larger the photosite, the stronger the signal. The quality of the lens optical components, however, will dictate the resolution of the projected image; hence m43 lenses have to be optically superior to equal the same resolution with an image circle that is indeed 1/2 the size, but that has nothing to do with the visualization of the image once it has been captured. If there is any difference it has to do with the size of the image circle, not the representation of the image captured, as you don't have to "enlarge" an image that has the same pixel count any more than one captured with a larger sensor.


Roger

RSTP14's gear list:
Olympus OM-D E-M10 Olympus M.Zuiko Digital 17mm F1.8 Olympus 12-45mm F4 Pro +4 more