"Water is not light" contd. :-)

Photozopia wrote:
Tom2572 wrote:
GaryW wrote:
blue_skies wrote:

Now, not to confuse anything, but until you hit the max-ISO, you can use FF and crop size sensors similarly, and the results will be quite comparable. This is why P&S cameras can produce such great pictures in the middle of the day, at ISO 100 they are very sharp, with extreme DOF. A larger size sensor has no benefit here other than producing shallower DOF (which may even be undesirable).
I wish this were completely true, but I have P&S photos with blown highlights that say otherwise. ;-) Seriously, look at DxO, and it appears to me that larger sensors have more dynamic range. Here again, I think it's the advantage of larger photosites/pixels -- you can capture more "water" with the same exposure. ;-) The advantage is not just less noise at high ISO, but a bigger bucket to work with gives finer control.
LOL, after all the heartburn someone finally acknowledging my esoteric rainfall analogy....
This is why the last thread descended into chaos - everyone saying 'container size' has no effect on the depth of water - 'everything collects one inch' .... but then contradicting their analogy by saying bigger 'buckets' hold more water ... or that smaller ones overflow .... while believing 'one inch' is collected equally in each.

Can't have it both ways - either everything gets the same exposure by AREA (an inch of water - no matter the size of receptacle) or it doesn't.

Everything gets equal exposure (rain/light/snow/duck-sh*t hits).

Water certainly does not act as light, nor in analogy ....

Which is why I 'retitled' the last thread ... used as the title here, in continuity - "Water is not light".

It isn't ... and should not be used as an analogy .... and you lot should stop wasting your time holding contradictory views based upon it. Or accept that equal AREA gets equal exposure, and that processor efficiency is what you are all trying to reconcile in 'noisier' images .... as in RGBaker's post above :-D
Container sizes (measured as length x width with vertical sides) have no effect on the depth of water (if rain is a constant), and bigger containers do hold more water, so yes, we can have it both ways. As long as neither container overflows, all bigger containers have is an increased volume of water at the end of a set period of time.

I'm sorry if you don't see how that analogy has anything to do with the light collecting ability of a larger frame sensor of 16mp compared to a smaller frame sensor of 16mp using the same exposure but to me it's as clear as day.
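To put rough numbers on that (all values illustrative; the pan sizes simply mirror typical FF and APS-C dimensions), here's a minimal Python sketch of depth vs. volume:

```python
# Minimal sketch, assumed/illustrative values: the same 1" of "rain" (exposure)
# falls on two straight-sided pans sized like FF and APS-C sensors.
PANS = {"FF pan": (36.0, 24.0), "APS-C pan": (23.6, 15.7)}  # mm x mm
RAIN_DEPTH = 1.0  # inches of rain -- identical everywhere

for name, (w, h) in PANS.items():
    area = w * h                 # surface area in mm^2
    volume = RAIN_DEPTH * area   # total water collected scales with area
    print(f"{name}: depth = {RAIN_DEPTH} in, area = {area:.0f} mm^2, relative volume = {volume:.0f}")

# Depth (exposure per unit area) is identical; only the total volume
# (total light collected) differs, in proportion to surface area.
```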
 
OK. Does a five acre field of corn need less inches of rainfall per month to keep watered than a one acre field of corn does?
 
SQLGuy wrote:

OK. Does a five acre field of corn need less inches of rainfall per month to keep watered than a one acre field of corn does?
 
RGBaker wrote:
This is why the last thread descended into chaos - everyone saying 'container size' has no effect on the depth of water - 'everything collects one inch' .... but then contradicting their analogy by saying bigger 'buckets' hold more water ... or that smaller ones overflow .... while believing 'one inch' is collected equally in each.

Can't have it both ways - either everything gets the same exposure by AREA (an inch of water - no matter the size of receptacle) or it doesn't.

Everything gets equal exposure (rain/light/snow/duck-sh*t hits).

Water certainly does not act as light, nor in analogy ....

Which is why I 'retitled' the last thread ... used as the title here, in continuity - "Water is not light".

It isn't ... and should not be used as an analogy .... and you lot should stop wasting your time holding contradictory views based upon it. Or accept that equal AREA gets equal exposure, and that processor efficiency is what you are all trying to reconcile in 'noisier' images .... as in RGBaker's post above :-D
No, everything does collect one inch of water, regardless of the container size. Rainfall, like light, is measured as density. An inch of rain delivers a 'blanket' of rain an inch thick over everything it falls on. It is measured by its thickness, not its volume -- and so 1" of rain means the same to every point it falls on, regardless of size. Light is measured the same way, which is why my post above holds. Both light and rainfall are measured/described as the density of the fall.

In the light comparison, the sensor is responding not to a 'volume' of light but to a density of light.

HTH,
GB
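If it helps to see the density point with numbers, the sketch below (pixel pitches and the photon density are assumed, purely illustrative values) applies the same light density to two pixel sizes:

```python
import math

# Same light *density* (photons per mm^2) falling on two pixel sizes.
# All numbers are assumed and purely illustrative.
PHOTON_DENSITY = 1.0e9  # photons per mm^2 for this exposure

for label, pitch_um in (("~6.0 um pixel (FF 24MP-ish)", 6.0),
                        ("~3.9 um pixel (APS-C 24MP-ish)", 3.9)):
    area_mm2 = (pitch_um * 1e-3) ** 2        # pixel area in mm^2
    photons = PHOTON_DENSITY * area_mm2      # total photons caught by the pixel
    snr = photons / math.sqrt(photons)       # photon shot-noise SNR = sqrt(N)
    print(f"{label}: {photons:,.0f} photons, shot-noise SNR ~ {snr:.0f}")

# The density (exposure) is the same for both; the per-pixel photon count,
# and hence the per-pixel shot-noise SNR, differs with pixel area.
```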
GB - stop with the rainfall analogy - I've told you that you do not read my posts correctly. You confuse my comments on rainfall density - equal exposure ... with Euclidean-based measure.

Read it again. Tom, and others, agree with you/me that rainfall is 'equal' exposure ... the 'everything collects one inch of water' argument ... but they then cannot follow that analogy through.

They take a wholly contrary view and claim sensors have bigger/smaller buckets that 'overflow' ... despite believing analogous examples in which they get equal exposure/density.

Tackle their misconceptions - not the ones you seemingly believe I have.

Despite your confusion with density/measure - I know everything gets equal exposure (an 'inch' per area if you wish) and that differing noise/IQ issues are down to less effective photosites/processors .... not APS 'smaller buckets' that overflow or contain smaller 'exposure' amounts.

Explain it to them - not me. Thank You. :-D
 
Tom2572 wrote:
Photozopia wrote:

... and should not be used as an analogy .... and you lot should stop wasting your time holding contradictory views based upon it. Or accept that equal AREA gets equal exposure, and that processor efficiency is what you are all trying to reconcile in 'noisier' images .... as in RGBaker's post above :-D
Container sizes (measured as length x width with vertical sides) have no effect on the depth of water (if rain is a constant), and bigger containers do hold more water, so yes, we can have it both ways. As long as neither container overflows, all bigger containers have is an increased volume of water at the end of a set period of time.

I'm sorry if you don't see how that analogy has anything to do with the light collecting ability of a larger frame sensor of 16mp compared to a smaller frame sensor of 16mp using the same exposure but to me it's as clear as day.
You really do not get the concept of overall exposure by ratio of SURFACE area do you?
 
I think both of you guys are right, and just talking past each other.
 

Photozopia wrote:
RGBaker wrote:
This is why the last thread descended into chaos - everyone saying 'container size' has no effect on the depth of water - 'everything collects one inch' .... but then contradicting their analogy by saying bigger 'buckets' hold more water ... or that smaller ones overflow .... while believing 'one inch' is collected equally in each.

Can't have it both ways - either everything gets the same exposure by AREA (an inch of water - no matter the size of receptacle) or it doesn't.

Everything gets equal exposure (rain/light/snow/duck-sh*t hits).

Water certainly does not act as light, nor in analogy ....

Which is why I 'retitled' the last thread ... used as the title here, in continuity - "Water is not light".

It isn't ... and should not be used as an analogy .... and you lot should stop wasting your time holding contradictory views based upon it. Or accept that equal AREA gets equal exposure, and that processor efficiency is what you are all trying to reconcile in 'noisier' images .... as in RGBaker's post above :-D
No, everything does collect one inch of water, regardless of the container size. Rainfall, like light, is measured as density. An inch of rain delivers a 'blanket' of rain an inch thick over everything it falls on. It is measured by its thickness, not its volume -- and so 1" of rain means the same to every point it falls on, regardless of size. Light is measured the same way, which is why my post above holds. Both light and rainfall are measured/described as the density of the fall.

In the light comparison, the sensor is responding not to a 'volume' of light but to a density of light.

HTH,
GB
GB - stop with the rainfall analogy - I've told you that you do not read my posts correctly. You confuse my comments on rainfall density - equal exposure ... with Euclidean-based measure.

Read it again. Tom, and others, agree with you/me that rainfall is 'equal' exposure ... the 'everything collects one inch of water' argument ... but they then cannot follow that analogy through.

They take a wholly contrary view and claim sensors have bigger/smaller buckets that 'overflow' ... despite believing analogous examples in which they get equal exposure/density.

Tackle their misconceptions - not the ones you seemingly believe I have.

Despite your confusion with density/measure - I know everything gets equal exposure (an 'inch' per area if you wish) and that differing noise/IQ issues are down to less effective photosites/processors .... not APS 'smaller buckets' that overflow or contain smaller 'exposure' amounts.

Explain it to them - not me. Thank You. :-D
So does this mean we agree that two different-sized containers will both be one inch deeper in water if left outside during a rainfall of 1", regardless of the container size? (Assuming the containers are straight-sided.) I'm quite confident I have no confusion on this point, and struggle to understand how I'm reading you incorrectly when you say:
Photozopia wrote:

Tom - forgot to address an essential point in your 'water' argument.

You said - "if it rains water into a 24x36mm pan and a 15.7x23.6mm pan, at the end of the rain there is an inch of water in each pan"

Sadly incorrect - there would be more than an inch (in meteorological measure terms) in the smaller container .... as it's smaller, water rises higher for the same rainfall density. An inch of rainfall generates the same volume of water anywhere - but using different pans alters the measure (only).

Proving, as I said ... that by AREA the smaller photosite receives no less input than a larger one. Rain - light - equal density.
How am I misreading you there? You seem clear and even specific, insisting the rainfall generates more of a rise in the small pan than the large ....


If you are no longer cleaving to the position above, but are now agreeing with me that two different size pans will both be one inch deeper after a one inch rainfall -- we may well be close to agreement! Well, on this point at least ...





Cheers,
GB
 
Photozopia wrote:

Despite your confusion with density/measure - I know everything gets equal exposure (an 'inch' per area if you wish) and that differing noise/IQ issues are down to less effective photosites/processors .... not APS 'smaller buckets' that overflow or contain smaller 'exposure' amounts.

Explain it to them - not me. Thank You. :-D
I outlined the small sensor vs large sensor, small photosite vs large photosite comparison in detail earlier in this thread: (Note, as a further point to the argument below, that the upcoming 7n is rumored to have half a stop 'better' performance even though it is still a 24MP chip. Agreed, this is still a rumor, but at a minimum it might be true -- and this is different from film, where increased ISO at the same grain size was not possible. At this stage of sensor development it is still possible to improve the technology ... I expect someday that will reach an end, as it did for film.)

Part of the confusion is that this is one area where film and digital are different:
Historically, film ISO was determined by one thing -- the size of the crystals, which correlated to the graininess of an image. Everyone used the same technology, so a 'faster' film always meant a coarser grain. But as this grain size was fixed, using a given film in a larger-format camera resulted in correspondingly finer grain relative to image size; i.e. a 400 ASA film used in a 35mm camera delivered a grainy image when enlarged to 8x10, while the same 400 ASA film used in a 2 1/4 x 2 3/4 camera delivered a noticeably less grainy 8x10.

Today, we are still in the developing stages of sensor chip technology. Some manufacturers may use modestly different methods to coax more ISO out of their sensors, rather than simply using larger sensor sites as they would have in the days of film crystals. Doubtless there will come a day when the technology has matured and every camera will offer something near identical in individual sensor site performance .... When that day comes, the relationship between low light performance, 'resolution' as measured against 'graininess', and image quality will reach the simple equation of days past:
Resolution will be directly measured against the number of pixels (photo sites) per area, assuming a lens capable of delivering said resolution; low light performance will be inversely related to the number of pixels per area -- a high density sensor like the NEX7 delivers high resolution at the expense of some low light performance. Therefore a sensor with the same TOTAL number of photo sites in a larger area (i.e. FF) would deliver 'better' resolution measured against a final same-sized print and better low light performance. A sensor that kept the same density of photo sites over the larger area would deliver increased resolution but only matched low light capability; that is, relative to the larger-pixel option, the increased resolution would come at the expense of low light as the 'grain' became finer.

So we are left with some eternal truths and some shifting ones -- more pixels equals increased resolution, assuming the lens can match the pixels, regardless of sensor size, when an image is printed to the same dimensions. This is film-like, where an 8x10 view camera delivered stunning resolution in a Playboy centrefold against the best possible result from a 35mm camera. Larger pixels deliver better low light performance -- this too is film-like, where coarse grain films outperform fine grain films in low light situations. Where things have changed is in the inability to make matching baselines between devices -- two cameras with the same sensor size but a different number of pixels will forever have different resolution capabilities, and different low light performance. Two cameras with different sensor sizes but the same density of pixels may have either the same resolution, or the same low light performance ... but not both. And you can't swap sensors the way you swapped film to change the camera's abilities based on the circumstance.

A FF NEX with 24MP will deliver better resolution than the APS-C NEX, and it will also have better low light performance, as each pixel will be correspondingly larger. The VG900 exists today offering exactly that against the NEX7. A (hypothetical) FF NEX with 36MP would deliver even better resolution but the same low light performance, as each individual pixel (photo site) would be the same size as on an APS-C 24MP sensor.

This of course assumes no 'break throughs' in sensor design, and at this stage that is still a big assumption. Today's sensors are remarkably 'better' than those of ten years ago -- a roll of Kodachrome in 1985 was pretty much the same as a roll in 1975, and the equivalent roll of Fuji film was pretty much the same as any roll of Kodak. Sensor technology will likely continue to advance, so direct model to model comparisons will still be valid, but in general the notion of photo site density being directly related to low light performance will hold.
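For reference, a quick pitch calculation for the three cases discussed above (sensor dimensions are approximate and this is a rough sketch, not manufacturer data):

```python
# Approximate pixel pitch for the sensor/pixel-count combinations discussed above.
SENSORS = {
    "APS-C 24MP (NEX-7 class)": (23.5, 15.6, 24e6),
    "FF 24MP (VG900 class)":    (35.9, 24.0, 24e6),
    "FF 36MP (hypothetical)":   (35.9, 24.0, 36e6),
}

for name, (w_mm, h_mm, pixels) in SENSORS.items():
    pitch_um = ((w_mm * h_mm) / pixels) ** 0.5 * 1000  # assume square pixels
    print(f"{name}: ~{pitch_um:.1f} um pitch")

# With these assumed dimensions the pitches come out to roughly 3.9, 6.0 and
# 4.9 um respectively -- i.e. the hypothetical FF 36MP pixel sits between
# the APS-C 24MP and FF 24MP pixel sizes.
```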
 
GB - it's not whether we agree on measure v exposure ... merely that we agree on area receiving equivalent exposure per unit of surface area .... which we do.

But I am trying to comprehend why Tom2572 thinks rain falls in equal quantity on surfaces - but - using his own analogy, believes that for any equivalent exposure more light falls on the 'smaller' photosite - thus the smaller photosite 'overflows' .... which it can't, as exposure per area is equal.

As I said - I wish you'd explain to him that if he believes his 'buckets' all equally get one inch of rain .... how does a smaller bucket overflow, when it gets the same value of light - i.e. no more than it can contain?

(I'm not even gonna point out to him that photon sensor processing in no way equates to water collection :-D - hence 'water is not light' ....)
 
Tom2572 wrote:
GaryW wrote:
blue_skies wrote:

Now, not to confuse anything, but until you hit the max-ISO, you can use FF and crop size sensors similarly, and the results will be quite comparable. This is why P&S cameras can produce such great pictures in the middle of the day, at ISO 100 they are very sharp, with extreme DOF. A larger size sensor has no benefit here other than producing shallower DOF (which may even be undesirable).
I wish this were completely true, but I have P&S photos with blown highlights that say otherwise. ;-)
Understood, the smaller pixels are known to oversaturate (blown highlights).
Seriously, look at DxO, and it appears to me that larger sensors have more dynamic range. Here again, I think it's the advantage of larger photosites/pixels -- you can capture more "water" with the same exposure. ;-) The advantage is not just less noise at high ISO, but a bigger bucket to work with gives finer control.
Yes, although the smaller sensor has just a shorter bucket that overflows. You don't capture 'more' in the larger sensor/bucket, you merely don't overflow your bucket.

In PP, you can get all the details from a partially filled bucket (push shadows).
LOL, after all the heartburn someone finally acknowledging my esoteric rainfall analogy....
Water is useful for dynamic analysis analogies. But we seem to get confused about rainfall density (inch per hour) and container volume (cubic inch).

Total rainfall is rainfall density * time (linear inch)

Container volume is total rainfall (height in inches) * container surface area (area in square inches) = volume (cubic inch).

All container sizes contain the same height after the rain. Their height is the same, but their volume is not.

Pixels are sensitive to the height in this analogy, not the volume.

The moment you consider volume, you have to take the area into account. In the end, you are only comparing the height value in comparisons.

I repeat this one more time: bigger pixels collect more volume, but the height of the water collected is always the same, regardless of pixel size.

I don't think that this is English anymore, but I am sure that you get the idea :)




 
RGBaker wrote:

A FF NEX with 24MP will deliver better resolution than the APS-C NEX, and it will also have better low light performance, as each pixel will be correspondingly larger. The VG900 exists today offering exactly that against the NEX7. A (hypothetical) FF NEX with 36MP would deliver even better resolution but the same low light performance, as each individual pixel (photo site) would be the same size as on an APS-C 24MP sensor.
This last point is only partly correct. The 36MP sensor will have more noise per pixel, but since it has more resolution, this noise is spread out over a larger spatial frequency range.

So, if I choose, I can always low-pass filter the 36MP image down to 24MP and get the better low light performance if I desire (and this will actually give better performance than a native 24MP because native 24MP is essentially an inefficient type of low pass filter whereas the 36MP image could get the benefit of a much better low pass filter.)

Bart
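A toy simulation of that point (assumed numbers; simple 2x2 binning stands in here for a proper low-pass/resample):

```python
import numpy as np

# Toy illustration (assumed values): averaging neighbouring pixels trades
# resolution for a lower-noise output pixel.
rng = np.random.default_rng(0)

mean_photons = 100.0                                                  # per small pixel
frame = rng.poisson(mean_photons, size=(2000, 2000)).astype(float)   # noisy "high-MP" frame

def snr(img):
    return img.mean() / img.std()

print("native per-pixel SNR:  ", round(snr(frame), 1))    # ~10 (sqrt of 100)

# Crude low-pass: 2x2 binning. (A real 36MP -> 24MP resample would use a
# better filter, which is the point above about filter efficiency.)
binned = frame.reshape(1000, 2, 1000, 2).mean(axis=(1, 3))
print("after 2x2 binning SNR: ", round(snr(binned), 1))   # ~2x higher
```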
 
blue_skies wrote:
Seriously, look at DxO, and it appears to me that larger sensors have more dynamic range. Here again, I think it's the advantage of larger photosites/pixels -- you can capture more "water" with the same exposure. ;-) The advantage is not just less noise at high ISO, but a bigger bucket to work with gives finer control.
Yes, although the smaller sensor has just a shorter bucket that overflows. You don't capture 'more' in the larger sensor/bucket, you merely don't overflow your bucket.
Yes, but in practice systems tend to offer the same F-number lenses. So the larger sensor does capture more and will overflow at the same amount of brightness. In this case, the larger sensor lets you go deeper into the shadows.

Bart
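Putting the 'same F-number' point into numbers (the photon density is an assumed, illustrative value):

```python
# Same f-number and shutter -> same light per unit area on the sensor,
# so the total light collected scales with sensor area (illustrative values).
PHOTONS_PER_MM2 = 1.0e6  # fixed by scene brightness, f-number and shutter (assumed)

areas = {"FF": 36.0 * 24.0, "APS-C": 23.6 * 15.7}
totals = {name: PHOTONS_PER_MM2 * a for name, a in areas.items()}

for name, total in totals.items():
    print(f"{name}: {total:.2e} photons total")

ratio = totals["FF"] / totals["APS-C"]
print(f"FF collects ~{ratio:.1f}x the total light (same ratio as the areas), "
      "hence the extra latitude in the shadows.")
```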
 
Bart Hickman wrote:
RGBaker wrote:

A FF NEX with 24MP will deliver better resolution than the APS-C NEX, and it will also have better low light performance, as each pixel will be correspondingly larger. The VG900 exists today offering exactly that against the NEX7. A (hypothetical) FF NEX with 36MP would deliver even better resolution but the same low light performance, as each individual pixel (photo site) would be the same size as on an APS-C 24MP sensor.
This last point is only partly correct. The 36MP sensor will have more noise per pixel, but since it has more resolution, this noise is spread out over a larger spatial frequency range.

So, if I choose, I can always low-pass filter the 36MP image down to 24MP and get the better low light performance if I desire (and this will actually give better performance than a native 24MP because native 24MP is essentially an inefficient type of low pass filter whereas the 36MP image could get the benefit of a much better low pass filter.)

Bart
 
Photozopia wrote:
Tom2572 wrote:
Photozopia wrote:

... and should not be used as analogy .... and you lot should stop wasting your time holding contradictory views based upon it. Or accept that equal AREA gets equal exposure and processor efficiency is what you are all trying to reconcile in 'noisier' images .... as in RGBaker post above :-D
Container sizes (measured as length x width with vertical sides) have no effect on the depth of water (if rain is a constant), and bigger containers do hold more water, so yes, we can have it both ways. As long as neither container overflows, all bigger containers have is an increased volume of water at the end of a set period of time.

I'm sorry if you don't see how that analogy has anything to do with the light collecting ability of a larger frame sensor of 16mp compared to a smaller frame sensor of 16mp using the same exposure but to me it's as clear as day.
You really do not get the concept of overall exposure by ratio of SURFACE area do you?
I do, actually. The volume of water doesn't really have anything to do with it, except to show the difference in surface area. What is important, as you noted, is the "ratio", and that is the ratio of pixels to surface area, or corn stalks to surface area, or rain to surface area, or whatever.

When you spread out 16 million pixels on a sensor with a larger surface area, the pixels themselves get to be larger than if you spread out 16 million pixels on a smaller surface area.
 
Photozopia wrote:

GB - it's not whether we agree on measure v exposure ... merely that we agree on area receiving equivalent exposure per unit of surface area .... which we do.

But I am trying to comprehend why Tom2572 thinks rain falls in equal quantity on surfaces - but - using his own analogy, believes that for any equivalent exposure more light falls on the 'smaller' photosite - thus the smaller photosite 'overflows' .... which it can't, as exposure per area is equal.

As I said - I wish you'd explain to him that if he believes his 'buckets' all equally get one inch of rain .... how does a smaller bucket overflow, when it gets the same value of light - i.e. no more than it can contain?

(I'm not even gonna point out to him that photon sensor processing in no way equates to water collection :-D - hence 'water is not light' ....)
I have not once, or ever, made any reference to overflowing photo sites. Where on earth are you getting this from?

I can only respond to the things I actually say, not statements you are making up and attributing to me......

Here is a decent article that will hopefully explain to you what I am not able to, it seems:

 
RGBaker wrote:
Bart Hickman wrote:
RGBaker wrote:

A FF NEX with 24MP will deliver better resolution than the APS-C NEX, and it will also have better low light performance, as each pixel will be correspondingly larger. The VG900 exists today offering exactly that against the NEX7. A (hypothetical) FF NEX with 36MP would deliver even better resolution but the same low light performance, as each individual pixel (photo site) would be the same size as on an APS-C 24MP sensor.
This last point is only partly correct. The 36MP sensor will have more noise per pixel, but since it has more resolution, this noise is spread out over a larger spatial frequency range.

So, if I choose, I can always low-pass filter the 36MP image down to 24MP and get the better low light performance if I desire (and this will actually give better performance than a native 24MP because native 24MP is essentially an inefficient type of low pass filter whereas the 36MP image could get the benefit of a much better low pass filter.)

Bart
 
It suddenly occurred to me that the reason everyone is so confused about this topic is that they haven't comprehended that there are two parts to this light/image discussion. One: the total exposure, or the light per unit area, that falls on the sensor -- and -- Two: how much of that light gets converted to electricity and then digitized. The BIG mistake is to try to make this a perfect equation in which total exposure = total digital signal. That is where the water and container analogy falls down. The truth is that as the individual receptors get smaller they become less responsive (for a number of reasons) and do not produce as much signal per photon compared to larger receptors. Again, this has nothing to do with the overall size of the chip. The only correlation is that smaller chips tend to have more pixels squeezed onto them. The total area of the chip is meaningless when talking about noise; the limiting factor is pixels per area, also known as "pitch."

If we go back to where this discussion started, can we make one fact clear: the f-stop is a standard unit, based on the physics of light and lenses, that always holds true, and as far as basic exposure goes, an f-stop on one camera/lens means the same thing as on another. If you doubt this, you need to go study a bit more about basic photography.
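For what it's worth, the standard (simplified) exposure relation makes that last point explicit -- sensor-plane exposure depends on scene luminance, f-number and shutter time, not on sensor size. A rough sketch, ignoring lens transmission, vignetting and focus distance:

```python
import math

def sensor_exposure(scene_luminance_cd_m2, f_number, shutter_s):
    """Approximate sensor-plane exposure in lux-seconds (simplified thin-lens model)."""
    illuminance = math.pi * scene_luminance_cd_m2 / (4 * f_number ** 2)  # lux
    return illuminance * shutter_s

# The same settings give the same exposure on any body, whatever its sensor size.
for body in ("FF body", "APS-C body", "P&S"):
    e = sensor_exposure(scene_luminance_cd_m2=1000, f_number=5.6, shutter_s=1 / 125)
    print(f"{body}: {e:.3f} lux-seconds at f/5.6, 1/125s")
```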
 
