Equivalence and dynamic range

Iuvenis

Equivalence is a subject that has often been given an airing on DPR. As a result, it is now widely (but not universally!) realised that an m43 camera will yield similar results - in terms of field of view, depth of field, noise and several other parameters - to a full frame camera stopped down two stops and fitted with a lens of twice the focal length.

Equivalence is useful for comparing what different camera and lens combinations can offer a photographer. For example, a full frame camera with an f2.8 lens and an m43 camera with an f1.4 lens, both using similar sensor technology, can offer similar low light performance, a similarly shallow depth of field, etc.

When reading previous articles about equivalence, I noticed that dynamic range was not mentioned. I assumed this was one of those parameters, like resolution, that was not related to equivalence. I therefore assumed that full frame cameras offered more dynamic range.

However, on further examination, I realised that this was the old way of thinking, just like saying that full frame cameras are better in low light.

When a full frame camera and an m43 camera are set up with equivalent settings, the ISO on the full frame camera will be set two stops higher to produce the same output brightness. As a result, the full frame camera will offer less dynamic range than it would at the same (non-equivalent) aperture and ISO settings.
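
To make that scaling concrete, here is a minimal sketch in Python, assuming a crop factor of 2.0 for m43 and starting from the f1.4 example above:

crop = 2.0                                  # m43 vs full frame (assumed)
m43_focal_mm, m43_f_number, m43_iso = 25, 1.4, 200

ff_focal_mm = m43_focal_mm * crop           # 50 mm: same field of view
ff_f_number = m43_f_number * crop           # f/2.8: same aperture diameter, same depth of field
ff_iso = m43_iso * crop**2                  # ISO 800: two stops higher for the same output brightness

print(ff_focal_mm, ff_f_number, ff_iso)     # 50.0 2.8 800.0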

Looking at the Photons to Photos site, it is clear that this reduction in dynamic range means that the full frame camera will usually offer about the same dynamic range at equivalent settings as smaller sensor cameras (though in practice it will also offer settings for which there is no equivalent). For example, see this graph of the GX80, A6500 and a7rii: http://www.photonstophotos.net/Charts/PDR.htm

At ISO 200, the GX80's PDR is 8.96, while the A6500's at ISO 400 is 8.84, and the a7rii's at ISO 800 is 8.38.

Now, I appreciate that dynamic range does vary based on sensor technology, not just sensor size and ISO setting, but so does the effect of noise.

As a rule of thumb, I therefore consider that at equivalent settings, sensors of different size but similar technology do end up with similar DR.

Do you agree? Or is this too sensor-dependent to be a viable rule of thumb?
 
Equivalence theory is based on and is limited to geometry; namely, similar triangles.

While the effects of diffraction and photon shot noise are implied by the geometry, other things certainly are not included in the theory, including read noise and lens aberrations. Nor is information theory included in equivalence, so megapixels play no role.

--
http://therefractedlight.blogspot.com
 
Normalized dynamic range such as Photographic Dynamic Range (PDR) at PhotonsToPhotos will always be better for the sensor with the larger area if they are the same technology. (The same applies to the DxOMark Landscape score.)

The correlation is pretty clear if you look at PDR versus Sensor Area.

[Scatter chart: PDR versus sensor area]

Of course there is a lot of overlap and you can definitely find smaller sensors that outperform larger ones.
But the best in class will always favor the larger area.

Separately, I'm not a big fan of equivalence as it is often misapplied.

--
Bill ( Your trusted source for independent sensor data at PhotonsToPhotos )
 
As a rule of thumb, I therefore consider that at equivalent settings, sensors of different size but similar technology do end up with similar DR.

Do you agree? Or is this too sensor-dependent to be a viable rule of thumb?
DR is the number of stops from the noise floor to the saturation point over a specified area. So, let's say we have a FF sensor and an mFT sensor that have identical pixels. By "identical pixels" I mean they have the same:
  • Size
  • QE (Quantum Efficiency -- record the same proportion of light falling on them)
  • Electronic Noise (noise added by the sensor and supporting hardware)
  • Saturation Limit (can record the same amount of light before oversaturating -- becoming "blown")
In this case, the DR/pixel for the same exposure will obviously (well, I hope it's obvious ;-) ) be the same.

However, photographically, it is far more meaningful to compare the DR over the same proportion of the photo, e.g., the "aggregate DR" of four FF pixels vs one mFT pixel. Due to the fact that noise adds in quadrature, four FF pixels will have twice the electronic noise of one mFT pixel (as opposed to 4x the electronic noise). However, they'll record 4x as much light. This will mean FF will have one stop more DR for the same exposure, using the electronic noise as the noise floor.

But what about Equivalent photos? In this case, the four FF pixels will record the same amount of light as one mFT pixel, but will have twice the electronic noise, and thus have one stop *less* DR (again, using the electronic noise as the noise floor). This DR deficit will be lessened if we raise the noise floor.
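
To put rough numbers on that, here is a small sketch with hypothetical per-pixel values; it also assumes the two-stop ISO increase on FF lowers the per-pixel clipping point accordingly:

from math import log2, sqrt

sat_e, read_e = 40000.0, 4.0             # assumed per-pixel saturation and electronic noise

dr_pixel = log2(sat_e / read_e)          # per-pixel DR, identical for both sensors

ff_patch_noise = sqrt(4) * read_e        # four FF pixels: noise adds in quadrature, so 2x, not 4x

# Same exposure: the four-pixel FF patch can hold 4x the light before clipping.
dr_ff_same_exposure = log2(4 * sat_e / ff_patch_noise)

# Equivalent photos: the FF ISO is two stops higher, so each FF pixel clips at sat_e / 4
# and the patch records the same light as the single mFT pixel.
dr_ff_equivalent = log2(4 * (sat_e / 4) / ff_patch_noise)

print(round(dr_pixel, 2), round(dr_ff_same_exposure, 2), round(dr_ff_equivalent, 2))
# 13.29 14.29 12.29 -> one stop more at the same exposure, one stop less for Equivalent photos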

Of course, FF sensors aren't made with the same pixels as mFT sensors and the electronic noise can vary tremendously as a function of the ISO setting. So, in reality, the actual differences in DR can get kind of messy due to both differences in pixel count and differences in the electronic noise -- more so if we use the electronic noise as the noise floor.

It's important to understand, however, that the same DR does not mean the same noise characteristics for the photo. Sensors with greater DR will produce photos with less noisy shadows, but the rest of the photo may actually be more noisy than the photo with less DR. In the end, DR is only important if you are pushing the shadows a lot. If the shadows are not being pushed, then DR is not a very important metric.

However, it's worth mentioning that for each stop you raise the ISO setting, you are putting "one stop more of the photo" in shadow, so the shadows become increasingly important as the ISO setting is raised. That's not to say that the shadows might not be very important at base ISO; rather, it is to say that unless you're pushing the shadows at base ISO, base ISO DR isn't all that important, but as you raise the ISO setting, DR becomes increasingly more important.
 
As a rule of thumb, I therefore consider that at equivalent settings, sensors of different size but similar technology do end up with similar DR.

Do you agree? Or is this too sensor-dependent to be a viable rule of thumb?
If one works from a simple scaling model, that a big sensor is simply a small sensor with all the details enlarged proportionally, one gets the interesting result that DR is invariant over sensor size. The reason is that saturation capacity scales with area while conversion gain scales inversely with area, so as the saturation goes down as the sensor gets smaller, so does the read noise (in electrons).
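
Here is a minimal sketch of that scaling argument, with made-up numbers, assuming the floating-diffusion capacitance scales with pixel area so that conversion gain scales inversely with area while the read noise voltage stays fixed:

from math import log2

def scaled_dr(k, sat_e=50000.0, read_noise_uV=200.0, conv_gain_uV_per_e=20.0):
    sat = sat_e * k**2                   # full well scales with pixel area
    gain = conv_gain_uV_per_e / k**2     # conversion gain scales inversely with area
    read_e = read_noise_uV / gain        # fixed noise voltage, referred back to electrons
    return log2(sat / read_e)

print(round(scaled_dr(1.0), 2), round(scaled_dr(0.5), 2))   # identical DR at both scales: 12.29 12.29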

Practice is, however, somewhat different. In reality, different sensor sizes are not produced by linear scaling. What seems observationally to be the case is that the sensor manufacturers have libraries of layouts for the key components, and assemble them into the required sensor. So, for instance, a 6 μm and a 4 μm pixel might have the same readout components, giving them the same conversion gain and read noise. However, a 1 μm pixel is unlikely to use the same readout as a 4 μm pixel (unless made by Canon).

The outcome of the above is that larger sensors are likely to have higher absolute DR than smaller sensors with the same pixel count and that high pixel count sensors will tend to have higher absolute DR than low pixel count sensors of the same size, as pointed out by GB.
 
Thanks for replying, not least because it was playing with your graphs that led to my initial post!

I appreciate the correlation between DR and sensor size, but that is of course a correlation between maximum DR and sensor size. At the same settings, a full frame camera will normally have more DR than an m43 camera, for example.

However, DR also decreases as ISO increases, so if you stop down a camera with a full frame sensor to benefit from the same depth of field as a camera with an m43 sensor, and increase the ISO by two stops, you will end up losing your DR advantage.
 
I can see that read noise and megapixel counts introduce a complication, but I am not sure that it is as significant an issue in the real world. For example, if you use the 12 megapixel A7s rather than the 42 megapixel A7rii as the full frame example on the dynamic range chart I linked to in my original post, the effect will not be much different.

We are used to the idea that cameras of the same sensor size will have similar low light performance, even though they have different pixel counts. I don't see much empirical evidence to support the notion that increased pixel counts reduce dynamic range, and plenty to counter it (e.g. the superb dynamic range of the a7riii and D850). On the other hand, lower pixel counts do not seem to increase dynamic range at normal settings.

For that reason, is it not a viable rule of thumb to say that dynamic range is equivalent between sensor sizes at equivalent settings?

Put it another way. The larger sensor is receiving 4 times as much light as the smaller sensor at the same settings, and the same amount of light at equivalent settings. The light falling on the part of the sensor containing the shadow areas will also be the same in each case, so the dynamic range will be similar in each case.
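
As a quick sketch of that light argument (illustrative numbers; the m43 sensor is treated as exactly a quarter of the full frame area):

crop = 2.0                                     # assumed m43 crop factor
ff_area = 24.0 * 36.0                          # 864 mm^2
m43_area = ff_area / crop**2                   # treated as exactly a quarter of FF for this sketch

exposure_m43 = 1.0                             # relative exposure (light per unit area) on m43
exposure_ff_same = 1.0                         # same f-number and shutter speed
exposure_ff_equiv = 1.0 / crop**2              # f-number doubled, so 1/4 the exposure

print(exposure_ff_same * ff_area / (exposure_m43 * m43_area))    # 4.0 - four times the total light
print(exposure_ff_equiv * ff_area / (exposure_m43 * m43_area))   # 1.0 - the same total light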
 
For that reason, is it not a viable rule of thumb to say that dynamic range is equivalent between sensor sizes at equivalent settings?
How do you define dynamic range? Are you concerned only with shadow noise, or is highlight clipping a consideration?

Do you require that ISO is set for appropriate out-of-camera brightness, or is it simply a convenient means to manage read noise and saturation? I see no reason to constrain ISO when defining "equivalent settings".

My understanding is that "equivalent" exposures have the same:
  • Exposure time
  • Physical aperture
  • Field of view and perspective
As a result, they capture the same total light (total number of photons), and the same light in a defined fraction of the image field of view. ISO has no direct influence.

From this point of view, an f/4 exposure at 1/1000 s at ISO 1600 is equivalent to an f/4 exposure at 1/1000 s at ISO 400 in the same camera. With a near-isoless camera such as the Nikon D850, there is only 1/4 stop difference in read noise (1.4 e- compared with 1.7 e-), but shooting at ISO 400 gives an additional 2 stops of highlight headroom, and very nearly 2 stops of PDR (9.81 vs 7.85).
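
To illustrate with the read noise figures quoted above and an assumed 20 e- deep-shadow signal at the fixed exposure:

from math import sqrt, log2

signal_e = 20.0                                   # assumed deep-shadow signal at the fixed exposure
for iso, read_e in [(1600, 1.4), (400, 1.7)]:
    snr = signal_e / sqrt(signal_e + read_e**2)   # shot noise plus read noise, added in quadrature
    print(iso, round(snr, 2))                     # ISO 1600: 4.27, ISO 400: 4.18 - barely different

print(log2(1600 / 400))                           # 2.0 extra stops of highlight headroom at ISO 400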

More generally, cameras using similar technology tend to saturate with broadly similar numbers of photons per unit area (rather than similar photons per pixel). Available read noise values below 2-3 electrons per pixel mean that the read noise penalty for shooting at reduced ISO can be minimal. All but the deepest shadow noise is dominated by photon noise, rather than read noise.

With freedom to choose an appropriate ISO, large sensor cameras can offer proportionately higher dynamic range than equivalent exposures in small sensor cameras.

For wide dynamic range, pushing shadows and pulling highlights from an 8-bit JPEG is not the best way to go. If you do need out of camera JPEG images with acceptable brightness, choose a camera which offers a choice of tone curves which protect highlights and/or shadow details.

--
Alan Robinson
 
As a rule of thumb, I therefore consider that at equivalent settings, sensors of different size but similar technology do end up with similar DR.

Do you agree? Or is this too sensor-dependent to be a viable rule of thumb?
Noise equivalence is usually discussed in terms of photon noise, which follows equivalence very well given equal QE. Noise is not just about photons, though. Noise is also about pre-gain and post-gain read noise.

Pre-gain read noise is in equivalence when it is the same per unit of sensor area. This would be the case, for example, if the same pixels and readout quality existed and the larger sensor simply had a larger canvas of the same pixel quality. This is not directly tied to sensor size: many of the cameras with the edge in read noise per unit of sensor area today have the largest pixels, but some, like those in the Nikon D500, are not among the largest yet still have some of the lowest pre-gain read noise. A new glitch now exists for pre-gain read noise with cameras that increase conversion gain abruptly at a higher ISO (which the D500 does from ISO 400 to 500, IIRC); so you have to consider the pre-gain read noise at the ISO setting used, not just the absolute exposure.

Post-gain read noise has absolutely nothing to do with absolute signal; it is just there, like a watermark added to a frame's digitization, regardless of the gain which precedes it. The only thing that varies it relative to signal is the relative exposure, which might be affected by 1/3-stop pushed and pulled ISOs (and things like Canon's HTP, which alone doubles it). So, with cameras that implement 1/3-stop ISOs as pushes and have pretty much the same post-gain noise, and a 1.33-stop difference in ISO in equivalence, things can flip in favor of either camera depending on the exact ISO setting and its method of derivation. For APS-C 1.6x, we might have:

ISO 100 vs ISO 250; 1/3 stop in favor of FF.

ISO 125 vs ISO 320; 1/3 stop in favor of FF.

ISO 160 vs ISO 400; 2/3 stop in favor of APS-C.

In cameras where post-gain read noise is high, it is the main determinant of DR at the lowest ISOs, along with any clipping from ISO pushes and other under-the-hood pushes.
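
As a sketch of how those two components can combine, using my own simplified model and made-up numbers, and assuming the clipping point drops one stop per ISO doubling:

from math import sqrt, log2

pre_e = 1.5            # pre-gain read noise in electrons (made up)
post_dn = 3.0          # post-gain read noise in DN (made up)
full_well_e = 60000.0  # saturation at base ISO (made up)

for iso in (100, 400, 1600, 6400):
    gain_dn_per_e = 0.25 * iso / 100.0                      # assumed 0.25 DN/e- at ISO 100
    read_e = sqrt(pre_e**2 + (post_dn / gain_dn_per_e)**2)  # input-referred total read noise
    clip_e = full_well_e * 100.0 / iso                      # clipping drops one stop per ISO doubling
    print(iso, round(read_e, 2), round(log2(clip_e / read_e), 2))
# The post-gain term dominates at low ISO; the pre-gain term dominates at high ISO.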

Modern small sensors generally walk all over older FF ones in equivalence; for example, compare the Nikon D500 (APS-C 1.5) and the Canon 1Ds (FF). The D500 has much less noise at the same ISO with 44.4% of the sensor area, with less spatially-correlated read noise to boot, not even counting the fact that in equivalence the ISO would be 2.25x as high on the 1Ds.

When the f-number gets very low (2.5 or lower), the microlenses usually start losing light, as if the QE were actually lowered, and this is less pronounced for a camera with larger pixels (usually among the larger sensors of the time). Cameras usually push the original RAW data and lose DR to compensate. This loss could be gone with future designs, though.

On the low-f-number side of the lens "sweet spot", a larger sensor also tends to get more of the lens's resolution potential, which is lost in the diffraction-limited range above the sweet spot in equivalence. This can mean better SNR at the pixel frequency, despite no benefit at lower frequencies. That gain is kind of bogus, though, if the smaller sensor has more pixel resolution.

So, you can see that there are many factors that affect actual SNR, and consequently, DR, in equivalence, and the variations are more camera-dependent than they are sensor-size-variant, per se.

If you operate in the higher range of RAW tonal levels where read noise is negligible, and you have deep DOF, things get much simpler, as QE is the main factor in noise equivalence.
 
Normalized dynamic range such as Photographic Dynamic Range (PDR) at PhotonsToPhotos will always be better for the sensor with the larger area if they are the same technology. (The same applies to the DxOMark Landscape score.)

The correlation is pretty clear if you look at PDR versus Sensor Area.

Of course there is a lot of overlap and you can definitely find smaller sensors that outperform larger ones.
But the best in class will always favor the larger area.
... which usually has more pixels and/or cleaner post-gain readout channels (which are not affected by the size of the pixels and number of electrons from which the data is drawn); the typical real reasons for high base-ISO DR. ;)

Post-gain read noise has zero/nada/zilch benefit from greater sensor size. Only photon noise and image-level pre-gain read noise track well with sensor size, and post-gain read noise is the major limiter of DR at base ISO for most cameras, even in PDR which takes some photon noise into account, combined with read noise. At higher ISOs, the larger sensors start to do better, depending less on post-gain read noise, but this generic benefit is lost in equivalence.

The Nikon D810 is probably one of the few mavericks that are only slightly affected by post-gain noise for base-ISO DR. Many cameras, even though they seem like they have almost 2x the read noise at double base ISO, have the same level of chunkier, more banded noise components which are more visible than their isolated noise measurements would suggest. Other cameras now have less pre-gain read noise, though, so it is not the high ISO champ.
 
Normalized dynamic range such as Photographic Dynamic Range (PDR)
?? Normalized for what? How? Perhaps it's my ignorance, but I thought it was not normalized for anything, and I don't see anything on the page that indicates that it's normalized.
PDR is already image-normalized before the points appear on the X/Y scatter chart.

I would expect more correlation with pixel count, though, and I would expect that correlation to work even better if the dots were color-coded by pixel-level post-gain read noise, or even year of manufacture, so that correlation would appear tighter for similar colors. Only when post-gain read noise gets very, very low, compared to pre-gain read noise at base ISO, does sensor size tend to be a main causal factor with base-ISO DR. The D810 comes very close, but still is not completely there. As pixel density increases, pixel-level pre-gain read noise can increase more than pixel-level post-gain read noise so it's a moving target.
 
Thanks for replying, not least because it was playing with your graphs that led to my initial post!
What is PDR, though? It is not a proxy for noise through the tonal range; DxO-DR isn't, either. All DR tells you is that some standard of noise is met some number of stops below saturation; it tells you absolutely nothing about the SNR in the tonal ranges above that, especially the highest RAW tonal levels.

I fear that people use DR metrics as proxies for noise, but they only represent noise in a narrow band of tones, usually quite a bit below those used in a typical photograph, and the reach of the factors that lower the DR do not necessarily reach up well into the higher tones.
I appreciate the correlation between DR and sensor size, but that is of course a correlation between maximum DR and sensor size. At the same settings, a full frame camera will normally have more DR than an m43 camera, for example.
It usually also has more pixels, though. Don't rule that out as a major factor!
However, DR also decreases as ISO increases, so if you stop down a camera with a full frame sensor to benefit from the same depth of field as a camera with an m43 sensor, and increase the ISO by two stops, you will end up losing your DR advantage.
Generically, yes. As I said in another post, though, it's not just QE and AOV/DOF that can be in equivalence.
 
I can see that read noise and megapixel counts introduce a complication, but I am not sure that it is as significant an issue in the real world. For example, if you use the 12 megapixel A7s rather than the 42 megapixel A7rii as the full frame example on the dynamic range chart I linked to in my original post, the effect will not be much different.
If it isn't much different for normalized noise, keep in mind that noise exists in a spectrum, and the 42MP camera has more real SNR at higher levels of detail.
We are used to the idea that cameras of the same sensor size will have similar low light performance, even though they have different pixel counts. I don't see much empirical evidence to support the notion that increased pixel counts reduce dynamic range,
There is none, at low ISOs. At high ISOs, sometimes bigger pixels get less image-normalized pre-gain read noise (but more post-gain read noise, unfortunately). That is not a law of physics guaranteed into the future, though; the first sensor recently announced to have no practical read noise at all at room temperature has 1.1 μm pixels, equivalent to 750MP FF. The actual sensor is very small, but if you filled a FF sensor with this, the practical DR improvement would be astronomical, and "engineering DR" would react to this much more than something like PDR would. PDR kind of assumes that read noise and photon noise are different quantities of the same thing, which they truly are not when the signal level is less than a dozen photons, or when there is any spatially correlated noise in whatever read noise might be present (DxO completely ignores that correlated aspect as well, except to detect mild RAW noise filtering).
and plenty to counter it (e.g. the superb dynamic range of the a7riii and D850). On the other hand, lower pixel counts do not seem to increase dynamic range at normal settings.
They couldn't, unless the post-gain noise was tiny and the pre-gain read noise benefit was somehow huge as a result of using a lower pixel density, which, as I've already mentioned, has an expiration date on its relevance.
For that reason, is it not a viable rule of thumb to say that dynamic range is equivalent between sensor sizes at equivalent settings?
At base ISO, it has little or nothing to do with sensor or pixel size. Pixel count and readout technology are what usually make the most difference. At high ISOs, DR does correlate better with sensor size, as post-gain noise plays a smaller role.
Put it another way. The larger sensor is receiving 4 times as much light as the smaller sensor at the same settings, and the same amount of light at equivalent settings. The light falling on the part of the sensor containing the shadow areas will also be the same in each case, so the dynamic range will be similar in each case.
On average, yes, perhaps, but there are so many camera-specific wild cards, as I list in more detail in another post.
 
Equivalence theory is based on and is limited to geometry; namely, similar triangles.

While the effects of diffraction and photon shot noise are implied by the geometry, other things certainly are not included in the theory, including read noise and lens aberrations. Nor is information theory included in equivalence, so megapixels play no role.
 
If you prefer, read my post as limited to the effect of photon shot noise on dynamic range in sensors of different sizes set to equivalent settings. Ignoring read noise, will dynamic range also be equivalent?
Dynamic range is defined as the ratio between the largest signal that can be read by the sensor and the lowest signal that has tolerable noise.

That second factor is both arbitrary, in that some folks have more noise tolerance than others, and determined mainly by read noise, which differs according to sensor technology.

So in Equivalence discussions the caveat is always that the sensors have the same technology and the same number of megapixels, otherwise you aren't making a pure comparison using only geometry.

Equivalence theory only considers scale, which is its main power but also its main limitation.

--
http://therefractedlight.blogspot.com
 
Equivalence theory is based on and is limited to geometry; namely, similar triangles.

While the effects of diffraction and photon shot noise are implied by the geometry, other things certainly are not included in the theory, including read noise and lens aberrations. Nor is information theory included in equivalence, so megapixels play no role.
If you prefer, read my post as limited to the effect of photon shot noise on dynamic range in sensors of different sizes set to equivalent settings. Ignoring read noise, will dynamic range also be equivalent?
Assuming the same lens, aspect ratio and effective QE, the aggregate number of photoelectrons out of sensors of different formats from captures set up 'equivalently' is the same (see here for a derivation) - therefore the aggregate shot noise is also the same. You can get per pixel signal in e- by dividing aggregate signal by the number of pixels in each sensor.

However, as hinted by Alan, DR does not normally refer to a constrained signal. It is instead defined as

maximum acceptable signal divided by minimum acceptable signal.

Minimum acceptable signal today still depends on read noise. Maximum acceptable signal normally depends on the clipping level. You should now be able to answer your question according to your own definition of DR :-)
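
As a sketch of how much the chosen noise floor matters, with hypothetical per-pixel values:

from math import sqrt, log2

clip_e, read_e = 50000.0, 3.0      # assumed clipping level and read noise, in electrons per pixel

# "Engineering" DR: minimum acceptable signal is the read noise itself.
dr_engineering = log2(clip_e / read_e)

# A photographic criterion: minimum acceptable signal is where SNR reaches 10.
# Solve s / sqrt(s + read_e**2) = snr_target for s (quadratic in s).
snr_target = 10.0
s_min = (snr_target**2 + sqrt(snr_target**4 + 4 * snr_target**2 * read_e**2)) / 2
dr_photographic = log2(clip_e / s_min)

print(round(dr_engineering, 2), round(dr_photographic, 2))   # 14.02 8.85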

Jack
 
Normalized dynamic range such as Photographic Dynamic Range (PDR)
?? Normalized for what? How? Perhaps it's my ignorance, but I thought it was not normalized for anything, and I don't see anything on the page that indicates that it's normalized.
?? A sensor 29 microns on a side? That's an incredibly tiny sensor. Perhaps you mean sqrt(24*36) = 29 millimeters, not microns.
Good catch on the microns; it has been fixed :-)

PDR is a "normalized" measure because it's adjusted to specific viewing conditions.

--
Bill ( Your trusted source for independent sensor data at PhotonsToPhotos )
 
Normalized dynamic range such as Photographic Dynamic Range (PDR)
?? Normalized for what? How? Perhaps it's my ignorance, but I thought it was not normalized for anything, and I don't see anything on the page that indicates that it's normalized.
PDR is already image-normalized before the points appear on the X/Y scatter chart.

I would expect more correlation with pixel count ...
Well, it's area rather than pixel count.

But more to the point, the data spans quite a time period, so a 3D view by time or the ability to filter by time might be revealing.
 
