Dynamic Range -- what it is, what it's good for, and how much you 'need'

But it is not because the pixels are larger per se, but rather because Canon does not yet have the capability to make the tech with smaller pixels. However, as I said, I think it has more to do with maintaining the high frame rate than the inability to make smaller pixels.
I've no idea whether Canon has the capability or not but this is not the reason they reduced the pixel count of the G11 cf. the G10.
...but maybe it is. The G10 is awfully slow to focus and the shutter lag is a real killer. As far as I know, the G11 and G12 are much faster... of course, they are no DSLRs, but people say they are considerably faster, and I have no reason to doubt that, since roughly a third less pixel data, plus added internal processing capacity due to technical evolution, makes a big difference. Otherwise the G10 pixels are indeed very nice and the camera is fantastic.
 
Nice demo on the weirdest caravan I have ever seen John.

Do you own all three cameras?

The D7000 is just a revelation: I almost got a D90 but, being an OM owner from before, an E-450 with three lenses suited my budget.

So I would love to have a D7000 and a very complementary mFT camera.

I actually find, as in the days of film, that I enjoy the challenge of the limitations of my gear, and the use of tripods, flash and reflectors to make up for DR; then in post-processing I stretch or, if you must do HDR, do it manually.

I think this thread has been worth reading and contributing to; although it started a little off-Oly, it has its context now in place.

--
================================
Enjoying Photography like never before with the E-450!
Images, photo and gimp tips:
http://olympe450rants.blogspot.com/

NORWEGIAN WOOD GALLERY
http://fourthirds-user.com/galleries/showgallery.php/cat/888

Olympus' Own E450 Gallery http://asia.olympus-imaging.com/products/dslr/e450/sample/

"to be is to do" Descartes;
"to do is to be" Satre ;

............................"DoBeDoBeDo" Sinatra.
=============================
 
Then consider that the relative size difference between FourThirds and APS-C is considerably less than that between APS-C and 135.
From 24x36 to 15.8x23.6 is about 1.2 stops.
From 15.8x23.6 to 13x17.3 is about 0.73 stops.
(Using "stops" to denote "noise equalisation through ISO change" and not exposure, obviously.)

That's for "1.5x" APS-C.

Canon APS-C is 14.8 x 22.2 mm, or 1.4 and 0.55 "stops" respectively.

These calculations ignore the additional aspect ratio efficiency of 4:3 over 3:2, which is roughly 4%. This changes e.g. the Canon APS-C to FT ratio from 0.55 to 0.49 "stops", and the 135 to FT ratio from 1.94 to 1.89 "stops".
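For anyone wanting to check these figures, they fall out of a few lines of Python (sensor dimensions in mm as commonly quoted; "stops" here means the log₂ of the area ratio, i.e. noise equalisation, not exposure):

```python
from math import log2

# Sensor areas in mm^2, from the commonly quoted dimensions above
ff = 24 * 36              # 135 "full frame": 864
nikon_apsc = 15.8 * 23.6  # "1.5x" APS-C: ~372.9
canon_apsc = 14.8 * 22.2  # Canon "1.6x" APS-C: ~328.6
ft = 13 * 17.3            # FourThirds: ~224.9

def stops(larger_area, smaller_area):
    # "Stops" as used in this thread: log2 of the sensor area ratio
    return log2(larger_area / smaller_area)

print(round(stops(ff, nikon_apsc), 2))   # ~1.21
print(round(stops(nikon_apsc, ft), 2))   # ~0.73
print(round(stops(canon_apsc, ft), 2))   # ~0.55
print(round(stops(ff, ft), 2))           # ~1.94
```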
Do you really think that calls for an adverb like 'considerably'?
Is 1.2 litres considerably more than 0.73 litres? It's not that far from double the amount, Rikke.

I would agree that a 0.5 stop difference in DR may be of limited significance in real terms. The Olympus cameras from the E-P1 through to the E-PL3, inclusive of the E-5, only appear to have a DR spread of about 2/3 stop. This is of little significance compared to the other differences present -- detail, contrast, relative brightness levels (presumably due to the tone curve) and so on.
How about 'APS-C is slightly closer to Four-Thirds than it is to full frame'?
How about "APS-C is twice as close to FourThirds as it is to 135"?

This is especially true if you want to factor in the 30% or so Canon APS-C sensor share, which is nearly three times as close to FT as 135.
For people not hung up on insignificant decimals it is roughly midway between FT and FF, roughly a stop from each.
You should ask Joe if that degree of "roughness" is acceptable. In practice a half-stop may not amount to much, but if you are trying to calculate things to a 0.1 stop resolution then the more accurate your starting measurements the more accurate your result will be.

By your own math, that's a 0.47 "stop" difference (or 0.85 "stop" for Canon APS-C). Hardly insignificant if we are to take DxO's 0.1 stop resolution DR numbers as meaningful.
 
Lots of verbiage elided.
From 24x36 to 15.8x23.6 is about 1.2 stops.
From 15.8x23.6 to 13x17.3 is about 0.73 stops.
(Using "stops" to denote "noise equalisation through ISO change" and not exposure, obviously.)
And?
That's for "1.5x" APS-C.
That's for Nikon DX, yes. The best APS-C cameras available at the moment.
Canon APS-C is 14.8 x 22.2 mm, or 1.4 and 0.55 "stops" respectively.
They aren't "stops", they are stops.
These calculations ignore the additional aspect ratio efficiency of 4:3 over 3:2, which is roughly 4%. This changes e.g. the Canon APS-C to FT ratio from 0.55 to 0.49 "stops", and the 135 to FT ratio from 1.94 to 1.89 "stops".
You lost it there.
Do you really think that calls for an adverb like 'considerably'?
Is 1.2 litres considerably more than 0.73 litres? It's not that far from double the amount, Rikke.
Are liters an exponential measure, Boggis? Stops are:

exp₂(1.2) = 2.3
exp₂(0.73) = 1.7

2.3/1.7 = 1.35, not even close to double the amount, Boggis.

1.2 stops are 15% more than 1 stop.
0.73 stops are 15% less than 1 stop.

We are talking deviations of less than 1/6. For all practical purposes, insignificant. How accurate do you think the labeling of your aperture and shutter speed settings is?
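The arithmetic above, unrounded, as a quick Python check (the 1.35 comes from the rounded 2.3 and 1.7):

```python
# Stops are exponential: a difference of x stops is a factor of 2**x in light.
ratio_linear = 1.2 / 0.73               # comparing the stop values as plain numbers
ratio_light = (2 ** 1.2) / (2 ** 0.73)  # comparing the light ratios they denote

print(round(ratio_linear, 2))  # 1.64 -- the "not far from double" reading
print(round(ratio_light, 2))   # 1.39 -- nowhere near double
```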

--
Rikke
 
If you are talking about sensors, not images, then IMO the number of pixels has nothing to do with it at all.
As I said, I'm always talking about the final photo.
If you're talking about the photos (which I always am), then it's not how an individual pixel performs, but how the pixels perform in aggregate. For example, we wouldn't compare a single 2x2 pixel to a single 1x1 pixel, but rather four 1x1 pixels.
The pixels don't 'perform in aggregate'. You process the data. Then you're comparing the results of one process with the results of another. So for example if you reduce the resolution of a 20 Mp image to 10 Mp to compare with something else, your results will depend a great deal on how you reduced that resolution.
Well, I guess this is where the diversion into semantics begins. How about this: you don't compare (in terms of the final photo) a single 2x2 pixel to a single 1x1 pixel -- you compare to four 1x1 pixels.
No, it's not a problem of semantics but understanding what exactly we are talking about. I agree with you regarding what you compare, but what I'm saying is that the processing makes a difference to the results. For example, if I reduce those 20Mp to 10 Mp by simply throwing away 1 out of every 2 pixels, the result I get will be very different to that achieved by averaging 2 pixels to 1.
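The difference between those two reduction methods is easy to demonstrate with synthetic data (a toy sketch using Gaussian noise only, not a model of any real sensor): decimation leaves the per-pixel noise untouched, while averaging pairs reduces it by roughly √2.

```python
import random
import statistics

random.seed(42)

# 20,000 synthetic noisy pixel values standing in for image data:
# true level 100, Gaussian noise of sigma = 10 (arbitrary units)
pixels = [100 + random.gauss(0, 10) for _ in range(20000)]

# "Throw away 1 out of every 2 pixels" (decimation)
decimated = pixels[::2]

# "Average 2 pixels to 1" (binning)
binned = [(a + b) / 2 for a, b in zip(pixels[::2], pixels[1::2])]

print(round(statistics.stdev(decimated), 2))  # ~10: noise unchanged
print(round(statistics.stdev(binned), 2))     # ~7: reduced by about sqrt(2)
```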

Again, coming back to DR, I have no problem personally with the DxO definition as an engineer. I don't find it particularly useful though as a photographer - I would much prefer to know how many stops of highlight headroom/shadow footroom I have.

Using the noise floor as the lower limit is OK for DxO's purpose but I think that the limit for a photographer is set by a much higher SNR (my feeling is that the SNR=1 limit is too low also) and the visual characteristics of the image noise. This means that photon shot noise plays a part in DR.
But it is not because the pixels are larger per se, but rather because Canon does not yet have the capability to make the tech with smaller pixels. However, as I said, I think it has more to do with maintaining the high frame rate than the inability to make smaller pixels.
I've no idea whether Canon has the capability or not but this is not the reason they reduced the pixel count of the G11 cf. the G10.
Well, if you have no idea, then you have no idea. But I can tell you, for a fact, that more pixels for a given sensor size and efficiency results in more IQ all the way around (although this is subject to diminishing returns, of course). The only question is whether the pixels can be made smaller without adversely affecting efficiency. However, the overall trend is that pixels have been getting smaller and more efficient. Of course, that's not to say that a new technology might not have to begin with larger pixels.
I said I have no idea whether or not Canon has the capability to make the tech with smaller pixels. Do you have a mathematical proof for this? I would be interested if you do.
 
But it is not because the pixels are larger per se, but rather because Canon does not yet have the capability to make the tech with smaller pixels. However, as I said, I think it has more to do with maintaining the high frame rate than the inability to make smaller pixels.
I've no idea whether Canon has the capability or not but this is not the reason they reduced the pixel count of the G11 cf. the G10.
...but maybe it is. The G10 is awfully slow to focus and the shutter lag is a real killer. As far as I know, the G11 and G12 are much faster... of course, they are no DSLRs, but people say they are considerably faster, and I have no reason to doubt that, since roughly a third less pixel data, plus added internal processing capacity due to technical evolution, makes a big difference. Otherwise the G10 pixels are indeed very nice and the camera is fantastic.
What you say sounds feasible but when the G10 was launched, one assumes that Canon thought that it was good enough (and very many users did).

Canon's press release (at the time) of the G11 states the following

" Professional photographers will benefit from the G11’s greatly expanded dynamic range. Canon’s new Dual Anti-Noise System combines a high sensitivity 10.0 Megapixel image sensor with Canon’s enhanced DIGIC 4 image processing technology to increase image quality and greatly improve noise performance by up to 2 stops (compared to PowerShot G10). The PowerShot G11 also includes i-Contrast technology, which prevents high-light blowout whilst retaining low-light detail – ideal for difficult lighting situations. "

So whatever the real reason was, Canon made quite a few claims about the G11 compared to the G10 - 'greatly expanded dynamic range' - '2 stop noise performance improvement'
 
How do things look from your high horse?

--
alatchinphotography.com

“Imagination is more important than knowledge. For
knowledge is limited to all we now know and
understand, while imagination embraces the entire
world, and all there ever will be to know and
understand.” - Albert Einstein
 
Well, I guess this is where the diversion into semantics begins. How about this: you don't compare (in terms of the final photo) a single 2x2 pixel to a single 1x1 pixel -- you compare to four 1x1 pixels.
No, it's not a problem of semantics but understanding what exactly we are talking about. I agree with you regarding what you compare, but what I'm saying is that the processing makes a difference to the results. For example, if I reduce those 20Mp to 10 Mp by simply throwing away 1 out of every 2 pixels, the result I get will be very different to that achieved by averaging 2 pixels to 1.
For the record, downsampling is a horrible way to compare the IQ potential of two systems -- the proper method is upsampling one, the other, or both, to the same display dimension.

However, if the purpose of comparison is web display, then, of course, downsampling is the proper method of comparison, since the final photo is necessarily downsampled.

And, yes, the method of resampling is key. Using a "nearest neighbor" downsampling method is a bad choice, as is using an upsampling method that merely scales, rather than interpolates.

In any event, the bottom line is that a pixel-for-pixel comparison is only valid if both photos are made from the same number of pixels. If the photos have different numbers of pixels, then resampling must be done, since we are naturally comparing the two photos on the basis of same display size, and that resampling must be done regardless, whether it is for print (not necessarily under user control -- the printer uses its own resampling algorithms) or for web display.
Again, coming back to DR, I have no problem personally with the DxO definition as an engineer. I don't find it particularly useful though as a photographer - I would much prefer to know how many stops of highlight headroom/shadow footroom I have.
That's what the DxOMark definition of DR tells you (as do the rest) -- the number of stops from the noise floor to the saturation limit.

How much of that range is in the shadow range, and how much is in the highlight range, depends on how the photographer exposes the photo (or, more usually, how the camera exposes the photo, since most let the camera choose the exposure for them in AE modes).
Using the noise floor as the lower limit is OK for DxO's purpose but I think that the limit for a photographer is set by a much higher SNR (my feeling is that the SNR=1 limit is too low also) and the visual characteristics of the image noise. This means that photon shot noise plays a part in DR.
DxOMark uses the 100% NSR for the noise floor (DR100), which, by definition, includes photon noise. If you wish to use a different noise floor, that's entirely your prerogative, but, like DxOMark, you need to clearly spell out what noise floor you are using. In addition, DxOMark gives a "screen DR" (DR / pixel) and a "print DR" (DR / pixel of the photo resampled to 8 MP).
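The relationship between the two figures, as I understand DxO's normalisation (a sketch of my reading, not their published code): downsampling to the 8 MP reference averages pixels, which improves SNR, and hence measured DR, by half a stop for every doubling of pixel count.

```python
from math import log2

def print_dr(screen_dr_stops, sensor_mpix, reference_mpix=8.0):
    # Averaging n pixels into one raises SNR by sqrt(n), i.e. 0.5*log2(n) stops
    return screen_dr_stops + 0.5 * log2(sensor_mpix / reference_mpix)

# Hypothetical 16 Mp sensor with 11 stops of per-pixel ("screen") DR
print(round(print_dr(11.0, 16.0), 2))  # 11.5 stops of "print" DR
```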
Well, if you have no idea, then you have no idea. But I can tell you, for a fact, that more pixels for a given sensor size and efficiency results in more IQ all the way around (although this is subject to diminishing returns, of course). The only question is whether the pixels can be made smaller without adversely affecting efficiency. However, the overall trend is that pixels have been getting smaller and more efficient. Of course, that's not to say that a new technology might not have to begin with larger pixels.
I said I have no idea whether or not Canon has the capability to make the tech with smaller pixels. Do you have a mathematical proof for this? I would be interested if you do.
I can mathematically prove that smaller pixels do not result in less DR / pixel for equally efficient sensors (but greater DR / area), but not mathematically prove that smaller pixels can be made with the same efficiency. Please let me know if you want me to do so -- I'll be happy to oblige.
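A numeric sketch of the aggregation half of that claim (my own toy model with hypothetical full-well and read-noise figures): summing n equally efficient pixels multiplies the saturation signal by n but the uncorrelated read noise only by √n, so DR over a fixed area gains ½·log₂(n) stops.

```python
from math import log2, sqrt

def dr_pixel_stops(full_well, read_noise):
    # Engineering DR of a single pixel, in stops
    return log2(full_well / read_noise)

def dr_area_stops(full_well, read_noise, n_pixels):
    # n pixels summed: saturation scales with n, uncorrelated noise with sqrt(n)
    return log2((n_pixels * full_well) / (sqrt(n_pixels) * read_noise))

fw, rn = 40000.0, 10.0  # hypothetical full well and read noise, in electrons
print(round(dr_pixel_stops(fw, rn), 2))    # ~11.97 stops per pixel
print(round(dr_area_stops(fw, rn, 4), 2))  # ~12.97 stops: +1 stop from 4 pixels
```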

But I cannot mathematically prove that smaller pixels can be made as efficient as larger pixels. However, I can cite evidence that, as a general rule, pixels have gotten smaller and more efficient over time.

However, I note in your reply above that you agree with Dr. Martinec's LL post:

http://www.luminous-landscape.com/forum/index.php?topic=42158.0

Excellent! It is basically a discussion of what DR measure you wish to use (DR100, DR50, DR25, etc.). However, as I said, more useful still, in terms of the visual properties of the final photos, is to compute the DR / area rather than DR / pixel.
 
You should ask Joe if that degree of "roughness" is acceptable.
The difference between systems in terms of light collecting ability for lenses using the same f-ratio and photos displayed with the same area is quite simple:
  • 4/3 vs 1.6x: log2 (332/225) = 0.56 stops more light on the 1.6x sensor.
  • 4/3 vs 1.5x: log2 (372/225) = 0.73 stops more light on the 1.5x sensor.
  • 4/3 vs FF: log2 (864/225) = 1.94 stops more light on the FF sensor.
Likewise, we can repeat for the systems displayed at 4:3 (or more square):
  • 4/3 vs 1.6x: 2 x log2 (14.9 / 13) = 0.39 stops more light on the 1.6x photo.
  • 4/3 vs 1.5x: 2 x log2 (15.7 / 13) = 0.54 stops more light on the 1.5x photo.
  • 4/3 vs FF: 2 x log2 (24 / 13) = 1.77 stops more light on the FF photo.
and for the systems displayed at 3:2 (or wider):
  • 4/3 vs 1.6x: 2 x log2 (22.3 / 17.3) = 0.73 stops more light on the 1.6x photo.
  • 4/3 vs 1.5x: 2 x log2 (23.7 / 17.3) = 0.91 stops more light on the 1.5x photo.
  • 4/3 vs FF: 2 x log2 (36 / 17.3) = 2.11 stops more light on the FF photo.
I define "trivial" as any difference less than 1/3 of a stop (0.33 stops), so none of the differences are "trivial", but the difference between 4:3 and 3:2 most certainly has a trivial effect.

But, let's put this into some perspective by comparing the f-ratios of lenses on the respective formats that would result in the same light on the sensor for the same shutter speed and the same DOF for the same perspective and framing (equivalent settings):
  • f/2 on 4/3 --> f/2.5 on 1.6x
  • f/2 on 4/3 --> f/2.7 on 1.5x
  • f/2 on 4/3 --> f/4 on FF
Is the difference "significant"? Well, that's for each individual to decide for themselves.
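The equivalent f-ratios above follow from multiplying by the ratio of crop factors (nominal crop factors assumed; FourThirds taken as exactly 2.0):

```python
# Nominal crop factors relative to the 135 format
CROP = {"FT": 2.0, "1.6x": 1.6, "1.5x": 1.5, "FF": 1.0}

def equivalent_f(f_number, from_fmt, to_fmt):
    # Same DOF and same total light for the same perspective,
    # framing and shutter speed
    return f_number * CROP[from_fmt] / CROP[to_fmt]

print(round(equivalent_f(2.0, "FT", "1.6x"), 1))  # 2.5
print(round(equivalent_f(2.0, "FT", "1.5x"), 1))  # 2.7
print(round(equivalent_f(2.0, "FT", "FF"), 1))    # 4.0
```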
 
Well, I guess this is where the diversion into semantics begins. How about this: you don't compare (in terms of the final photo) a single 2x2 pixel to a single 1x1 pixel -- you compare to four 1x1 pixels.
SNIP
For the record, downsampling is a horrible way to compare the IQ potential of two systems -- the proper method is upsampling one, the other, or both, to the same display dimension.

However, if the purpose of comparison is web display, then, of course, downsampling is the proper method of comparison, since the final photo is necessarily downsampled.

And, yes, the method of resampling is key. Using a "nearest neighbor" downsampling method is a bad choice, as is using an upsampling method that merely scales, rather than interpolates.
You could argue that nearest neighbour retains most true detail of any method. But my point is that if we are discussing images, rather than sensor data, the post capture processing is rather important.

SNIP
DxOMark uses the 100% NSR for the noise floor (DR100), which, by definition, includes photon noise. If you wish to use a different noise floor, that's entirely your prerogative, but, like DxOMark, you need to clearly spell out what noise floor you are using. In addition, DxOMark gives a "screen DR" (DR / pixel) and a "print DR" (DR / pixel of the photo resampled to 8 MP).
The point is, to me, as a photographer, it isn't interesting.
Well, if you have no idea, then you have no idea. But I can tell you, for a fact, that more pixels for a given sensor size and efficiency results in more IQ all the way around (although this is subject to diminishing returns, of course). The only question is whether the pixels can be made smaller without adversely affecting efficiency. However, the overall trend is that pixels have been getting smaller and more efficient. Of course, that's not to say that a new technology might not have to begin with larger pixels.
I said I have no idea whether or not Canon has the capability to make the tech with smaller pixels. Do you have a mathematical proof for this? I would be interested if you do.
I can mathematically prove that smaller pixels do not result in less DR / pixel for equally efficient sensors (but greater DR / area), but not mathematically prove that smaller pixels can be made with the same efficiency. Please let me know if you want me to do so -- I'll be happy to oblige.

But I cannot mathematically prove that smaller pixels can be made as efficient as larger pixels. However, I can cite evidence that, as a general rule, pixels have gotten smaller and more efficient over time.
What I'm interested in is mathematical proof of your statement above (academic interest only).

".. I can tell you, for a fact/, that more pixels for a given sensor size and efficiency results in more IQ all the way around."/
However, I note in your reply above that you agree with Dr. Martinec's LL post:

http://www.luminous-landscape.com/forum/index.php?topic=42158.0

Excellent! It is basically a discussion of what DR measure you wish to use (DR100, DR50, DR25, etc.). However, as I said, more useful still, in terms of the visual properties of the final photos, is to compute the DR / area rather than DR / pixel.
Yes, I certainly do. But as I stated above, DR to a photographer is not the engineering DR of the sensor, since it is affected by other things, not least of which is what the photographer finds acceptable. Martinec states:

"A final caveat: The useful DR to a photographer can be limited by more than indicated by the S/N figure of merit; for instance Canon DSLR's have a base ISO plagued by a lot of pattern noise in shadows, which can be visually much more objectionable than the random grain of unpatterned noise while not showing up in the noise standard deviation. On the other hand, pattern noise seems very well controlled on the D3x. The pattern noise can limit the useful DR -- how much one is willing to push shadows -- more than might be indicated by the S/N graphs."

I guess we've hashed this to death. :)
 
Nice demo on the weirdest caravan I have ever seen John. (ed Jeff)
Folds flat in about 30 seconds. The uncropped photo was from a series showing all the cool spots we stayed at for little or no money. I just grabbed it because there was a shadow which (even with the limited DR of the E-3) can be lifted. And even if it isn't up to Joe's standards, there is a surprising amount of info in the shadows.
Do you own all three cameras?

The D7000 is just a revelation: I almost got a D90 but, being an OM owner from before, an E-450 with three lenses suited my budget.

So I would love to have a D7000 and a very complementary mFT camera.
No, just FT and mFT stuff. I copied the sample from Joe's link.
I actually find, as in the days of film, that I enjoy the challenge of the limitations of my gear, and the use of tripods, flash and reflectors to make up for DR; then in post-processing I stretch or, if you must do HDR, do it manually.

I think this thread has been worth reading and contributing to; although it started a little off-Oly, it has its context now in place.
I frequently add fill light - sometimes do HDR, but it always seems to be to save a photo that wouldn't be that good anyway. I find the shots I like usually required some effort to get good natural light (or to be lucky).

For me, lens size is more important than DR or high ISO. I haven't found a focal length I don't like. With mFT I now travel with a couple of fast primes, a standard zoom, UWA and fisheye, and a telephoto. It all fits in the corner of a carry-on bag. If I drive, I take an E-5 and fast telephotos. Then I just work around the sensor I have.

I think all current DSLRs give great results, some have wider range, but in my experience that will just allow me to get better results in lousy conditions. I would rather have more lens options.

--
Jeff Taylor
http://www.pbase.com/jltaylor
 
The first thread you linked to was about which brand of camera was better to ward off wild animals, but the post you linked to was basically OT for the thread.
--
Art P
"I am a creature of contrast,
of light and shadow.
I live where the two play together,
I thrive on the conflict"
 
Ha! I've been harshly chastised for saying that they NEEDED to differentiate the E-5 by putting in an unusually weak AA filter in order to justify its release, never mind its price, at least for those who fell for the bait or just had money to blow anyway... nothing wrong with that, since I probably would have bought one if money were something I didn't mind throwing around, but neither I nor many others can ignore the fact that other than sharpness there is NOTHING to be gained...

Remember when people and reviewers like DPR mentioned the softness of the E-3 and E-420/520, even though they still admitted the files sharpened up well, and how poorly people here reacted? Remember how people were upset that they used the L10 instead of the E-3 for lens tests, even though the tests were for the benefit of the optics!

This place can be sooo biased and baffling even within their own system that its crazy!
--
Oldschool Evolt shooter
 
The first thread you linked to was about which brand of camera was better to ward off wild animals, but the post you linked to was basically OT for the thread.
...I never read the thread. I was just emailed a link to a DR comparison between the 5D2 and D7000, and have that post bookmarked.
 
boggis the cat wrote:

The difference between systems in terms of light collecting ability for lenses using the same f-ratio and photos displayed with the same area is quite simple:
  • 4/3 vs 1.6x: log2 (332/225) = 0.56 stops more light on the 1.6x sensor.
  • 4/3 vs 1.5x: log2 (372/225) = 0.73 stops more light on the 1.5x sensor.
  • 4/3 vs FF: log2 (864/225) = 1.94 stops more light on the FF sensor.
Not quite, because of the efficiency of the aspect ratio.

These calculations ignore the additional aspect ratio efficiency of 4:3 over 3:2, which is roughly 4%. This changes e.g. the Canon APS-C to FT ratio from 0.55 to 0.49 "stops", and the 135 to FT ratio from 1.94 to 1.89 "stops".

Remember that the aperture is the diameter of the image circle, so the closer to 1:1 the aspect ratio is the more of the light from the image circle is being used (efficiency).

This ends up being 4:3 having 4% greater efficiency than 3:2.

Actual calculations for efficiency of the aspect ratios being considered:
  • 4:3 aspect ratio: efficiency = 12 / ((25/4) × π) = 0.6112
  • 3:2 aspect ratio: efficiency = 6 / ((13/4) × π) = 0.5876
So we get a 4% increase in efficiency from the 4:3 aspect ratio, and this must be factored in to "equivalence" to make the starting conditions correct.
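Those two figures can be checked with a short script: a w:h rectangle inscribed in its circumscribed circle (the diagonal is the circle's diameter).

```python
from math import pi, log2

def aspect_efficiency(w, h):
    # Rectangle area over the area of the circle whose diameter is the diagonal
    rect = w * h
    radius_sq = (w * w + h * h) / 4
    return rect / (pi * radius_sq)

eff_43 = aspect_efficiency(4, 3)  # 12 / ((25/4) * pi) ~ 0.6112
eff_32 = aspect_efficiency(3, 2)  # 6 / ((13/4) * pi)  ~ 0.5876
print(round(eff_43 / eff_32, 3))        # ~1.04: 4:3 is ~4% more efficient
print(round(log2(eff_43 / eff_32), 3))  # ~0.057 "stop"
```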

Revising your calculations:
  • 4/3 vs 1.6x: log2 (332/ (225 x 1.04) ) = 0.5 stops more light on the 1.6x sensor.
  • 4/3 vs 1.5x: log2 (372/ (225 x 1.04) ) = 0.67 stops more light on the 1.5x sensor.
  • 4/3 vs FF: log2 (864/ (225 x 1.04) ) = 1.88 stops more light on the FF sensor.
This actually makes comparing FT to the two APS-C variants easier: compared to Canon, FT is down a 1/2 "stop"; compared to 1.5x, FT is down 2/3 "stop".
Likewise, we can repeat for the systems displayed at 4:3 (or more square):
  • 4/3 vs 1.6x: 2 x log2 (14.9 / 13) = 0.39 stops more light on the 1.6x photo.
  • 4/3 vs 1.5x: 2 x log2 (15.7 / 13) = 0.54 stops more light on the 1.5x photo.
  • 4/3 vs FF: 2 x log2 (24 / 13) = 1.77 stops more light on the FF photo.
and for the systems displayed at 3:2 (or wider):
  • 4/3 vs 1.6x: 2 x log2 (22.3 / 17.3) = 0.73 stops more light on the 1.6x photo.
  • 4/3 vs 1.5x: 2 x log2 (23.7 / 17.3) = 0.91 stops more light on the 1.5x photo.
  • 4/3 vs FF: 2 x log2 (36 / 17.3) = 2.11 stops more light on the FF photo.
This is all for cropping , and is not what I am pointing out.

If you want the correct differences you have to take account of the "conversion efficiency" of the sensor aspect ratio with respect to the image circle.
I define "trivial" as any difference less than 1/3 of a stop (0.33 stops), so none of the differences are "trivial", but the difference between 4:3 and 3:2 most certainly has a trivial effect.

But, let's put this into some perspective by comparing the f-ratios of lenses on the respective formats that would result in the same light on the sensor for the same shutter speed and the same DOF for the same perspective and framing (equivalent settings):
  • f/2 on 4/3 --> f/2.5 on 1.6x
  • f/2 on 4/3 --> f/2.7 on 1.5x
  • f/2 on 4/3 --> f/4 on FF
These are all slightly out, too.
Is the difference "significant"? Well, that's for each individual to decide for themselves.
My point is simply that if you are going to calculate something out accurately you should correct for the aspect ratio efficiency as well.
 
From 24x36 to 15.8x23.6 is about 1.2 stops.
From 15.8x23.6 to 13x17.3 is about 0.73 stops.
That's for "1.5x" APS-C.
That's for Nikon DX, yes. The best APS-C cameras available at the moment.
Are there other variants of "APS-C"?
Canon APS-C is 14.8 x 22.2 mm, or 1.4 and 0.55 "stops" respectively.
They aren't "stops", they are stops.
We are discussing the relative efficiencies of sensor area. This is not measured in exposure terms.

When we use "stops" in this case we mean: if we increased the exposure of the less efficient sensor by x stops, then we would obtain the same noise characteristics (assuming everything else equal etc).
These calculations ignore the additional aspect ratio efficiency of 4:3 over 3:2, which is roughly 4%. This changes e.g. the Canon APS-C to FT ratio from 0.55 to 0.49 "stops", and the 135 to FT ratio from 1.94 to 1.89 "stops".
You lost it there.
No, I didn't.

Refer to my reply to Joe down-thread, where I lay out the calculations for the "conversion efficiency" of the two different aspect ratios.

In the case of FourThirds compared to APS-C we get:
  • FT is 0.5 (1/2) "stop" less efficient than Canon APS-C (1.6x)
  • FT is 0.67 (2/3) "stop" less efficient than APS-C (1.5x)
  • FT is 1.88 "stops" less efficient than 135 ("Full Frame").
Do you really think that calls for an adverb like 'considerably'?
Is 1.2 litres considerably more than 0.73 litres? It's not that far from double the amount, Rikke.
Are liters an exponential measure, Boggis?
The exponential nature is irrelevant. One stop appears to double or halve brightness.

It is the effect that is important.

The correct measures are:
  • From 24 × 36 to 15.8 × 23.6 is about 1.2 "stops"
  • From 15.8 × 23.6 to (13 × 17.3) × 1.04 is 0.67 "stops".
If you have 1.2 litres of water (or anything else) you have nearly double 0.67 litres. An exposure increase of 1.2 stops is nearly double an exposure increase of 0.67 stop.

Or, APS-C (1.5×) is twice as close to FourThirds in efficiency as it is to 135. APS-C (1.6×), at 1.4 and 0.49 "stops", is nearly three times as close to FT as it is to 135.
Stops are:

exp₂(1.2) = 2.3
exp₂(0.73) = 1.7

2.3/1.7 = 1.35, not even close to double the amount, Boggis.

1.2 stops are 15% more than 1 stop.
0.73 stops are 15% less than 1 stop.

We are talking deviations of less than 1/6. For all practical purposes, insignificant. How accurate do you think the labeling of your aperture and shutter speed settings is?
You are (deliberately?) ignoring the practicality of the measure.

The significance of the math must be considered, and you can't jump from an exponential to a linear method and claim the unused linear result is significant.
 
