Detail Man: Richard Butler and Rishi Sanyal:
These two articles about sources of image noise are well presented, and are further explained by your statements of clarification in the comments sections.
I like the way that DPReview writers appear to have adopted the terminology "ISO-invariant" / "ISO-variant" in lieu of the (IMO, less descriptive) terms "ISO-less" / "ISO-full". I recall suggesting the alternate use of those particular terms in this post replying to "gollywop":
... when he was in the course of editing his to be published DPReview article here:
... and which he continued to use in his subsequent DPReview published article here:
The more information regarding these subjects that is made accessible to readers within articles published on the DPReview site (in addition to appearing in forum posts), the better!
(continued) The term 'ISOless' was rather phrased in the sense 'you don't have to use the ISO control', and the speculative ISOless camera was a camera without an ISO control, with a different style of UI based on explicit control of exposure. The application of 'ISOless' to existing cameras came from an observation that the D7000 was usable in an ISOless way (meaning 'ISOless' was a methodology of exposure management, not a characteristic of a camera). So maybe the real term should be 'ISOless-capable camera', signifying that point.
As one of the people in at the beginning of the term 'ISO-less', I agree it's a bit clumsy. But I'd say that 'ISO-invariant', as well as being equally clumsy, is also fundamentally wrong. The question is, what is 'invariant'? It isn't ISO, since whatever you do, your final photo will have an 'ISO' (i.e. a relationship between exposure and 'density' or its digital analogue). In fact, in an 'ISOless' camera, what is invariant is the analogue gain, and the term 'ISO-invariant' is really perpetuating the confusion between ISO and gain which leads to many of the misunderstandings about ISO.
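The gain-versus-push distinction above can be sketched with a toy noise model (all numbers here are my own illustrative values, not measurements of any camera): read noise is split into a component upstream of the amplifier, which is amplified along with the signal, and a downstream component added after it. A camera is usable in an 'ISOless' way exactly when the downstream term is negligible:

```python
import math

def output_noise(shot_e, pre_gain_e, post_gain_dn, analog_gain, digital_push):
    """Total noise in output units for a toy sensor model.

    Pre-gain noise (in electrons) is amplified along with the signal;
    post-gain noise (in DN) is added after the amplifier; then everything
    is scaled by any digital push applied in software.
    """
    pre = math.hypot(shot_e, pre_gain_e) * analog_gain  # amplified with signal
    total = math.hypot(pre, post_gain_dn)               # add downstream noise
    return total * digital_push                         # software push scales all

# Same final brightness two ways: 4x analog gain (raise ISO) vs 4x digital push.
raise_iso = output_noise(shot_e=5, pre_gain_e=3, post_gain_dn=4,
                         analog_gain=4, digital_push=1)
push_raw = output_noise(shot_e=5, pre_gain_e=3, post_gain_dn=4,
                        analog_gain=1, digital_push=4)
# push_raw exceeds raise_iso because the downstream 4 DN got pushed 4x;
# set post_gain_dn=0 and the two become identical (the ISO-invariant case).
```

With a nonzero downstream term the pushed file is noisier, which is exactly the behaviour the 'ISOless-capable' label is meant to capture.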
dagobah: This is a nice article, thanks. We just had a discussion on this topic recently on the Pentax forum.
Sensor noise has definitely come down in the last few years -- I remember my Pentax K10D not being usable above ~ISO 500, whereas I am happy with the K-3 at ISO 3200. That had to have come from improvement in the electronic noise, no? Are there no more gains to be had on sensors of a specific size? Once electronic noise is below the shot-noise floor, no further improvement will be noticeable, and sensors wouldn't vary much in shadow-noise performance.
What I've been thinking would be very useful when showing sensor performance measurements (such as DxOMark does) would be to have a curve for the sensor in question and a curve for an "ideal photon detector" of the same sensor size.
Another comment is that I am interested in ETTR -- could you maybe do a little practical guide addendum to this article at some point? Cheers.
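The "ideal photon detector" curve asked for above is easy to sketch: its only noise is photon shot noise, so its SNR is simply the square root of the photon count. A real sensor sits below it by its quantum efficiency and read noise (the `qe` and `read_noise_e` figures here are made-up illustrations, not measured values for any camera):

```python
import math

def snr_ideal(photons):
    """Ideal detector: counts every photon; shot noise (Poisson) only."""
    return math.sqrt(photons)  # SNR = N / sqrt(N)

def snr_real(photons, qe, read_noise_e):
    """Hypothetical real sensor: quantum efficiency qe, read noise in electrons."""
    signal = qe * photons
    noise = math.sqrt(signal + read_noise_e ** 2)  # shot noise plus read noise
    return signal / noise

# The gap between the two curves is the remaining room for improvement.
for n in (100, 1_000, 10_000, 100_000):
    print(n, round(snr_ideal(n), 1), round(snr_real(n, qe=0.5, read_noise_e=3), 1))
```

At high photon counts the read-noise term vanishes and the gap is set by QE alone, which is why shadow performance is where sensors still differ.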
I'm glad to hear of your unbelief in anything quantum. Presumably you abstain from using nuclear power, even from natural sources, like the sun.
I think it's something slightly different. In the early days, CCDs had a higher intrinsic quantum efficiency than CMOS sensors. To compensate for this, the earlier CMOS sensors (particularly the Canons and the D2X) used somewhat more transmissive, and therefore less colour-selective, colour filter arrays than the contemporaneous CCDs, thus establishing the tradition that CCD had purer colour. Now CMOS has reached and exceeded the intrinsic QE of CCD, but rather than restore the more selective CFA, the manufacturers have gone after sensitivity and good low-light performance, which they think is a bigger seller than pure colour. There have been cameras bucking this trend, for instance the D3X. The Canon 5Ds is also touted as having a more selective CFA, so might have more CCD-like colour.
VidJa: Does anyone know where we are in efficiency of the sensors? In other words, how many of the available photons do we measure with current sensors, and how far can we expect to improve?
To martindpr (1 hour ago): "Nikon D2X 476%, mhm, yeah right... If 100% is max efficiency, then the D2X must be inventing more of it according to the source you posted."

There are some anomalies in that data, down to failure of the function-fitting methods. Different methods produce different anomalies. 476% is clearly wrong, but it doesn't make them all wrong.
I'm interested by your comment on the lab test, p8: '*We originally shot the scene with the Sigma 50mm f/1.4 Art as part of an attempt to introduce that lens as the standard across systems, but we ran into an issue whereby images from the D5500 were coming out noticeably noisier than the D5300. We found that at f/5.6 the native Nikon 50mm was giving consistent exposures shot-to-shot, while the Sigma was underexposing and yielding somewhat inconsistent exposures. Further testing is required and we'll keep looking into it.'
Does that mean that the Sigma is not actually setting the f-stop that has been set?
You're still carrying your 'belief' that this camera has a Sony sensor. It's pretty clear now that it's a Toshiba, like the D7100. Time to update, I think...
AlexBasile: SENSOR (SONY) d5300 vs SENSOR (TOSHIBA) d7100 here :
photos 1,2,3,4 > NIKON D5300
photos 5,6,7,8 > NIKON D7100
NIKON D7100: problem with "BANDING"...
NIKON D5300: NO PROBLEM (also in DEEP DARKNESS).
See my response above. From the product shot with the lens off, I'd say this is the Toshiba sensor, not the Sony.
From the product shot with the lens off, the sensor looks like the Toshiba again, not the Sony. The Sony has a very distinctive pattern of lands top and bottom and has a blue optical mask frame, while the Toshiba has a regular pattern of lands and a black mask frame. Maybe if you have a sample, it's worth looking into the lens throat and having a close look. The pixel count difference might not be anything at all except a marketing slip. For instance, both the D7100 and D7200 list the maximum file size as 6000x4000 pixels.
'EI is essentially the same as ISO.'

Rather, EI is the right term and 'ISO' is the wrong one. 'ISO' is an organisation with lots of standards, not all about photography. The Exposure Index (as defined in ISO 12232:2006) is the ISO-standardised index we use to set exposure, rather than any other ISO standard. It would be pretty silly, for instance, to use ISO 9, the standard for transliteration of Cyrillic characters into Latin ones, to set exposure.
noirdesir: Multiple sources have reported over the years that Nikon uses non-Sony 24MP APS-C sensors, from both their own sensor group and Toshiba. I have repeatedly heard that the D3300 sensor is a Nikon sensor and the D7100's a Toshiba sensor. Thom Hogan recently mentioned that the D5300 sensor is also from Toshiba. When the D7100 came out, it was remarked that it had a tiny bit more banding than the D7000 (which had a Sony EXMOR sensor), suggesting that Sony still had a small edge in that regard.
The only other brand using a 24 MP APS-C sensor (besides Sony and Nikon) is Pentax. As Thom Hogan also mentioned that Sony hasn't offered its 24 MP APS-C sensor to other companies (or they did not want it, the NEX-7 sensor appeared to be much more picky in regard to the angle of incidence than previous Sony 16 MP sensors), it might also have a Toshiba or other sensor.
I think the interesting thing is that although the pixel count is the same, Nikon haven't carried on using the same sensor. The latest Sony unit is overall a better performer than any of the previous ones. It has a little more read noise than the Toshiba one, but doesn't suffer from the slight pattern noise issue.
Slightly wrong. The D3200 sensor was Nikon, the D3300 Sony, the D5200 and D7100 Toshiba, the D5300 Sony. I don't know what the D5500 is yet, but most likely Sony. The Sony, Nikon and Toshiba sensors all look quite different; it's very obvious which is which just by looking at them.

One factor which might have counted in Sony's favour with the Nikon engineers is that the D5300/D3300 sensor looks like a complete redesign from the previous Sony 24MP effort, which at a pixel level performed pretty much exactly as their 16MP sensor, so produced overall worse read noise.

Pentax 24MP sensors are also Sony. The semiconductor business is worth more to Sony than the still-camera business; I don't think they're shy about selling their sensors to anyone they can.
Slow news day?
photohounds: This discussion obsesses over the fact that you get more (but not excessive) depth of field. Yes, it IS an obsession that neatly avoids some facts. Using the 50mm f/2 vs 100mm f/4 example above:

You ALSO get a TWO STOPS FASTER shutter speed (4 times as fast), hence better sharpness.
You get QUADRUPLE the flash range (faster recycle, or more range).
I once shot a Canon FF PRO user with the 50mm f/2. She remarked: "I've never SEEN so much detail."
Further, you also get (mostly) MUCH smaller, lighter lenses and faster zooms.
You also get typically 1/2 to 1/3 the VIGNETTING that full Marketing Frame (FMF) lenses can manage. This is excellent if you like to shoot images with the main subject OUTSIDE the centre third of the image but still want it sharp. Generally (with few exceptions) APS-C lenses fall halfway between for vignetting resistance.

Examples: http://photohounds.smugmug.com/Performing-arts/Eurobeat-by-Supa/
Re: For example, if an mFT photographer shot a scene at 50mm f/2 1/200, there is no reason the FF photographer could not shoot the scene at 100mm f/4 1/100.
I assume you meant
if an mFT photographer shot a scene at 50mm f/2 1/200, there is no reason the FF photographer could not shoot the scene at 100mm f/4 1/200.
and I would add, for clarity, and end up with a very similar looking photograph, particularly with respect to noise and image quality.
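The corrected example follows a simple rule of thumb, sketched below (the only assumption is a crop-factor ratio of 2.0 going from Micro Four Thirds to full frame; shutter speed is unchanged, and ISO scales with sensor area to keep output brightness matched):

```python
def equivalent_settings(focal_mm, f_number, iso, crop_ratio):
    """Scale one format's settings to another for an equivalent photograph.

    crop_ratio is the ratio of the two formats' crop factors,
    e.g. 2.0 going from Micro Four Thirds to full frame.
    """
    return (focal_mm * crop_ratio,    # same angle of view
            f_number * crop_ratio,    # same aperture diameter: same total light, same DoF
            iso * crop_ratio ** 2)    # same output brightness at the same shutter speed

# mFT 50mm f/2 ISO 200 at 1/200  ->  FF 100mm f/4 ISO 800 at 1/200
print(equivalent_settings(50, 2.0, 200, 2.0))  # (100.0, 4.0, 800.0)
```

The key point, matching the correction above, is that shutter speed does not change between the equivalent settings; only focal length, f-number, and ISO scale.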
'But, while it may look a lot like a D610, the D750 feels rather different, thanks to its magnesium alloy shell. This allows it to be lighter than the 610, which uses a plastic composite body on a magnesium alloy chassis, '
Sorry, that's wrong. The D610 (and I'm guessing D750) don't have a 'magnesium chassis', they have a plastic chassis. The D610 has magnesium top and back covers. I'm thinking the D750 just has a magnesium top, since the back is probably moulded plastic, due to the arrangements for the flippy screen.
(unknown member): Found a very good explanation on why the sample images in this article look that way. As I have mentioned in a previous post, I almost fell for it until I noticed that the different sensor sizes actually have the SAME resolution of 18Mp. This resulted in smaller sensels that contributed to the equivalent SNR for smaller sensors even though they had greater exposure. Had the article used the same sensor (same pixel density) then the full frame will show a much noisier result by virtue of underexposure.
Read about spatial resolution here:
Possibly you're capable of seeing that your source is saying the opposite of what you are. You claim that the light decreases due to the inverse square law as the focal length gets bigger. He says that the light decreases as the focal length gets smaller, due to the reduction in size of the aperture.

If you'd bothered to read around that source rather than quote mining, you'd realise that Clark is just as wrong as you, but in the opposite direction.
(unknown member): Very basic question:
Can this "equivalence" ever be achieved with film? Consider Kodak Ektar 100 in 35mm and 120 formats. If yes, then how?
If it can't be achieved then either:
1. film is exempted from basic photography, or
2. equivalence-fu is wrong.
@dtmateojr - "Grain is what you call noise in digital." - Up to a point; they are both a reflection of how many photons you've counted. "Does the grain change when you expose 35mm and 8x10 Ektar exactly the same way?" - Why this arbitrary restriction on using the same film in the two different formats?
@dtmateojr: If you'd bothered to look at the post Joe referred you to, rather than just dismissing it, you'd have seen that the equivalence for film is based on just the same idea as digital. The equivalence depends on the number of photons making up the image. A digital sensor counts those photons directly (more or less). Film counts the resulting photoelectrons rather differently: a pair of photons renders a grain reducible. To render an image from the same number of photoelectrons, a larger frame needs bigger grains. (In fact you could also do it perfectly satisfactorily with the smaller-grain film, but you'd need to print on a higher-contrast paper to compensate for the low maximum density that using the small grain would cause.)
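The "same number of photons" argument above is just arithmetic on exposure and frame area. A quick sketch (frame dimensions are nominal; the exposure figure is purely illustrative):

```python
def total_photons(photons_per_mm2, width_mm, height_mm):
    """Total light forming the image = exposure (photons per area) x frame area."""
    return photons_per_mm2 * width_mm * height_mm

# Same exposure (same photons per mm^2) on both formats:
film_35 = total_photons(1e6, 36, 24)      # 35mm frame
film_8x10 = total_photons(1e6, 254, 203)  # 8x10 sheet, nominal mm

print(film_8x10 / film_35)  # roughly 60x more total light at the same exposure
```

That area ratio is why the larger format can afford coarser grain (or, in digital terms, more shot-noise headroom) for the same final image quality.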
mostlyboringphotog: Finally, I grok it, so now I see why mFT aficionados would care more about the "Total Light" equivalency than any other format's owners.

BLUF: I have learned a thing or two, and agree that a "Total Light" comparison is fairer than a same-FOV, same-brightness comparison.

The equivalence in the image (GB criteria) comes from the equivalent settings (Sven), but I say it's the combination of the settings that is giving you the equivalence. The individual parameters are not equivalent to each other without the others set as well. Therefore, it's misleading to say any one parameter is equivalent to another one.

A good lens should be a good lens and not a good-lens equivalent.

No more animosity, even to que and pete's, as I have grokked...
@Androole re:"Of course, in these examples the smaller sensors will actually tend to be slightly better in terms of real-world signal-to-noise ratio. 1" is not a full stop behind M4/3, M4/3 is not a full stop behind APS-C, and APS-C is not a full stop behind FF. An EM-5 is only about 1.5 stops behind a Sony A7 in terms of ISO performance, despite having a 4x smaller sensor, for instance."
There's a reason for that: the smaller sensors are where the real competition is, so they tend to be made with the newest processes and tricks such as BSI and light pipes, while the bigger sensors have been relegated to old production lines which can't be used for small sensors any more. Thus small sensors tend to have higher efficiency than large ones. Maybe they are about four years ahead in sensor efficiency.
Roland Karlsson: Nice try to clear up some misunderstandings. Unfortunately it does not always help, as this thread shows.
The main irritating thing (for me) is that I do not have any problems with this whatsoever - still, all the confusion pops up now and then. This seems to be hard stuff for many people, for some strange reason.
When equivalence might be useful:
1. When someone who has developed their technique on one format is thinking of moving to another, it gives a guide as to which settings to use to achieve the results that they are used to.
2. When someone uses different cameras (maybe a smaller, lighter system for travel), to know how to set the controls to get the results that they are used to.
3. When someone is trying to emulate an effect they have seen, taken using a different sensor size to theirs. They can see the settings used for that shot and know which settings to use on their camera to get the same effect.

Re: "It is deliberately made more difficult to understand by introducing irrelevant concepts."

I think the reverse is true. There is a small group trying to deliberately obfuscate, because the message of equivalence conflicts with their own personal set of 'truths', which are often to do with the perceived superiority of their own equipment choices.