Sources of read noise in CMOS sensors

There are flat-top areas in the GretagMacbeth chart of these images. I tried some Octave experiments on them (having first downsampled all images to a reasonable 8 MP with good old Photoshop). But it's not just the standard deviation; one also has to take into account how fine or coarse the grain is. I don't know how to convert that subjective look-and-feel into a number.
If you're just interested in read noise, light is a confounding variable. Get rid of it. If you're interested in the frequency distribution of the read noise, take the FFT.
Here's some code to calculate and display the FFT. It's in Matlab, but you're using Octave, so you should be able to adapt it.

classdef RawSpectrumAnalyzer
    % RawSpectrumAnalyzer - Computes average horizontal and vertical power spectra
    % from a raw image file (e.g., .dng) using rawread.

    properties
        fileName = '_DSF0027.dng' % Path to the raw image file (.dng, .cr2, etc.)
        planeNumber = 1           % Raw plane index to extract
        rawImage                  % Raw image data (double, black level subtracted)
        imageSize                 % [rows, cols] of the selected plane
        blackLevel = 64           % Black level in raw DN
        upperYlim = -10           % Upper plot limit, in stops below the peak
    end

    methods
        function obj = RawSpectrumAnalyzer(fileName, planeNumber)
            % Constructor
            if nargin > 0
                obj.fileName = fileName;
            end
            if nargin > 1
                obj.planeNumber = planeNumber;
            end

            obj = obj.loadRawImage();
        end

        function obj = loadRawImage(obj)
            % Load raw image and extract the specified plane. Convert to double
            % BEFORE subtracting the black level, so that below-black noise goes
            % negative instead of clipping to zero in the unsigned integer type.
            raw = double(rawread(obj.fileName)) - obj.blackLevel;

            if ndims(raw) == 2
                img = raw;
            elseif ndims(raw) == 3
                if obj.planeNumber > size(raw, 3)
                    error('planeNumber exceeds number of planes in raw image.');
                end
                img = raw(:, :, obj.planeNumber);
            else
                error('Unsupported raw image format.');
            end

            obj.rawImage = img;
            obj.imageSize = size(img);
        end

        function [horizontalSpectrum, verticalSpectrum] = computeSpectra(obj)
            % Compute average horizontal and vertical power spectra
            img = obj.rawImage;
            [rows, cols] = size(img);

            horizontalSpectrum = zeros(1, cols);
            for r = 1:rows
                F = fft(img(r, :));
                P = abs(F).^2;
                horizontalSpectrum = horizontalSpectrum + P;
            end
            horizontalSpectrum = horizontalSpectrum / rows;

            verticalSpectrum = zeros(1, rows);
            for c = 1:cols
                F = fft(img(:, c));
                P = abs(F).^2;
                verticalSpectrum = verticalSpectrum + P.'; % column -> row
            end
            verticalSpectrum = verticalSpectrum / cols;
        end

        function plotSpectra(obj)
            [horz, vert] = obj.computeSpectra();
            rows = obj.imageSize(1);
            cols = obj.imageSize(2);

            % Keep frequencies from DC up to Nyquist (0.5 cycles/pixel)
            horz = horz(1:floor(cols/2));
            vert = vert(1:floor(rows/2));

            % Normalize to the peak and convert to stops (log2)
            horz = log2(max(horz, 1e-12) / max(horz));
            vert = log2(max(vert, 1e-12) / max(vert));

            f_h = linspace(0, 0.5, numel(horz));
            f_v = linspace(0, 0.5, numel(vert));

            figure;

            subplot(2,1,1);
            plot(f_h, horz);
            title('Average Horizontal Spectrum');
            xlabel('Spatial Frequency (cycles/pixel)');
            ylabel('Relative Power (stops)');
            drawnow; % Force plot rendering to get Y limits
            yl = ylim;
            ylim([yl(1), obj.upperYlim]);

            subplot(2,1,2);
            plot(f_v, vert);
            title('Average Vertical Spectrum');
            xlabel('Spatial Frequency (cycles/pixel)');
            ylabel('Relative Power (stops)');
            drawnow;
            yl = ylim;
            ylim([yl(1), obj.upperYlim]);
        end
    end
end
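For example, in MATLAB (file name hypothetical), after saving the class as RawSpectrumAnalyzer.m:

analyzer = RawSpectrumAnalyzer('dark_frame.dng', 1); % plane 1 of a dark frame
analyzer.plotSpectra(); % a flat trace means white read noise; spikes mean banding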
 
As shown earlier, the performance of the two pixels is comparable under the same Exposure, in spite of the size difference of the pixels and of their formats.
I'm not sure about the last sentence. The Phase One IQ4 also has small pixels, but looks horrible at ISO 25k in some parts of the image. It's definitely not optimized for that, I know.
Pixel size and read noise of the IQ4 150 look similar to the 7RM4's, apart from those dips, which look like some sort of pre-processing.

From photonstophotos.net. Dips suggest pre-processing.

According to Eric Fossum, values less than 1 e- should be treated with suspicion. Is it the same Sony pixel configuration?
It doesn't, but there is again a potential misunderstanding. The total number of photons captured depends only on exposure time and f-number, all other physical variables equal; ISO doesn't enter that equation. So if one wants to set things up Equivalently, one has to ensure that Luminance and shutter speed are the same and that M43's f-number is about half of FF's*, regardless of ISO.

The selection of the appropriate ISO value rests on factors independent of Exposure.

Jack

* For the purposes of noise measurement off a flat target one could just as well compare images with the same f-number and double the exposure time, which would provide the same Exposures as above.
Maybe I didn't get your point. You think that because the OM-3 has 1/4000 s at ISO 12k and the A7RM4 has 1/5000 s at ISO 50k, that is evidence that the OM-3's indicated ISO value is simply "too optimistic"?
If the pixel size, lighting and f-number are similar, then they roughly saw the same Exposure (regardless of ISO, which is an independent variable: the number of incoming photons is determined by parameters that do not involve the sensor). Images captured by different formats with the same Exposure are not 'equivalent'.
If you look at the shutter speeds for different ISOs on the OM-3, you notice that ISO 12k, 25k and 50k have the same shutter speed.
Maybe they used an ND filter, or they dimmed the lighting a bit. For the A7RM4 they indicate 1/5000 s at ISO 12k, 25k, 50k and 100k. I have neither of the two cameras, but I would assume that the indicated ISO values are correct to within about 1/3 of a stop. 20 years ago that was a different matter.

Thankfully we have the same effective focal range, and therefore roughly the same distance between lens and subject, for the A7RM4 and the OM-3. I'm totally sure that the amount of light getting to the sensor is strictly correlated to the exposure time. I'm quite sure that the second parameter is the area of the entrance pupil of the lens (= pi/4 * (focal_length / f-number)^2).
Ah, I see where the fallacy might come in, one focal length too many. The camera equation is in fact

Hi = pi * L * t / (4 * Nw^2), where Hi = Exposure, L = Luminance, t = Exposure Time, Nw = working f-number

You can find its derivation here: https://www.strollswithmydog.com/camera-equation-angles/
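As a quick Octave sketch of that equation, with made-up values, which also shows why ISO appears nowhere in it:

% Camera equation: H = pi * L * t / (4 * Nw^2), losses ignored
L_lum = 100;    % luminance in cd/m^2 (illustrative)
t = 1/100;      % exposure time in s
Nw = [5.6 2.8]; % working f-numbers: FF, and M43 at about half of FF's
H = pi * L_lum * t ./ (4 * Nw.^2) % halving Nw quadruples H; ISO never enters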
I imagine that entrance pupil as a black hole that directs all the subject light hitting it onto the sensor (minus some losses). That area for an 85 mm f/5.6 is four times larger than for the 42.5 mm f/5.6 on the OM-3. So for the same shutter speed and the same f-stop, the A7RM4 sensor should receive four times as much light. With the lighting of the test chart being variable, I can only rely on the ISO values being roughly equal between different manufacturers. (I know that some have more and some have less highlight headroom.)

There are flat-top areas in the GretagMacbeth chart of these images. I tried some Octave experiments on them (having first downsampled all images to a reasonable 8 MP with good old Photoshop). But it's not just the standard deviation; one also has to take into account how fine or coarse the grain is. I don't know how to convert that subjective look-and-feel into a number.
There are lots of uniform patches in the Studio Scene which, though not ideal, will give you results not too far out of the ballpark, as shown earlier. With your own target, try to use one with minimal rugosity and take a slightly out-of-focus image. Or, better yet for random noise, take two of them and subtract them, then compensate for the additional noise by dividing the standard deviation by √2.
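In Octave, the two-frame method could look something like this (file names and patch coordinates are placeholders):

a = double(imread('shot1.tif'));  % two identical captures of the target
b = double(imread('shot2.tif'));
pA = a(1001:1100, 2001:2100);     % the same uniform patch in both frames
pB = b(1001:1100, 2001:2100);
d = pA - pB;                      % subtraction removes the fixed pattern
sigma = std(d(:)) / sqrt(2)       % two independent noise draws add in quadrature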

Jack
 
Is there anybody here who knows what the latest generation of sensors looks like BEFORE one subtracts FPN? I'm wondering what FPN cancelling really does to an image. When I was trying to solve banding issues on my old Nikon D4, I realized that the "raw" file really isn't raw: it's rewritten with info from the dark edge pixels, and in many cases this means that the low-order bits are mush.
 
Is there anybody here who knows what the latest generation of sensors looks like BEFORE one subtracts FPN? I'm wondering what FPN cancelling really does to an image. When I was trying to solve banding issues on my old Nikon D4, I realized that the "raw" file really isn't raw: it's rewritten with info from the dark edge pixels, and in many cases this means that the low-order bits are mush.
Are you asking what the captures look like before correlated double sampling and gain mapping? I don't think that is possible without special equipment and inside knowledge. However, I don't see why those things are destructive to image quality.
 
Is there anybody here who knows what the latest generation of sensors looks like BEFORE one subtracts FPN? I'm wondering what FPN cancelling really does to an image. When I was trying to solve banding issues on my old Nikon D4, I realized that the "raw" file really isn't raw: it's rewritten with info from the dark edge pixels, and in many cases this means that the low-order bits are mush.
Are you asking what the captures look like before correlated double sampling and gain mapping? I don't think that is possible without special equipment and inside knowledge. However, I don't see why those things are destructive to image quality.
Jim, did you study the Pentax K3 III Monochrome? Any links if you did? And what do you think of it?

I did not find an answer to this question: aside from better light gathering and reduced noise at a chosen ISO from not including the CFA, are there any more benefits to a dedicated monochrome sensor for B&W?

--
https://www.flickr.com/photos/151760793@N02/
 
Is there anybody here who knows what the latest generation of sensors looks like BEFORE one subtracts FPN? I'm wondering what FPN cancelling really does to an image. When I was trying to solve banding issues on my old Nikon D4, I realized that the "raw" file really isn't raw: it's rewritten with info from the dark edge pixels, and in many cases this means that the low-order bits are mush.
Are you asking what the captures look like before correlated double sampling and gain mapping? I don't think that is possible without special equipment and inside knowledge. However, I don't see why those things are destructive to image quality.
Jim, did you study the Pentax K3 III Monochrome? Any links if you did? And what do you think of it?
Haven't tested it.
I did not find an answer to this question: aside from better light gathering and reduced noise at a chosen ISO from not including the CFA, are there any more benefits to a dedicated monochrome sensor for B&W?
Yes. Look up my Leica Q2M posts on my blog.

--
https://blog.kasson.com
 
As shown earlier, the performance of the two pixels is comparable under the same Exposure, in spite of the size difference of the pixels and of their formats.
I'm not sure about the last sentence. The Phase One IQ4 also has small pixels, but looks horrible at ISO 25k in some parts of the image. It's definitely not optimized for that, I know.
Pixel size and read noise of the IQ4 150 look similar to the 7RM4's, apart from those dips, which look like some sort of pre-processing.

From photonstophotos.net. Dips suggest pre-processing.

According to Eric Fossum, values less than 1 e- should be treated with suspicion. Is it the same Sony pixel configuration?
That's the theoretical measurement. See here: the noise floor seems to depend on the sensor location. The charge from the pixels is read out along the short sensor dimension to get to the A/D converters, right? There seems to be a problem in the lower part of the image.


If I think more about it: they used 1/2500 s, which seems to be quite fast for a leaf shutter lens. Maybe it was taken with electronic shutter. My trusty E-M1 has the same kind of artifact at ISO 3200 with electronic shutter. So maybe that's not a problem of the IQ4 back, but more a kind of user error.
Maybe I didn't get your point. You think that because the OM-3 has 1/4000 s at ISO 12k and the A7RM4 has 1/5000 s at ISO 50k, that is evidence that the OM-3's indicated ISO value is simply "too optimistic"?
If the pixel size, lighting and f-number are similar, then they roughly saw the same Exposure (regardless of ISO, which is an independent variable: the number of incoming photons is determined by parameters that do not involve the sensor). Images captured by different formats with the same Exposure are not 'equivalent'.
I may have a language problem there, as the first and the second sentences don't connect in my brain. Exposure shouldn't be a function of pixel size, but of time. Typo?

What I meant by "equivalent" was depth-of-field-equivalent, since f/1.2 is often not really usable in an artistic sense (background melted away, only one eye / one face in focus, or simply the nose tip in focus).

But I think that for noise comparison I don't need equivalence. I assume that ISO 200 on camera A is about ISO 200 on camera B, and that image noise doesn't differ much whether you shoot at 1/4000 or 1/50 s.
If you look at the shutter speeds for different ISOs on the OM-3, you notice that ISO 12k, 25k and 50k have the same shutter speed.

Maybe they used an ND filter, or they dimmed the lighting a bit. For the A7RM4 they indicate 1/5000 s at ISO 12k, 25k, 50k and 100k. I have neither of the two cameras, but I would assume that the indicated ISO values are correct to within about 1/3 of a stop. 20 years ago that was a different matter.

Thankfully we have the same effective focal range, and therefore roughly the same distance between lens and subject, for the A7RM4 and the OM-3. I'm totally sure that the amount of light getting to the sensor is strictly correlated to the exposure time. I'm quite sure that the second parameter is the area of the entrance pupil of the lens (= pi/4 * (focal_length / f-number)^2).
Ah, I see where the fallacy might come in, one focal length too many. The camera equation is in fact

Hi = pi * L * t / (4 * Nw^2), where Hi = Exposure, L = Luminance, t = Exposure Time, Nw = working f-number

You can find its derivation here: https://www.strollswithmydog.com/camera-equation-angles/
I had hoped never to see these formulas again. *g*
E is a kind of intensity, meaning power per area. What I talked about is power, without the connection to the area. The basic principle is described in the part named "The camera's perspective": if the entrance pupil is the same (and subject distance and FOV too), they get the same amount of light from a scene.
I imagine that entrance pupil as a black hole that directs all the subject light hitting it onto the sensor (minus some losses). That area for an 85 mm f/5.6 is four times larger than for the 42.5 mm f/5.6 on the OM-3. So for the same shutter speed and the same f-stop, the A7RM4 sensor should receive four times as much light. With the lighting of the test chart being variable, I can only rely on the ISO values being roughly equal between different manufacturers. (I know that some have more and some have less highlight headroom.)

There are flat-top areas in the GretagMacbeth chart of these images. I tried some Octave experiments on them (having first downsampled all images to a reasonable 8 MP with good old Photoshop). But it's not just the standard deviation; one also has to take into account how fine or coarse the grain is. I don't know how to convert that subjective look-and-feel into a number.
There are lots of uniform patches in the Studio Scene which, though not ideal, will give you results not too far out of the ballpark, as shown earlier. With your own target, try to use one with minimal rugosity and take a slightly out-of-focus image. Or, better yet for random noise, take two of them and subtract them, then compensate for the additional noise by dividing the standard deviation by √2.
It's difficult to do a comparison if you have just one camera that can be considered modern. ;)
I don't think that Kodak, Sony or Fujifilm CCDs are of any importance today.
 
Tried it out. Octave doesn't know rawread.
But maybe that's not the point. The values from Eric Fossum are very likely accurate. It's more about the interpretation of these values, since they only measure one pixel. I'm not sure whether you can compare them directly and map them onto a single good-to-bad scale, at least for dim light.

Maybe I'm a bit too focused on low-light work, but the thread is titled that way...

To leave out the question of 645 vs. 135 vs. ... format, let's do a visual thought experiment regarding high-ISO noise with some simplifications (monochrome, i.e. no Bayer pattern, same electronic technology, same microlens efficiency, no dead zones between pixels, ...). The sensor size is fixed:

We have a 12 MP "large pixel" sensor of a given size with 1 electron of read noise per pixel (and a full-well capacity large enough to be ignored at ISO xxk). For a given exposure (let's say f/2.8 at 1/100 s) I get an image that has some noise.

If we now decrease the area of each pixel to 25%, we have a 48 MP "small pixel" sensor with the same read noise per pixel. What changes in the image? The charge that was accumulated in one large pixel is now split across four pixels. So I get more Poisson shot noise per pixel, but within the area of one large pixel I also get four times the read noise. Both of these noise sources are finer grained in the image.

One can repeat that procedure by dividing each small pixel again into four "tiny pixels". Now we have a 192 MP sensor, but with 16x the read noise per large pixel and far finer grain. If one repeats that again and again, I would end up with something that's purely dominated by the read noise of the sensor.

Without mathematical proof: I would guess that decreasing the pixel size of a camera far below the "pixel size" of the viewing medium (print, screen, ...) is not useful, at least from a noise perspective.

If that's valid, one has to take the pixel count of the sensor into account when interpreting these numbers for high ISO values. In the end, the read-out noise in the image is read-out noise per pixel times the number of pixels. And all that is mashed up with some obscure function that weights noise of different spatial frequencies according to human perception. As shooting with 4 MP like in the 2000s is not useful, it feels like there is an optimum.

Maybe that's a bit too far off-topic for a medium format talk. It's not the usual use case.
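To put rough numbers on the thought experiment, here is a minimal Octave sketch (illustrative values; poissrnd needs the statistics package):

pkg load statistics              % for poissrnd
signal_e = 16;                   % mean photoelectrons per large-pixel area, dim light
read_e = 1;                      % read noise per pixel in e-, same for all pixel sizes
trials = 1e5;
for n = [1 4 16]                 % the 12, 48 and 192 MP cases
  sub = poissrnd(signal_e/n, trials, n) + read_e * randn(trials, n);
  agg = sum(sub, 2);             % recombine to the large-pixel scale
  fprintf('%2d subpixels: noise = %.2f e- (read-noise part = %.1f e-)\n', ...
          n, std(agg), read_e * sqrt(n));
end

Note that the aggregated read noise grows as sqrt(n) rather than n, because independent contributions add in quadrature; the conclusion is unchanged, though: keep subdividing and the read noise eventually swamps the shot noise.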
 
Tried it out. Octave doesn't know rawread.
You can use RawDigger to extract the undemosaiced images, or the individual color planes. Then you won't need rawread.
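A minimal sketch, assuming you export one channel from RawDigger as a 16-bit grayscale TIFF (file name and black level hypothetical):

img = double(imread('raw_R_plane.tif')); % RawDigger channel export
img = img - 64;                          % subtract the black level yourself
% use img in place of obj.rawImage in the spectrum code above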
But maybe that's not the point. The values from Eric Fossum are very likely accurate. It's more about the interpretation of these values, since they only measure one pixel. I'm not sure whether you can compare them directly and map them onto a single good-to-bad scale, at least for dim light.
It's not good and bad. It's read noise.
Maybe I'm a bit too focused on low-light work, but the thread is titled that way...
The thread is about read noise. In my photography, that's not a big deal. Photon noise is more of a difficulty.
To leave out the question of 645 vs. 135 vs. ... format, let's do a visual thought experiment regarding high-ISO noise with some simplifications (monochrome, i.e. no Bayer pattern, same electronic technology, same microlens efficiency, no dead zones between pixels, ...). The sensor size is fixed:

We have a 12 MP "large pixel" sensor of a given size with 1 electron of read noise per pixel (and a full-well capacity large enough to be ignored at ISO xxk). For a given exposure (let's say f/2.8 at 1/100 s) I get an image that has some noise.

If we now decrease the area of each pixel to 25%, we have a 48 MP "small pixel" sensor with the same read noise per pixel. What changes in the image? The charge that was accumulated in one large pixel is now split across four pixels. So I get more Poisson shot noise per pixel, but within the area of one large pixel I also get four times the read noise. Both of these noise sources are finer grained in the image.

One can repeat that procedure by dividing each small pixel again into four "tiny pixels". Now we have a 192 MP sensor, but with 16x the read noise per large pixel and far finer grain. If one repeats that again and again, I would end up with something that's purely dominated by the read noise of the sensor.

Without mathematical proof: I would guess that decreasing the pixel size of a camera far below the "pixel size" of the viewing medium (print, screen, ...) is not useful, at least from a noise perspective.

If that's valid, one has to take the pixel count of the sensor into account when interpreting these numbers for high ISO values.
You can scale everything to the same print size.
In the end, the read-out noise in the image is read-out noise per pixel times the number of pixels.
Huh?
And all that is mashed up with some obscure function that weights noise of different spatial frequencies according to human perception. As shooting with 4 MP like in the 2000s is not useful, it feels like there is an optimum.
Please state your contention so that it can be tested.
Maybe that's a bit too far off-topic for a medium format talk. It's not the usual use case.
--
https://blog.kasson.com
 
Tried it out. Octave doesn't know rawread.
You can use RawDigger to extract the undemosaiced images, or the individual color planes. Then you won't need rawread.
But maybe that's not the point. The values from Eric Fossum are very likely accurate. It's more about the interpretation of these values, since they only measure one pixel. I'm not sure whether you can compare them directly and map them onto a single good-to-bad scale, at least for dim light.
It's not good and bad. It's read noise.
Well, less read noise is good, isn't it? But there is the "grain size" as another variable.
Maybe I'm a bit too focused on low-light work, but the thread is titled that way...
The thread is about read noise. In my photography, that's not a big deal. Photon noise is more of a difficulty.
As I wrote, not the typical use case for medium format cameras.
To leave out the question of 645 vs. 135 vs. ... format, let's do a visual thought experiment regarding high-ISO noise with some simplifications (monochrome, i.e. no Bayer pattern, same electronic technology, same microlens efficiency, no dead zones between pixels, ...). The sensor size is fixed:

We have a 12 MP "large pixel" sensor of a given size with 1 electron of read noise per pixel (and a full-well capacity large enough to be ignored at ISO xxk). For a given exposure (let's say f/2.8 at 1/100 s) I get an image that has some noise.

If we now decrease the area of each pixel to 25%, we have a 48 MP "small pixel" sensor with the same read noise per pixel. What changes in the image? The charge that was accumulated in one large pixel is now split across four pixels. So I get more Poisson shot noise per pixel, but within the area of one large pixel I also get four times the read noise. Both of these noise sources are finer grained in the image.

One can repeat that procedure by dividing each small pixel again into four "tiny pixels". Now we have a 192 MP sensor, but with 16x the read noise per large pixel and far finer grain. If one repeats that again and again, I would end up with something that's purely dominated by the read noise of the sensor.

Without mathematical proof: I would guess that decreasing the pixel size of a camera far below the "pixel size" of the viewing medium (print, screen, ...) is not useful, at least from a noise perspective.

If that's valid, one has to take the pixel count of the sensor into account when interpreting these numbers for high ISO values.
You can scale everything to the same print size.
No, you can't. I have a 70x50 cm image from a 4 MP camera (black and white) on my wall. Despite all the problems with printing technique, you can see the pixel structure in the grain. It looks totally different from a 16 MP image with grain.
In the end, the read-out noise in the image is read-out noise per pixel times the number of pixels.
Huh?
If you count the pure number of electrons that distort your image.
As written above, more read noise is bad.
And all that is mashed up with some obscure function that weights noise of different spatial frequencies according to human perception. As shooting with 4 MP like in the 2000s is not useful, it feels like there is an optimum.
Please state your contention so that it can be tested.
It can't be tested in hardware. There is no modern 12 MP sensor in, e.g., a full-frame camera, nor in any other format. Even the 50 MP sensor in the GFX is a different technology than the 100 MP sensor. That's why this is a thought experiment.
 
That's the theoretical measurement. See here: the noise floor seems to depend on the sensor location. The charge from the pixels is read out along the short sensor dimension to get to the A/D converters, right? There seems to be a problem in the lower part of the image.

https://www.dpreview.com/reviews/im...1&x=-0.6393056057866184&y=-0.9393430351747949
Regardless of where in the images, the 7RM4 looks better than the OM-3 at ISO 25600 when shown at the same size. But as we said, this is to be expected, since the pixels are very roughly the same size and they are supposed to be getting very roughly the same exposure. If you wanted to compare their noise Equivalently, with 1/4 the Exposure for the 7RM4, they would look much more similar.

However, a main point that should come out of this investigation is that there are too many variables in Studio Scene images to allow for this level of scrutiny: DPR adjusts exposure for some shots, they adjust processing for some others, and they use LR to develop the raws, which as mentioned treats different cameras differently even with all sliders zeroed.
If the pixel size, lighting and f-number are similar, then they roughly saw the same Exposure (regardless of ISO, which is an independent variable: the number of incoming photons is determined by parameters that do not involve the sensor). Images captured by different formats with the same Exposure are not 'equivalent'.
I may have a language problem there, as the first and the second sentences don't connect in my brain. Exposure shouldn't be a function of pixel size, but of time. Typo?
Say uniform patches in the deep shadows of two sensors, of the order of 1% of full scale, result in an SNR of 15, indicating an approximate mean signal of 15^2 = 225 photoelectrons per pixel (shot-noise limited, so SNR = √signal), per the earlier ISO 200 measurement.

Exposure is proportional to e- per unit area, so if the pixel areas of the two cameras are similar and they produce a similar number of e-, they were roughly equally Exposed. On the other hand, if the pixels of the first camera have twice the area of the second's, it means that they saw half the Exposure, all else equal. This has implications for Equivalence.
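In numbers (Octave, with illustrative relative pixel areas):

e_per_pixel = 225;          % measured mean signal on both cameras
area_A = 2; area_B = 1;     % camera A's pixels have twice the area
H_A = e_per_pixel / area_A; % Exposure is proportional to e- per unit area
H_B = e_per_pixel / area_B;
H_B / H_A                   % = 2: camera B saw twice the Exposure of camera A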
What I meant by "equivalent" was depth-of-field-equivalent, since f/1.2 is often not really usable in an artistic sense (background melted away, only one eye / one face in focus, or simply the nose tip in focus).

But I think that for noise comparison I don't need equivalence. I assume that ISO 200 on camera A is about ISO 200 on camera B, and that image noise doesn't differ much whether you shoot at 1/4000 or 1/50 s.
You do if you want to compare noise from images from different formats when viewed at the same size, as your link to the Studio Scene clearly shows.
E is a kind of intensity, meaning power per area. What I talked about is power, without the connection to the area. The basic principle is described in the part named "The camera's perspective": if the entrance pupil is the same (and subject distance and FOV too), they get the same amount of light from a scene.
E is (ir)radiance in photons/s/m^2. H is Exposure in photons/m^2. The total number of photons collected by the entrance pupil while the shutter is open is ideally the same number that ends up on the sensor. The larger the sensor area that they are spread out over, the lower the Exposure.

The same number of photons collected by the entrance pupil of the lens results in different Exposure on the sensor when the working f-number is different *

So Exposure depends on the opening angle, which depends on the working f-number of the lens per the linked article.

This means that if two sensors of different formats are lit by the same Exposure, the total number of photons at the entrance pupil of the larger format is higher than that of the smaller one, all else equal. Therefore their noise performance will not be Equivalent when their final images are displayed at the same size - and the larger format will look better, as your link above clearly shows.

Jack

* From: https://www.strollswithmydog.com/equivalence-focal-length-fnumber-diffraction/
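For completeness, the same-Exposure comparison across formats in numbers (Octave, illustrative Exposure):

H = 1000;                  % Exposure in photons per mm^2 (illustrative)
area_ff  = 36 * 24;        % full-frame sensor area in mm^2
area_m43 = 17.3 * 13;      % Micro Four Thirds sensor area in mm^2
photons_ff  = H * area_ff  % ~864000 photons in total
photons_m43 = H * area_m43 % ~225000, roughly 1/4: more noise at the same display size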
 
That's the theoretical measurement. See here: the noise floor seems to depend on the sensor location. The charge from the pixels is read out along the short sensor dimension to get to the A/D converters, right? There seems to be a problem in the lower part of the image.

https://www.dpreview.com/reviews/im...1&x=-0.6393056057866184&y=-0.9393430351747949
Regardless of where in the images, the 7RM4 looks better than the OM-3 at ISO 25600 when shown at the same size. But as we said, this is to be expected, since the pixels are very roughly the same size and they are supposed to be getting very roughly the same exposure. If you wanted to compare their noise Equivalently, with 1/4 the Exposure for the 7RM4, they would look much more similar.
At the same ISO with the same f-stop and exposure time, the A7RM4 gets four times the light and has about half the DoF of an OM-3.
I doubt that the A7RM4 looks the same at ISO 25k and ISO 100k. That's not what I see with both side by side on my screen.
With ideal sensors, ISO 25k on µ4/3 should look like ISO 100k on full frame with respect to noise (assuming enough image resolution for the print size).
However, a main point that should come out of this investigation is that there are too many variables in Studio Scene images to allow for this level of scrutiny: DPR adjusts exposure for some shots, they adjust processing for some others, and they use LR to develop the raws, which as mentioned treats different cameras differently even with all sliders zeroed.
I know, but I have nothing better. Maybe dcraw would be an alternative.
If the pixel size, lighting and f-number are similar, then they roughly saw the same Exposure (regardless of ISO, which is an independent variable: the number of incoming photons is determined by parameters that do not involve the sensor). Images captured by different formats with the same Exposure are not 'equivalent'.
I may have a language problem there, as the first and the second sentences don't connect in my brain. Exposure shouldn't be a function of pixel size, but of time. Typo?
Say uniform patches in the deep shadows of two sensors, of the order of 1% of full scale, result in an SNR of 15, indicating an approximate mean signal of 15^2 = 225 photoelectrons per pixel (shot-noise limited, so SNR = √signal), per the earlier ISO 200 measurement.

Exposure is proportional to e- per unit area, so if the pixel areas of the two cameras are similar and they produce a similar number of e-, they were roughly equally Exposed. On the other hand, if the pixels of the first camera have twice the area of the second's, it means that they saw half the Exposure, all else equal. This has implications for Equivalence.
See below.
What I meant by "equivalent" was depth-of-field-equivalent, since f/1.2 is often not really usable in an artistic sense (background melted away, only one eye / one face in focus, or simply the nose tip in focus).

But I think that for noise comparison I don't need equivalence. I assume that ISO 200 on camera A is about ISO 200 on camera B, and that image noise doesn't differ much whether you shoot at 1/4000 or 1/50 s.
You do if you want to compare noise from images from different formats when viewed at the same size, as your link to the Studio Scene clearly shows.
I don't understand that, linguistically. I don't need a certain aperture to judge the noise of a sensor at ISO xx.
E is a kind of intensity, meaning power per area. What I talked about is power, without the connection to the area. The basic principle is described in the part named "The camera's perspective": if the entrance pupil is the same (and subject distance and FOV too), they get the same amount of light from a scene.
E is (ir)radiance in photons/s/m^2. H is Exposure in photons/m^2. The total number of photons collected by the entrance pupil while the shutter is open is ideally the same number that ends up on the sensor. The larger the sensor area that they are spread out over, the lower the Exposure.

The same number of photons collected by the entrance pupil of the lens results in different Exposure on the sensor when the working f-number is different *

So Exposure depends on the opening angle, which depends on the working f-number of the lens per the linked article.
It may be a bit confusing for others if we mix up opening angle (aka numerical aperture) with angle of view.
This means that if two sensors of different formats are lit by the same Exposure, the total number of photons at the entrance pupil of the larger format is higher than that of the smaller one, all else equal. Therefore their noise performance will not be Equivalent when their final images are displayed at the same size - and the larger format will look better, as your link above clearly shows.
I think we really have a language problem here. Probably my fault.
You can shoot at "equivalent exposure" (same exposure time, same f-stop) with two formats. Then the images will look different on a 10x15 cm print, as DoF and background blur change.
You can also shoot at "equivalent DoF" with two formats. Then you have to stop down the lenses to match a certain entrance pupil / DoF. If you do so, these images will look the same on a 10x15 cm print.

So it depends on what should be equivalent. I meant DoF equivalence. I tried out a Nikon F camera with a 1.4 lens; for me that wasn't usable.
 
Tried it out. Octave doesn't know rawread.
You can use RawDigger to extract the undemosaiced images, or the individual color planes. Then you won't need rawread.
But maybe that's not the point. The values from Eric Fossum are very likely accurate. It's more about the interpretation of these values, since they only measure one pixel. I'm not sure whether you can compare them directly and map them onto a single good-to-bad scale, at least for dim light.
It's not good and bad. It's read noise.
Well, less read noise is good, isn't it? But there is the "grain size" as another variable.
Maybe I'm a bit too focused on low-light work, but the thread is titled that way...
The thread is about read noise. In my photography, that's not a big deal. Photon noise is more of a difficulty.
As I wrote, not the typical use case for medium format cameras.
I wrote this post in response to a question from an MF Forum member about where read noise comes from. I guess you can't please everybody.
 
Tried it out. Octave doesn't know rawread.
You can use RawDigger to extract the undemosaiced images, or the individual color planes. Then you won't need rawread.
But maybe that's not the point. The values from Eric Fossum are very likely accurate. It's more about the interpretation of these values, since they only measure one pixel. I'm not sure whether you can compare them directly and map them onto a single good-to-bad scale, at least for dim light.
It's not good and bad. It's read noise.
Well, less read noise is good, isn't it? But there is the "grain size" as another variable.
What the heck is grain size?
 
You can scale everything to the same print size.
No, you can't. I have a 70x50 cm image from a 4 MP camera (black and white) on my wall. Despite all the problems with printing technique, you can see the pixel structure in the grain.
Then you're not using the right resampling algorithm. But I had in mind scaling down, not up.
It looks totally different from a 16 MP image with grain.
There is no grain in a digital image. Are you talking about the character of the noise?
 
