Sensors and Pixels...

sdkid

I am looking to replace my trusty 7D MKii...

OK... in the world of surveillance cameras, cramming more pixels onto the SAME SIZE SENSOR is a terrible idea, as it divides up the available light and each pixel gets less of it. That wonderful 8mp surveillance cam ends up being practically worthless at night, whereas a 2mp sensor of the same physical size, under the same lighting conditions, will give awesome quality.

So that leads me to this:
The full-frame sensors in cameras are roughly 36x24 mm. Within that same approximate physical size, a Canon R5 has ~45mp, and a Sony A7R IVa has about 62mp. What are the implications of that? Do the lessons seen in surveillance camera sensors also apply to the sensors in these cameras?
 
It looks like a good portion of the answer to my question is found in this ~8 year old article, which is a very interesting and nerdy-cool read...

https://www.dpreview.com/articles/5365920428/the-effect-of-pixel-and-sensor-sizes-on-noise/2
If by "nerd" you mean someone who understands the subject, then yes, that article probably explains it correctly. I haven't read that article, but Richard Butler has shown a solid understanding of the subject.

When choosing a system, you need to get past poorly constrained generalities such as full-frame sensors capturing more light and therefore giving less noise. In this case, full frame probably does offer lower noise, but that assumes that it is coupled with the right lens.

Remember, it's not just the sensor that captures more light. It's also the lens. In the context of astrophotography, a full frame sensor just gets you a wider view. What is all-important for astrophotography is the lens aperture (diameter, not f-number!). A wider aperture captures more light.
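To make that concrete, here's a minimal sketch (the lenses and numbers below are hypothetical, purely for illustration): the physical aperture is the focal length divided by the f-number, and the light collected from a star scales with the area of that opening.

```python
# Minimal sketch of the aperture-diameter point (hypothetical lenses).
# Physical aperture diameter = focal length / f-number; light gathered from a
# point source such as a star scales with the area of that opening.
import math

def aperture_diameter_mm(focal_mm: float, f_number: float) -> float:
    return focal_mm / f_number

lenses = {
    "24mm f/1.4": (24, 1.4),
    "50mm f/1.8": (50, 1.8),
    "135mm f/2.0": (135, 2.0),
}

for name, (focal_mm, f_number) in lenses.items():
    d = aperture_diameter_mm(focal_mm, f_number)
    area = math.pi * (d / 2) ** 2
    print(f"{name}: diameter {d:.1f} mm, collecting area {area:.0f} mm^2")

# The 135mm f/2 has a ~68mm opening vs ~17mm for the 24mm f/1.4, so it gathers
# roughly 15x more light per star, despite the "slower" f-number.
```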
 
Forget all that. It's just a distraction. Choose a camera based on the usual traits: image quality, features, handling, etc. If you get bogged down in over-analysing things you'll never actually get a cam.
Actually, astrophotography is one area where a solid understanding of the fundamentals can make a big difference. The OP is right to ask questions.

And the answer to his question is that the pixel size has virtually nothing to do with noise.
 
I will let others argue the specifics of your question.

I have the 7D Mark II, the R7, and the R6. The R7 is a little better than the 7D MII with regard to noise, and that's with a significantly higher-resolution sensor. But the R6 is absolutely stunning when it comes to noise. Of course, that's at "only" 20 megapixels.

For night sports I use the R6 almost exclusively. You can draw your own conclusions.
What's missing from many of the comparisons is 'all things being equal'.
There is a theoretical advantage to larger pixels, but there are so many other factors (image processing, the construction of the sensor, stacking for example) that affect the outcome that a blanket statement really does not apply.
This is why I've gone back after a night event where I've used both lenses on both bodies and done a blind evaluation of noise so that I'm not biased. For me the R6 produces better results. In fact I'm often surprised how good they are. The R7 is certainly better than the 7Dii. Ultimately we must be confident in the field with our gear and I'm sure some are comfortable with the R7's performance.
I cringe when I read something like this. I don't know how you compared, but lots of people manage to fool themselves with this test. In general, to get equal results you don't use the same settings with different sensor sizes. There are five parameters that need to be controlled to make this comparison: focal length, f-stop, lens aperture, exposure time, and the ISO setting. They are not all independent. You can't make them all equal, but which ones you should make equal depends very much on what you are trying to photograph.

It may be that you did the comparison for astrophotography with 1.5 to 1.6 times the focal length for FF. In this case I can confirm that full frame has a 1 1/3 to 1.5 stop advantage in light collected, and you could indeed find lower noise. Under other conditions, particularly if depth of field is a consideration, there may be no advantage at all.

I'm not going to argue or explain further. Maybe you understand this subject, but in case you don't, I can call your attention to two very good articles on the subject: Equivalence by Joseph James and What is Equivalence? by Richard Butler.

EDIT: I'd like to echo what John Sheehy said, because it states an essential point particularly well. You can't just compare sensors. If you have a well-defined task with well-defined assumptions, you can compare systems, but the lenses are part of the systems.
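For anyone who wants that in concrete form, here is a rough sketch of the equivalence mapping those articles describe (the function and example numbers are mine, purely illustrative): to keep framing, depth of field, shutter speed, and total light the same across formats, scale focal length and f-number by the crop factor and ISO by the crop factor squared.

```python
# Rough sketch of the "equivalence" mapping (illustrative, not a recipe).
# Crop factors are expressed relative to full frame (FF = 1.0).

def equivalent_settings(focal_mm, f_number, iso, crop_from=1.6, crop_to=1.0):
    """Translate settings from one format to another at the same shutter speed."""
    scale = crop_from / crop_to
    return {
        "focal_mm": focal_mm * scale,   # same angle of view
        "f_number": f_number * scale,   # same aperture diameter -> same DoF, same total light
        "iso": iso * scale ** 2,        # same output lightness
    }

# Example: 50mm f/2.8 ISO 400 on a 1.6x crop body is roughly equivalent to
# 80mm f/4.5 ISO 1000 on full frame, all at the same shutter speed.
print(equivalent_settings(50, 2.8, 400))
```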
 
You haven't told us how you compare, or even what you mean by comparing them, which is important because comparing sensors of different sizes is practically meaningless; you need a lens and a purpose, and then you compare for that purpose.

Does the R7 get a lens perfectly matched to it for intended purpose, or does it function as backup, with whatever lens was available?

One obvious win for a larger sensor when you can't shoot at base ISO is when you use the same zoom lens on both, and use the same angle of view on each, but you're not giving the smaller sensor much to work with, forcing it to the inefficient wide end of a zoom.

--
Beware of correct answers to wrong questions.
John
http://www.pbase.com/image/55384958.jpg
 
That isn’t true.

In poor lighting you just need to downsample the 8 MP down to 2 MP, and the result will be less noisy than the 2MP sensor due to the lack of color aliasing by the color filter array; smaller pixels may also be less noisy in general, but I can’t explain the mechanism. And in good light you get more detail, so it’s a win for the 8 MP sensor.
So four lousy pixels combined make 1 great pixel then.
 
It's astonishing that the same tiny handful of DPR posters continue to deny or dismiss this simple concept.

The aggregated S/N ratio of these four small pixels ...

[diagram: four small pixels]

... is the same as the S/N ratio of this one large pixel:

[diagram: one large pixel covering the same area]

Extend that principle to an entire image.

(This is about shot noise, which is acknowledged as the major source of noise in the preponderance of cases.)
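A quick way to check this numerically, assuming pure shot (Poisson) noise and made-up signal levels: simulate four small pixels each collecting a quarter of the light, sum them, and compare the result with one large pixel collecting all of it.

```python
# Monte Carlo check of the shot-noise claim (hypothetical photon counts).
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000

small = rng.poisson(250, size=(trials, 4)).sum(axis=1)  # four small pixels, binned
large = rng.poisson(1000, size=trials)                   # one pixel with 4x the area

for name, x in [("binned small pixels", small), ("single large pixel", large)]:
    print(f"{name}: SNR = {x.mean() / x.std():.1f}")

# Both come out around sqrt(1000) ~ 31.6: for shot noise, what matters is the
# total light collected over the area, not how that area is divided into pixels.
```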
 
If I've got it correct in my head, that large 'pixel' also has its own internal color information, without additional interpolation.
 
And the aggregated 'color information' of the four small ones would be nearly identical if not identical, assuming both areas capture the same portion of the same image. If you think any difference would be meaningful, you should explain why you think so.
 
A display with 4x the pixel density might be even greater.
 
Deception, algorithms & math. Just because two squares are the same size doesn't mean anything. You can’t change physics, but you can use knowledge of physics to bend the outcome toward your desired result.

Yes, pixel binning is effective to a certain degree, and that’s the reason why it’s used these days.

(just because the exposure triangle looks like a triangle, doesn’t mean it follows the logic of triangles; what you see often deceives you)
 
I guess that was supposed to be helpful in some way.
 
If somebody wants to stop blabbering around internet forums, and instead wants to use & sharpen their mind, there is a succinct introduction to binning and why it might be beneficial in certain cases at http://www.starrywonders.com/binning.html
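The gist of that binning argument in rough numbers (the signal and read-noise values below are made up, and this assumes CCD-style on-chip binning, where the charge is combined before readout so read noise is only added once; binning in software after readout adds each pixel's read noise in quadrature):

```python
# Illustrative comparison of on-chip (hardware) vs software 2x2 binning for a
# faint target. Values are hypothetical: 40 e- signal per small pixel, 8 e-
# read noise per readout.
import math

signal_e = 40.0
read_noise_e = 8.0

total_signal = 4 * signal_e
shot_noise = math.sqrt(total_signal)                 # shot noise of the 2x2 sum

hw_noise = math.hypot(shot_noise, read_noise_e)      # one readout after charge binning
sw_noise = math.hypot(shot_noise, 2 * read_noise_e)  # four readouts: 8 * sqrt(4) = 16 e-

print(f"hardware binning SNR: {total_signal / hw_noise:.1f}")  # ~10.7
print(f"software binning SNR: {total_signal / sw_noise:.1f}")  # ~7.8

# With a bright signal the shot noise dominates and the difference disappears,
# which is why the benefit mainly shows up for faint, read-noise-limited data.
```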
 
(just because the exposure triangle looks like a triangle, doesn’t mean it follows the logic of triangles; what you see often deceives you)
Just curious, what is the logic of triangles? How does one follow the logic of triangles?
 
Basic geometry; I honestly don’t understand your question.
I'm not sure how the 'exposure triangle' could fail to follow basic geometry. The only function of the triangle in the 'exposure triangle' is to be a triangle...

Having said that, the exposure triangle is a pretty bad model overall, and illustrations are usually quite confusing.
 
It has already been shown several times that while the illustration of the “exposure triangle” is a triangle, it is a very bad model, because it doesn’t follow the logic of geometry when you adjust the photography-related variables. Geometry and exposure each have their own underlying logic, and the two only partially align. If it were a good visualization, both would align.

But again, that’s not my point here. The point is that just showing a visualization (a square in this case) means nothing. The proof (or rather the rebuttal) is the exposure triangle: it is a visualization, but it means nothing in the end because it doesn’t map 1:1 to photography principles. When you take four pixels and apply a binning algorithm to them, it is not the same as having a sensor with n/4 pixels in total.
 
Try reading Euclid. Dover Books have a good edition of this remarkable work.

But triangles are not helpful for understanding exposure. Three independent variables make a cuboid (X,Y, Z), not a triangle.

Don
 
Thank you. I haven't read Euclid, but I took advanced courses in analytical geometry at university, many years ago. Generally you don't study geometry from Euclid's works directly.
They're not independent, though, as there's also the scene luminance, and all four are related through an equation.
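For reference, that relation in its usual reflected-light form, with a quick numeric sanity check (the luminance value and calibration constant below are just typical illustrative figures): N^2 / t = L * S / K, where N is the f-number, t the shutter time in seconds, L the scene luminance in cd/m^2, S the ISO speed, and K a meter calibration constant of about 12.5.

```python
# Quick check of the exposure equation N^2 / t = L * S / K (illustrative numbers).
K = 12.5  # common reflected-light meter calibration constant

def metered_shutter_time(f_number, luminance_cd_m2, iso):
    """Shutter time the meter would pick for a given aperture, luminance, and ISO."""
    return f_number ** 2 * K / (luminance_cd_m2 * iso)

# A bright sunlit scene (~4000 cd/m^2) at f/16 and ISO 100 meters to about
# 1/125 s, close to the familiar "sunny 16" rule.
print(metered_shutter_time(16, 4000, 100))  # ~0.008 s
```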
 
I think they are independent, unless you specify that the image lightness must remain constant. For instance, rotating the shutter speed knob doesn't make the lens aperture ring rotate.

The exception is cameras where you set an EV number, as on a Hasselblad film camera.

Don
 
They're not independent if you have anything on 'auto', which is what the exposure triangle is trying to illustrate.
The exception is cameras where you set an EV number, as on a Hasselblad film camera.
That'll be any camera that has auto metering bound to exposure settings.
 
