Canon R5 vs R6mII

shroob

Hello everyone,

I currently use a Nikon D500 with the Tamron 150-600 G2 and have done for about 2 years. This was my first camera and I'm looking to upgrade. I have decided on either a Canon R5 or R6II with the RF 100-500 lens. I'm also considering getting the RF 800mm f/11 or a 1.4x TC.

I mainly shoot birds. Occasionally small mammals. However, birding is my primary hobby.

I cannot decide between the R5 and the R6II. I was hoping that if I asked here, people could help me make the decision. Please note that where I live, the price difference between them is minimal.



I am currently leaning towards the R5 simply because of the extra megapixels which will allow me to crop more. The birds are often quite far away.

For all other features the R6II is meant to be better (autofocus, low light, FPS, battery life, pre-burst shooting). However, I'm not sure if 24 MP will be enough if I need to crop.

Please could you give me your thoughts and recommendations on which camera to buy.

Thanks in advance.
 
But why not the R7, which is superb for birding? Its AF is inherited from the R3. You can't do better than that unless you are regularly shooting at ISO 12800 or higher.

My R7 at ISO 6400 produces terrific results. Best of all, I seldom have to crop.
What would be the better options above ISO 12800? Canon doesn't have any FF sensors which, in crop mode, have less noise than the R7 at high ISOs (unless the R6 II does and it hasn't been recognized yet). To get less noise from those Canon FF sensors, you need a bigger lens that actually uses the greater sensor area (or you use the longer end of a zoom range where you would be zooming out with APS-C). Most people choose the most useful lenses for themselves based on weight and price, and the lens is really what gets light from the subject; the sensor is just a place to record that light.
All good points. Made me think. This is partly why I made the R7 my first purchase when I decided to go mirrorless, followed by the R6.
 
For me, the answer is easy. I’m a birder too (love BIFs especially), and the R5 is IMHO the best choice, even with the R6ii in the mix now (R3 notwithstanding, but not in the running here).

The R5 simply ticks all of the right boxes for me. The resolution is a great match for the 100-500 (+/- 1.4x), the AF is just configurable enough (I like “Auto initial” for birding), the body is tough and better weather-sealed but not too heavy, the controls are excellent. I like being able to quickly switch Custom Shooting Modes by just pressing the “M-fn” button (ie switching from perched birds to BIFs instantly).

The IQ is so good that sometimes it feels like I can crop in forever.

I like shooting with the silent eShutter primarily (helps with tracking). Out of 100,000+ shots so far, I’ve only seen a handful adversely affected by rolling shutter effects. BTW, love the 20 fps (I keep my R6ii set to that fps too).

Interestingly enough, I have 4 close hiking/shooting buddies who are Nikon shooters (they all own both D500’s and D850’s). Two upgraded to the Z9 this year (but shhhh, IME the R5 is still the better birder ;-) ). Same goes for the R5 vs the R7 too. The R7 has too many “caveats” for me for birding (ie the rolling shutter and lesser weather-sealing esp), but I love it for macros. On a strict budget though, the R7 is really a great bargain as an all-round shooter.

The R6 (now the R6ii) is my general purpose camera (events, sports, etc). I know I’m going to like the extra AF configurability for those types of shooting. Bottom line though, the R5 + 100-500 + 1.4x is a match made in heaven. Simply can’t go wrong there.

R2
But why not the R7, which is superb for birding? Its AF is inherited from the R3. You can't do better than that unless you are regularly shooting at ISO 12800 or higher.

My R7 at ISO 6400 produces terrific results. Best of all, I seldom have to crop.
Not the person you are quoting, but I considered the R7 until I watched YouTube videos from people who use them, especially Duade Paton's review showing the rolling shutter. Equally, I do not live in a sunny environment, so low light performance is important to me.

The R7 is a fantastic camera, and I'm sure I'd be very happy with one. I just went for the R6 Mark II because I wanted to try full frame for the first time.
I find the high ISO performance of the R7 stellar. Next to my R6, at up to ISO 6400 it delivers detailed images with a very acceptable level of noise/grain. And the R6 is supposed to be the best, if not one of the best, Canon cameras when it comes to high ISO performance.

Surely you will be happy with the R6 Mark II because it's an improved version of what was already a great camera in the first place.
 
Sometimes I get what I can. Sometimes I have time. Sometimes I don't.
 
With the same duck photons, the duck photon noise will be the same, regardless of the number or size of pixels. More pixels will mean more pixel resolution, though.
Are you perhaps referring to the noise created by the detection of light and its conversion to an electrical signal? It is on the electrical side where we should create a system boundary; light would be outside.
Nothing electrical was being discussed. What exits the lens and hits the sensor are photons, and if we can't model for photons correctly regarding photographic and sensor parameters then we can't hope to model anything else correctly that builds upon that model. The conversion rate to electrons is fixed in each camera for the most part (some microlens exceptions at very low f-ratios are possible), but quantum efficiency is pretty much standard for the last decade and varies very little, compared to differences in sensor sizes and pixel sizes. All speculation about boundary effects is moot regarding total photons, when quantum efficiency is the same. If there were any edge effects that lost light, we would see it in the QE.

As far as spatial displacement of photons due to edge effects at the photosites is concerned, that is only an issue with color channel crosstalk with the very small pixels used in very-high-MP cellphone sensors, and it is only a problem because of classic single-pixel Bayer CFAs. That is solved by "Quad Bayer", which ensures that a larger percentage of the photons that pass through one filter but cross into a neighboring photowell land in one with the same color filter. As far as spatial resolution is concerned, however, such as with a monochrome sensor, crosstalk is not as degrading to resolution as you would think, because the smaller the pixels, the less displacement there is when a photon crosses a boundary: each pixel implies that it captured light at its center, and the centers of smaller pixels are closer to neighboring pixels than with larger pixels.
"Tonal information" is just an abstraction derived from the empirical noise, and if you are thinking of "color tone information", then the color filter properties play a role, too. Nothing extra to see there. The Bayer CFA is a pain in the butt (another "situational rut") when trying to do spatial thought experiments, but let's say we had a Foveon-like sensor, but more ideal in color filtration; RGB each pixel. You don't lose any "tonal information" by having more resolution due to pixel density, because if you actually needed to know what larger pixels would do (and you don't need to know that, anyway, as filtration is an alternative to binning that maintains sampling quality), all you would have to do is add them to get a larger pixel. There is nothing purer to be had by capturing the photons in larger pixels in the first place.
I was following you until the last sentence. What is meant by purer here?
Purity here would be the idea that the same amount of photons (or photons-turned electrons) is better in one large photosite rather than spread over multiple smaller ones; Quarkcharmed's alleged superior "tonal information". The fact is, there is LESS information per unit of sensor area with larger pixels. Larger-pixel information can be created from smaller-pixel information, but the reverse is not possible.
If you drop what seems to be about a dozen marbles per container into a grid of square containers, and didn't notice that there was a shallow divider in the bottom of each container that created 4 "sub-boxes", would your information be ruined in any way, compared to not having those dividers? Think hard on that.
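This thought experiment can be played out in a few lines of Python (an illustrative sketch with made-up numbers; the 2x2 divider stands in for smaller pixels):

```python
import random

random.seed(0)

# Each large container secretly has a 2x2 divider creating 4 sub-boxes;
# each marble lands in a random sub-box.
container_counts = []
subbox_counts = []
for _ in range(100):
    subs = [0, 0, 0, 0]
    for _ in range(random.randint(8, 16)):  # "about a dozen" marbles
        subs[random.randrange(4)] += 1
    subbox_counts.append(subs)
    container_counts.append(sum(subs))

# The dividers lose nothing: summing the four sub-boxes always recovers
# exactly what the undivided container would have reported, while the
# sub-box counts additionally record where the marbles fell.
assert all(sum(s) == c for s, c in zip(subbox_counts, container_counts))
```

Going the other way is not possible: the undivided container count alone cannot tell you how the marbles were split among the four sub-boxes.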
I think demanding people should think in certain ways is not a positive behaviour.

If I draw the boundary again that keeps the light, or marbles outside.
What does drawing it "again" mean? When did you first draw it? What did you draw it on? Boundary on the larger box or the smaller divider? Larger box in isolation or in a grid of them? People need to think about their statements and questions and make sure they are fully qualified if we're ever going to have any good come out of these conversations before the thread hits 150 posts. I don't know what you're picturing in your head, and I am not telepathic. As a guess, I might think that you are talking about losing photons originally headed to a photosite to the outside, but the outside would be another pixel, and the loss goes both ways, for no net loss of light, and a very mild anti-alias filter is created by the crosstalk.
What I am interested in is the boxes themselves. I may have misunderstood but are you considering the walls themselves which I assume would not be part of the detection system? Am I on the same track as you?
There isn't as much of a "wall" issue as some people like to think, with the use of microlenses. Remember, we already know that there is actually a very small range of quantum efficiency with current APS-C and FF sensors, and the photon noise is actually modulated by the square root of that small difference. This is a much smaller difference than the claimed differences of very different pixel sizes.

If you are talking about "crosstalk", that does not seem to be an issue with R7-size pixels or larger, because if it were, the R7 would have a lot more chromatic noise or much weaker colors for the same exposure times area (the same total light), but it does not. R7 color and chromatic noise at the pixel level at ISO 3200 are just as good as the R6 at ISO 12800, where the two cameras get the same amount of photons per pixel.
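The "same photons per pixel" claim follows from simple scaling. A back-of-envelope sketch (hypothetical relative numbers, not measured specs), assuming R6-class pixels have about 4x the area of R7-class pixels and that exposure is proportional to 1/ISO for a given scene and shutter speed:

```python
# Relative photons collected per pixel: pixel area times exposure,
# with exposure ~ scene_constant / ISO.
def photons_per_pixel(rel_pixel_area, iso, scene_constant=1.0):
    return scene_constant * rel_pixel_area / iso

r7_like = photons_per_pixel(rel_pixel_area=1.0, iso=3200)
r6_like = photons_per_pixel(rel_pixel_area=4.0, iso=12800)
assert r7_like == r6_like  # same photons per pixel in this model
```

In this model, 4x the pixel area at 4x the ISO cancels out exactly, which is why ISO 3200 on the denser sensor is compared against ISO 12800 on the less dense one.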
 
PUPIL/DISTANCE. Start thinking that way instead of exposures, ISO, f-ratios, etcetera, and you won't be as confused, and have a reliable gauge to check conclusions (drawn with other models) against. Call it a reality check.
When you compare sensors and image quality, what matters is information.
Agree so far. More photons is more information, and more resolution is also more information. Two types of information.
And the amount of information per duck is the exposure per duck.
No; it is the area size of the duck on the sensor, times the exposure, which determines the amount of photon information.
I basically meant the same.
It's not what you wrote, which was not correct, and is one of the core sources of confusion; the myth that exposure, in and of itself, is a metric of noise or IQ.
'Exposure' = 'energy per unit area' = 'information per unit area', as in this case the energy of the visible light is information. More exposure - more information, although it also includes photon noise. It's not the only metric because there are electronic circuits in camera that add more noise (and therefore reduce the amount of usable information). Also there are such things as quantum efficiency and well capacity that don't allow the system to capture all incoming photons.
Exposure is just one way of looking at light that tells us ZERO about total light used for an image,
How so? You just multiply it by the area of the sensor.
and has mainly been a historical concern only because of the fact that certain media only work optimally with exposure in a certain range, but the spatial expanse of that media or the size of the subject on it have just as much bearing on noise IQ as exposure.
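The "multiply by area" arithmetic is simple, but the caveat matters: it only works if the frame is actually filled. A quick sketch with approximate sensor dimensions:

```python
# Exposure (light per unit area) times the area actually used gives
# total light, which is what matters for subject-level noise.
FF_AREA = 36.0 * 24.0      # full frame, 864 mm^2
APSC_AREA = 22.3 * 14.9    # Canon APS-C, ~332 mm^2

exposure = 1.0  # identical exposure on both sensors (arbitrary units)
total_ff = exposure * FF_AREA
total_apsc = exposure * APSC_AREA
print(round(total_ff / total_apsc, 2))  # ~2.6x total light, IF the frame is filled
```

When the subject only covers part of the frame, it is the subject's area on the sensor, not the whole sensor area, that belongs in the multiplication.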
Which camera retains the most information per duck in its raw file, given the same amount of light from the duck reaching the sensor?
You haven't established what you mean by "amount of light". Are you talking exposure, or total duck photons? You seem to be talking about exposure, above.
No, duck photons, because they carry the information. I was considering it from the informational point of view.
With the same duck photons, the duck photon noise will be the same, regardless of the number or size of pixels. More pixels will mean more pixel resolution, though.
Let's say we have two grids of square buckets, one with 4x the bucket density of the other, and we dropped a million ball bearings drawing the shape of a duck of the same size on both. The grid with the larger buckets will have on average, 4x as many ball bearings as the grid with the smaller buckets, with the same total number of ball bearings, but the grid with the smaller buckets WILL HAVE MORE INFORMATION about resolution.
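The bucket-grid experiment can be simulated directly (illustrative sketch; the grid sizes and counts are made up):

```python
import random

random.seed(1)

# Fine grid: 8x8 small buckets. Coarse grid: 4x4 buckets with 4x the
# area, each covering a 2x2 block of the fine grid. Drop N ball
# bearings at random positions on the fine grid; the coarse grid's
# counts are implied by summation.
N = 100_000
fine = [[0] * 8 for _ in range(8)]
for _ in range(N):
    fine[random.randrange(8)][random.randrange(8)] += 1

# Large-bucket counts are exactly the 2x2 sums of small-bucket counts:
coarse = [[sum(fine[2 * r + dr][2 * c + dc] for dr in (0, 1) for dc in (0, 1))
           for c in range(4)] for r in range(4)]

# Same total ball bearings either way (same subject-level "photon
# noise"), but the fine grid additionally records finer positions.
assert sum(map(sum, fine)) == sum(map(sum, coarse)) == N
```

On average each coarse bucket holds 4x what a fine bucket holds, yet nothing about the coarse grid can be lost by subdividing it: the fine counts always sum back to the coarse counts.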
But less tonal information (dynamic range). You can't change the total information by swapping the sensors, but you can change whether you get more spatial or tonal information.
"Tonal information" is just an abstraction derived from the empirical noise, and if you are thinking of "color tone information", then the color filter properties play a role, too. Nothing extra to see there. The Bayer CFA is a pain in the butt (another "situational rut") when trying to do spatial thought experiments, but let's say we had a Foveon-like sensor, but more ideal in color filtration; RGB each pixel. You don't lose any "tonal information" by having more resolution due to pixel density, because if you actually needed to know what larger pixels would do (and you don't need to know that, anyway, as filtration is an alternative to binning that maintains sampling quality), all you would have to do is add them to get a larger pixel. There is nothing purer to be had by capturing the photons in larger pixels in the first place. If you drop what seems to be about a dozen marbles per container into a grid of square containers, and didn't notice that there was a shallow divider in the bottom of each container that created 4 "sub-boxes", would your information be ruined in any way, compared to not having those dividers? Think hard on that.
I need to think about that. So you're saying that by increasing the pixel density we only win and get/retain more information. That'd mean larger pixels lose some information compared to smaller pixels, even though they would capture the same number of photons (assuming there's no clipping).

--
https://www.instagram.com/quarkcharmed/
https://500px.com/quarkcharmed
 
But why not the R7, which is superb for birding? Its AF is inherited from the R3. You can't do better than that unless you are regularly shooting at ISO 12800 or higher.

My R7 at ISO 6400 produces terrific results. Best of all, I seldom have to crop.
The R5 is much more suitable for (my style of) birding. Biggies for me are the better weather sealing, reduced rolling shutter, larger buffer, better ergo (personal preference), and better AF response, esp in darker/reduced light. I also find BIFs to be easier to track with the R5’s EVF, though I haven’t nailed down exactly why yet.

I like the R7’s (and R6ii’s) greater AF configurability though, and will be upgrading the R5 to the Mark II when it comes out for sure. Right now the R7 is my dedicated macro body, and is doing great.

R2
 
'Exposure' = 'energy per unit area' = 'information per unit area', as in this case the energy of the visible light is information.

More exposure - more information, although it also includes photon noise. It's not the only metric because there are electronic circuits in camera that add more noise (and therefore reduce the amount of usable information). Also there are such things as quantum efficiency and well capacity that don't allow the system to capture all incoming photons.
My DPR Studio Comparison Tool link demonstrated, however, that the R7 and R6 give about the same amount of visible electronic noise with the same photon input, so there is no electronic noise benefit to those R6 pixels, "despite" being 4x the area.
Exposure is just one way of looking at light that tells us ZERO about total light used for an image,
How so? You just multiply it by the area of the sensor.
The context of this sub-discussion has been focal-length-limited situations for which I have presented what I have found to be a better model (Pupil/Distance, AKA "etendue") for looking at noise issues than the classic frame-based model. You can not assume usage of the entire frame in this context, so the area of a sensor is only useful when you are lucky enough to fill it. Let's be realistic; there is a lot of photography where no one has the lenses necessary to fill a FF sensor or even an APS-C sensor with their desired composition. In such cases, pupil and distance determine total subject photons with a given subject, shutter speed, and ambient light level. The main power of your system to deliver a subject with good subject-level SNR lies not in the body and sensor, but the lens' pupil, subject size, and the distance between them.
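The pupil/distance model can be sketched numerically (a hypothetical example with illustrative lens figures; constants like subject size and reflectance are omitted since only ratios matter):

```python
import math

# Subject photons scale with the solid angle the entrance pupil
# subtends as seen from the subject, i.e. pupil_area / distance^2,
# times shutter time.
def relative_subject_photons(pupil_diameter_mm, distance_m, shutter_s):
    pupil_area = math.pi * (pupil_diameter_mm / 2) ** 2
    return pupil_area / distance_m ** 2 * shutter_s

# Hypothetical example: a 500mm lens at f/7.1 (pupil ~70mm) vs a 400mm
# lens at f/5.6 (pupil ~71mm), both at 10m and 1/1000s, collect nearly
# the same subject photons -- regardless of the sensor behind them.
a = relative_subject_photons(500 / 7.1, 10, 1 / 1000)
b = relative_subject_photons(400 / 5.6, 10, 1 / 1000)
print(round(a / b, 2))  # ~0.97
```

Note that the f-ratios differ while the pupils are nearly identical, which is exactly why f-ratio alone is a misleading guide to subject-level noise in focal-length-limited shooting.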
I need to think about that.
It's a very important concept. Obviously, there are real-world reasons why many photographers think that higher pixel densities automatically give more noise "because of physics": their visceral experience seems to support it. But that experience is shaped by situational ruts in software and display technology, in a context of no equalization of purpose. That allows inequities like viewing larger pixels with lower sensor-area magnification at 100%, and sharpening both noise and fine details in denser sensors at levels that don't even get recorded at all by the less dense sensor. So not only would most focal-length-limited R7 vs R6 comparisons be done with the R7 magnifying the subject 2x as much, but pixel-level noise in the R6 is smeared as it is upsampled 200%, making it look softer, while sharpening R7 pixels at 100% makes the noise dither deeper there, but at a level of detail unavailable with the larger pixels.

So, you really have a choice of what you want to do with the finer level of detail with the denser sensor; you can emphasize it if you need those extra details despite the noise, or you can process for weak pixel-level detail with the denser sensor, and just enjoy the higher sampling rate which improves subject color resolution and reduces aliasing, compared to larger pixels.

Many people are just letting the inequitable defaults happen though, in both processing and display, when they compare results, and this is often confounded further by the fact that people generally choose denser sensors when the subject is especially small and/or distant, and therefore their "real world experience" with denser sensors is going to be more diffraction and more noise and even more atmospheric problems at the subject level not because of the sensor chosen, but rather, the sensor is chosen for the inferior photographic situation which is already biased due to distance and/or a small subject, and atmospheric issues as well.
So you're saying that by increasing the pixel density we only win and get/retain more information. That'd mean larger pixels lose some information compared to smaller pixels, even though they would capture the same number of photons (assuming there's no clipping).
Yes.

--
Beware of correct answers to wrong questions.
John
http://www.pbase.com/image/55384958.jpg
 
With the same duck photons, the duck photon noise will be the same, regardless of the number or size of pixels. More pixels will mean more pixel resolution, though.
Are you perhaps referring to the noise created by the detection ofight and conversion to an electrical signal? It is the electrical side where we should create a system boundary, light would be outside.
Nothing electrical was being discussed

If we place a boundary in the system. That boundary is where we stop discussing photons then that is indeed electrical. So I disagree.

. What exits the lens and hits the sensor are photons, and if we can't model for photons correctly regarding photographic and sensor parameters

Are sensor parameters not electrical? How are you modeling photons out of interest?

then we can't hope to model anything else correctly that builds upon that model. The conversion rate to electrons is fixed in each camera for the most part (some microlens exceptions at very low f-ratios are possible), but quantum efficiency is pretty much standard for the last decade and varies very little, compared to differences in sensor sizes and pixel sizes. All speculation about boundary effects is moot regarding total photons, when quantum efficiency is the same. If there were any edge effects that lost light, we would see it in the QE.
I'm not so sure but happy to continue.
As far as spatial displacement of photons due to edge effects at the photosites are concerned, that is only an issue with color channel crosstalk with very small pixels used in very-high-MP cellphone sensors, and is only a problem because classic single-pixel Bayer CFAs, but that is solved by "Quad Bayer", which makes a larger percentage of photons that pass through one filter but cross into the photowell of another cross into one with the same color filter. As far as spatial resolution is concerned, however, such as with a monochrome sensor, crosstalk is not as degrading to resolution as you would think, because the smaller the pixels, the less displacement there is when a photon crosses a boundary, since each pixel implies that it captured light in its center, and the center of smaller pixels are closer to neighbor pixels than with larger pixels.
So, going back: what is this photon noise? That was the question.
Purity here would be the idea that the same amount of photons (or photons-turned electrons) is better in one large photosite rather than spread over multiple smaller ones; Quarkcharmed's alleged superior "tonal information". The fact is, there is LESS information per unit of sensor area with larger pixels. Larger-pixel information can be created from smaller-pixel information, but the reverse is not possible.
I think the problem here is that the system being described keeps changing in favour of a wished for answer.

Personally, I'm struggling to understand what you're trying to convince yourself of. If we pass a single photon at a time towards a sensor, and the only thing we are interested in is whether we detect a photon or not, and we have one detector vs an array of detectors, it would stand to reason that more information could be had from more detectors. Is this what you're thinking?
If you drop what seems to be about a dozen marbles per container into a grid of square containers, and didn't notice that there was a shallow divider in the bottom of each container that created 4 "sub-boxes", would your information be ruined in any way, compared to not having those dividers? Think hard on that.
I think demanding people should think in certain ways is not a positive behaviour.

If I draw the boundary again that keeps the light, or marbles outside.
What does drawing it "again" mean? When did you first draw it? What did you draw it on? Boundary on the larger box or the smaller divider? Larger box in isolation or in a grid of them? People need to think about their statements and questions and make sure they are fully qualified if we're ever going to have any good come out of these conversations before the thread hits 150 posts. I don't know what you're picturing in your head, and I am not telepathic. As a guess, I might think that you are talking about losing photons originally headed to a photosite to the outside, but the outside would be another pixel, and the loss goes both ways, for no net loss of light, and a very mild anti-alias filter is created by the crosstalk.
I see this bossy behaviour continues. I don't think it's helpful.
What I am interested in is the boxes themselves. I may have misunderstood but are you considering the walls themselves which I assume would not be part of the detection system? Am I on the same track as you?
There isn't as much of a "wall" issue as some people like to think, with the use of microlenses. Remember, we already know that there is actually a very small range of quantum efficiency with current APS-C and FF sensors, and the photon noise is actually modulated by the square root of that small difference. This is a much smaller difference than the claimed differences of very different pixel sizes.

If you are talking about "crosstalk", that does not seem to be an issue with R7-size pixels or larger, because if it were, the R7 would have a lot more chromatic noise or much weaker colors for the same exposure times area (the same total light), but it does not. R7 color and chromatic noise at the pixel level at ISO 3200 are just as good as the R6 at ISO 12800, where the two cameras get the same amount of photons per pixel.
I'm asking what you mean; I'm struggling to follow. I'm okay with what I think, but I'm trying to get your story straight before engaging. Reading back through this thread, the number of claims and absolute-truth statements is vast.

What's driving all of this? To show folks the R7 doesn't have some possible deficiencies being suggested? Some comments I have seen from a user perspective is that it appears to be felt that using higher ISO settings isn't as successful with that camera as other cameras. Is it this you disagree with?
--
Beware of correct answers to wrong questions.
John
http://www.pbase.com/image/55384958.jpg
 
I am contemplating getting the R6II alongside my R7. I mainly do wildlife, birds, and macro, where I really want lots of pixels on the subject, and the R7 is great. But in low light it is deficient, and being a crop sensor, it doesn't give as nice bokeh.
Both those complaints should be irrelevant in the type of work you say you are doing. Do you really think that the R7 sensor is deficient when focal-length-limited, or when stopping down for needed DOF? The R7 has more resolution and less noise than the R3 in those situations. Perhaps you are obsessing on 100% views of sharpened pixels?
I was walking around the zoo. There were indoor things (a platypus, some insects, some fish, etc) where flash was not allowed and there was poor light. Something with better low light performance would have been nicer here.
Probably, but that would really require a bigger lens pupil, with or without a larger sensor or larger pixels. If you actually needed DOF and had to stop down, then your photon noise is due to your DOF; not the sensor (for sensors with similar QE).
The R7 at ISO 6400 (I was shooting wide open at f/2.8) isn't pretty even with good postprocessing.
It's a matter of scale. Yes, if you made a large print of the same composition at the same ISO, you would have a less noisy image with the R6. If you ever thought that I would doubt that, then you clearly have not understood anything that I have said.

However, there is ZERO reason to assume the same ISO, as if the ISO were dictated directly by the lighting environment. The ISOs that you get will depend on the lenses available, and the shutter speeds chosen.
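To make that concrete: for a fixed scene brightness and shutter speed, the required ISO scales with the square of the f-number. Taking the f/2.8, ISO 6400 zoo shot quoted above (the faster apertures below are hypothetical alternatives):

```python
def iso_needed(base_iso, base_f, new_f):
    """Fixed scene light and shutter speed: required ISO scales with f-number squared."""
    return base_iso * (new_f / base_f) ** 2

# The f/2.8, ISO 6400 shot, had a faster lens been available:
for f in (2.8, 2.0, 1.4):
    print(f"f/{f}: ISO {iso_needed(6400, 2.8, f):.0f}")
# f/1.4 would have allowed ISO 1600 at the same shutter speed
```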

Part of that is because converters assume that most people are looking for pixel-level detail all the time, even when there isn't sufficient light to magnify it cleanly. So converters sharpen quite a bit at the pixel level with the R7, sharpening a fine level of noise and detail that can't be recorded at all with R6-size pixels and creating a deep dither effect with more salt-and-pepper pixels. This is done by software; not the camera!
I mainly do those types of photography, but I do the occasional event, artistic, or portrait shoot, etc., where the R6II would produce better images.
There is no reason to assume that the R6II has less visible noise per unit of sensor area than the R7, except that the sensor design is "newer", but the R3 is newer and has a little bit more visible noise per unit of sensor area than the R7 or R6.
By the way, the differences between these cameras in low light are not just "obsessive pixel peeping":
They are. The reason most people don't compare equitably is that they want to see everything the system captured without magnifying the image any more than necessary to have each pixel represented on the screen (because extra magnification makes the view softer and noisier). That means 100% or 1:1 pixel views. But 100% is a different magnification for sensors with different pixel densities, so people are asking a lot more from the R7 than from the R6 at 100%; not less, and not even the same, so there is more room for apparent failure. A better way to compare would be to look at the images at the same pixel ratio (both 100%, or both 200%) on separate monitors whose pixel densities are in the same proportion as the sensors', with no sharpening in the conversion.
So the R6II would fill in the gaps. If I had to choose one it would be the R7. You have a similar choice to make.
Optically and noise-wise, a 24MP FF is only going to be superior in IQ when the FF has a larger-pupil lens or you get closer, and you must accept the shallower DOF that comes with it, or you won't get more light.
The pixels on the R6II are about 4x the size of the pixels on the R7, leading to about two stops better noise performance at the same exposure settings.
No; that's not "noise performance"; it is pixel noise performance, and completely irrelevant when a different number of pixels are used to form a subject.
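A quick simulation of why per-pixel noise isn't the right metric when pixel counts differ: sum four small pixels covering the same subject area as one big pixel, and the shot-noise SNR matches the big pixel (Gaussian approximation to Poisson shot noise; the photon counts are arbitrary illustration values):

```python
import math
import random
import statistics

random.seed(42)
LAM = 250        # mean photons per small pixel (arbitrary)
N = 50_000       # trials

def shot(mean):
    """Gaussian approximation to Poisson shot noise (fine for mean >> 1)."""
    return random.gauss(mean, math.sqrt(mean))

small  = [shot(LAM) for _ in range(N)]                          # one small pixel
binned = [sum(shot(LAM) for _ in range(4)) for _ in range(N)]   # 4 small pixels summed
big    = [shot(4 * LAM) for _ in range(N)]                      # one 4x-area pixel

def snr(samples):
    return statistics.mean(samples) / statistics.stdev(samples)

print(f"one small pixel : SNR {snr(small):5.1f}")   # ~sqrt(250)  = 15.8
print(f"4 pixels binned : SNR {snr(binned):5.1f}")  # ~sqrt(1000) = 31.6
print(f"one big pixel   : SNR {snr(big):5.1f}")     # ~sqrt(1000) = 31.6
```

Per pixel, the small pixels are two stops "noisier"; per subject area, the two layouts collect the same light and end up in the same place.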
So I could have shot at ISO 12800 at twice the shutter speed and ended up with a nicer image, in the situation above.
Yes, you can get a less noisy image if you bring along a lens with a larger pupil at the needed angle of view (and you actually want the shallower DOF, or are willing to accept it to get less noise), but that is true for both cameras. A sensor without a lens cannot do photography. The lens is what forms the image; the sensor simply decides what fraction of the image circle is captured, and how finely it is divvied up.

There are certainly situations where FF can get you more subject/composition photons (usually fast, short focal-length primes), but in others, big pixels and big sensors have no noise benefit for actual photographic needs when the needed shutter speeds do not allow base ISO. In any given light, with any given subject and shutter speed, the number of photons captured from the subject depends only on pupil size and distance; it has nothing whatsoever to do with pixel sizes or sensor sizes. If one's evaluations go against that simple rule, then it might be an indicator that the evaluation occurred in a state of illusion. Yes, there can be differences between cameras in quantum efficiency and added electronic readout noise, but as I demonstrated in the Studio Comparison Tool link in another post, the net effect of QE and readout noise is the same for both the R7 and R6, given the same number of photons.
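The pupil-size point can be illustrated with the two lenses already mentioned in this thread (entrance pupil = focal length / f-number; these are the nominal specs, not measured values):

```python
import math

# Entrance pupil diameter = focal length / f-number. At the same subject
# framing, photons collected from the subject scale with pupil *area*.
lenses = [
    ("RF 100-500 @ 500mm f/7.1 on APS-C (~800mm-equiv. FOV)", 500, 7.1),
    ("RF 800mm f/11 on full frame",                           800, 11.0),
]
for name, focal_mm, f_number in lenses:
    d = focal_mm / f_number
    area = math.pi * (d / 2) ** 2
    print(f"{name}: pupil {d:.1f} mm, area {area:.0f} mm^2")
# ~70.4 mm vs ~72.7 mm: nearly identical subject light, despite the sensor gap
```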
 
Hello everyone,

I currently use a Nikon D500 with Tamron 150-600 G2 and have done for about 2 years. This was my first camera and I'm looking to upgrade. I have decided on either a Canon R5 or R6II with the RF100-500 lens. I'm also considering getting the 800 f11 or 1.4x TC.

I mainly shoot birds. Occasionally small mammals. However, birding is my primary hobby.

I cannot decide between the R5 and the R6II. I was hoping if I asked here people would help make the decision for me. Please note, where I am the price difference is minimal.

I am currently leaning towards the R5 simply because of the extra megapixels which will allow me to crop more. The birds are often quite far away.

For all other features the R6II is meant to be better (autofocus, low light, FPS, battery life, pre-burst shooting). However, I'm not sure if 24 MP will be enough if I need to crop.

Please could you give me your thoughts and recommendations on which camera to buy.

Thanks in advance.
For me, the answer is easy. I’m a birder too (love BIFs especially), and the R5 is IMHO the best choice, even with the R6ii in the mix now (R3 notwithstanding, but not in the running here).

The R5 simply ticks all of the right boxes for me. The resolution is a great match for the 100-500 (+/- 1.4x), the AF is just configurable enough (I like “Auto initial” for birding), the body is tough and better weather-sealed but not too heavy, the controls are excellent. I like being able to quickly switch Custom Shooting Modes by just pressing the “M-fn” button (ie switching from perched birds to BIFs instantly).

The IQ is so good that sometimes it feels like I can crop in forever.

I like shooting with the silent eShutter primarily (helps with tracking). Out of 100,000+ shots so far, I’ve only seen a handful adversely affected by rolling shutter effects. BTW, love the 20 fps (I keep my R6ii set to that fps too).

Interestingly enough, I have 4 close hiking/shooting buddies who are Nikon shooters (they all own both D500’s and D850’s). Two upgraded to the Z9 this year (but shhhh, IME the R5 is still the better birder ;-) ). Same goes for the R5 vs the R7 too. The R7 has too many “caveats” for me for birding (ie the rolling shutter and lesser weather-sealing esp), but I love it for macros. On a strict budget though, the R7 is really a great bargain as an all-round shooter.

The R6 (now the R6ii) is my general purpose camera (events, sports, etc). I know I’m going to like the extra AF configurability for those types of shooting. Bottom line though, the R5 + 100-500 + 1.4x is a match made in heaven. Simply can’t go wrong there.

R2
But why not the R7 which is superb for birding? AF is inherited from R3. You can't go better than that unless you are shooting regularly at ISO12800 or higher.

My R7 at 6400 ISO produces terrific results. Best of all I seldom have to crop.
The R5 is much more suitable for (my style of) birding. Biggies for me are the better weather sealing, reduced rolling shutter, larger buffer, better ergo (personal preference), and better AF response, esp in darker/reduced light. I also find BIFs to be easier to track with the R5’s EVF, though I haven’t nailed down exactly why yet.

I like the R7’s (and R6ii’s) greater AF configurability though, and will be upgrading the R5 to the Mark II when it comes out for sure. Right now the R7 is my dedicated macro body, and is doing great.

R2
Can't argue with the R5's edge when it comes to the points you raised. And like you, I love using my R7 for macro! The reach of my 180mm f/3.5L gives me so much room for flexibility and creativity.
 
DPReview is comparing exposure latitude with e-shutter.

I wonder why the R6mk2 is worse than the others in the pic below.

Are they using 10-bit in ES on the R6mk2?

[Attached image: DPR e-shutter exposure-latitude comparison]
 
DPReview is comparing exposure latitude with e-shutter.

I wonder why the R6mk2 is worse than the others in the pic below.

Are they using 10-bit in ES on the R6mk2?
Thanks for pointing out that DPR did the studio shots for the R6-II.

It looks like the R6-II might be using 12 bits. The R5 seems better than it is, though, because there is no high-frequency chroma in the R5 conversion; I'm not sure if that is the RAW cooking or the converter or both. As much as I used to examine the raw nethershadows of base ISO in older cameras, I haven't really explored that area on the R5. I did look at an ISO 100 blackframe, however, and it did not look like any blackframe I had ever seen; it was smoothed and posterized.
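If the R6-II e-shutter really does drop to 12 bits (which is speculation at this point), the cost in encodable levels looks like this; each lost bit costs one stop of encodable range at the top:

```python
# Raw levels available at each bit depth, relative to a 14-bit baseline.
for bits in (14, 12, 10):
    print(f"{bits}-bit raw: {2 ** bits:6d} levels ({bits - 14:+d} stops vs 14-bit)")
```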


--
Beware of correct answers to wrong questions.
John
 
DPReview is comparing exposure latitude with e-shutter.

I wonder why the R6mk2 is worse than the others in the pic below.

Are they using 10-bit in ES on the R6mk2?
Thanks for pointing out that DPR did the studio shots for the R6-II.
I just looked at those cameras in the Tool at ISO 102K, and the R6-II noise is in fact a little bit more than the R6, and a little less than the R5.

So, no new, cutting-edge high ISO performance.
 
