How do I switch off "Ear Detection"?

Bas Hamstra

Senior Member
Messages
3,267
Solutions
1
Reaction score
2,222
Location
Haren, NL
I heard Sony has great AF, but currently ear detection is apparently switched on. I searched in the AF menu under:

Focus->Auto->Objects->NonLiving->Statues->Dogs, but could not find an option for ear detection. Does anyone know where to find it?



de3dc8a6f7b34b4f92abb48172c043c8.jpg



--
Bas
 
I heard Sony has great AF, but currently ear detection is apparently switched on. I searched in the AF menu under:

Focus->Auto->Objects->NonLiving->Statues->Dogs, but could not find an option for ear detection. Does anyone know where to find it?
Lol! Sony cameras should have more auto settings:
  1. Auto avoid user error
  2. Auto reset forgotten destructive settings (ear focus included)
  3. Idiot warning flashing in EVF/on screen when composition deviates from The Golden Rule
 
Last edited:
This also happens with cats and other animals.

This is a new issue with the 'AI' model: it does not happen on my A1, only on the A7C II. But you have the A7C, which I thought would not even do animals?

--
If you like my image I would appreciate if you follow me on social media
instagram http://instagram.com/interceptor121
My flickr sets http://www.flickr.com/photos/interceptor121/
Youtube channel http://www.youtube.com/interceptor121
Underwater Photo and Video Blog http://interceptor121.com
If you want to get in touch don't send me a PM rather contact me directly at my website/social media
 
Last edited:
I heard Sony has great AF, but currently ear detection is apparently switched on. I searched in the AF menu under:

Focus->Auto->Objects->NonLiving->Statues->Dogs, but could not find an option for ear detection. Does anyone know where to find it?

de3dc8a6f7b34b4f92abb48172c043c8.jpg
Such fast-moving subjects really test the AF... LOL ;-)

--
Follow: https://www.instagram.com/ray_burnimage/
 
This also happens with cats and other animals.

This is a new issue with the 'AI' model: it does not happen on my A1, only on the A7C II. But you have the A7C, which I thought would not even do animals?
Yes, it has animal eye AF, just not all the fancy modes for different types of animal or the AI stuff.
 
This also happens with cats and other animals.

This is a new issue with the 'AI' model: it does not happen on my A1, only on the A7C II. But you have the A7C, which I thought would not even do animals?
Yes, it has animal eye AF, just not all the fancy modes for different types of animal or the AI stuff.
I guess the ear instead of the eye is a retained feature on the advanced AI models, as my A7C II detects it more easily than my A1.

I don't think Sony's subject detection is very smart, but their autofocus is good, so once it locks on the ear you get a very sharp ear shot...
 
This also happens with cats and other animals.

This is a new issue with the 'AI' model: it does not happen on my A1, only on the A7C II. But you have the A7C, which I thought would not even do animals?
Yes, it has animal eye AF, just not all the fancy modes for different types of animal or the AI stuff.
I guess the ear instead of the eye is a retained feature on the advanced AI models, as my A7C II detects it more easily than my A1.

I don't think Sony's subject detection is very smart, but their autofocus is good, so once it locks on the ear you get a very sharp ear shot...
Honestly, every time something like this happens I get less worried about AI’s ability to take my job!

I don’t shoot many statues or dogs, so haven’t seen this behaviour on my A7C.
 
This also happens with cats and other animals.

This is a new issue with the 'AI' model: it does not happen on my A1, only on the A7C II. But you have the A7C, which I thought would not even do animals?
Yes, it has animal eye AF, just not all the fancy modes for different types of animal or the AI stuff.
I guess the ear instead of the eye is a retained feature on the advanced AI models, as my A7C II detects it more easily than my A1.

I don't think Sony's subject detection is very smart, but their autofocus is good, so once it locks on the ear you get a very sharp ear shot...
Honestly, every time something like this happens I get less worried about AI’s ability to take my job!

I don’t shoot many statues or dogs, so haven’t seen this behaviour on my A7C.
It's easy to blame the technology. This is a deep-learning model that was fed images and told which ones were the subject and which were not; what it does is look up shapes in its catalogue and come up with a yes/no.

As ears on cats and dogs are on top of the head and the eyes are in the middle, this algorithm is not astute: it looks for the most prominent circular shapes, which in animals can be the inner ear.

If it checked that the eyes cannot be the highest point of the head this would not come up as a false positive, but the routine is keen to lock early, so as soon as it sees two dark holes it calls it a success.

For reference, my Panasonic camera doesn't do that: it detects a bit later but much more accurately. Unfortunately, Panasonic autofocus is slow, so moving targets are not an option.
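The "highest point of the head" check described above can be sketched in a few lines. To be clear, this is a toy illustration of the idea, not Sony's actual algorithm; the grid "image", the threshold, and the function names are all invented for the example.

```python
# Toy sketch (NOT any camera's real pipeline): a naive "eye" detector that
# locks onto the topmost dark spot, versus a variant that adds the positional
# sanity check suggested above (an eye cannot be the highest feature).

def dark_spots(img, threshold=0.3):
    """Return (row, col) of pixels darker than threshold (0=black, 1=white)."""
    return [(r, c) for r, row in enumerate(img)
            for c, v in enumerate(row) if v < threshold]

def naive_lock(img):
    """Lock on the topmost dark spot -- ear-prone, keen to 'lock early'."""
    spots = dark_spots(img)
    return min(spots) if spots else None

def sanity_checked_lock(img):
    """Reject dark spots in the top quarter of the frame (ear territory)."""
    spots = [(r, c) for r, c in dark_spots(img) if r >= len(img) // 4]
    return min(spots) if spots else None

# A crude 8x8 "cat head": dark inner ears near the top, eyes mid-frame.
cat = [[1.0] * 8 for _ in range(8)]
cat[0][1] = cat[0][6] = 0.1   # inner ears
cat[4][2] = cat[4][5] = 0.1   # eyes

print(naive_lock(cat))            # (0, 1): locks on an ear
print(sanity_checked_lock(cat))   # (4, 2): finds an eye instead
```

The naive version grabs the inner ear because it is the most prominent dark spot it sees first; one cheap geometric rule is enough to reject it in this toy case.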
 
I would like a "you forgot to press record" video warning in the complete-idiot submenu, please.
 
  • Like
Reactions: Lan
This also happens with cats and other animals.

This is a new issue with the 'AI' model: it does not happen on my A1, only on the A7C II. But you have the A7C, which I thought would not even do animals?
Yes, it has animal eye AF, just not all the fancy modes for different types of animal or the AI stuff.
I guess the ear instead of the eye is a retained feature on the advanced AI models, as my A7C II detects it more easily than my A1.

I don't think Sony's subject detection is very smart, but their autofocus is good, so once it locks on the ear you get a very sharp ear shot...
Honestly, every time something like this happens I get less worried about AI’s ability to take my job!

I don’t shoot many statues or dogs, so haven’t seen this behaviour on my A7C.
It's easy to blame the technology. This is a deep-learning model that was fed images and told which ones were the subject and which were not; what it does is look up shapes in its catalogue and come up with a yes/no.
What you just described is actually how pre-AI object recognition worked: literally pairing up images and doing a numerical analysis of how close the scene is to a stored picture. The AI doesn't have a database of what an eye looks like; instead it learns a pattern. That is why the AI model doesn't necessarily get bigger the more images you feed it.
As ears on cats and dogs are on top of the head and the eyes are in the middle, this algorithm is not astute: it looks for the most prominent circular shapes, which in animals can be the inner ear.
The AI model (at least in pre-A7RV days) has no notion of what the head is, or that an eye should belong where the head is, for anything besides humans. It's why there are a lot more false positives with the older way of doing things: there are no semantic rules.

In a way I'd actually prefer a more discrete selection of animal types, tied together with a generalised model that first determines what the subject actually is and then selects the best model. The A1 II's approach is paving the way for such a method.
If it checked that the eyes cannot be the highest point of the head this would not come up as a false positive, but the routine is keen to lock early, so as soon as it sees two dark holes it calls it a success.
Or the AI model simply thinks that region is more likely to be an eye than the actual eye itself.
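The "pre-AI" pairing-and-scoring approach mentioned above can be sketched as classic template matching: slide a stored reference patch over the image and score each window numerically, for instance by sum of squared differences. Again, this is a hedged toy sketch, not any camera's real implementation; the template and scene are invented.

```python
# Toy template matching (the "pre-AI" style described above): compare every
# window of the image against a stored "eye" template and keep the best score.

def ssd(patch, template):
    """Sum of squared differences between two equal-sized 2D patches."""
    return sum((p - t) ** 2
               for prow, trow in zip(patch, template)
               for p, t in zip(prow, trow))

def best_match(img, template):
    """Return top-left (row, col) of the window most similar to template."""
    th, tw = len(template), len(template[0])
    best = None
    for r in range(len(img) - th + 1):
        for c in range(len(img[0]) - tw + 1):
            patch = [row[c:c + tw] for row in img[r:r + th]]
            score = ssd(patch, template)
            if best is None or score < best[0]:
                best = (score, (r, c))
    return best[1]

eye_template = [[1, 0, 1],
                [0, 0, 0],
                [1, 0, 1]]   # a crude dark round "eye"

scene = [[1] * 6 for _ in range(6)]
for r, c in [(1, 2), (2, 1), (2, 2), (2, 3), (3, 2)]:
    scene[r][c] = 0          # plant a dark cross-shaped blob around (2, 2)

print(best_match(scene, eye_template))   # (1, 1): window centred on the blob
```

The point of contrast: this approach stores explicit reference patches and grows with the catalogue, whereas a learned model compresses the training images into fixed-size weights.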
 
Last edited:
Real answer: turn off Face/Eye Priority in AF. It should be under Camera Settings 1 → Face/Eye AF Set if you're on the A7C; on the AI-enabled cameras, you can turn off subject recognition completely under Focus → Subject Recognition → Subject Recog in AF (set it to Off).
 
Last edited:
So now I am really curious about the new Auto Recognition mode in the newly released firmware for my A7C II. In case I am photographing a dog at f/2 and a plane crashes into the scene, will I get a tack-sharp picture of the plane while the dog is blurred, if I am in auto recognition? I will report back!
 
So now I am really curious about the new Auto Recognition mode in the newly released firmware for my A7C II. In case I am photographing a dog at f/2 and a plane crashes into the scene, will I get a tack-sharp picture of the plane while the dog is blurred, if I am in auto recognition? I will report back!
I think if a plane crashes into the scene with your dog then you’ve probably got bigger problems than what the camera is tracking…
 
This also happens with cats and other animals.

This is a new issue with the 'AI' model: it does not happen on my A1, only on the A7C II. But you have the A7C, which I thought would not even do animals?
Yes, it has animal eye AF, just not all the fancy modes for different types of animal or the AI stuff.
I guess the ear instead of the eye is a retained feature on the advanced AI models, as my A7C II detects it more easily than my A1.

I don't think Sony's subject detection is very smart, but their autofocus is good, so once it locks on the ear you get a very sharp ear shot...
Honestly, every time something like this happens I get less worried about AI’s ability to take my job!

I don’t shoot many statues or dogs, so haven’t seen this behaviour on my A7C.
It's easy to blame the technology. This is a deep-learning model that was fed images and told which ones were the subject and which were not; what it does is look up shapes in its catalogue and come up with a yes/no.
What you just described is actually how pre-AI object recognition worked: literally pairing up images and doing a numerical analysis of how close the scene is to a stored picture. The AI doesn't have a database of what an eye looks like; instead it learns a pattern. That is why the AI model doesn't necessarily get bigger the more images you feed it.
As ears on cats and dogs are on top of the head and the eyes are in the middle, this algorithm is not astute: it looks for the most prominent circular shapes, which in animals can be the inner ear.
The AI model (at least in pre-A7RV days) has no notion of what the head is, or that an eye should belong where the head is, for anything besides humans. It's why there are a lot more false positives with the older way of doing things: there are no semantic rules.

In a way I'd actually prefer a more discrete selection of animal types, tied together with a generalised model that first determines what the subject actually is and then selects the best model. The A1 II's approach is paving the way for such a method.
If it checked that the eyes cannot be the highest point of the head this would not come up as a false positive, but the routine is keen to lock early, so as soon as it sees two dark holes it calls it a success.
Or the AI model simply thinks that region is more likely to be an eye than the actual eye itself.
Sorry, but that's incorrect. Machine-learning models are not very dissimilar to previous implementations, and human face detection was already able to discern all the parts of a face before; however, the processing used to be done from scratch in camera with a decision tree. That is heavy, which is why older models did just humans, as the computations become complex.

The logic is now pre-wired, but it has essentially just become faster, as the camera looks up results.

The two cameras, A7C and A7C II, make the same wrong call, but the A7C II makes that call quicker and with a smaller subject. Accuracy-wise, I can see no difference between the camera generations.

What the new ones do is detect an object shape that is partially obscured. However, in terms of locking on the eyes the accuracy seems identical. I have run several scenarios to compare my A1 and my A7C II, and indeed the A7C II gives positives at distance when you select head and body; however, if you choose eye only it has only a marginal benefit in some cases in how far away it recognises, and mostly on static subjects.

Don't want to go too off topic, but the ears instead of eyes is an issue on the current range too.
 
So now I am really curious about the new Auto Recognition mode in the newly released firmware for my A7C II. In case I am photographing a dog at f/2 and a plane crashes into the scene, will I get a tack-sharp picture of the plane while the dog is blurred, if I am in auto recognition? I will report back!
I think if a plane crashes into the scene with your dog then you’ve probably got bigger problems than what the camera is tracking…
Is it possible to have bigger problems than what a camera's autofocus tracks?

Do we now need to reassess what separates the professional photographer from the amateur?
 
This also happens with cats and other animals.

This is a new issue with the 'AI' model: it does not happen on my A1, only on the A7C II. But you have the A7C, which I thought would not even do animals?
Yes, it has animal eye AF, just not all the fancy modes for different types of animal or the AI stuff.
I guess the ear instead of the eye is a retained feature on the advanced AI models, as my A7C II detects it more easily than my A1.

I don't think Sony's subject detection is very smart, but their autofocus is good, so once it locks on the ear you get a very sharp ear shot...
Honestly, every time something like this happens I get less worried about AI’s ability to take my job!

I don’t shoot many statues or dogs, so haven’t seen this behaviour on my A7C.
It's easy to blame the technology. This is a deep-learning model that was fed images and told which ones were the subject and which were not; what it does is look up shapes in its catalogue and come up with a yes/no.
What you just described is actually how pre-AI object recognition worked: literally pairing up images and doing a numerical analysis of how close the scene is to a stored picture. The AI doesn't have a database of what an eye looks like; instead it learns a pattern. That is why the AI model doesn't necessarily get bigger the more images you feed it.
As ears on cats and dogs are on top of the head and the eyes are in the middle, this algorithm is not astute: it looks for the most prominent circular shapes, which in animals can be the inner ear.
The AI model (at least in pre-A7RV days) has no notion of what the head is, or that an eye should belong where the head is, for anything besides humans. It's why there are a lot more false positives with the older way of doing things: there are no semantic rules.

In a way I'd actually prefer a more discrete selection of animal types, tied together with a generalised model that first determines what the subject actually is and then selects the best model. The A1 II's approach is paving the way for such a method.
If it checked that the eyes cannot be the highest point of the head this would not come up as a false positive, but the routine is keen to lock early, so as soon as it sees two dark holes it calls it a success.
Or the AI model simply thinks that region is more likely to be an eye than the actual eye itself.
Sorry, but that's incorrect. Machine-learning models are not very dissimilar to previous implementations, and human face detection was already able to discern all the parts of a face before; however, the processing used to be done from scratch in camera with a decision tree. That is heavy, which is why older models did just humans, as the computations become complex.

The logic is now pre-wired, but it has essentially just become faster, as the camera looks up results.

The two cameras, A7C and A7C II, make the same wrong call, but the A7C II makes that call quicker and with a smaller subject. Accuracy-wise, I can see no difference between the camera generations.

What the new ones do is detect an object shape that is partially obscured. However, in terms of locking on the eyes the accuracy seems identical. I have run several scenarios to compare my A1 and my A7C II, and indeed the A7C II gives positives at distance when you select head and body; however, if you choose eye only it has only a marginal benefit in some cases in how far away it recognises, and mostly on static subjects.

Don't want to go too off topic, but the ears instead of eyes is an issue on the current range too.
I wasn't talking about the difference between the A7C and A7C II. I know it's another inference model they're running; the main difference is the availability of more complex models powered by a dedicated chip. I was talking about your description of a lookup catalogue.

Either way, it's a minor nitpick; I don't wanna derail this thread, so I won't go further.

Shame to hear that about the new AI models. I don't have an issue with ears, but with birds the dark spot behind a bird's actual eye is sometimes misidentified.
 
Last edited:
This also happens with cats and other animals.

This is a new issue with the 'AI' model: it does not happen on my A1, only on the A7C II. But you have the A7C, which I thought would not even do animals?
Yes, it has animal eye AF, just not all the fancy modes for different types of animal or the AI stuff.
I guess the ear instead of the eye is a retained feature on the advanced AI models, as my A7C II detects it more easily than my A1.

I don't think Sony's subject detection is very smart, but their autofocus is good, so once it locks on the ear you get a very sharp ear shot...
Honestly, every time something like this happens I get less worried about AI’s ability to take my job!

I don’t shoot many statues or dogs, so haven’t seen this behaviour on my A7C.
It's easy to blame the technology. This is a deep-learning model that was fed images and told which ones were the subject and which were not; what it does is look up shapes in its catalogue and come up with a yes/no.
What you just described is actually how pre-AI object recognition worked: literally pairing up images and doing a numerical analysis of how close the scene is to a stored picture. The AI doesn't have a database of what an eye looks like; instead it learns a pattern. That is why the AI model doesn't necessarily get bigger the more images you feed it.
As ears on cats and dogs are on top of the head and the eyes are in the middle, this algorithm is not astute: it looks for the most prominent circular shapes, which in animals can be the inner ear.
The AI model (at least in pre-A7RV days) has no notion of what the head is, or that an eye should belong where the head is, for anything besides humans. It's why there are a lot more false positives with the older way of doing things: there are no semantic rules.

In a way I'd actually prefer a more discrete selection of animal types, tied together with a generalised model that first determines what the subject actually is and then selects the best model. The A1 II's approach is paving the way for such a method.
If it checked that the eyes cannot be the highest point of the head this would not come up as a false positive, but the routine is keen to lock early, so as soon as it sees two dark holes it calls it a success.
Or the AI model simply thinks that region is more likely to be an eye than the actual eye itself.
Sorry, but that's incorrect. Machine-learning models are not very dissimilar to previous implementations, and human face detection was already able to discern all the parts of a face before; however, the processing used to be done from scratch in camera with a decision tree. That is heavy, which is why older models did just humans, as the computations become complex.

The logic is now pre-wired, but it has essentially just become faster, as the camera looks up results.

The two cameras, A7C and A7C II, make the same wrong call, but the A7C II makes that call quicker and with a smaller subject. Accuracy-wise, I can see no difference between the camera generations.

What the new ones do is detect an object shape that is partially obscured. However, in terms of locking on the eyes the accuracy seems identical. I have run several scenarios to compare my A1 and my A7C II, and indeed the A7C II gives positives at distance when you select head and body; however, if you choose eye only it has only a marginal benefit in some cases in how far away it recognises, and mostly on static subjects.

Don't want to go too off topic, but the ears instead of eyes is an issue on the current range too.
I wasn't talking about the difference between the A7C and A7C II. I know it's another inference model they're running; the main difference is the availability of more complex models powered by a dedicated chip. I was talking about your description of a lookup catalogue.

Either way, it's a minor nitpick; I don't wanna derail this thread, so I won't go further.

Shame to hear that about the new AI models. I don't have an issue with ears, but with birds the dark spot behind a bird's actual eye is sometimes misidentified.
The ear issue happens with the Animal setting. According to Sony's documentation:

https://support.d-imaging.sony.co.jp/support/ilc/autofocus/ilce1/en/animaleyeaf.html

"When shooting in dark places or animals with dark hair"

the camera will have a challenge. My cat has grey fur, which I am not sure classifies as dark.

The A7C II at distance detects the ear as eyes in broad daylight; the A1 detects nothing at all.

When the cat is near, both the A1 and the A7C II detect the eyes correctly.

I have not tried with a statue like the example here, but my hamster, which is dark and you could argue does not have a dog or cat face, is detected by the camera, and it goes for the ears on both.

Birds are a more complicated case: it looks like the camera only detects a certain type of bird shape, and if the beak is different it will struggle.

In general the camera does not really recognise animals; it really recognises cats and dogs. When you have an elephant, or a bear standing on two feet, it will struggle, as per their examples.

The Animal setting really is a pet setting; lions and tigers look like cats, or it would not easily recognise them either.
 
This also happens with cats and other animals.

This is a new issue with the 'AI' model: it does not happen on my A1, only on the A7C II. But you have the A7C, which I thought would not even do animals?
Yes, it has animal eye AF, just not all the fancy modes for different types of animal or the AI stuff.
I guess the ear instead of the eye is a retained feature on the advanced AI models, as my A7C II detects it more easily than my A1.

I don't think Sony's subject detection is very smart, but their autofocus is good, so once it locks on the ear you get a very sharp ear shot...
Honestly, every time something like this happens I get less worried about AI’s ability to take my job!

I don’t shoot many statues or dogs, so haven’t seen this behaviour on my A7C.
It's easy to blame the technology. This is a deep-learning model that was fed images and told which ones were the subject and which were not; what it does is look up shapes in its catalogue and come up with a yes/no.
What you just described is actually how pre-AI object recognition worked: literally pairing up images and doing a numerical analysis of how close the scene is to a stored picture. The AI doesn't have a database of what an eye looks like; instead it learns a pattern. That is why the AI model doesn't necessarily get bigger the more images you feed it.
As ears on cats and dogs are on top of the head and the eyes are in the middle, this algorithm is not astute: it looks for the most prominent circular shapes, which in animals can be the inner ear.
The AI model (at least in pre-A7RV days) has no notion of what the head is, or that an eye should belong where the head is, for anything besides humans. It's why there are a lot more false positives with the older way of doing things: there are no semantic rules.

In a way I'd actually prefer a more discrete selection of animal types, tied together with a generalised model that first determines what the subject actually is and then selects the best model. The A1 II's approach is paving the way for such a method.
If it checked that the eyes cannot be the highest point of the head this would not come up as a false positive, but the routine is keen to lock early, so as soon as it sees two dark holes it calls it a success.
Or the AI model simply thinks that region is more likely to be an eye than the actual eye itself.
Sorry, but that's incorrect. Machine-learning models are not very dissimilar to previous implementations, and human face detection was already able to discern all the parts of a face before; however, the processing used to be done from scratch in camera with a decision tree. That is heavy, which is why older models did just humans, as the computations become complex.

The logic is now pre-wired, but it has essentially just become faster, as the camera looks up results.

The two cameras, A7C and A7C II, make the same wrong call, but the A7C II makes that call quicker and with a smaller subject. Accuracy-wise, I can see no difference between the camera generations.

What the new ones do is detect an object shape that is partially obscured. However, in terms of locking on the eyes the accuracy seems identical. I have run several scenarios to compare my A1 and my A7C II, and indeed the A7C II gives positives at distance when you select head and body; however, if you choose eye only it has only a marginal benefit in some cases in how far away it recognises, and mostly on static subjects.

Don't want to go too off topic, but the ears instead of eyes is an issue on the current range too.
I wasn't talking about the difference between the A7C and A7C II. I know it's another inference model they're running; the main difference is the availability of more complex models powered by a dedicated chip. I was talking about your description of a lookup catalogue.

Either way, it's a minor nitpick; I don't wanna derail this thread, so I won't go further.

Shame to hear that about the new AI models. I don't have an issue with ears, but with birds the dark spot behind a bird's actual eye is sometimes misidentified.
The ear issue happens with the Animal setting. According to Sony's documentation:

https://support.d-imaging.sony.co.jp/support/ilc/autofocus/ilce1/en/animaleyeaf.html

"When shooting in dark places or animals with dark hair"

the camera will have a challenge. My cat has grey fur, which I am not sure classifies as dark.

The A7C II at distance detects the ear as eyes in broad daylight; the A1 detects nothing at all.

When the cat is near, both the A1 and the A7C II detect the eyes correctly.

I have not tried with a statue like the example here, but my hamster, which is dark and you could argue does not have a dog or cat face, is detected by the camera, and it goes for the ears on both.

Birds are a more complicated case: it looks like the camera only detects a certain type of bird shape, and if the beak is different it will struggle.

In general the camera does not really recognise animals; it really recognises cats and dogs. When you have an elephant, or a bear standing on two feet, it will struggle, as per their examples.

The Animal setting really is a pet setting; lions and tigers look like cats, or it would not easily recognise them either.
The Animal setting surprisingly works for kangaroos, wombats and Tasmanian devils for me. It actually works very well on the A7C, A7IV and A1. I was convinced it was just a glorified pet mode too, until it recognised their eyes. Hence my suspicion that it's not doing any semantic filtering, like recognising a face first and then the eye.

FWIW, if I point my A1 at the OP's picture, it doesn't pick up anything.
 
Last edited:
This also happens with cats and other animals.

This is a new issue with the 'AI' model: it does not happen on my A1, only on the A7C II. But you have the A7C, which I thought would not even do animals?
Yes, it has animal eye AF, just not all the fancy modes for different types of animal or the AI stuff.
I guess the ear instead of the eye is a retained feature on the advanced AI models, as my A7C II detects it more easily than my A1.

I don't think Sony's subject detection is very smart, but their autofocus is good, so once it locks on the ear you get a very sharp ear shot...
Honestly, every time something like this happens I get less worried about AI’s ability to take my job!

I don’t shoot many statues or dogs, so haven’t seen this behaviour on my A7C.
It's easy to blame the technology. This is a deep-learning model that was fed images and told which ones were the subject and which were not; what it does is look up shapes in its catalogue and come up with a yes/no.
What you just described is actually how pre-AI object recognition worked: literally pairing up images and doing a numerical analysis of how close the scene is to a stored picture. The AI doesn't have a database of what an eye looks like; instead it learns a pattern. That is why the AI model doesn't necessarily get bigger the more images you feed it.
As ears on cats and dogs are on top of the head and the eyes are in the middle, this algorithm is not astute: it looks for the most prominent circular shapes, which in animals can be the inner ear.
The AI model (at least in pre-A7RV days) has no notion of what the head is, or that an eye should belong where the head is, for anything besides humans. It's why there are a lot more false positives with the older way of doing things: there are no semantic rules.

In a way I'd actually prefer a more discrete selection of animal types, tied together with a generalised model that first determines what the subject actually is and then selects the best model. The A1 II's approach is paving the way for such a method.
If it checked that the eyes cannot be the highest point of the head this would not come up as a false positive, but the routine is keen to lock early, so as soon as it sees two dark holes it calls it a success.
Or the AI model simply thinks that region is more likely to be an eye than the actual eye itself.
Sorry, but that's incorrect. Machine-learning models are not very dissimilar to previous implementations, and human face detection was already able to discern all the parts of a face before; however, the processing used to be done from scratch in camera with a decision tree. That is heavy, which is why older models did just humans, as the computations become complex.

The logic is now pre-wired, but it has essentially just become faster, as the camera looks up results.

The two cameras, A7C and A7C II, make the same wrong call, but the A7C II makes that call quicker and with a smaller subject. Accuracy-wise, I can see no difference between the camera generations.

What the new ones do is detect an object shape that is partially obscured. However, in terms of locking on the eyes the accuracy seems identical. I have run several scenarios to compare my A1 and my A7C II, and indeed the A7C II gives positives at distance when you select head and body; however, if you choose eye only it has only a marginal benefit in some cases in how far away it recognises, and mostly on static subjects.

Don't want to go too off topic, but the ears instead of eyes is an issue on the current range too.
I wasn't talking about the difference between the A7C and A7C II. I know it's another inference model they're running; the main difference is the availability of more complex models powered by a dedicated chip. I was talking about your description of a lookup catalogue.

Either way, it's a minor nitpick; I don't wanna derail this thread, so I won't go further.

Shame to hear that about the new AI models. I don't have an issue with ears, but with birds the dark spot behind a bird's actual eye is sometimes misidentified.
The ear issue happens with the Animal setting. According to Sony's documentation:

https://support.d-imaging.sony.co.jp/support/ilc/autofocus/ilce1/en/animaleyeaf.html

"When shooting in dark places or animals with dark hair"

the camera will have a challenge. My cat has grey fur, which I am not sure classifies as dark.

The A7C II at distance detects the ear as eyes in broad daylight; the A1 detects nothing at all.

When the cat is near, both the A1 and the A7C II detect the eyes correctly.

I have not tried with a statue like the example here, but my hamster, which is dark and you could argue does not have a dog or cat face, is detected by the camera, and it goes for the ears on both.

Birds are a more complicated case: it looks like the camera only detects a certain type of bird shape, and if the beak is different it will struggle.

In general the camera does not really recognise animals; it really recognises cats and dogs. When you have an elephant, or a bear standing on two feet, it will struggle, as per their examples.

The Animal setting really is a pet setting; lions and tigers look like cats, or it would not easily recognise them either.
Animal setting surprisingly works for kangaroos, wombats and Tasmanian devils for me. Actually works very well on the A7C, A7IV and A1. I was convinced it was just a glorified pet mode too until it recognised their eyes.
Or it simply looks for heads and is not even as complex as they make it look.

Their text could simply mean "we trained our model or wrote our algorithm using dogs and cats"; if you are lucky it works, and if you are not, it won't.

Perhaps when you set it to Animal it does not even bother checking for legs, otherwise it would not recognise a cat when standing, as the legs are not visible.

If Sony or others clarified how they write their software, I am sure we would all be surprised at how simple the assumptions and models are compared to our expectations.
 
Happens also with cats and other animals

This is a new issue of the 'AI' model it does not happen on my A1 just on the A7C II but you have the A7C which I thought would not even do animals?
Yes, it has animal eye AF, just not all the fancy modes for different types of animal or the AI stuff.
I guess the ear-instead-of-an-eye is a retained feature on the advanced AI models, as my A7C II detects it more easily than my A1.

I don't think Sony subject detection is very smart, but their autofocus is good, so once it locks on the ear you get a very sharp ear shot...
Honestly, every time something like this happens I get less worried about AI’s ability to take my job!

I don’t shoot many statues or dogs, so I haven’t seen this behaviour on my A7C.
Easy to blame the technology: this is a deep learning model that has been fed images and told which ones matched and which did not; then what it does is look up shapes in the catalog and come up with a yes or no.
What you just described is actually how pre-AI object recognition worked: literally pairing up images and then doing a numerical analysis of how close the scene is to a stored picture. The AI doesn't have a notion of a database of what an eye looks like; instead it has learned a pattern. This is why the AI model doesn't necessarily get bigger the more images you feed it.
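The pre-AI "numerical closeness to a stored picture" idea described here can be sketched in a few lines (a deliberately tiny toy; the template values and threshold are invented, and real implementations use 2D correlation over image patches, not 1D lists):

```python
# Toy version of pre-AI template matching: score a candidate patch by
# numerical distance to a stored template and threshold the result.

def similarity(patch, template):
    """Sum of squared differences: smaller means closer to the template."""
    return sum((p - t) ** 2 for p, t in zip(patch, template))

EYE_TEMPLATE = [0.1, 0.9, 0.1]   # invented dark-bright-dark "eye" profile

def is_eye(patch, threshold=0.2):
    return similarity(patch, EYE_TEMPLATE) < threshold

print(is_eye([0.15, 0.85, 0.12]))  # close to the template: True
print(is_eye([0.9, 0.1, 0.9]))     # inverted pattern: False
```

Note the stored template grows with every reference image you add, whereas a trained network compresses its training set into a fixed set of weights, which is the point made above about model size.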
As ears on cats and dogs are on top of the head and the eyes are in the middle, this means the algorithm is not astute: it looks for the most prominent circular shapes, which in animals can be the inner ear.
The AI model (at least in pre-A7RV days) has no notion of what the head is, or that an eye should belong where the head is, for anything besides humans. It's why there are a lot more false positives with the older way of doing things: there are no semantic rules.

In a way I'd actually prefer a more discrete selection of which animals, tied together with a generalised model that first determines what the subject actually is and then selects the best model. The A1 II's way is paving the way for such a method.
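The two-stage idea floated here (a coarse model picks the subject type, then a specialised detector runs) could look something like this. To be clear, this is a hypothetical design sketch, not Sony's actual pipeline; the labels and score values are invented:

```python
# Hypothetical two-stage dispatch: coarse subject classification first,
# then a specialised eye-AF model for that subject type.

SPECIALISED = {
    "human":  lambda: "run human eye-AF model",
    "bird":   lambda: "run bird eye-AF model",
    "mammal": lambda: "run mammal eye-AF model",
}

def coarse_classify(scores):
    """Pick the subject type with the highest coarse-model score."""
    return max(scores, key=scores.get)

def detect(scores):
    subject = coarse_classify(scores)
    return SPECIALISED[subject]()       # dispatch to the specialised model

print(detect({"human": 0.1, "bird": 0.7, "mammal": 0.2}))
# prints: run bird eye-AF model
```

The appeal of this layout is that each specialised model can encode subject-specific assumptions (beak shapes for birds, ear placement for mammals) instead of one generic eye detector handling everything.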
If it checked that the eyes cannot be the highest point of the head, it would not come up as a false positive; but the routine is keen to lock early, so as soon as it sees two dark holes it calls it a success.
Or the AI model simply thinks that region is more likely to be an eye than the actual eye itself.
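The geometric sanity check suggested in the quote above ("the eyes cannot be the highest point of the head") could be as simple as this toy post-filter. Purely hypothetical; no camera is known to implement exactly this, and the 25% cutoff is an invented number:

```python
# Hypothetical post-filter: reject an "eye" candidate sitting at the very
# top of the detected head box, where a mammal's inner ears tend to be.

def plausible_eye(eye_y, head_top, head_bottom):
    """Coordinates in image space with y increasing downward (top = 0).
    A candidate in the top ~25% of the head box is likely an inner ear."""
    head_height = head_bottom - head_top
    return (eye_y - head_top) / head_height > 0.25

print(plausible_eye(eye_y=130, head_top=100, head_bottom=200))  # mid-head: True
print(plausible_eye(eye_y=105, head_top=100, head_bottom=200))  # near top: False
```

A filter like this trades a little lock-on speed for fewer ear false positives, which is exactly the early-lock trade-off being complained about.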
Or it simply looks for heads and is not even as complex as they make it look.

Their text could simply mean "we trained our model or wrote our algorithm using dogs and cats"; if you are lucky it works, and if you are not, it won't.
-shrug- I dunno what their training data set was, so no comment.
Perhaps when you set it to Animal it does not even bother checking for legs, otherwise it would not recognise a cat when standing, as the legs are not visible.
That's what I've been saying: there's no semantic filtering, it's just 'eye or not' from how it behaves. With the newer AI one, I know there's a bit more semantic segmentation going on, on account of it actually doing full body plus eyes.
If Sony or others clarified how they write their software, I am sure we would all be surprised at how simple the assumptions and models are compared to our expectations.
The basics everyone knows, it's the trade secret stuff that you want to know (and obviously won't know). Provided they dogfood, you can actually see their own classifier software: https://www.aitrios.sony-semicon.com/news/ai-camera-compatible-with-local-studio
 
