LOL! Sony cameras should have more auto settings: I heard Sony has great AF, but currently ear detection is apparently switched on. I searched in the AF menu under Focus->Auto->Objects->NonLiving->Statues->Dogs but could not find an option for ear detection. Does anyone know where to find it???
Such fast-moving subjects really test the AF..... LOL ;-)
Yes, it has animal eye AF, just not all the fancy modes for different types of animal or the AI stuff.

Happens also with cats and other animals.
This is a new issue of the 'AI' model; it does not happen on my A1, just on the A7C II. But you have the A7C, which I thought would not even do animals?
I guess the ear instead of an eye is a retained feature on the advanced AI models, as my A7C II detects it more easily than my A1.

Honestly, every time something like this happens I get less worried about AI's ability to take my job!
I don't think Sony subject detection is very smart, but their autofocus is good, so once it locks on the ear you get a very sharp ear shot...
Easy to blame the technology. This is a deep learning model that has been fed with images and told which one was an eye and which was not; then what it does is look up shapes against that catalog and come up with a yes/no.
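As a rough illustration of the "fed with labelled images, answers yes/no" idea described above, here is a toy binary classifier. The features (darkness, roundness) and all the numbers are fabricated for illustration, not anything Sony actually uses — and note the punchline at the end: a dark, round inner ear scores as an "eye" too.

```python
import numpy as np

# Fabricated training data: "eye" patches (label 1) tend to be dark and
# round, "not eye" patches (label 0) less so.
rng = np.random.default_rng(0)
eyes     = rng.normal(loc=[0.8, 0.9], scale=0.1, size=(100, 2))
not_eyes = rng.normal(loc=[0.4, 0.3], scale=0.2, size=(100, 2))
X = np.vstack([eyes, not_eyes])
y = np.array([1] * 100 + [0] * 100)

# Plain logistic regression trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of "eye"
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

def is_eye(features):
    """The yes/no call on a new patch."""
    return (features @ w + b) > 0

print(is_eye(np.array([0.85, 0.9])))  # dark, round patch -> True
print(is_eye(np.array([0.3, 0.2])))   # neither dark nor round -> False
print(is_eye(np.array([0.8, 0.85])))  # a dark, round inner ear also passes
```

The last line is exactly the failure mode this thread is about: anything dark and round enough clears the threshold, eye or not.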
I don’t shoot many statues or dogs, so haven’t seen this behaviour on my A7C.
What you just described is actually how pre-AI object recognition worked: literally pairing up images and then doing a numerical analysis of how close the input is to a stored picture. The AI doesn't have a notion of a database of what an eye looks like; instead it learns a pattern. That's why the AI model doesn't necessarily get bigger the more images you feed it.
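The "pairing up images and scoring closeness" approach mentioned above can be sketched as classic template matching: slide a stored template over the image and score each window by a numerical distance (here, sum of squared differences). All data is a fabricated 2-D toy, not real camera internals.

```python
import numpy as np

def match_template(image, template):
    """Return (row, col) of the best-matching window and its SSD score."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = float("inf"), (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            window = image[r:r + th, c:c + tw]
            ssd = float(np.sum((window - template) ** 2))
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos, best

# A 3x3 "eye" template: dark centre on a light surround.
template = np.array([[1, 1, 1],
                     [1, 0, 1],
                     [1, 1, 1]], dtype=float)

image = np.ones((6, 6))
image[3, 3] = 0.0            # a dark spot at (3, 3) -- the "eye"

pos, score = match_template(image, template)
print(pos, score)            # (2, 2) 0.0 -- window centred on the dark spot
```

Note this scores every window the same way — there is no notion of "this dark spot is on top of the head, so it's probably an ear", which is the distinction being argued here.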
The AI model (at least in pre-A7RV days) has no notion of what the head is, or that an eye should be where the head is for anything besides humans. It's why there are a lot more false positives with the older way of doing things: there are no semantic rules.

As ears on cats and dogs are at the top of the head and the eyes in the middle, this algorithm is not astute: it looks for the most prominent circular shapes, which in animals can be the inner ear.
Or the AI model thinks that region is more likely to be an eye than the actual eye itself.

If it checked that the eyes cannot be the highest point of the head, it would not come up as a false positive; but the routine is keen to lock early, so as soon as it sees two dark holes it calls it a success.
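The sanity check wished for above — "the eyes cannot be the highest point of the head" — could look something like this. The function and all thresholds are hypothetical, purely to make the geometric rule concrete; image convention applies, so y grows downward and "higher in the frame" means smaller y.

```python
def plausible_eye(eye_y: float, head_top_y: float, head_bottom_y: float) -> bool:
    """Reject an 'eye' candidate that sits at the crown of the detected
    head box, where on cats and dogs you find the inner ear instead."""
    head_height = head_bottom_y - head_top_y
    if head_height <= 0:
        return False
    # Hypothetical rule: reject anything in the top ~15% of the head box.
    return eye_y > head_top_y + 0.15 * head_height

# Fabricated head box from y=100 (crown) to y=300 (chin).
print(plausible_eye(eye_y=180, head_top_y=100, head_bottom_y=300))  # mid-head: True
print(plausible_eye(eye_y=110, head_top_y=100, head_bottom_y=300))  # near crown: False
```

Of course this presupposes the camera has a head box at all, which is precisely what the previous post says the older models lack.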
I think if a plane crashes into the scene with your dog then you've probably got bigger problems than what the camera is tracking…

So now I am really curious about the new Auto Recognition mode in the newly released firmware of my A7C II. In case I am photographing a dog at f/2 and a plane crashes into the scene, will I get a tack-sharp picture of the plane while the dog is blurred, if I am in auto-recognition? I will report back!
Sorry, but that's incorrect. Machine learning models are not very dissimilar to previous implementations, and human face detection was already able to discern all the parts of a face before; however, the processing would be done from scratch in camera with a decision tree. That is heavy, which is why older models did just humans, as the computations become complex.
In a way I'd actually prefer that there be more discrete selection of animal types, tied together with a generalised model that can automatically determine what the subject actually is first and then select the best model. The A1 II's way is paving the way for such a method.
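The two-stage design preferred above — a generalised first pass that identifies the subject, then a specialist model — might be dispatched like this. Every function here is a stand-in (real cameras would run neural networks at both stages), and the keyword-based "classifier" is deliberately dumb to keep the sketch short.

```python
from typing import Callable, Dict

def detect_dog_eye(frame: str) -> str:   return "dog-eye model on " + frame
def detect_bird_eye(frame: str) -> str:  return "bird-eye model on " + frame
def detect_human_eye(frame: str) -> str: return "human-eye model on " + frame

# Stage 2: per-species specialist models.
SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "dog": detect_dog_eye,
    "bird": detect_bird_eye,
    "human": detect_human_eye,
}

def classify_subject(frame: str) -> str:
    """Stand-in for the generalised first-stage model: here it just
    looks for a keyword in the fabricated frame description."""
    for species in SPECIALISTS:
        if species in frame:
            return species
    return "unknown"

def autofocus(frame: str) -> str:
    specialist = SPECIALISTS.get(classify_subject(frame))
    if specialist is None:
        return "fall back to generic contrast/phase AF"
    return specialist(frame)

print(autofocus("a dog in the park"))   # routed to the dog specialist
print(autofocus("a statue of a dog"))   # also routed there -- stage 1 is fooled
print(autofocus("a lizard"))            # no specialist: generic AF
```

The second call shows why the first stage matters: if it mistakes a statue for a dog, the best specialist in the world still hunts for an eye on bronze.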
Is it possible to have bigger problems than what a camera's autofocus tracks?
I wasn't talking about the difference between the A7C and A7C II — I know it's another inference model they're running, and the main difference is the availability of more complex models powered by a dedicated chip. I was talking about your description of a lookup catalog.
The logic now is pre-wired but has essentially just become faster as the camera looks up results.
The two cameras, A7C and A7C II, make the same wrong call, but the A7C II makes that call quicker and with a smaller subject. Accuracy-wise, I can see no difference between the camera generations.
What the new ones do is detect an object shape that is partially obscured; however, in terms of locking on the eyes the accuracy seems identical. I have run several scenarios to compare my A1 and my A7C II, and indeed the A7C II gives positives at distance when you select head and body. However, if you choose eye only, it has only in some cases a marginal benefit in how far away it recognises, and mostly on static subjects.
Don't want to go too off topic, but the ears instead of eyes is an issue also on the current range.
The ear issue happens with the Animal setting. According to the Sony documentation:
Either way, it's a minor nitpick, don't wanna derail this thread so I won't go further.
Shame to hear that about the new AI models. I don't have an issue with ears, but with birds the dark spot behind a bird's actual eye is sometimes misidentified.
Animal setting surprisingly works for kangaroos, wombats and Tasmanian devils for me — actually works very well on the A7C, A7IV and A1. I was convinced it was just a glorified pet mode too, until it recognised their eyes. Hence my suspicion that it's not doing any semantic filtering, like recognising a face first and then the eye.
https://support.d-imaging.sony.co.jp/support/ilc/autofocus/ilce1/en/animaleyeaf.html
"When shooting in dark places or animals with dark hair" the camera would have a challenge. My cat has grey fur, which I am not sure classifies as dark.
The A7C II at distance detects the ear as eyes in broad daylight; the A1 detects nothing at all.
When the cat is near, both the A1 and the A7C II detect the eyes correctly.
I have not tried with a statue like the example here; however, my hamster, which is dark and arguably does not have a dog or cat face, is detected by the camera, but it goes for the ears with both.
Birds are more complicated cases: it looks like the camera only detects a certain type of bird shape, and if the beak is different it will struggle.
In general the camera does not really recognise animals; it really recognises cats and dogs. When you have an elephant, or a bear standing on two feet, it will struggle, as per their example.
The animal setting really is a pet setting; lions and tigers look like cats, or it would not easily recognise them either.
Or it simply looks for heads and is not even as complex as they make it look.
-shrug- I dunno what their training data set was, so no comment.
Their text could simply mean "we trained our model, or wrote our algorithm, using dogs and cats; if you are lucky it works, and if you are not, it won't".
That's what I've been saying: there's no semantic filtering. It's just "eye or not", judging from how it behaves. With the newer AI one, I know there's a bit more semantic segmentation going on, on account of it actually doing full body plus eyes.

Perhaps when you set it to animal it does not even bother checking for legs; otherwise it would not recognise a standing cat, as the legs are not visible.
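The semantic filtering being debated here amounts to a hierarchy: find the head (or body) region first, then only accept eye candidates that fall inside it, instead of scoring "eye or not" anywhere in the frame. A minimal sketch, with fabricated (x0, y0, x1, y1) boxes standing in for real detector output:

```python
def inside(point, box):
    """True if (x, y) lies within the (x0, y0, x1, y1) box."""
    x, y = point
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

def filter_eye_candidates(candidates, head_box):
    """Keep only eye candidates that lie within the detected head."""
    return [p for p in candidates if inside(p, head_box)]

head_box = (100, 100, 200, 200)                    # fabricated head detection
candidates = [(150, 140), (250, 90), (120, 180)]   # second point is outside the head
print(filter_eye_candidates(candidates, head_box))  # [(150, 140), (120, 180)]
```

A flat "eye or not" detector skips this filter entirely, which is consistent with the inner-ear lock-ons people are reporting.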
The basics everyone knows; it's the trade-secret stuff that you want to know (and obviously won't know). Provided they dogfood, you can actually see their own classifier software: https://www.aitrios.sony-semicon.com/news/ai-camera-compatible-with-local-studio

If Sony or others clarified how they write their software, I am sure we would all be surprised at how simple the assumptions and models are compared to our expectations.