Researchers with the Samsung AI Center in Moscow and the Skolkovo Institute of Science and Technology have published a new paper detailing the creation of software that generates 3D animated heads from a single still image. Unlike previously detailed AI systems capable of generating photo-realistic portraits, the new technology produces moving, talking heads that, though not perfect, are highly realistic.

'Practical scenarios' require a system that can be trained on just a few images, or even a single image, of a person rather than an extensive image dataset, the newly published study explains. To satisfy this requirement, the researchers created a system for which 'training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters.'

Using generative adversarial networks, the researchers were able to animate painted portraits in addition to photographs, producing, among other things, a talking, moving version of the Mona Lisa. As demonstrated in a video detailing the study (below), the final results vary in quality and realism, with some being arguably indistinguishable, at least at low resolutions, from real videos.

The researchers explain in their paper that training the system on additional images produces more life-like results:

Crucially, only a handful of photographs (as little as one) is needed to create a new model, whereas the model trained on 32 images achieves perfect realism and personalization score in our user study (for 224p static images).

Issues remain with this type of system, the researchers note, including a 'noticeable personality mismatch' between the person featured in the still image(s) and the talking individual used to animate the portrait. The researchers explain, 'if one wants to create "fake" puppeteering videos without such mismatch, some landmark adaptation is needed.'

The technology is still viable for purposes that don't require a personality match, such as the simple animation of a character that exists only in a small set of still images. So far, the technology works only on faces and the upper torso; it's unclear whether the researchers plan to expand the system to other body parts.

Samsung's study joins past AI-based portrait work from NVIDIA, as well as non-portrait AI image generation, including a system NVIDIA debuted earlier this year that can rapidly convert simple sketches into complex landscape images.