This software lets other people control your face

Have you ever wondered what someone else’s smirk would look like on your face? A group of researchers has developed software that makes exactly that possible.

Researchers created computer models that use sensors and cameras to map two people’s faces, including where the lips sit in relation to the eyes, eyebrows, and nose. They then use these coordinates to mold expressions from the “source” face onto the “target” face, all in real time. The expressions aren’t identical, but they’re close enough. In the video, you can see one person sitting perfectly still, almost expressionless, while the other raises their eyebrows, opens their mouth, smiles, winks, and grimaces. Those facial movements are instantly transferred to the digital avatar of the other person.
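To give a rough sense of the coordinate-transfer idea, here is a minimal sketch in Python. It assumes each face has been reduced to a small array of 2D landmark points and models an expression as the offset of those points from a neutral pose; the actual system fits much richer parametric 3D face models, so treat this as an illustration of the concept, not the researchers’ method. All the coordinates below are hypothetical.

    import numpy as np

    def transfer_expression(source_neutral, source_current, target_neutral):
        """Apply the source face's current expression to the target face.

        The expression is modeled as the per-landmark offset of the source's
        current frame from its neutral pose, added onto the target's own
        neutral landmarks.
        """
        expression_offsets = source_current - source_neutral
        return target_neutral + expression_offsets

    # Toy example with 3 landmarks: the source raises an eyebrow,
    # so landmark 0 moves up (smaller y value).
    source_neutral = np.array([[10.0, 20.0], [30.0, 20.0], [20.0, 40.0]])
    source_current = np.array([[10.0, 17.0], [30.0, 20.0], [20.0, 40.0]])
    target_neutral = np.array([[12.0, 22.0], [28.0, 22.0], [20.0, 42.0]])

    print(transfer_expression(source_neutral, source_current, target_neutral))
    # -> landmark 0 on the target rises by the same 3 units

Because the offsets are applied to the target’s own landmark positions rather than copied outright, the target keeps its own face shape while borrowing the source’s movement, which is the essence of the transfer described above.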

To make the whole thing seem more realistic, they also add teeth to the puppet face. The result is an e-face that looks pretty real and can be manipulated at will.

“The real-time capability of our approach paves the way for a variety of new applications that were previously impossible,” says one of the researchers in the video. According to the researchers, this could come in handy for real-time video translation, so that the speaker’s mouth matches the facial expressions of the translator. If you’ve ever watched a dubbed film, you can appreciate this: the disconnect between what you hear and the shape of the speaker’s mouth is jarring.

The system, which is described in a new paper, could also be quite useful for animating characters in computer-generated films. Studios like Disney are working on similar facial-tracking technologies, but these have focused mostly on re-targeting: recreating the wrinkles and lines that make a fleshy face unique on a corresponding digital avatar.

This research goes a step further by essentially meshing two different faces into one. This so-called facial re-enactment is more challenging because “even the slightest errors in transferred expressions and appearance and slight inconsistencies with the surrounding video will be noticed by a human user,” the scientists write in their new paper. Others have tried it before with some success, but not in real time.

It could make characters in virtual reality games or simulations appear much more realistic. They might even wear your friends’ actual expressions while playing.
