AI turns blurry, unrecognizable face pictures into the perfect selfie that is more than 60 times sharper

  • Duke University researchers created an AI-powered tool that makes blurry images sharper
  • The system searches through AI-generated faces to find one that looks similar
  • It then combines the low-resolution image with the high-resolution picture
  • The system can convert a 16×16-pixel image of a face to 1024×1024 pixels

Researchers have designed an AI that can transform any blurry portrait into the perfect selfie.

The method, called PULSE, searches through AI-generated examples of high-resolution faces for ones that look similar to the input image when compressed to the same size.

The system can convert a 16×16-pixel image of a face to 1024×1024 pixels in a few seconds, which is 64 times the resolution.
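For scale, that figure comes from the side lengths: 1,024 pixels divided by 16 is a 64-fold increase along each dimension, which works out to roughly 4,096 times as many pixels overall. A quick check in Python:

```python
low_side, high_side = 16, 1024

print(high_side // low_side)                             # 64x per side
print((high_side * high_side) // (low_side * low_side))  # 4096x more pixels
```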

Duke University computer scientist Cynthia Rudin, who led the team, said: ‘Never have super-resolution images been created at this resolution before with this much detail.’

The system is capable of generating new pixels to create realistic-looking faces, adding imaging features such as fine lines, eyelashes and stubble that weren’t there in the first place.

However, the researchers note that the technology cannot be used to identify people: ‘It won’t turn an out-of-focus, unrecognizable photo from a security camera into a crystal clear image of a real person,’ they say.

What it actually does is generate new faces that do not exist, but look as if they do.

The traditional approach guesses what extra pixels are missing by attempting to match them with corresponding pixels in high-resolution images the computer has seen before. 

This method produces textured areas in hair and skin that do not align with the surrounding pixels, creating fuzzy patches in the image.
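The article is describing example-based super-resolution, but the simplest conventional baseline, plain interpolation, shows the same failure mode: no new detail is invented, so fine texture comes out soft. A toy sketch using the Pillow library (file names here are illustrative):

```python
from PIL import Image

# Load a tiny 16x16 face crop (file name is illustrative).
low_res = Image.open("face_16x16.png")

# Bicubic interpolation: every new pixel is a weighted average of its
# low-resolution neighbours, so no new detail is invented and fine
# texture in hair and skin comes out smooth and fuzzy.
upscaled = low_res.resize((1024, 1024), resample=Image.BICUBIC)
upscaled.save("face_bicubic_1024x1024.png")
```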

Duke’s approach searches through the AI-generated images to find one that looks similar to the low-resolution picture when it is shrunk down to size.
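Conceptually, that search can be written as an optimization over the generator’s inputs: start from a random latent vector, render a full-size face, shrink it to 16×16, and nudge the latent until the shrunken version matches the blurry input. Below is a minimal PyTorch sketch of the idea, assuming a pretrained generator (the GAN behind it is explained below); all names are illustrative, not the actual PULSE code, and the real method adds constraints to keep the search on the generator’s manifold.

```python
import torch

def pulse_style_search(generator, low_res, downscale, steps=500, lr=0.1):
    """Search a GAN's latent space for a 1024x1024 face whose
    shrunken copy matches the 16x16 input.

    generator : pretrained latent-to-image network (assumed)
    low_res   : the blurry 16x16 input, as a tensor
    downscale : differentiable 1024x1024 -> 16x16 resizing operator
    """
    z = torch.randn(1, generator.latent_dim, requires_grad=True)
    optimizer = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        candidate = generator(z)                               # render a full-size face
        loss = ((downscale(candidate) - low_res) ** 2).mean()  # compare at 16x16
        loss.backward()                                        # gradient w.r.t. the latent
        optimizer.step()                                       # nudge the latent closer

    return generator(z).detach()
```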

The team used a tool in machine learning called a ‘generative adversarial network,’ or GAN, which are two neural networks trained on the same data set of photos. 

One network comes up with AI-created human faces that mimic the ones it was trained on, while the other takes this output and decides if it is convincing enough to be mistaken for the real thing. 

And the first network improves with experience until the second one is unable to tell the difference.
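For readers who want to see what that two-network contest looks like in code, here is a minimal, generic GAN training step in PyTorch. The toy networks below stand in for the far larger pretrained face generator used in practice; nothing here is the team’s actual implementation.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the two networks (real face GANs are far deeper).
# Images here are flattened 16x16 grayscale faces for brevity.
generator = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 16 * 16))
discriminator = nn.Sequential(nn.Linear(16 * 16, 256), nn.ReLU(), nn.Linear(256, 1))

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_faces):
    batch = real_faces.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # The discriminator learns to call real photos real and fakes fake.
    fake_faces = generator(torch.randn(batch, 64)).detach()
    d_loss = (bce(discriminator(real_faces), real_labels)
              + bce(discriminator(fake_faces), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # The generator improves by trying to get its output labelled real.
    g_loss = bce(discriminator(generator(torch.randn(batch, 64))), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```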

Even given pixelated photos where the eyes and mouth are barely recognizable, ‘our algorithm still manages to do something with it, which is something that traditional approaches can’t do,’ said co-author Alex Damian, a Duke math major. 

The researchers asked 40 people to rate 1,440 images generated by PULSE and five other scaling methods on a scale of one to five, and PULSE did the best, scoring almost as high as high-quality photos of actual people.