Entering the future of deep fake news.

2019-04-08 Paliscope

With more intelligence floating around online than ever before, the spread of disinformation is increasingly common. The fake news phenomenon is nothing new, however. Disinformation has been employed for centuries to create chaos and divide people. The biggest difference today is technology.

Among the latest technologies being used to spread disinformation are deepfakes. Given the high risk that they will affect social systems and democracy as a whole, deepfakes have become the center of many disinformation debates.

Simply explained, deepfakes are misleading images and videos created by manipulating visual content with deep learning technology, e.g., by swapping faces or generating fake people, yielding results that look strikingly real. With this technology you can make a person appear to do or say just about anything. The question is, what can we do to counter the misuse and spread of deepfakes?

As with all fake news problems, there is no silver-bullet solution to deepfakes. But the key is to find ways to verify whether the information in front of you is real or not. In order to do that, you must collect more information that complements the original source. This is where open source intelligence plays a vital part.

Look deeper (context is key)

Open source intelligence, i.e., information collected from public sources on the internet, can be used to verify the credibility of images, videos, and other sources. Using open source techniques, it's possible to establish the veracity of a piece of information by providing context for it. This matters because with deepfakes, it's often impossible to tell whether an image or video is real simply by looking at it. You have to go deeper.

Intelligence that indicates whether a piece of content has been manipulated could include when and where an image or video was taken and whether it correlates with a specific event. Decoding a fake could also involve taking a closer look at the file information. If the EXIF data has been stripped or changed in any way, that can be an indication that someone wants to hide something and, therefore, that the file might be manipulated.
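
As a minimal illustration of that last point, the sketch below uses Python's Pillow library to dump an image's EXIF tags. The file name is hypothetical, and bear in mind that missing EXIF is only a weak signal on its own, since many platforms strip metadata on upload.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_exif(path: str) -> dict:
    """Return an image's EXIF tags keyed by human-readable name."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = inspect_exif("suspect.jpg")  # hypothetical file name
if not tags:
    # Absence alone proves nothing, but it is one more reason to keep digging.
    print("No EXIF data found (possibly stripped).")
else:
    # A few fields worth checking against the claimed origin of the file;
    # e.g., a Software tag naming an editor suggests the file was re-saved.
    for field in ("DateTime", "Make", "Model", "Software"):
        print(f"{field}: {tags.get(field, '<missing>')}")
```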

Use credible sources

The most important thing is to verify against credible sources. At Paliscope, we have started to build a platform around that very idea. Our software is used to collect intelligence online, and by integrating third-party services that perform different types of online detection, it is now possible to automatically match information against credible sources and information databases to find out whether there is more to an image or video.

For example, it's possible to search for and match against specific usernames on the darknet, use face recognition technology to discover more photos of a specific person online, and find similar images based on EXIF data such as location and camera serial number. Our hope is that these resources will help investigators find more intelligence online, for example, when trying to tell a real image from a fake.
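
To illustrate the face-matching idea (a generic sketch, not Paliscope's actual implementation), here is a minimal example using the open-source face_recognition library; the file names are hypothetical stand-ins for a verified reference photo and a suspect image.

```python
import face_recognition

# Hypothetical file names: a verified reference photo and a suspect image.
known = face_recognition.load_image_file("reference.jpg")
suspect = face_recognition.load_image_file("suspect.jpg")

# Compute 128-dimensional face encodings for each image.
known_encodings = face_recognition.face_encodings(known)
suspect_encodings = face_recognition.face_encodings(suspect)

if known_encodings and suspect_encodings:
    # compare_faces returns one boolean per known encoding.
    match = face_recognition.compare_faces(
        [known_encodings[0]], suspect_encodings[0]
    )[0]
    print("Faces match:", match)
else:
    print("Could not detect a face in one of the images.")
```

A match like this is a lead, not proof; it simply points an investigator toward more photos and context to verify against.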

Connect the dots

Even though innovation around spotting deepfakes is growing at a tremendous pace, we still have a long way to go. The problem is that as soon as someone creates technology that can spot fakes, someone else creates even better fakes. And with advances in artificial intelligence, fakes will only become more sophisticated and realistic over time.

For this reason, it becomes even more critical to take matters into your own hands and conduct a simple search of the information behind and around an image or video. With that intelligence you can build a more comprehensive picture and, ultimately, discern what is fake and prove what is real.