Fake videos and artificial intelligence

By Jean-Claude Elias - Aug 08, 2019 - Last updated at Aug 08, 2019

Fake news now goes beyond simple words and text. Deceptive yet highly convincing fake video is the newest trend.

Hazards and threats on the Internet evolve as the technology itself does, for better or for worse. While the well-known, infamous hacking, data theft, viruses, phishing e-mails and Trojan horses are still here, their incidence has somewhat abated, thanks to increasing public awareness on the one hand and to the various methods now available for effective protection against them on the other.

The real annoyance, to put it mildly, now lies in the smart manipulation of images, still photos but mostly videos, using advanced techniques based on the latest artificial intelligence (AI) methods and software algorithms. These let anyone create fake photos or videos that look strikingly real, reaching far beyond simple Photoshop editing or retouching.

The technique is often used to create fake videos of celebrities and politicians with the intention of damaging their reputation, making fun of them, or hurting and humiliating them. The extent of the damage can sometimes be considerable.

Already plagued by the fake news phenomenon, often text-based, the media world now has to face incredibly realistic fake videos and images. The advanced software technique is called “deepfake”, a term adopted by the IT community less than two years ago. FakeApp and the open-source DeepFaceLab are two names of software that can generate entirely made-up, perfectly believable, stunning videos.

Among the possibilities of deepfake technology, AI applied to video lets the author, the “movie director”, not only put someone’s head on someone else’s body, with the consequences one can imagine, but also put words in the character’s mouth that were never actually said.

From Barack Obama and other world political leaders to Taylor Swift in pop music, many have fallen victim to deceptive fake videos generated with the technique described above. Deepfake is also used against people who are not necessarily famous, to harm them on social networks through mean, personal attacks.

The Swiss daily Aargauer Zeitung wrote last year that “the manipulation of images and videos using artificial intelligence could become a dangerous mass phenomenon”.

Theverge.com says: “AI deepfakes are now as simple as typing whatever you want your subject to say. A scarily simple way to create fake videos and misinformation.”

The phenomenon is all the more complex because fake videos are sometimes created just for fun, for an honest laugh, with no intention of harming anyone. How then do you tell the difference, and how can you determine what the intention of the video maker was in the first place?

The good news is that IT professionals have the technical tools and skills needed to tell whether a video was created using such AI-based software. To maintain a reasonable level of ethics on their networks, social media platforms such as Twitter and Facebook are blocking videos created this way when they can spot them in time, and removing them once detected otherwise.

Deepfake technology is simply making it more difficult than ever to tell what is true from what is made up in the digital world. At the same time, it is hard not to acknowledge the extraordinary power of AI and all it lets you do.
