5 Most Damaging Deepfakes of Recent Times – Dangers of AI
We’re living in an era in which anyone can be made to appear to say anything at any point in time. Everyone is already familiar with Artificial Intelligence – the simulation of human intelligence by machines. Now we are facing a new application of AI that is raising concerns because of its dangerous nature: deepfakes, videos created with AI that can make someone look like they said or did something they never did.
It’s not clear when deepfakes were invented, but the trend gained attention in 2018, when fake videos involving celebrities were published on Reddit. Deepfakes raise questions about how easily a face in a video can be swapped for someone else’s. They can affect media, politics, education, business, and almost any other human activity. Here are the 5 most damaging deepfake examples of 2019.
“By far the greatest danger of AI is that people conclude too early that they understand it.” – Eliezer Yudkowsky
1. Mark Zuckerberg speaks frankly
Last year, a deepfake video of Facebook CEO Mark Zuckerberg was released on Instagram. In the video, Zuckerberg says: “Imagine this for a second – One man with total control of billions of people’s stolen data. All their secrets, their lives, their futures. I owe it all to Spectre. Spectre showed me that whoever controls the data, controls the future.”
When people first saw this video, many thought it really was Zuckerberg, because it looks 100% real – Zuckerberg’s voice and the way his mouth moves are entirely convincing. This deepfake shows how easily modern technology can be used to make it look like someone said something they never did. In other words, it can put words in anybody’s mouth, and that can be very damaging.
2. Gal Gadot performing in X-rated films
One of the most damaging deepfake videos of 2019 is a clip of Gal Gadot’s face swapped into a porn scene. It was created using open-source AI software, including Google’s TensorFlow. Gal Gadot is not the only target: celebrity deepfake porn videos were also created of Taylor Swift, Scarlett Johansson, and Maisie Williams.
The video was watched by many users, and if you look carefully you will notice jerks and face-tracking glitches, but it is convincing enough to fool people who are unaware of deepfakes. Being face-swapped is scary and dangerous, especially when it damages your status and reputation in front of the world. Deepfakes are a form of fake news, and according to many studies, fake news can cause serious problems in society.
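The open-source face-swap tools of that era typically used a shared encoder with one decoder per identity: the network learns to compress any face into a common latent code, and each decoder learns to reconstruct one person’s face from that code. Swapping then means encoding person A’s face and decoding it with person B’s decoder. The sketch below is a conceptual NumPy illustration of that architecture only – the weights are random, untrained placeholders, and the real tools add training with a reconstruction loss plus alignment and blending steps.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM, LATENT = 64 * 64, 128  # flattened 64x64 grayscale face crop, latent size

# Shared encoder: maps any face into a common latent space.
W_enc = rng.normal(scale=0.01, size=(DIM, LATENT))

# One decoder per identity: reconstructs a face in that person's likeness.
W_dec_a = rng.normal(scale=0.01, size=(LATENT, DIM))
W_dec_b = rng.normal(scale=0.01, size=(LATENT, DIM))

def encode(face):
    # Shared latent representation (pose, expression, lighting).
    return np.tanh(face @ W_enc)

def decode(latent, W_dec):
    # Identity-specific reconstruction from the shared latent code.
    return latent @ W_dec

def swap_a_to_b(face_of_a):
    """The core face-swap trick: encode A's face, decode with B's decoder."""
    return decode(encode(face_of_a), W_dec_b)

face = rng.normal(size=DIM)     # stand-in for a preprocessed face crop
swapped = swap_a_to_b(face)     # same image dimensions, B's "identity"
```

In the trained versions of this idea, sharing the encoder is what forces pose and expression into the latent code while identity lives in the decoders – which is exactly why decoding with the other person’s decoder produces a swap.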
“The potential benefits of AI are huge, so are the threats.” – Dave Waters
3. Trump deepfake incident
The current President of the United States, Donald Trump, has also been a victim of deepfake technology. His face was inserted into a parody called “Better Call Trump: Money Laundering 101”. The video is based on a scene from the popular TV show “Breaking Bad” and casts Trump as Saul Goodman, explaining the basics of money laundering to Jesse Pinkman.
Moreover, there are many other videos in which Trump appears to say various things about politics. For example, there is a video where he says: “We will get what we want, one way or another, whether it’s through you, through military, through anything you wanna call”. There is another where he says: “So I think that I could have a very good relationship with Russia, and with President Putin, and if I did that would be a great thing”. Trump never said any of this. It is all deepfakes, and it is genuinely dangerous – proof of how advanced the technology has become and how a fake video can cause problems between politicians.
4. Obama’s public announcement
Most of the victims of deepfake technology are politicians. Last year, a very convincing video featuring former President Barack Obama was released, showing that the potential for harm from deepfakes is real and alarming. The technology can be used to damage someone’s reputation simply by releasing such a video.
You’ve probably seen the video in which Barack Obama appears to say: “We’re entering an era in which our enemies can make it look like anyone is saying anything, at any point in time. Even if they would never say those things.” Even though it looks and sounds like Obama, he never said those words. This particular deepfake was made deliberately, to warn the public about the misinformation that can be found online.
“In the long term, AI is going to be taking over so much of what gives humans a feeling of purpose.” – Matt Bellamy
5. Ali Bongo’s deepfake speech
This is an example of how deepfakes can be used in politics to spread false information among citizens and manipulate elections. Take Ali Bongo, the president of Gabon. He had been abroad for medical treatment and had not been seen in public for months. To reassure the public, the government released a video of Ali Bongo giving a speech.
The video was not convincing: Ali Bongo’s eye blinking and movements were unnatural. It was a deepfake made to convince the public that his health was fine and that he could resume political activities. The fake video helped trigger an attempted coup by Gabon’s military and unsettled the public. This shows how deepfakes can disrupt politics and even destabilize an entire country.
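The unnatural blinking that gave the Ali Bongo video away is also one of the cues researchers have used to flag deepfakes automatically, since early generators were trained mostly on open-eyed photos. A common heuristic is the eye aspect ratio (EAR) from Soukupová and Čech’s blink-detection work: computed per frame from six eye landmarks, it stays roughly constant while the eye is open and drops sharply during a blink. The sketch below assumes landmarks have already been extracted by some face-landmark detector (not shown); the specific threshold and frame counts are illustrative, not canonical.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR from 6 (x, y) eye landmarks, ordered around the eye:
    outer corner, two top points, inner corner, two bottom points.
    High while the eye is open; drops toward 0 during a blink."""
    eye = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks in a per-frame EAR series: a blink is a run of at
    least `min_frames` consecutive frames below `threshold`."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks
```

A clip of a real speaker several minutes long with zero detected blinks would be suspicious; humans blink roughly every few seconds. Modern deepfakes have largely learned to blink, so this is one signal among many rather than a reliable test on its own.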
The manipulation of digital video and images is nothing new, but recent advances in artificial intelligence and machine learning make it far more dangerous and alarming. Modern technology is becoming more and more accessible, and its use is expanding into threatening areas such as deepfakes. Concern about the negative impact deepfakes can have on people, politics, business, and more is growing all over the world.
In conclusion, deepfakes have more negative effects than positive ones. Governments worldwide are looking for ways to detect such fake videos and eliminate them. Of course, one of the best defenses against deepfakes is awareness: digital education, including teaching children at school how to spot fake news.
Monica Ross is a content writer and manager at Devathon – one of the leading web and app development companies. She writes mainly about web development, mobile app development, AI, and related technologies.