Is seeing really believing?
Ever heard of DeepFake?
The internet’s first taste of the evolving artificial intelligence known as the ‘deepfake’ has sparked widespread interest, and the technology is becoming increasingly sophisticated. It has the potential to be both super cool and super scary.
Is deepfake really that scary?
Some might say no. Deep learning technology has already started to transform the filmmaking industry, seriously cutting down editing time and enhancing special effects. Creators on YouTube have shown off the advances in this technology by comparing traditional CGI effects against deepfakes in Hollywood blockbusters, from Luke Skywalker in The Mandalorian to Princess Leia in Rogue One.
In 2021, there is no need for Hollywood’s million-dollar equipment or any technical expertise, as iPhone apps can now create images and videos that are near-impossible to distinguish from real faces.
The application Wombo turns photos into bizarre, hilarious and slightly disturbing lip-syncing videos, generating 100 million clips in its first two weeks as deepfakes of Elon Musk and Vladimir Putin singing ‘We’re Not Gonna Take It’ and ‘Lose Yourself’ sent the internet into hysterics. Although, my grandparents think it’s ‘weird’.
Even better, there’s no need for singing lessons!
While it’s insanely fun to lip-sync to meme-worthy songs that are ready to go viral, the rise of this synthetic media makes it possible to create fake videos of people doing and saying things they have never done or said. Concerning, right?
Companies that produce deepfake apps have been scrutinised for privacy violations in the past, although Wombo CEO Ben-Zion Benkhin promises that all “facial feature data” is “deleted immediately after”. But where is this data really being deleted from – and is there really a final resting place for data?
These underlying issues of deepfake applications retaining data records continue today. Because of the nature of the content these applications produce, data protection rules arguably don’t even apply: the generated content technically does not belong to any real individual. Mind-blowing stuff.
The development of deep learning methods known as generative adversarial networks (GANs) has given people the opportunity to exploit identities, from nonconsensual pornography of female celebrities (accounting for 96% of deepfakes on the internet – note this link is safe! We promise.) to false videos of politicians running for re-election. Does this pose a danger to democracy?
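For the curious, the GAN idea itself is surprisingly simple: a generator tries to produce fakes, a discriminator tries to tell fakes from real data, and each one’s training signal comes from beating the other. Below is a deliberately tiny sketch of that adversarial loop in plain NumPy – a toy that imitates a 1-D bell curve rather than a face, with every model, number and name chosen purely for illustration (real deepfake systems use deep convolutional networks, not two-parameter models):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# "Real" data: a 1-D Gaussian the generator must learn to imitate.
REAL_MEAN, REAL_STD = 4.0, 1.25

def sample_real(n):
    return rng.normal(REAL_MEAN, REAL_STD, size=n)

# Generator: g(z) = a*z + b, mapping standard-normal noise to fake samples.
a, b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(w*x + c), the probability that x is real.
w, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(2000):
    # --- Discriminator update: push D(real) toward 1 and D(fake) toward 0 ---
    x_real = sample_real(batch)
    z = rng.normal(size=batch)
    x_fake = a * z + b
    p_real = sigmoid(w * x_real + c)
    p_fake = sigmoid(w * x_fake + c)
    # Gradients of the binary cross-entropy loss w.r.t. the pre-sigmoid score.
    d_real = -(1.0 - p_real)           # from -log D(x_real)
    d_fake = p_fake                    # from -log(1 - D(x_fake))
    w -= lr * (np.mean(d_real * x_real) + np.mean(d_fake * x_fake))
    c -= lr * (np.mean(d_real) + np.mean(d_fake))

    # --- Generator update (non-saturating loss): push D(fake) toward 1 ---
    z = rng.normal(size=batch)
    x_fake = a * z + b
    p_fake = sigmoid(w * x_fake + c)
    d_score = -(1.0 - p_fake) * w      # dL/dx_fake for the loss -log D(x_fake)
    a -= lr * np.mean(d_score * z)
    b -= lr * np.mean(d_score)

# After training, the generator's fakes should cluster near the real mean.
fakes = a * rng.normal(size=1000) + b
print(f"generated mean: {fakes.mean():.2f} (target: {REAL_MEAN})")
```

The point of the toy: neither network is ever told what the real data looks like directly. The generator only ever sees the discriminator’s reaction, and that tug-of-war is the same mechanism that, scaled up to millions of parameters, produces convincing fake faces.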
In 2018, the president of Gabon, Ali Bongo, became the centre of global suspicion when critics accused footage of his New Year’s Day address of being a deepfake. Prior to the video’s release, the president had been receiving medical treatment in Saudi Arabia and London, with little information available about his wellbeing.
It is still unknown whether the video was a deepfake; however, Ben Moubamba, a Gabonese politician who had run against Bongo in the previous election, pointed out that several parts of Bongo’s face were “immobile” and that his eyes moved “completely out of sync with the movements of his jaw.”
In early 2020, social media platforms including Facebook and Twitter banned deepfakes from their networks. But the borderless nature of social media may make such bans impractical to enforce. With improvements constant, can this tech ever truly be monitored?
Possibly the most dangerous part of this technology is the ability it gives liars to lie (something liars have a tendency to do), known as the liar’s dividend. The plausible deniability this tech creates allows greedy politicians or your everyday scumbag to dismiss genuine footage as fake, amplifying the effects of misinformation. It can essentially be a way out.
There’s nothing wrong with harmless fun, but where is the line? Should our caution have to tame our curiosity?
As an emerging digital technology, this is a space we’ll be watching very carefully. There is some hope that things won’t slide into scary territory: Microsoft has already started developing software that can detect deepfakes, so we are on the front foot. Nonetheless, the applications for deepfakes are endless, and we anticipate things will evolve quickly.
Watch this space!