Sophisticated mobile camera applications have moved far beyond removing blemishes, wrinkles and pimples or transforming a user's gender the way FaceApp does. Some users have even created false videos that look strikingly real.
Dubbed "deepfakes," the technology uses deep learning, a branch of machine learning, to create fake videos that are hard to detect. For instance, if you have seen Barack Obama calling Donald Trump a "complete idiot", or Jon Snow apologising for the dismal ending of Game of Thrones, you have already witnessed a deepfake.
Deepfakes aren't new, but they weren't seen as a real threat until their recent rise. According to a CNN Business report, the number of deepfake videos online spiked by 84% between December 2018 and October 2019, and that count includes only the videos experts could find.
Public figures, celebrities and political leaders are among those who have fallen victim to deepfake attacks. A deepfake video of Mark Zuckerberg uploaded to Instagram led to heated speculation as to how the company would react to apparent defamation of its own founder.
According to Matt Groh, a research assistant with the Affective Computing Group at the MIT Media Lab, creators use a facial recognition algorithm and a deep learning network called a variational auto-encoder (VAE) to replace one person's face with another's.
In neural net terms, a VAE consists of an encoder that compresses images into low-dimensional representations and a decoder that reconstructs images from those representations. Put simply, VAEs can generate images of fictional celebrity faces and high-resolution digital artwork.
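The encode-then-decode pipeline Groh describes can be sketched in a few lines of NumPy. This is a minimal, untrained illustration of the structure only; the dimensions, weights and function names are hypothetical, not any production deepfake model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical): a 64x64 grayscale face maps to a 16-dim latent code
IMG_DIM, LATENT_DIM = 64 * 64, 16

# Randomly initialised weights stand in for a trained network
W_enc = rng.normal(0, 0.01, (IMG_DIM, 2 * LATENT_DIM))  # encoder: image -> (mu, log_var)
W_dec = rng.normal(0, 0.01, (LATENT_DIM, IMG_DIM))      # decoder: latent -> image

def encode(x):
    """Compress an image into a low-dimensional Gaussian code (mu, log_var)."""
    h = x @ W_enc
    return h[:LATENT_DIM], h[LATENT_DIM:]

def reparameterize(mu, log_var):
    """Sample z ~ N(mu, sigma^2); the sampling step is what makes the VAE 'variational'."""
    return mu + np.exp(0.5 * log_var) * rng.normal(size=mu.shape)

def decode(z):
    """Reconstruct an image from the latent representation."""
    return 1 / (1 + np.exp(-(z @ W_dec)))  # sigmoid keeps pixel values in (0, 1)

face = rng.random(IMG_DIM)  # stand-in for a real face image
mu, log_var = encode(face)
reconstruction = decode(reparameterize(mu, log_var))
```

In the classic deepfakes face-swap method, two such networks share an encoder but keep separate decoders, so a face encoded from one person can be decoded as the other.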
The deepfakes method is known for producing convincing face swaps and has attracted broad media attention.
That said, before you forward a controversial or interesting video, make sure it is real, as deepfakers have been lurking on the internet for years. Unfortunately, spotting a deepfake video isn't easy, because they are designed to look very close to reality.
But there are a few ways to spot a manipulated video:
If the person’s features or surroundings seem edited, it is likely to be a deepfake video.
Observe closely whether the subject in the video blinks. If not, it might be a manipulated video. According to US researchers, early deepfake AI had not learned to replicate natural eye blinking. But soon after that research was published in 2018, deepfakers produced videos whose subjects blinked. It is a near-universal pattern: as soon as a flaw is revealed, it gets fixed.
Look for poor lip synching, patchy skin tone or inconsistent lighting. Poor-quality deepfakes often show strange lighting effects or badly rendered backgrounds.
Tech giants, governments and institutions are increasingly involved in forming research teams, competing for supremacy in the deepfake detection game.
Companies scramble to respond every time a deepfake video is created or goes viral. For search engine giants like Google, building the right tools to spot deepfakes has become an urgent need.
Microsoft, for instance, announced two new technologies this September to combat disinformation and deepfakes and to raise awareness of the problem. The tech giant introduced 'Video Authenticator', which can analyse a still photo or a video and provide a confidence score indicating whether the media has been artificially manipulated.
“It works by detecting the blending boundary of the deepfake and subtle fading or greyscale elements that might not be detectable by the human eye,” the company said.
Another technology announced by Microsoft has two components that can both detect manipulated content and assure people that the media they’re viewing is authentic. The announcement read,
“The first is a tool built into Microsoft Azure that enables a content producer to add digital hashes and certificates to a piece of content. The second is a reader – which can exist as a browser extension or in other forms – that checks the certificates and matches the hashes, letting people know with a high degree of accuracy that the content is authentic.”
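The producer-side hashing and reader-side matching described in Microsoft's announcement follow a standard pattern. The details of the Azure tooling aren't public in this excerpt, so the sketch below shows only the general hash-and-verify idea, with a keyed hash (HMAC) standing in for the producer's certificate; the key and function names are illustrative:

```python
import hashlib
import hmac

# Hypothetical shared signing key, standing in for a producer's certificate
SIGNING_KEY = b"producer-private-key"

def publish(content: bytes):
    """Producer side: attach a digital hash (here, an HMAC) to the content."""
    signature = hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()
    return content, signature

def verify(content: bytes, signature: str) -> bool:
    """Reader side: recompute the hash and match it against the attached one."""
    expected = hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

video, sig = publish(b"\x00\x01 raw video bytes")
untouched_ok = verify(video, sig)              # unmodified content checks out
tampered_ok = verify(video + b"edit", sig)     # any edit breaks the match
```

A reader (say, a browser extension) that re-runs `verify` on downloaded media can then flag any content whose bytes no longer match the producer's hash.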
According to Nick Dufour, one of the Google engineers overseeing the company's deepfake research, deepfakes have "allowed people to claim that video evidence that would otherwise be very convincing is a fake," he told The New York Times.
In June 2020, Facebook released a database of 100,000 deepfakes to teach AI how to spot them. The social media firm also launched the Deepfake Detection Challenge (DFDC) to accelerate the development of new ways to detect deepfakes.
A recent UCL report said that experts rank manipulated video and audio as the most worrying use of artificial intelligence in terms of its applications for crime.
According to experts, the most insidious impact of deepfakes is the creation of a zero-trust society, in which people cannot, or no longer bother to, distinguish truth from falsehood. Once trust is broken, it's easy to raise doubts and ambiguities about real events.
Legal experts say that deepfakes are "not illegal" in themselves but might infringe copyright, breach data protection law, or be defamatory if they expose the victim to ridicule.
Ironically, artificial intelligence could be part of the solution. AI already helps identify fake videos, and a blockchain-based online ledger could hold a tamper-proof record of videos, audio and pictures so that their origin, and any subsequent manipulation, can always be traced, researchers say.
But even with those possible solutions, many experts agree that deepfakes' negative effects are likely to outweigh any lasting benefits.
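The tamper-proof record the researchers describe rests on hash chaining: each ledger entry commits to a fingerprint of the media and to the previous entry, so rewriting history breaks every later link. A minimal sketch (the field names and example fingerprints are hypothetical, not any specific blockchain system):

```python
import hashlib
import json

def make_block(record: dict, prev_hash: str) -> dict:
    """Append-only block: commits to the media record and the previous block's hash."""
    block = {"record": record, "prev_hash": prev_hash}
    body = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(body).hexdigest()
    return block

def chain_is_valid(chain) -> bool:
    """Recompute every hash; tampering with an earlier block breaks all later links."""
    for i, block in enumerate(chain):
        body = json.dumps({"record": block["record"],
                           "prev_hash": block["prev_hash"]}, sort_keys=True).encode()
        if block["hash"] != hashlib.sha256(body).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Register a video's fingerprint, then an edit of it, as chained records
genesis = make_block({"media_sha256": "ab12", "event": "original upload"}, "0" * 64)
edit = make_block({"media_sha256": "cd34", "event": "crop and colour grade"}, genesis["hash"])
chain = [genesis, edit]
valid_before = chain_is_valid(chain)

chain[0]["record"]["media_sha256"] = "ff99"  # retroactive tampering...
valid_after = chain_is_valid(chain)          # ...is immediately detectable
```

Because each block's hash covers the previous one, a verifier needs only the chain itself to decide whether any record of a video's origin has been altered after the fact.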
“It’s not like up until deepfakes everything was fine, that there were no misinformation campaigns, there were no fake images, no fake videos – of course there were. The difference now, besides the scary-sounding name, is the democratization of access to technology,” Hany Farid, an expert in digital forensics and professor at University of California, Berkeley, told a media outlet.
Businesses find deepfakes a difficult issue to handle. Widespread deepfakes could soon become a favorite tool for hackers who know the strategy well and have the equipment and patience to make it work for them.
When it comes to securing a business against deepfakes, the first step is to identify the avenues where the risks are most apparent.
Matt Groh of MIT stated that people can defend themselves against falling victim to deepfakes by using their own judgment. "You have to be a little skeptical, you have to double-check and be thoughtful," Groh said.
Individuals, on the other hand, must keep their video and photo storage secure. Apps that use the mobile camera or microphone should be kept updated. Internet safety best practices must always be followed to keep data safe from harm.
SOURCES: MITSloan, UCL, Microsoft, NYTimes, Facebook
Sujha has been writing and reporting on cryptocurrencies and blockchain technology developments since 2014. Her work has appeared in CoinDesk, CCN, EconoTimes and Fintech News Malaysia. She is also an accomplished Indian classical singer and loves baking cakes.