Technology has seen immense progress in the past few years, especially in the areas of artificial intelligence (AI) and machine learning (ML). Could you imagine a machine answering and solving your business queries even a decade ago?
Today, chatbots are not only a reality but commonplace across all kinds of customer-facing businesses. Moving forward, we are now talking about technology that can act on your intentions without you having to say a word. But is it all good? You decide for yourself.
Have you heard of deepfakes? They are fake videos, pictures or audio clips that use AI and ML to look and sound just like the original. That app you use on Facebook to imprint your face on a celebrity video?
You are actually using a tool that creates a deepfake. It is all fun and games until the technology is put to far more sinister use. Just consider what can be done with deepfakes: destroying a marriage with a fabricated recording, creating unseemly content from pictures stolen off the internet, or even inciting riots in a country by making politicians appear to say something controversial.
As Marco Rubio, the Republican senator from Florida and a 2016 US presidential candidate, rightly pointed out, deepfakes are "the modern equivalent of nuclear weapons". While this may sound like an exaggeration to many, no one can deny that deepfakes can have a deep impact on an unsuspecting public.
Technology now has to take care of the Frankenstein it has created. Doctored videos are getting harder and harder to detect, but researchers are building AI and ML tools of their own to combat the problem.
Microsoft, for instance, recently announced the launch of Video Authenticator. The tool analyses a video or still photo and returns a confidence score: the percentage chance that the media is fake or has been manipulated.
Built by Microsoft Azure and Microsoft Research together with the Defending Democracy Program, the technology works by detecting the subtle fading and greyscale elements at a deepfake's blending boundary that the naked eye can easily miss.
There are two crucial components to the technology. The first, built into Microsoft Azure, lets the original content creator add certificates and hashes to the content, which then travel with it as metadata. The second is a reader that checks those hashes and certificates to gauge how authentic the content is.
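The producer/reader design described above can be illustrated with a short, hypothetical Python sketch. Note that the key, function names, and metadata format here are purely illustrative stand-ins, not Microsoft's actual API: the point is only that a keyed hash attached as metadata lets any reader detect whether the content was altered after signing.

```python
import hashlib
import hmac

# Illustrative stand-in for a creator's certificate/private key (hypothetical).
SECRET_KEY = b"creator-signing-key"

def sign_content(content: bytes) -> dict:
    """Producer side: compute a keyed hash of the content and attach it as metadata."""
    digest = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return {"content": content, "metadata": {"hash": digest}}

def verify_content(package: dict) -> bool:
    """Reader side: re-compute the hash and compare it with the attached metadata."""
    expected = hmac.new(SECRET_KEY, package["content"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, package["metadata"]["hash"])

package = sign_content(b"original video bytes")
print(verify_content(package))  # True: untouched content verifies

package["content"] = b"tampered video bytes"
print(verify_content(package))  # False: any edit breaks the hash
```

In a real deployment the keyed hash would be replaced by a public-key signature tied to a certificate, so that readers can verify authenticity without sharing the creator's secret, but the tamper-detection logic is the same.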
Deepfakes have come a long way with the help of AI and machine learning. It is already almost impossible to tell the difference between original and manipulated content.
There are also very few tools available today that can help people determine the authenticity of what they are viewing. The bad news is that deepfake technology will only grow more sophisticated with time. The good news, on the other hand, is that companies like Microsoft are researching solutions to contest this problem, and hopefully the good side will win.
Srimayee has over 10 years of experience in creating content. Driven by her passion for writing, she has had her articles published with a byline in newspapers, magazines, blogs, and websites. Her other passions include reading, gardening and traveling.