Over the years, AI has become an integral part of our daily lives, from creating beautiful images to enhancing our understanding of complex concepts. Recently, several mind-blowing photos have been making waves online: Vladimir Putin kneeling to kiss Xi Jinping’s hand, and the pope wearing a puffy jacket. The problem is that none of these things ever actually happened. They’re the product of an advanced AI technique known as deepfakes.
Deepfakes are becoming harder to detect. According to ExpressVPN, the number of deepfakes on the internet today runs into the millions, and they can be used to manipulate and alter people’s memories. These AI-generated images and videos blur the line between fact and fiction, feeding into a phenomenon known as the Mandela Effect. So what exactly is the Mandela Effect, and how is it affecting us? Read on to find out.
What is The Mandela Effect?
The Mandela Effect is a psychological phenomenon in which many people collectively remember an event, fact, or detail differently from how it actually occurred. Fiona Broome, a paranormal researcher, coined the phrase in 2010.
The widely held misconception that South African leader Nelson Mandela died in prison in the 1980s served as the term’s inspiration. In reality, Mandela was released from prison in 1990 and became South Africa’s first black president, passing away in 2013.
The nature of memory, perception, and reality has been a hot topic of discussion and debate in the wake of the Mandela Effect. The term has also been applied to other commonly held false memories, such as misspelled versions of well-known brand names or misquoted iconic movie lines.
How do Deepfakes work?
A deepfake video relies on two machine learning (ML) models. One model creates the forgeries from a database of sample videos, while the other tries to determine whether a given video is fake. When the second model can no longer tell that the video is fake, the deepfake is likely convincing enough for a human viewer. This technique is called a generative adversarial network (GAN).
The GAN approach identifies weaknesses in the forgery so they can be fixed, and the deepfake video is finished after many iterations of detection and refinement. A GAN performs better when given a large dataset to work with, which is why politicians and celebrities appear so often in early deepfake videos: the GAN can draw on an extensive video library to produce incredibly lifelike forgeries.
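The back-and-forth described above can be sketched in a few lines of code. The following is a minimal, illustrative 1-D GAN written from scratch with numpy, not production deepfake code: the "real" data is just a Gaussian standing in for genuine footage, and the generator and discriminator are tiny linear models with hand-derived gradients.

```python
import numpy as np

# Toy 1-D GAN sketch. "Real" samples come from N(4, 1); the generator
# learns to map noise z ~ N(0, 1) onto that distribution, while the
# discriminator learns to tell real from fake. All sizes and learning
# rates are illustrative assumptions.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: x_fake = a*z + b. Discriminator: D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0
w, c = 0.1, 0.0

lr, steps, n = 0.05, 3000, 64
for _ in range(steps):
    z = rng.normal(0.0, 1.0, n)
    x_fake = a * z + b
    x_real = rng.normal(4.0, 1.0, n)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    p_real = sigmoid(w * x_real + c)
    p_fake = sigmoid(w * x_fake + c)
    w -= lr * np.mean(-(1 - p_real) * x_real + p_fake * x_fake)
    c -= lr * np.mean(-(1 - p_real) + p_fake)

    # Generator step: push D(fake) toward 1, i.e. fool the discriminator.
    p_fake = sigmoid(w * x_fake + c)
    dx = -(1 - p_fake) * w          # gradient of -log D(fake) w.r.t. x_fake
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

fakes = a * rng.normal(0.0, 1.0, 1000) + b
print("fake mean after training:", round(float(np.mean(fakes)), 2))
```

After many alternating updates, the generator's output distribution drifts toward the real one, which is exactly the "iterations of detection and refinement" the paragraph describes, just at toy scale.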
Are Deepfakes Dangerous?
Deepfakes are videos or images in which a person’s voice, face, or body has been digitally altered so that they appear to say things they never said, or to be someone else entirely. Deepfakes are often used to spread misleading information, or they may be deployed maliciously.
They may be made with the intention of intimidating, humiliating, or undermining someone, and they can spread false information and confusion about significant topics. As a result, they pose a broad threat.
Deepfakes may pose a threat to both states and individuals. Both categories of threat use the same technology and communication channels, and they elicit comparable societal reactions. Distinctions become apparent, though, when considering the effects of their use.
As governments and social media platforms assess the long-term effects, solutions to the deepfake problem will probably differ between the two categories. The great majority of threats to individuals involve nonconsensual pornography. In fact, the term “deepfake” originated with a Reddit user of the same name, who brought the technology into the general public’s awareness by producing and distributing fake pornographic videos.
These videos usually feature the fabricated likenesses of female celebrities. Although counterfeit, they have real consequences: they frequently cause psychological harm to the victim, diminish employability, and damage relationships. Criminals have also used this tactic to threaten and intimidate politicians, journalists, and other semi-public figures.
How can we detect it?
Detecting deepfakes is getting harder as the technology that creates them grows more sophisticated. A few years back, American researchers showed that deepfake faces didn’t blink the way human faces do, which was thought to be a reliable way to tell whether photos and videos were real or fake.
Deepfake makers, however, started addressing the issue as soon as the report was published, making deepfakes even more challenging to identify. Research intended to expose deepfakes frequently ends up making the forgery technology more effective instead.
However, not all deepfakes are the product of advanced technology. Poor-quality content is typically easy to spot: the lip sync may be off, or the skin tone may look strange. Detecting deepfakes is therefore challenging but not impossible, so we must use extreme caution before accepting any suspicious image or video as genuine.
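The blink cue mentioned above can be made concrete. A common proxy in the research literature is the eye aspect ratio (EAR), computed from six landmarks around each eye: it stays roughly constant while the eye is open and drops sharply during a blink. The sketch below is illustrative only: the 0.2 threshold and the synthetic landmark coordinates are assumptions, and a real pipeline would extract the landmarks per video frame with a face-landmark model.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of landmarks p1..p6 around one eye.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|) -- the ratio of the
    eye's vertical openings to its horizontal width."""
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, threshold=0.2):
    """Count blink onsets: frames where EAR first drops below the threshold."""
    below = ear_series < threshold
    onsets = below[1:] & ~below[:-1]
    return int(below[0]) + int(np.sum(onsets))

# Synthetic landmarks: an open eye (tall) vs. a closed eye (nearly flat).
open_eye = np.array([[0, 0], [1, 0.6], [2, 0.6], [3, 0], [2, -0.6], [1, -0.6]], float)
closed_eye = np.array([[0, 0], [1, 0.1], [2, 0.1], [3, 0], [2, -0.1], [1, -0.1]], float)

# A short "video": open frames with one two-frame blink in the middle.
ears = np.array([eye_aspect_ratio(open_eye)] * 5
                + [eye_aspect_ratio(closed_eye)] * 2
                + [eye_aspect_ratio(open_eye)] * 5)
print("blinks counted:", count_blinks(ears))  # -> blinks counted: 1
```

A real face blinks every few seconds, so a long clip whose EAR series never dips below the threshold would have been a red flag for the early deepfakes the researchers studied.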