
Unveiling the Dark Side of AI: Why Deepfakes Are a New Pandemic

Image: An Artificial Intelligence's Interpretation of Police Using Artificial Intelligence to Predict Crime. Created by Midjourney AI v5, October 3, 2023 (CC0 / Midjourney AI)
India has witnessed a surge in the prevalence of deepfake videos featuring celebrities and politicians. Sputnik India endeavors to assess whether society is equipped to handle this situation.
As artificial intelligence (AI) flourishes, it brings both benefits and problems for individuals and society.
A recent surge of deepfake videos circulating on social media, featuring the manipulated faces of actors, has ignited concern about the potential misuse of AI technology and prompted people to demand legal measures against such videos.

Artificial intelligence, initially developed to assist humans and automate monotonous tasks, is now being used to distress and harass individuals.

Sputnik India spoke with cybercrime consultant Ritesh Bhatia, who has been working in the cybersecurity field for the past twenty years. The expert explained why deepfake videos and audio are a new pandemic.

Deepfakes Have Become an Easy Tool for Hackers

“Artificial Intelligence (AI) has been around for many years, and every technology has good and bad sides to it. The misuse of this technology began in 2020," Ritesh Bhatia, a cybercrime investigator and TEDx speaker, told Sputnik India.
Bhatia said that photos and videos shared by social media users have become an easy tool for hackers, who misuse the data freely available on social media.
Deepfakes were born in 2017, when a Reddit user posted doctored porn clips on the site in which the faces of celebrities – Gal Gadot, Taylor Swift, Scarlett Johansson and others – were superimposed onto porn performers. However, the issue came into the limelight recently in India when manipulated photos and videos of celebrities were publicly shared on social media.
“It takes five seconds to make a deepfake video. You simply feed encoded images into the “wrong” decoder to perform the face swap. For example, a compressed image of person A’s face is fed into the decoder trained on person B. The decoder then reconstructs the face of person B with the expressions and orientation of face A. For a convincing video, this has to be done on every frame,” Bhatia explained.
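For readers who want to see the idea in code, the sketch below illustrates the shared-encoder, per-identity-decoder setup Bhatia is describing, written in PyTorch. The layer sizes, the 64x64 face crops, and the random tensors standing in for real face data are all illustrative assumptions, not any particular tool's implementation.

# A minimal sketch, not production code: one shared encoder learns pose and
# expression; each decoder learns to reconstruct a single person's face.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 3x64x64 face crop into a small latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs the face of ONE identity from the shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

encoder = Encoder()      # shared: captures expression, orientation, lighting
decoder_a = Decoder()    # would be trained only on person A's frames
decoder_b = Decoder()    # would be trained only on person B's frames

# In a real system each (encoder, decoder_X) pair is trained as an autoencoder
# on many frames of person X. Here a random tensor stands in for a face crop.
face_a = torch.rand(1, 3, 64, 64)

# The swap Bhatia describes: encode person A's frame, decode with B's decoder,
# producing B's face with A's expression and orientation, frame by frame.
with torch.no_grad():
    fake_b = decoder_b(encoder(face_a))
print(fake_b.shape)  # torch.Size([1, 3, 64, 64])

The design choice that makes the swap work is the single shared encoder: because both decoders read from the same latent space, the pose and expression captured from person A carry over directly when person B's decoder reconstructs the frame.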
Artificial intelligence is excellent, but its darker side is already out of the bottle, Bhatia said.

Will Regulating Technology Help?

“You can’t regulate the technology. Stopping a technology from growing is not a step forward. But all videos and audio generated through AI should carry a watermark. All social media platforms, when a video or photo is uploaded, could display a warning such as ‘watch at your discretion’ or ‘this post could be fake’. People need to be warned,” Bhatia said.

The cybercrime expert said that a single watermark can differentiate a real video from an AI-generated one, which is also important.
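As a toy illustration of what such a marker could look like, the sketch below hides a short "AI-GENERATED" tag in the least-significant bits of an image and reads it back. This is a deliberately simplistic assumption for demonstration, not the scheme any platform actually uses; real provenance watermarks are designed to survive compression and editing, which this one would not.

# Toy provenance watermark: hide a tag in the red channel's lowest bits.
import numpy as np
from PIL import Image

TAG = "AI-GENERATED"

def embed_tag(img: Image.Image, tag: str = TAG) -> Image.Image:
    """Return a copy of img with the tag written into the red channel's LSBs."""
    bits = "".join(f"{byte:08b}" for byte in tag.encode())
    pixels = np.array(img.convert("RGB"))
    flat_red = pixels[..., 0].flatten()
    if len(bits) > flat_red.size:
        raise ValueError("image too small for tag")
    for i, bit in enumerate(bits):                    # overwrite the lowest bit
        flat_red[i] = (flat_red[i] & 0xFE) | int(bit)
    pixels[..., 0] = flat_red.reshape(pixels[..., 0].shape)
    return Image.fromarray(pixels)

def read_tag(img: Image.Image, length: int = len(TAG)) -> str:
    """Read back the first `length` characters hidden in the red channel."""
    flat_red = np.array(img.convert("RGB"))[..., 0].flatten()
    bits = "".join(str(flat_red[i] & 1) for i in range(length * 8))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode(errors="replace")

# Demo on a synthetic image; a generator would tag output before publishing,
# and a platform could check for the tag and show a warning to viewers.
original = Image.fromarray(np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8))
tagged = embed_tag(original)
print(read_tag(tagged))    # "AI-GENERATED" -> flag the post as synthetic
print(read_tag(original))  # garbage -> no tag found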
“We need to ensure that deepfakes are not used as an excuse. Otherwise, there will be a battle between AI and humans. Whom do you trust?”

How Does AI Impact Elections?

Elections are currently being held in five Indian states, and many parties in these states have complained to the Election Commission about deepfake videos and audio of their leaders being shared on social media.
“In Madhya Pradesh, both the BJP and Congress have registered more than two dozen complaints about doctored videos. Similarly, in one of the fake videos, state Chief Minister Shivraj Singh Chouhan is shown telling ministers and bureaucrats that people are not happy with his party,” senior journalist Ramesh Raja told Sputnik India.
After these incidents, the political parties held press conferences and posted on social media that the audio in the alleged videos was doctored, Raja added.

Who to Trust: Humans or AI?

Bhatia warned that people should not trust everything they see on social media, especially during elections.

“I have a simple approach to the situation: POV, which means pause after seeing a video (P), apply zero trust (O), and then try to verify the video or audio (V),” Bhatia stressed.

The cybersecurity expert said this is a digital pandemic and awareness needs to be raised soon; otherwise, ordinary people will be bombarded with fake videos.