Pakistani women, Bangladeshi politicians are new targets of deepfakes; 90 per cent of deepfake videos online are pornographic

Bangkok (TIP): There was the Bollywood star in skin-tight lycra, the Bangladeshi politician filmed in a bikini and the young Pakistani woman snapped with a man.
None was real, but all three images were credible enough to unleash lust, vitriol – and even allegedly a murder, underlining the sophistication of generative artificial intelligence, and the threats it poses to women.
The two videos and the photo were deepfakes, and they went viral in a vibrant social mediascape that is struggling to come to grips with a technology that can create convincing copies capable of upending real lives.
“We need to address this as a community and with urgency before more of us are affected by such identity theft,” actor Rashmika Mandanna said in a post on X that has garnered more than 6.2 million views.
She is not the only Bollywood star to be cloned and attacked on social media, with top actors including Katrina Kaif, Alia Bhatt and Deepika Padukone also targeted with deepfakes.
The lycra video, said Mandanna, was “extremely scary not only for me, but also for each one of us who today is vulnerable to so much harm because of how technology is being misused”.

While digitally manipulated images and videos of women were once easy to spot, usually lurking in the dark corners of the internet, the explosion in generative AI tools such as Midjourney, Stable Diffusion and DALL-E has made it easy and cheap to create and circulate convincing deepfakes.
More than 90% of deepfake videos online are pornographic, according to tech experts, and most are of women.
While there are no separate data for South Asian countries, digital rights experts say the issue is particularly challenging in conservative societies, where women have long been harassed online and abuse has gone largely unpunished.
Social media firms are struggling to keep up.
Google’s YouTube and Meta Platforms – which owns Facebook, Instagram and WhatsApp – have updated their policies, requiring creators and advertisers to label all AI-generated content.
But the onus is largely on victims – usually girls and women – to take action, said Rumman Chowdhury, an AI expert at Harvard University who previously worked on reducing harms at Twitter.
“Generative AI will regrettably supercharge online harassment and malicious content … and women are the canaries in the coal mine. They are the ones impacted first, the ones on whom the technologies are tested,” she said. “It is an indication to the rest of the world to pay attention, because it’s coming for everyone,” Chowdhury told a recent United Nations briefing.
As deepfakes have proliferated worldwide, there are growing concerns – and rising instances – of their use in harassment, scams and sextortion.
Regulations have been slow to follow.
The US Executive Order on AI touches on dangers posed by deepfakes, while the European Union’s proposed AI Act will require greater transparency and disclosure from providers.
Last month, 18 countries – including the United States and Britain – unveiled a non-binding agreement on keeping the wider public safe from AI misuse, including deepfakes.
Among Asian nations, China requires providers to use watermarks and report illegal deepfakes, while South Korea has made it illegal to distribute deepfakes that harm “public interest”, with potential imprisonment or fines.
(Reuters)
