GenAI apps are cloning your likeness without consent–and might make you famous for all the wrong reasons

One Friday evening a few weeks ago, I was in my home country of Romania, visiting family for a funeral, when I found myself thinking: Was it time for me to start teaching my kids how to speak Romanian? For the past 15 years, I have built a life in the U.K., where my kids were born and raised. They love their Romanian grandparents but struggle to communicate with them–and I wanted to do something about it.

So I started looking for solutions. I searched the internet for about an hour but couldn’t find anything useful, so I went back to my evening.

A few days later, I was scrolling through my Instagram feed when an ad appeared for a language learning app. Having worked for a social media company, I knew what had happened: The company had tracked my activity online, seen that I was interested in language learning apps, and decided to target me with an ad. And that’s okay: I’ve had similar experiences in the past and have even decided to buy products based on this type of targeted advertising.

Over the next few days, I kept getting more and more ads from the same language app. But once I started to pay closer attention, I realized there was something more troubling going on.

While some of the ads had real people excitedly encouraging me to download the app and try it out “risk free,” other ads looked eerily familiar. They featured people speaking directly to me in French or Chinese, claiming to have mastered a foreign language in mere weeks thanks to the app’s miraculous capabilities. What was really going on was not miraculous but alarming: The videos had been manipulated with deepfake technology, potentially without the consent of the people featured in them.

While AI-generated media can be used for harmless entertainment, education, or creative expression, deepfakes have the potential to be weaponized for malicious purposes, such as spreading misinformation, fabricating evidence, or, in this case, perpetrating scams.

Because I’ve been working in AI for almost a decade, I could easily spot that the people in these ads weren’t real, nor were their language skills. Thanks to an investigation by journalist Sophia Smith Galer, I later learned that an app had been used to clone real people without their knowledge or permission, eroding their autonomy and potentially damaging their reputations.

A troubling aspect of these deepfake ads was the lack of consent inherent in their creation. The language app likely used the services of a video cloning platform developed by a Chinese generative AI company that has changed its name four times in the last three years. The platform has no measures in place to prevent the unauthorized cloning of people, and no obvious mechanism for removing someone’s likeness from its databases.

This exploitation is not only unethical but also undermines trust in the digital landscape, where authenticity and transparency are already in short supply. Take the example of Olga Loiek, a Ukrainian student who runs a YouTube channel about wellness. She was recently alerted by her followers that videos of her had been appearing in China. On the Chinese internet, Loiek’s likeness had been transformed into an avatar of a Russian woman looking to marry a Chinese man. Her YouTube content had been fed into the same platform used to generate the scam ads I’d been seeing on Instagram, and an avatar bearing her likeness was now proclaiming love for Chinese men and praising Russia’s military might on Chinese social media apps. Not only was this offensive to Loiek on a personal level because of the war in Ukraine, but it was also the type of content she would never have agreed to participate in if she had been given the option of withholding her consent.

I reached out to Loiek to get her thoughts on what had happened to her. Here’s what she had to say: “Manipulating my image to say statements I would never condone violates my personal autonomy and means we need stringent regulations to protect individuals like me from such invasions of identity.”

Consent is a fundamental principle that underpins our interactions in both the physical and digital realms. It is the cornerstone of ethical conduct, affirming individuals’ rights to control their own image, voice, and personal data. Without consent, we risk violating people’s privacy, dignity, and agency, opening the door to manipulation, exploitation, and harm.

In my job as the head of corporate affairs for an AI company, I’ve worked with a campaign called #MyImageMyChoice, which raises awareness of how non-consensual images generated with deepfake apps have ruined the lives of thousands of girls and women. In the U.S., one in 12 adults has reported being a victim of image-based abuse. I’ve read harrowing stories from some of these victims, who’ve shared how their lives were destroyed by images or videos generated by AI apps. When they tried to issue DMCA takedown notices to these apps, they received no reply or were told that the companies behind them were not subject to any such legislation.

We’re entering an era of the internet where more and more of the content we see will be generated with AI. In this new world, consent takes on heightened importance. As the capabilities of AI continue to advance, so too must our ethical frameworks and regulatory safeguards. We need robust mechanisms to ensure that individuals’ consent is obtained and respected in the creation and dissemination of AI-generated content. This includes clear guidelines for the use of facial and voice recognition technology, as well as mechanisms for verifying the authenticity of digital media.

Moreover, we must hold accountable those who seek to exploit deepfake technology for fraudulent or deceptive purposes, as well as those who release deepfake apps with no guardrails in place to prevent misuse. This requires collaboration between technology companies, policymakers, and civil society to develop and enforce regulations that deter malicious actors and protect users from real-world harm, instead of focusing only on imaginary doomsday scenarios from sci-fi movies.

For example, we should not allow video or voice cloning companies to release products that create deepfakes of individuals without their consent. And during the process of obtaining consent, perhaps we should also mandate that these companies introduce informational labels that tell users how their likeness will be used, where it will be stored, and for how long. Many consumers might glance over these labels, but there can be real consequences to having a deepfake of someone stored on servers in countries such as China, Russia, or Belarus, where victims of deepfake abuse have no real recourse.

Finally, we need to give people mechanisms for opting out of their likeness being used online, especially if they have no control over how it is used. In Loiek’s case, the company that developed the platform used to clone her did not respond or take any action when reporters approached it for comment.

Until better regulation is in place, we need greater public awareness and digital literacy efforts that empower individuals to recognize manipulation and safeguard their biometric data online. Consumers should be able to make informed decisions about the apps and platforms they use and to recognize the potential consequences of sharing personal information, especially biometric data, in digital spaces and with companies that are prone to government surveillance or data breaches.

Generative AI apps have an undeniable allure, especially for younger people. But when people upload images or videos containing their likeness to these platforms, they unknowingly expose themselves to a myriad of risks, including privacy violations, identity theft, and potential exploitation.

While I am hopeful that one day my children will be able to communicate with their grandparents with the help of real-time machine translation, I am deeply concerned about the impact of deepfake technology on the next generation, especially when I look at what happened to Taylor Swift, to the victims who’ve shared their stories with #MyImageMyChoice, or to countless other women who have suffered sexual harassment and abuse and been forced into silence.

My children are growing up in a world where digital deception is increasingly sophisticated. Teaching them about consent, critical thinking, and media literacy is essential to helping them navigate this complex landscape and safeguard their autonomy and integrity. But that’s not enough: We need to hold the companies developing this technology accountable, and we must push governments to act faster. For example, the U.K. will soon start to enforce the Online Safety Act, which criminalizes the sharing of non-consensual deepfake intimate images and should force tech platforms to take action and remove them. More countries should follow its lead.

And above all, we in the AI industry must be unafraid to speak out and remind our peers that this freewheeling approach to building generative AI technology is not acceptable.

Alexandru Voica is the head of corporate affairs and policy at Synthesia, and a consultant for Mohamed bin Zayed University of Artificial Intelligence.


The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.


