By Faith Talamantez

Advancements in artificial intelligence have created new and increasingly dangerous ways for people to be scammed, tricked and even threatened on the internet, UC Berkeley professor Hany Farid told a group of nearly 100 UC Santa Barbara students and faculty last week. 

“You should know about this,” he said. “It’s going to happen to someone in this audience. Somebody has cloned your voice.” 

Farid was hosted by the Interdisciplinary Humanities Center as part of its speaker series “Too Much Information,” which examines life in an increasingly complex digital age. The IHC’s director, Susan Derwin, moderated the virtual event.

UC Berkeley computer science and engineering professor Hany Farid, top, and Susan Derwin, director of the Interdisciplinary Humanities Center at UCSB, during the IHC’s “TMI” lecture, “Deep Fake Technology and Detection.”

Generative AI systems such as ChatGPT have quickly gained popularity in online communities, but with access to so much information online, cases of the technology being used maliciously have increasingly come to light. Farid showed an image he created using ChatGPT that depicted Facebook founder Mark Zuckerberg smoking methamphetamine. The image was entirely fake, but it demonstrates the potential of such tools to cause harm and spread misinformation about real people.

“You can imagine people doing this to politicians, doing this to CEOs, the way I’ve just done it [with Mark Zuckerberg],” Farid said. “Doing it to professors, teachers — which is, by the way, happening — to reporters, to human rights activists, to people they just don’t like. Creating images that in some cases, as they get more and more realistic, can be very damaging.”

Farid’s research covers a broad range of topics within both engineering and computer science, including digital forensics and image analysis. During the IHC webinar, Farid said artificial intelligence has developed to the point that it can create detailed images from a compilation of all of the data it has access to, with the potential to negatively impact our day-to-day lives through exposure to scams and fakery. Research he and others are doing to combat such dangers via detection and identification has become more urgent, he said.

Farid said ChatGPT has already changed the way people are using the internet.

“As someone who teaches intro to computer science, this is both a blessing and a curse,” he said.

He compared generative AI to autocomplete, in that it tries to predict what users will type next. Farid explained how AI generates different forms of media, including audio, text, video, and images. A generator “scrapes,” or collects, billions of samples of data from the internet, which it then uses to produce something based on a user’s prompt. With modern advancements, generative AI can create far more than it ever has before. “You can synthesize anything you want,” he said.

UC Berkeley computer scientist Hany Farid presented an example of text generated by AI in response to a prompt he wrote about comedian Jerry Seinfeld.
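Farid’s autocomplete comparison can be made concrete with a toy model. The short Python sketch below is an illustration written for this article, not code from the talk: it counts which words follow which in a tiny training text, then “autocompletes” a prompt one word at a time. Systems like ChatGPT do the same kind of next-word prediction, only with neural networks trained on the billions of scraped samples Farid described rather than simple word-pair counts.

```python
import random
from collections import Counter, defaultdict

random.seed(0)  # make the toy output repeatable

# A tiny stand-in for the billions of scraped samples a real system trains on.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(prompt: str, length: int = 8) -> str:
    """Repeatedly 'autocomplete' by sampling a likely next word."""
    words = [prompt]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        # Sample the next word in proportion to how often it followed
        # the current word in the training data.
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # prints a plausible continuation of "the"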

With the quick rise of AI, there is also “a startling rise in phone scams,” Farid said. He described how AI has been used to extort money from victims, mainly through these phone scams. Generative AI has been used by scammers to clone people’s voices, to create fake videos asking for money, and to create fake images of people participating in illegal activities. 

“What used to be an email or a text message is now an actual phone call. And the way that it is happening is somebody has cloned your voice,” Farid said. “It’s not just people who are famous. People are grabbing voices from YouTube videos, TikTok videos, and sometimes your very voicemail.”

He said a major new problem with this type of AI is that it is cheap and openly available. “It is exceedingly easy to access,” he said. Services that were previously unavailable to the general public, such as voice cloning, are now easily accessible to anyone willing to pay $5 for the service. This poses a huge threat to cybersecurity, as well as to people’s privacy online, he warned.

Generative AI has already been challenged in the courts. The software collects data through scraping, which is indiscriminate about the information it gathers, so private medical records have been used to train AI simply because they were stored in an online database that was not well protected. Farid also shared stories of people, such as tech entrepreneur Elon Musk, who claim that video or audio recordings of them saying bad or controversial things were generated by artificial intelligence.

As more privacy and defamation cases involving AI emerge, questions are arising as to who can be held accountable for the faults in AI and the harm it has already caused to many people.

Research that Farid and others are conducting aims to detect AI-generated content so that nefarious applications can be identified before they go too far. Farid graphically illustrated his point with side-by-side images, some authentic and some AI-generated.

An image shared by UC Berkeley computer scientist Hany Farid during his UCSB talk. It shows a split of real faces and AI-generated faces.

“These images are exceedingly difficult to distinguish from reality,” he said. But elements used to build AI images can be exploited as a way to detect when an image is artificially generated, he explained. Using these “classifiers” to distinguish between AI and real images allows Farid to “catch almost all of the fakes, with only a 1% error rate.”
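The classifier idea can also be sketched in miniature. The Python example below is an illustration for this article, not Farid’s actual detector: it invents two hypothetical image features (real detectors measure statistical traces that generators leave behind in pixels), trains an off-the-shelf logistic-regression model on labeled real and AI-generated examples, and reports its error rate on unseen data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def toy_features(n: int, ai_generated: bool) -> np.ndarray:
    # Hypothetical 2-D features standing in for real forensic cues,
    # e.g. a frequency-domain statistic and a facial-geometry measure.
    # For this sketch we simply assume AI images cluster differently.
    center = (1.0, 1.0) if ai_generated else (0.0, 0.0)
    return rng.normal(loc=center, scale=0.6, size=(n, 2))

# Labeled training set: 0 = real photograph, 1 = AI-generated.
X = np.vstack([toy_features(500, False), toy_features(500, True)])
y = np.array([0] * 500 + [1] * 500)

clf = LogisticRegression().fit(X, y)

# Score a new, unseen batch and report the error rate.
X_test = np.vstack([toy_features(100, False), toy_features(100, True)])
y_test = np.array([0] * 100 + [1] * 100)
print(f"error rate: {1 - clf.score(X_test, y_test):.1%}")
```

The design mirrors the logic Farid described: the classifier never needs to “understand” an image, only to learn which measurable traits separate the two labeled groups.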

Though there are many harms alongside the benefits of AI, it is impossible to turn back the clock and simply get rid of it. Instead, Farid advocates for new policies around the world that limit the scope of artificial intelligence.

“Susan [Derwin] asked me when I came on this call if I was real. That’s just the world we live in today,” Farid said.

Faith Talamantez is a second-year Writing and Literature major at the College of Creative Studies at UC Santa Barbara. She wrote this article for her Digital Journalism course.

Lian Benasuly, a UCSB Humanities and Fine Arts web and social media intern in the Journalism track of the Professional Writing Minor, also contributed reporting and editing for this article.