Election deepfakes could undermine institutional credibility, Moody’s warns

With election season underway and artificial intelligence evolving rapidly, AI manipulation in political advertising is becoming an issue of growing concern for markets and the economy. A new report from Moody’s released Wednesday warns that generative AI and deepfakes are among the election integrity issues that could pose a risk to U.S. institutional credibility.

“The election is likely to be closely contested, increasing concerns that AI deepfakes could be deployed to mislead voters, exacerbate division and sow discord,” wrote Moody’s assistant vice president and analyst Gregory Sobel and senior vice president William Foster. “If successful, agents of disinformation could sway voters, impact the outcome of elections, and ultimately influence policymaking, which would undermine the credibility of U.S. institutions.” 

The government has been stepping up its efforts to combat deepfakes. On May 22, Federal Communications Commission Chairwoman Jessica Rosenworcel proposed a new rule that would require political TV, video and radio ads to disclose whether they used AI-generated content. The FCC has been concerned about AI use in this election cycle’s ads, with Rosenworcel pointing to potential issues with deepfakes and other manipulated content.

Social media has been outside the sphere of the FCC’s regulations, but the Federal Election Commission is also considering broad AI disclosure rules that would extend to all platforms. In a letter to Rosenworcel, the FEC encouraged the FCC to delay its decision until after the election, arguing that because the FCC’s changes would not be mandatory across digital political ads, voters could be misled into assuming that online ads without disclosures contained no AI-generated content even when they did.

While the FCC’s proposal might not cover social media outright, it opens the door for other bodies to regulate digital political ads as the U.S. government moves to establish itself as a strong regulator of AI content. And, perhaps, such rules could extend to even more types of advertising. 

“This would be a groundbreaking ruling that could change disclosures and advertisements on traditional media for years to come around political campaigns,” said Dan Ives, Wedbush Securities managing director and senior equity analyst. “The worry is you cannot put the genie back in the bottle, and there are many unintended consequences with this ruling.” 

Some social media platforms have already adopted AI disclosure requirements of their own ahead of regulation. Meta, for example, requires an AI disclosure for all of its advertising, and it will ban all new political ads in the week leading up to the November election. Google requires all political ads with modified content that “inauthentically depicts real or realistic-looking people or events” to carry disclosures, but does not require AI disclosures on all political ads.

The social media companies have good reason to be seen as proactive on the issue, as brands worry about being associated with the spread of misinformation at a pivotal moment for the nation. Google and Facebook are expected to take in 47% of the projected $306.94 billion spent on U.S. digital advertising in 2024. “This is a third rail issue for major brands focused on advertising during a very divisive election cycle ahead and AI misinformation running wild. It’s a very complex time for advertising online,” Ives said. 

Despite the self-policing, AI-manipulated content still makes it onto platforms without labels because of the sheer amount of content posted every day. Whether it’s AI-generated spam messaging or large volumes of AI imagery, it’s hard to catch everything. 

“The lack of industry standards and rapid evolution of the technology make this effort challenging,” said Tony Adams, Secureworks Counter Threat Unit senior threat researcher. “Fortunately, these platforms have reported successes in policing the most harmful content on their sites through technical controls, ironically powered by AI.”

It’s easier than ever to create manipulated content. In May, Moody’s warned that deepfakes were “already weaponized” by governments and non-governmental entities as propaganda and to create social unrest and, in the worst cases, terrorism.

“Until recently, creating a convincing deepfake required significant technical knowledge of specialized algorithms, computing resources, and time,” Moody’s Ratings assistant vice president Abhi Srivastava wrote. “With the advent of readily accessible, affordable Gen AI tools, generating a sophisticated deep fake can be done in minutes. This ease of access, coupled with the limitations of social media’s existing safeguards against the propagation of manipulated content, creates a fertile environment for the widespread misuse of deep fakes.”

Deepfake audio has already been deployed this election cycle, in a robocall during the New Hampshire presidential primary.

One potential silver lining, according to Moody’s, is the decentralized nature of the U.S. election system, which, alongside existing cybersecurity policies and general awareness of looming cyberthreats, will provide some protection. State and local governments are enacting further measures to block deepfakes and unlabeled AI content, but free speech laws and concerns over hampering technological advances have slowed the process in some state legislatures.

As of February, 50 pieces of AI-related legislation were being introduced per week in state legislatures, according to Moody’s, many of them focused on deepfakes. Thirteen states have laws covering election interference and deepfakes, eight of which have been enacted since January.

Moody’s noted that the U.S. is vulnerable to cyber risks in part because of how digitalized its government is, ranking 10th out of 192 countries in the United Nations E-Government Development Index.

A perception among the populace that deepfakes can influence political outcomes, even without concrete examples, is enough to “undermine public confidence in the electoral process and the credibility of government institutions, which is a credit risk,” according to Moody’s. The more a population worries about separating fact from fiction, the greater the risk that the public becomes disengaged and distrustful of the government. “Such trends would be credit negative, potentially leading to increased political and social risks, and compromising the effectiveness of government institutions,” Moody’s wrote.

“The response by law enforcement and the FCC may discourage other domestic actors from using AI to deceive voters,” Secureworks’ Adams said. “But there’s no question at all that foreign actors will continue, as they’ve been doing for years, to meddle in American politics by exploiting generative AI tools and systems. To voters, the message is to keep calm, stay alert, and vote.”