President Joe Biden in his recent State of the Union speech urged Congress to address the potential “peril” of artificial intelligence, but he was light on specifics beyond calling for a ban on AI-driven voice impersonations, just one of several major concerns related to the technology.
“Harness the promise of AI to protect us from peril,” Biden said in the speech, delivered on Thursday night. “Ban AI voice impersonations and more.”
The president’s remarks come on the heels of a fake robocall campaign in which his voice was cloned and New Hampshire primary voters were targeted, raising alarms about the technology’s potential to wreak havoc in U.S. elections.
AI’s potential as a tool for cybercrime and fraud attempts against businesses has also posed growing concerns. In a recent example, a finance worker at a multinational firm was reportedly tricked into paying $25 million to fraudsters who used AI-generated “deepfake” technology to pose as the company’s CFO in a video conference call. Deepfakes are AI-manipulated images, videos, or audio recordings that are bogus yet convincing.
“While AI technologies have very legitimate use cases, they also can be used by malicious actors to cause harm,” Matthew Miller, a principal in the cybersecurity practice of KPMG, said in an interview.
A study unveiled last year by Regula, a provider of identity verification solutions, found that 37% of businesses had experienced deepfake voice fraud, while 29% had fallen victim to deepfake videos.
The rapid acceleration of AI adoption, fueled by recent advancements by companies such as Microsoft-backed OpenAI, has received growing attention from the White House and Congress.
Last October, Biden signed a sweeping executive order mandating that the Labor Department lead the development of a set of standards to guide companies in addressing AI’s potential harm to their workers, including eroded privacy and job displacement. He also directed the National Institute of Standards and Technology within the Commerce Department to establish guidelines to promote “safe, secure, and trustworthy” AI systems.
The White House has also released a non-binding blueprint for an “AI Bill of Rights,” which includes principles that people should be protected from systems deemed “unsafe or ineffective” and should not face discrimination by algorithms. In addition, it has secured voluntary AI commitments from Big Tech companies including Microsoft, Google, Amazon and Meta.
“From the AI Bill of Rights to the voluntary commitments the president secured from leading tech companies, to his executive order, President Biden has been clear about the principles of trust and safety he wants to see in legislation that regulates AI,” White House spokesperson Robyn Patterson told CFO Dive in an email. “Congress will determine its exact approach. But the president is urging Congress to move forward swiftly and in a bipartisan manner.”
Biden’s speech on Thursday included only a brief mention of AI among a laundry list of legislative priorities.
Still, he made history by mentioning AI in a State of the Union speech for the first time, Fei-Fei Li, co-director of the Stanford Institute for Human-Centered Artificial Intelligence, told NPR on Friday. When asked by NPR's Ari Shapiro why the president decided to zero in on voice deepfakes in particular, she said: “Specifically why, I don’t know. But I think this is symbolic [of] the double-edged sword nature of a powerful technology. While we celebrate AI’s potential… we do have to recognize the responsibility of not using this technology for harm.”
Jason Oxman, president and CEO of the Information Technology Industry Council, a technology trade association, said in an emailed statement that his group supports Biden’s call for harnessing the power of AI and protecting consumers against misuse of the technology.
“As to one misuse of AI he highlighted (perhaps because it happened to him), the Federal Communications Commission ruled last month that robocalls using AI-generated voice impersonations are illegal,” Oxman said. “It’s an important reminder that targeting illegal behavior — not AI itself — is the right approach.”
More than 70 AI-related bills are pending before Congress, according to a database maintained by the American Action Forum, a Washington, D.C. think tank. The list includes dozens of measures aimed at mitigating the technology’s potential harms, ranging from workplace surveillance and bias in algorithmic systems to the generation of deepfakes.
Last month, the FCC approved a declaratory ruling that bans the use of AI voice-cloning technology in robocall scams.
“This unanimous ruling by the FCC is a big step toward protecting Americans and holding scammers accountable, but more needs to be done to prevent the fraudulent use of artificial intelligence,” Sen. Amy Klobuchar (D-Minn.) said in a statement at the time.
A bill (S. 2770) introduced by Klobuchar last September would prohibit the distribution of “materially deceptive” AI-generated audio or visual media relating to candidates for federal office.
Another measure (H.R. 5586), introduced by Rep. Yvette Clarke (D-N.Y.), would create new criminal offenses related to the production of malicious deepfakes. The bill includes exceptions for parodies, satire, consensual deepfakes, and other types of fictionalized content.
Despite a significant amount of interest around AI on Capitol Hill, lawmakers are unlikely to have time to consider any meaningful legislation on the issue in 2024, due to a tight election-year calendar, according to Scott Gerber, a partner at Vrge Strategies, a Washington, D.C.-based public affairs firm.
“That doesn't mean that a few bills which nibble around the edges won’t pass, but comprehensive legislation will almost certainly be put off for another time,” he said in a Jan. 30 article published on LinkedIn.