Tesla CEO Elon Musk called on Wednesday for a US “referee” for artificial intelligence after he, Meta Platforms CEO Mark Zuckerberg, Alphabet CEO Sundar Pichai, and other tech CEOs met with lawmakers on Capitol Hill to discuss AI regulation. Lawmakers are seeking ways to mitigate the dangers of the emerging technology, which has seen a boom in investment and consumer popularity since the release of OpenAI’s ChatGPT chatbot.
Musk said there was a need for a regulator to ensure the safe use of AI. “It’s important for us to have a referee,” Musk told reporters, comparing it to sports. The billionaire, who also owns the social media platform X, added that a regulator would “ensure that companies take actions that are safe and in the interest of the general public.”
Musk called the meeting a “service to humanity” and said it “may go down in history as very important to the future of civilization.” He confirmed he had described AI as “a double-edged sword” during the forum.
Zuckerberg said Congress “should engage with AI to support innovation and safeguards. This is an emerging technology, there are important equities to balance here, and the government is ultimately responsible for that.” He added it was “better that the standard is set by American companies that can work with our government to shape these models on important issues.”
More than 60 senators took part. Lawmakers said there was universal agreement about the need for government regulation of AI. “We are beginning to really deal with one of the most significant issues facing the next generation and we got a great start on it today,” Democratic Senate Majority Leader Chuck Schumer, who organized the forum, told reporters after the meetings. “We have a long way to go.”
Republican Senator Todd Young, a co-host of the forum, said he believes the Senate is “getting to the point where I think committees of jurisdiction will be ready to begin their process of considering legislation.” But Republican Senator Mike Rounds cautioned it would take time for Congress to act. “Are we ready to go out and write legislation? Absolutely not,” Rounds said. “We’re not there.”
Lawmakers want safeguards against potentially dangerous uses of AI, such as deep-fake videos, election interference, and attacks on critical infrastructure. Other attendees included Nvidia CEO Jensen Huang, Microsoft CEO Satya Nadella, IBM CEO Arvind Krishna, former Microsoft CEO Bill Gates, and AFL-CIO labor federation President Liz Shuler.
Schumer emphasized the need for regulation ahead of the 2024 US general election, particularly around deep fakes. “A lot of things that have to be done, but that one has a quicker timetable maybe than some of the others,” he said.
In March, Musk and a group of AI experts and executives called for a six-month pause in developing systems more powerful than OpenAI’s GPT-4, citing potential risks to society. Regulators globally have been scrambling to draw up rules governing the use of generative AI, which can create text and images whose artificial origins are virtually undetectable.
On Tuesday, Adobe, IBM, Nvidia, and five other companies said they had signed President Joe Biden’s voluntary AI commitments requiring steps such as watermarking AI-generated content. The commitments, announced in July, are aimed at ensuring AI’s power is not used for destructive purposes. Google, OpenAI, and Microsoft signed on in July. The White House has also been working on an AI executive order.
© Thomson Reuters 2023