What to know about Congress’s inaugural AI meeting



This article is from The Technocrat, MIT Technology Review’s weekly tech policy newsletter about power, politics, and Silicon Valley. To receive it in your inbox every Friday, sign up here.

The US Congress is heading back into session, and it's hitting the ground running on AI. We're going to hear a lot about various plans and positions on AI regulation in the coming weeks, kicking off with Senate Majority Leader Chuck Schumer's first AI Insight Forum on Wednesday. This and planned future forums will bring together some of the top people in AI to discuss the risks and opportunities posed by advances in this technology, and how Congress might write legislation to address them.

This newsletter will break down what exactly these forums are and aren’t, and what might come out of them. The forums will be closed to the public and press, so I chatted with people at one company—Hugging Face—that did get the invite about what they are expecting and what their priorities are heading into the discussions.

What are the forums?

Schumer first announced the forums at the end of June as part of his AI legislation initiative, called SAFE Innovation. In floor remarks on Tuesday, Schumer said he’s planning for “an open discussion about how Congress can act on AI: where to start, what questions to ask, and how to build a foundation for SAFE AI innovation.” 

The SAFE framework, as a reminder, is not a legislative proposal but rather a set of priorities for AI regulation that Schumer has laid out. Those priorities include promoting innovation, supporting the American tech industry, understanding the labor ramifications of AI, and mitigating security risks. Wednesday's meeting is the first of nine planned sessions. Subsequent meetings will cover topics such as "IP issues, workforce issues, privacy, security, alignment, and many more," Schumer said in his remarks.

Who is, and isn’t, invited?

The invite list for the first forum made a splash when it became public two weeks ago. The list, first reported by Axios, numbers 22 people and is heavy on tech company executives who plan to attend, including OpenAI CEO Sam Altman, former Microsoft CEO Bill Gates, Alphabet CEO Sundar Pichai, Nvidia CEO Jensen Huang, Palantir CEO Alex Karp, X owner Elon Musk, and Meta CEO Mark Zuckerberg.

While a couple of voices from civil society and AI ethics research were included (namely, AFL-CIO president Liz Shuler and AI accountability researcher Deb Raji), observers and prominent tech policy voices were quick to criticize the list, in part for its tilt toward executives poised to profit from AI.

The inclusion of so many tech leaders can be read as a political signal meant to reassure the industry, which, for the moment, is positioned to have a lot of power and influence over AI policy.

What can we expect out of them? 

We don’t really know what the outcomes of these forums will be, and since they are closed door, we may never have full insight into the specifics of the conversations or their implications for Congress. They are expected to be listening sessions in which AI leaders help educate legislators on AI and on questions about its regulation. In his remarks on Tuesday, Schumer said that “of course, the real legislative work will come in committees, but the AI forums will give us the nutrient agar, the facts and the challenges that we need to understand in order to reach this goal.”

The forums are considered classified, but if we do get some information about what was discussed, I’ll be listening for some potential themes for US AI regulation that I highlighted back in July: fostering the American tech industry, aligning AI with “democratic values,” and dealing with (or ignoring) existing questions about Section 230 and online speech. 

How are invitees preparing? 

I exchanged some emails with Irene Solaiman, the policy director of Hugging Face, a company that builds AI development tools based on an open-source framework. Hugging Face’s CEO, Clem Delangue, is one of the 22 people heading to the forum on Wednesday. Solaiman said the company is preparing as well as it can given what she called “a firehose” of changing circumstances.

“We’re reviewing recent regulatory proposals to get a sense of Hill priorities,” said Solaiman, adding that they’re working with folks from the company’s machine learning and R&D teams to prepare.

As for Hugging Face’s policy priorities, the company wants to encourage “more research infrastructure such as the great work being done at NIST [the National Institute of Standards and Technology] and funding the NAIRR [the National AI Research Resource]” and “to ensure the open-source community work is protected and recognized for its contribution to safer AI systems.”

Of course, other companies will also have their own strategies and agendas to push to Congress, and we will have to wait and see how it all shakes out. My colleague Melissa Heikkilä will also be covering this next week, so sign up for her newsletter, The Algorithm, to follow along.

What I learned this week

Google is in hot water over its ad policies again. A report published by the Global Project Against Hate and Extremism (GPAHE) found that Google was profiting from ads purchased by extremist groups around the world, including far-right, racist, and anti-immigrant organizations in Germany, Bulgaria, Italy, France, and the Netherlands. (I recently wrote about how Google Ads has promoted and profited from AI-generated content farms.)

According to the report, “Google platformed 177 ads by far-right groups from 2019 to 2023 that were seen between a collective 55 and 63 million times before the company identified them as violative and took them down.” GPAHE reported that Google earned €62,000 to €85,000 from the ads, a sum that may be insignificant for a company of Google’s size but still points to a harmful incentive model. GPAHE also notes that its findings are not comprehensive.
