The White House is hosting the first-ever meeting of chief executives of companies developing artificial intelligence on Thursday, as the boom in AI-powered chatbots spurs growing calls to regulate the technology.
Vice President Kamala Harris and other officials will meet with leaders from Google, Microsoft, OpenAI, the maker of the popular ChatGPT chatbot, and the AI start-up Anthropic to discuss the technology.
The White House said companies have a responsibility to address the risks of new AI development. “We aim to have an open discussion about the risks we each see in current and near-term AI development, measures to mitigate those risks, and other ways we can work together to ensure the American people benefit from AI’s advances while being protected from AI’s harmful effects,” Arati Prabhakar, the White House director of science and technology policy, said in an invitation to the meeting obtained by The Times.
Hours before the meeting, the White House announced that the National Science Foundation plans to spend $140 million on new research centers dedicated to AI, and that several AI companies had agreed to make their products available for review at a cybersecurity conference in August.
The White House is under increasing pressure to police AI capable of crafting sophisticated text and lifelike images. An explosion of interest in the technology began last year when OpenAI released ChatGPT to the public, and people immediately began using it to search for information, do schoolwork and help them with their jobs. Since then, some big tech companies have accelerated their AI research by incorporating chatbots into their products, while venture capitalists have poured money into AI start-ups.
But the AI boom has also raised questions about how the technology will change the economy, shake up geopolitics and enable crime. Critics have worried that many AI systems are opaque yet extremely powerful, with the potential to make biased decisions, displace people in their jobs, spread misinformation and even break the law on their own.
President Biden recently said that it “remains to be seen” whether AI is dangerous, and some of his top appointees have pledged to intervene if the technology is used in harmful ways.
Spokespeople for Google, Microsoft and OpenAI declined to comment ahead of the White House meeting. A spokesperson for Anthropic confirmed that the company would attend.
The announcements build on earlier attempts by the administration to put safeguards on AI. Last year, the White House released an AI bill of rights, which said that automated systems should protect users’ data privacy, shield them from discriminatory outcomes and make clear why certain actions were taken. In January, the Commerce Department also released a framework for reducing risk in AI development, which had been in the works for several years.
The introduction of chatbots such as ChatGPT and Google’s Bard has put enormous pressure on governments to act. The European Union, which was already negotiating regulations for AI, has faced new demands to regulate a broader swath of the technology, rather than only systems deemed inherently high-risk.
In the United States, members of Congress, including Senator Chuck Schumer of New York, the majority leader, have moved to draft or propose legislation to regulate AI. But concrete steps to rein in the technology may first come from law enforcement agencies in Washington.
A group of government agencies pledged in April to “monitor the development and use of automated systems and encourage responsible innovation,” while punishing violations of the law using the technology.
In a guest essay in The Times on Wednesday, Lina Khan, the head of the Federal Trade Commission, said the nation is at a “tipping point” with AI, comparing recent advances in the technology to the birth of tech giants like Google and Facebook. Without proper regulation, she warned, the technology could entrench the power of the biggest tech companies and give fraudsters a potent tool.
“As the use of AI becomes more widespread, public authorities have a responsibility to ensure that this hard-learned history does not repeat itself,” she said.