OpenAI chief set to call for greater regulation of artificial intelligence

OpenAI CEO Sam Altman will tell U.S. lawmakers on Tuesday that artificial intelligence regulation must allow companies to flexibly adapt to new technological developments as the industry faces increasing scrutiny from regulators around the world.

Altman, whose company created the AI chatbot ChatGPT, will testify before Congress for the first time on Tuesday, saying that “regulation of AI is essential.”

His comments come as regulators and governments around the world intensify scrutiny of the fast-moving technology amid growing concerns about its potential abuse.

According to prepared remarks released ahead of the hearing, Altman will tell the Senate Judiciary Subcommittee on Privacy, Technology and the Law that he is “eager to help policymakers determine how to facilitate regulation that balances incentivizing safety while ensuring that people are able to access the technology’s benefits.”

EU lawmakers last week agreed a tough set of rules on the use of artificial intelligence, including restrictions on chatbots such as ChatGPT, as Brussels pushes for the world’s strictest regime for technological development.

Earlier this month, both the U.S. Federal Trade Commission and Britain’s competition watchdog issued warnings to the industry. The FTC said it was “extremely concerned about how companies choose to use AI technologies,” while the UK’s Competition and Markets Authority said it plans to review the AI market.

Altman’s testimony will recommend that AI companies adhere to “an appropriate set of security requirements, including internal and external testing prior to release” as well as licensing or registration conditions for AI models.

However, he will caveat those safety requirements by stressing that AI companies should have governance regimes “flexible enough to adapt to new technological developments.”

The rapid development of generative artificial intelligence that can produce convincingly human-like writing has raised alarm among some AI ethicists over the past six months.

In March, Elon Musk and more than 1,000 technology researchers and executives signed a letter calling for a six-month moratorium on training AI language models more powerful than GPT-4, the technology underpinning OpenAI’s chatbot. Earlier this month, artificial intelligence pioneer Geoffrey Hinton left Google after a decade at the tech giant so he could speak freely about the risks of the technology, which he warned could widen social divisions and be exploited by bad actors.

“Artificial intelligence will be transformative in ways we could never have imagined, impacting American elections, jobs and security,” Republican Senator Josh Hawley said in a statement ahead of the hearing.

Christina Montgomery, IBM’s vice president and chief privacy and trust officer, and Gary Marcus, a professor emeritus at New York University, will also testify Tuesday.

“Artificial intelligence urgently needs rules and safeguards to address its enormous promise and pitfalls,” Senator Richard Blumenthal, D-Conn., chairman of the subcommittee, said in a statement.

“This hearing begins our subcommittee’s work in overseeing and illuminating AI’s advanced algorithms and powerful techniques . . . as we explore sensible standards and principles to help us navigate this uncharted territory,” he added.

Additional reporting by Madhumita Murgia
