OpenAI removes AI safety executive Aleksander Madry from role, reassigns him to AI reasoning

OpenAI last week removed Aleksander Madry, one of its top safety executives, from his role and reassigned him to a job focused on AI reasoning, people familiar with the matter confirmed to CNBC.

Madry was the head of OpenAI's preparedness team, which is “responsible for tracking, assessing, predicting and helping to prevent catastrophic risks associated with cutting-edge artificial intelligence models,” according to a bio for Madry.

OpenAI told CNBC that Madry will still work on core AI safety efforts in his new role.

Madry, who also serves as director of MIT's Center for Deployable Machine Learning and faculty co-lead of the MIT AI Policy Forum, is currently on leave from those roles, according to MIT's website.

Less than a week before the decision to reassign Madry, a group of Democratic senators sent a letter to OpenAI CEO Sam Altman with questions about how OpenAI is addressing emerging safety concerns.

The letter, sent on Monday and viewed by CNBC, also said: “We seek additional information from OpenAI about the steps the company is taking to meet its public commitments on safety, how the company internally evaluates its progress against those commitments, and the company's identification and mitigation of cybersecurity threats.”

OpenAI did not immediately respond to a request for comment.

Lawmakers gave OpenAI until August 13 to provide answers to a series of specific questions about its safety practices and financial commitments.

Safety concerns and controversy have grown around OpenAI this summer. The company, along with Google, Microsoft, Meta and others, is leading a generative AI arms race, a market expected to top $1 trillion in revenue within a decade, as companies in seemingly every industry scramble to add AI-powered chatbots and agents to avoid being left behind by competitors.

Earlier this month, Microsoft gave up its observer seat on OpenAI's board of directors, saying in a letter seen by CNBC that it could now step aside because it was satisfied with the structure of the startup's board, which had been restructured in the eight months since the boardroom uprising.

But last month, a group of current and former OpenAI employees published an open letter describing concerns about the artificial intelligence industry's rapid advancement despite a lack of oversight and an absence of whistleblower protections for those willing to speak out.

“AI companies have strong financial incentives to avoid effective oversight, and we believe customized corporate governance structures are insufficient to change this,” employees wrote at the time.

Days after the letter was published, a source familiar with the matter confirmed to CNBC that the Federal Trade Commission and the Department of Justice were set to open antitrust investigations into OpenAI, Microsoft and Nvidia, focusing on the companies' conduct.

FTC Chair Lina Khan described her agency's action as “a market inquiry into the investments and partnerships being formed between AI developers and major cloud service providers.”

Current and former employees wrote in the June letter that AI companies have “a wealth of non-public information” about what their technology can do, the extent of the safety measures they have put in place, and the risk levels the technology poses for different kinds of harm.

“We also understand the serious risks posed by these technologies,” they wrote, adding that the companies “currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”

In May, OpenAI decided to disband its team focused on the long-term risks of artificial intelligence, just one year after announcing the group, a person familiar with the matter confirmed to CNBC at the time.

The person, who spoke on condition of anonymity, said some team members were being reassigned to other teams within the company.

The team was led by OpenAI co-founder Ilya Sutskever and Jan Leike, and it was disbanded after both announced in May that they were leaving the startup. Leike wrote in a post on X that OpenAI's “safety culture and processes have taken a back seat to shiny products.”

CEO Sam Altman said at the time that he was sad to see Leike leave and that the company had more work to do. Soon after, OpenAI co-founder Greg Brockman posted a statement attributed to both Brockman and Altman on X.

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X at the time. “However, I had been at odds with OpenAI leadership over the company's core priorities for quite some time, until we finally reached a breaking point.”

Leike wrote that he believes much more of the company's bandwidth should be spent on security, monitoring, preparedness, safety and societal impact.

“These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there,” he wrote. “Over the past few months my team has been sailing against the wind. At times we struggled for [computing resources], and it was getting harder and harder to get this crucial research done.”

Leike added that OpenAI must become a “safety-first AGI company.”

“Building machines that are smarter than humans is inherently dangerous work,” he wrote at the time. “OpenAI has a huge responsibility on behalf of all of humanity. But over the past few years, safety culture and processes have taken a back seat to shiny products.”

The Information first reported on Madry's reassignment.
