We need to keep CEOs away from AI regulation

The writer is director of international policy at Stanford University’s Cyber Policy Center and serves as special adviser to Margrethe Vestager

Tech companies recognize that the race for AI dominance depends not only on the market, but also on Washington and Brussels. The rules governing the development and deployment of their AI products will have existential implications for them, but are currently up in the air. So executives are trying to get a head start and set the tone, arguing that they are best placed to oversee the technology they produce. AI may be novel, but the arguments sound familiar: they are the same ones Mark Zuckerberg made for social media and Sam Bankman-Fried made for cryptocurrencies. Such claims should not distract democratic lawmakers again.

Imagine the JPMorgan CEO explaining to Congress that because financial products are too complex for lawmakers to understand, banks should decide for themselves how to prevent money laundering, enable fraud detection, and set liquidity-to-lending ratios. He would be laughed out of the room. Angry voters would point out how well self-regulation fared in the global financial crisis. From Big Tobacco to Big Oil, we have learned the hard way that businesses cannot set disinterested regulations. They are neither independent nor capable of creating countervailing forces to their own power.

Somehow, this fundamental truth has been lost when it comes to artificial intelligence. Lawmakers eagerly defer to companies and want their guidance on regulation; senators even asked OpenAI CEO Sam Altman to name potential industry leaders to oversee a putative national AI regulator.

In industry circles, calls for AI regulation have taken on an apocalyptic tone. Scientists warn that their creations are so powerful they could spin out of control. A recent letter, signed by Altman and others, warned that AI poses a threat to human survival akin to nuclear war. You would think such fears would spur executives to act, but despite signing, few have changed their own behavior. Perhaps framing how we think about AI guardrails is the real goal: our answers to questions about what kind of regulation is needed depend heavily on how we understand the technology itself. These statements focus attention on AI's existential risk. But critics argue that prioritizing the prevention of a distant catastrophe overshadows much-needed work against the discrimination and bias the technology enables today.

Warnings about the catastrophic risks of AI, voiced by the very people who could stop pushing their products onto society, are bewildering. The open letter makes its signatories look powerless, reduced to desperate appeals. But those sounding the alarm already have the power to slow or pause the development of potentially dangerous AI.

Former Google CEO Eric Schmidt insists that only companies are capable of developing guardrails, while governments lack the expertise. But lawmakers and regulators are not experts in farming, fighting crime, or prescribing drugs either, yet they regulate all of these. They certainly should not be discouraged by the complexity of AI — if anything, it should spur them to take responsibility. Schmidt inadvertently points to the first task: breaking the industry's monopoly on access to proprietary information. With independent research, realistic risk assessments, and enforcement of existing regulations, the debate over whether new measures are needed can be grounded in facts.

Executives’ actions speak louder than their words. Just days after Sam Altman welcomed AI regulation in congressional testimony, he threatened to pull OpenAI’s operations out of Europe because of it. When he realized EU regulators did not take kindly to the threat, he switched back to a charm offensive, promising to open offices in Europe.

Lawmakers must remember that business leaders are primarily concerned with profit, not social impact. Now is the time to move past the pleasantries and define concrete goals and methods for AI regulation. Policymakers must not let tech CEOs shape and control the narrative, let alone the process.

A decade of technological disruption has highlighted the importance of independent oversight. This principle is even more important when power in technologies such as artificial intelligence is concentrated in a few companies. We should listen to those in power, but never take their word for it. Their grand claims and ambitions should prompt regulators and lawmakers to act according to their own expertise: the expertise of the democratic process.
