The right response to AI is more mundane than existential dread

This article is an on-site version of Martin Sandbu’s Free Lunch newsletter. Sign up here to get the newsletter delivered straight to your inbox every Thursday.

When ChatGPT and other AI programs were unveiled to an unsuspecting public a few months ago, a wave of astonishment ensued, and with it a plethora of concerns about where the dizzying development of software capabilities will take human society — including from people very close to the action.

Last month, the AI investor Ian Hogarth insisted in the Financial Times’ weekend magazine that “we must slow down the race to god-like AI”. A few weeks later, Geoffrey Hinton, dubbed the “godfather” of artificial intelligence, left Google so that he could express his concerns freely, including in an interview with The New York Times. The professor and AI entrepreneur Gary Marcus worries about “what bad actors can do with this stuff”. Just today, the Financial Times interviewed the AI pioneer Yoshua Bengio, who fears AI will “undermine democracy”. Meanwhile, a host of AI investors and experts have called for a “pause” in further development of the technology.

Call me naive, but I find myself unable to share the agitation. Not because I doubt that AI will change the way we live, and especially the structure of our economy; of course it will. (Check out this list for the many ways people are already using artificial intelligence.) Rather, it is because I find it hard to see how even the worst-case scenarios the experts warn us about are inherently different from the big problems humans have already managed to create, and have had to try to fix, all on our own.

Take Hogarth’s example of an AI chatbot driving someone to suicide. In the 18th century, reading Goethe’s The Sorrows of Young Werther was said to have the same effect. Whatever conclusions we should draw from that, an existential danger posed by AI is not one of them.

Or take Hinton, whose “biggest fear is that the internet will be so flooded with fake photos, videos and words that ordinary people will ‘no longer be able to know what’s real’”. Losing the ability to know what is true is a fear all the aforementioned thinkers seem to share. But lying and manipulation, not least in our democratic processes, are problems humans are perfectly capable of creating without any help from artificial intelligence. A quick glance at some of the views held by large segments of the American public shows that compromised access to the truth is (to put it mildly) nothing new. Generative AI’s ability to create deepfakes does, of course, mean we will have to become more critical of what we see, and unscrupulous politicians will allege deepfakes in order to deflect damaging revelations about themselves. But then Donald Trump did not need artificial intelligence to dismiss his critics’ accusations as “fake news” in 2017.

So I think the air of existential dread created by the latest AI breakthroughs is a distraction. We should instead be thinking at a more mundane level. Marcus draws a good analogy with building codes and standards for electrical installations: that, rather than attempts to slow down technological development itself, is where the policy debate should be.

Policymakers, especially economic policymakers, should address two problems that stand out because they are the most actionable.

The first is who should be held accountable for the decisions made by AI algorithms. It should be easy to accept the principle that we should not permit decisions made by AI that we would not permit (or would not want to permit) if they were made by human decision makers. Admittedly, we are terrible at this: we let corporate structures get away with things we do not allow individuals. But with AI in its infancy, we have an opportunity to rule out from the start any impunity for actual human beings based on a “the AI did it” defense. (This argument is not limited to artificial intelligence, by the way: we should treat unintelligent computer algorithms the same way.)

This approach encourages legislative and regulatory efforts not to get mired in the technology itself but to focus on its specific uses and the harms they cause. In most cases it does not matter whether a harm was brought about by an AI decision or a human one; what matters is that harmful decisions are deterred and punished. Daniel Dennett, writing in The Atlantic, exaggerated when he warned that AI’s ability to create fake digital humans “has the potential to destroy our civilization”. But he makes a good point: if executives at the tech companies developing artificial intelligence could face jail time for their technology being used to facilitate fraud, they would quickly ensure that the software contained signatures making it easy to detect whether we are communicating with an AI.

The AI Act currently going through the EU’s legislative process seems to get this right: identifying specific uses of AI to be banned, restricted or regulated; imposing transparency about when AI is being used; ensuring that rules that apply elsewhere also apply to uses of AI, for example copyright in the artworks on which an AI can be trained; and making clear where responsibility lies, for example with the developers of AI algorithms or with their users.

The second big question policymakers should concern themselves with is the distributional consequences of the productivity gains AI should ultimately bring. Much will depend on intellectual property, which ultimately comes down to who controls access to the technology (and can charge for that access).

Because we do not know how AI will be used, it is hard to know how many valuable uses will be controlled and monetized. It is therefore useful to think in terms of two extremes. At one extreme is a fully proprietary world, in which the most useful AI is the intellectual property of the companies that create the technology. Given the vast resources needed to create usable AI, there will be a handful of them at most. As an effective monopoly or oligopoly, they will be able to charge high license fees and capture the productivity gains AI generates.

At the other extreme is an open-source world, in which AI technology requires little investment to run, so any attempt to restrict access simply prompts the creation of a free open-source competitor. If the author of the leaked Google “We Don’t Have a Moat” memo is right, the open-source world is the one we are heading for. Rebecca Gorman of Aligned AI made the same argument in a letter to the Financial Times. In that world, anyone with the wit or motivation to deploy AI will reap its productivity gains, and tech companies will see their products commoditized and their prices driven down by competition.

I do not think it is possible to know right now which extreme we will end up closer to, for the simple reason that it is impossible to imagine all the ways AI will be used, and therefore exactly which technologies will be required. But I will make two observations.

One is to look at the internet: its protocols were designed to be accessible to all, and its languages are, of course, open source. Yet that has not stopped big tech companies from trying, often successfully, to create “walled gardens” with their products and thereby extract economic rents. If anything, then, it is the concentration of economic power and rewards to which the AI revolution could lead that we should worry about.

Second, where we end up will partly be the result of the policy choices we make today. To promote an open-source world, governments could legislate for greater transparency of, and access to, the technologies tech companies develop, in effect turning the proprietary into the open source. Among the tools worth considering, especially for mature technologies or AI applications rapidly adopted by many companies or users, are mandatory licensing (at a stated price) and requirements to release source code.

After all, the big data on which any successful AI is trained is generated by all of us. The public has a strong claim to the fruits of its data labor.

Other reading

  • Tobias Gehrke and Julian Ringhof of the European Council on Foreign Relations have written an important analysis of how the EU must update its thinking on strategic trade policy.

  • The digital euro project is making steady progress but still needs to win broad public support.

  • The European Commission is setting up a register of the costs of Russia’s attack on Ukraine. As a formal multilateral initiative, it should make it easier to hold Russia financially responsible for the damage it has wrought, including through the eventual seizure of its assets.

  • The EU’s new joint gas procurement platform did better than expected in its first tender.
