Bill Gates explains why we shouldn’t be afraid of A.I.

American philanthropist Bill Gates speaks during the Global Fund’s seventh replenishment meeting in New York, September 21, 2022.

Mandel Ngan | AFP | Getty Images

Microsoft co-founder Bill Gates is a strong believer in the potential of artificial intelligence, often reiterating his view that models like the one underpinning ChatGPT are the most important technological advance since the personal computer.

He has acknowledged that the rise of the technology could bring problems such as deepfakes, algorithmic bias and cheating in schools, but he predicts those problems can be solved.

“One thing that’s clear from all the writing on the risks of AI to date — and a lot has been written — is that no one has all the answers,” Gates wrote in a blog post this week. “The other thing that is clear to me is that the future of artificial intelligence is not as grim as some people think, nor as rosy as others believe.”

Gates’ middle-of-the-road view of the risks of artificial intelligence could shift the debate around the technology away from doomsday scenarios and toward more limited regulation of present-day risks, at a time when governments around the world are grappling with how to regulate the technology and its potential harms. On Tuesday, for example, senators received a classified briefing on artificial intelligence and the military.

Gates is one of the most prominent voices in the debate over artificial intelligence and its regulation. He also maintains close ties with Microsoft, which has invested in OpenAI and integrated the startup’s ChatGPT technology into core products, including Office.

In the blog post, Gates cites society’s responses to previous technological advances as evidence that humans have adapted to major changes before and will do the same with artificial intelligence.

“For example, it will have a major impact on education, just as the hand-held calculator did decades ago and, more recently, the introduction of computers into the classroom,” Gates wrote.

The regulation the technology needs is “speed limits and seat belts,” Gates said.

“Shortly after the first cars hit the road, the first crashes happened. But we didn’t ban cars — we adopted speed limits, safety standards, licensing requirements, drunk driving laws, and other rules of the road,” Gates wrote.

Gates is concerned about some of the challenges posed by the adoption of the technology, including how it could change people’s jobs, and about “hallucination,” the tendency of models like ChatGPT to invent facts, documents and people.

For example, he pointed to the problem of deepfakes, in which artificial intelligence models make it easy to create fake videos impersonating someone else; these could be used to scam people or to sway elections, he wrote.

But he also expects that people will get better at identifying deepfakes, and he cited deepfake detectors being developed by Intel and the government research funder DARPA. He suggested regulation that would make clear which kinds of deepfakes are legal.

He is also concerned about AI’s ability to generate code that searches for the software vulnerabilities needed to hack computers, and he has suggested creating a global regulatory body modeled on the International Atomic Energy Agency.
