Paper clips, parrots and safety vs. ethics

Sam Altman, CEO and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing in Washington, D.C., U.S., Tuesday, May 16, 2023. With products like ChatGPT on the rise, Congress is debating the potential and pitfalls of AI, the future of the creative industries and the ability to tell fact from fiction.

Eric Lee | Bloomberg | Getty Images

Last week, OpenAI CEO Sam Altman dined with a room of politicians in Washington, D.C., before testifying for nearly three hours at a Senate hearing on the potential risks of artificial intelligence.

After the hearing, he summarized his stance on AI regulation, using terms that are not widely known to the general public.

“AGI safety is really important and frontier models should be regulated,” Altman tweeted. “Regulatory capture is bad, and we shouldn’t mess with models below the threshold.”

In this context, “AGI” refers to “artificial general intelligence.” As a concept, it’s used to mean an AI significantly more advanced than is currently possible, one that could do most things as well as or better than most humans, including improving itself.

“Frontier models” is a way of talking about the AI systems that are the most expensive to produce and that analyze the most data. Large language models, such as OpenAI’s GPT-4, are frontier models, as compared with smaller AI models that perform specific tasks such as recognizing cats in photos.

Most agree that laws governing AI will need to be developed as the pace of development picks up.

“Machine learning, deep learning, in the last 10 years or so, it’s evolved very rapidly. When ChatGPT came along, it evolved in ways that we never thought possible, how fast it could develop,” said My Thai, a professor of computer science at the University of Florida. “We worry that we are racing to a more robust system that we don’t yet fully understand and predict what it can do.”

But the language surrounding the debate reveals two broad camps among academics, politicians and the technology industry. Some are more concerned about what they call “AI safety.” The other camp is worried about what they call “AI ethics.”

When Altman spoke to Congress, he mostly avoided jargon, but his tweet suggests he is mainly concerned with AI safety, the stance shared by the leaders of companies like OpenAI, which Altman runs, Google’s DeepMind and many well-capitalized startups. They fear the possibility of building an unfriendly AGI with unimaginable power, and this camp believes governments need to pay urgent attention to regulating development and preventing a premature end to humanity, an effort similar to nuclear nonproliferation.

“It’s great to hear that so many people are starting to take AGI safety seriously,” Mustafa Suleyman, co-founder of DeepMind and now CEO of Inflection AI, tweeted on Friday. “We need to be ambitious. The Manhattan Project cost 0.4% of U.S. GDP. Imagine what an equivalent program for safety could achieve today.”

But much of the discussion in Congress and the White House on regulation has been through an AI ethics lens, focusing on the harms at hand.

From this perspective, the government should require transparency around how AI systems collect and use data, restrict their use in areas subject to anti-discrimination law, such as housing and employment, and explain how current AI technology falls short. The White House’s AI Bill of Rights proposal from late last year included many of these concerns.

One representative of this camp, IBM chief privacy officer Christina Montgomery, testified before Congress that she believes every company working on these technologies should have an “AI ethics” point of contact.

“There must be clear guidance on AI end uses, or categories of AI-supported activity, that are inherently high-risk,” Montgomery told Congress.

How to understand AI jargon like an insider

It’s no surprise that the debate around AI has developed its own jargon; the field started out as a technical academic discipline.

Much of the software discussed today is based on so-called large language models (LLMs), which use graphics processing units (GPUs) to predict statistically likely sentences, images, or music, a process called “inference.” Of course, AI models first need to be built in a data analysis process called “training.”
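To make that distinction concrete, here is a minimal sketch of the inference step, assuming the open-source Hugging Face transformers library and the small GPT-2 model as stand-ins for far larger frontier systems; the underlying idea of predicting a statistically likely continuation is the same.

```python
# A minimal sketch of "inference": asking a pretrained language model to
# predict a statistically likely continuation of a prompt.
# Assumes the Hugging Face `transformers` library is installed; GPT-2 is a
# small, open stand-in for much larger frontier models.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The heavy lifting (the "training" step) already happened when GPT-2 was
# built; here we only run inference on the finished model.
result = generator("Artificial general intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])
```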

But other terms, especially from AI safety proponents, are more cultural in nature and often refer to shared references and jokes.

For example, AI safety people might say they’re worried about turning into a paper clip. That refers to a thought experiment popularized by philosopher Nick Bostrom, which posits that a super-powerful AI, a “superintelligence,” could be given the task of making as many paper clips as possible, and would logically decide to kill humans and make paper clips out of their remains.

OpenAI’s logo was inspired by this story, and the company even made a paper clip in the shape of its logo.

Another AI safety concept is the “hard takeoff” or “fast takeoff,” the idea that if someone succeeds in building an AGI, it will already be too late to save humanity.

Sometimes the idea is described with an onomatopoeia, “foom,” especially among critics of the concept.

“It’s like you believe in the ridiculous hard take-off ‘foom’ scenario, which makes it sound like you have zero understanding of how everything works,” tweeted Yann LeCun, head of AI at Meta, who is skeptical of AGI claims, during a recent debate on social media.

AI ethics also has its own jargon.

In describing the limitations of current LLM systems, which cannot understand meaning and can only produce human-seeming language, AI ethicists often compare them to “stochastic parrots.”

The analogy, coined by Emily Bender, Timnit Gebru, Angelina McMillan-Major and Margaret Mitchell in a paper some of the authors wrote while they were at Google, emphasizes that while sophisticated AI models can produce realistic-seeming text, the software does not understand the concepts behind the language, just as a parrot does not.
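As a rough illustration of the idea (and not the paper’s own method), the toy sketch below builds a tiny bigram model in Python that “parrots” word patterns it has counted in a miniature corpus, with no notion of what any word means.

```python
# An illustrative toy: a bigram model that "parrots" statistically observed
# word patterns from a tiny corpus without any understanding of meaning.
import random
from collections import defaultdict

corpus = "the parrot repeats the words the parrot hears".split()

# Count which words follow which.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

# Generate text by sampling from the observed continuations.
word, output = "the", ["the"]
for _ in range(6):
    word = random.choice(next_words.get(word, corpus))
    output.append(word)
print(" ".join(output))
```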

When these LLMs invent incorrect facts in their responses, they are said to be “hallucinating.”

One topic IBM’s Montgomery brought up at the hearing was “explainability” in AI results. It means that when researchers and practitioners cannot point to the exact numbers and operational pathways that larger AI models use to derive their output, that opacity could hide inherent biases in the LLMs.

“You have to have explainability around the algorithm,” said Adnan Masood, AI architect at UST-Global. “Previously, if you looked at the classical algorithms, it tells you, ‘Why am I making that decision?’ Now with the larger models, they’re becoming this huge model, they’re a black box.”
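Masood’s contrast can be sketched in a few lines, assuming the scikit-learn library: a small decision tree is the kind of “classical algorithm” whose decision path can be printed and inspected, which is exactly what a black-box LLM does not offer.

```python
# A minimal sketch of an "explainable" classical model: a shallow decision
# tree whose decision rules can be printed and inspected.
# Assumes scikit-learn is installed.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# The fitted tree can literally "tell you why it made that decision."
print(export_text(tree, feature_names=list(data.feature_names)))
print("feature importances:", tree.feature_importances_)
```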

Another important term is “guardrails,” which encompasses the software and policies that Big Tech companies are now building around AI models to ensure they don’t leak data or produce disturbing content, which is often called “going off the rails.”

It can also refer to specific applications that keep AI software from going off topic, like Nvidia’s “NeMo Guardrails” product.
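A hand-rolled sketch of the general idea (not Nvidia’s product or any company’s actual implementation) might look like the wrapper below, which refuses prompts that touch disallowed topics before they ever reach the model.

```python
# A simplified, hypothetical "guardrail": a wrapper that screens prompts
# against a blocklist before handing them to a text-generation function.
# The topics and refusal message are illustrative only.
BLOCKED_TOPICS = ("credit card number", "social security number")

def guarded_generate(prompt, generate):
    """Refuse prompts that touch disallowed topics; otherwise call the model."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that request."
    return generate(prompt)

# Usage with any callable that maps a prompt string to a reply string:
print(guarded_generate("What is my neighbor's social security number?",
                       lambda p: "(model output)"))
```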

“Our AI Ethics Committee plays a key role in overseeing internal AI governance processes, putting in place sensible guardrails to ensure we introduce technology into the world responsibly and safely,” Montgomery said this week.

Sometimes these terms can have multiple meanings, as in the case of “emergent behavior.”

A recent Microsoft Research paper called “Sparks of Artificial General Intelligence” claimed to identify several “emergent behaviors” in OpenAI’s GPT-4, such as the ability to draw animals using a graphical programming language.

But it can also describe what happens when simple changes are made at very large scale, like the patterns birds form when flying in flocks, or, in AI’s case, what happens when ChatGPT and similar products are used by millions of people, such as widespread spam or disinformation.
