Europe takes aim at ChatGPT with landmark regulation


Private companies have been left to develop AI technology at breakneck speed, giving rise to systems like Microsoft-backed OpenAI’s ChatGPT and Google’s Bard.

Lionel Wunder | AFP | Getty Images

A key committee of lawmakers in the European Parliament has approved an unprecedented artificial intelligence regulation – bringing it closer to becoming law.

The approval marks a milestone in the race among authorities to get a handle on AI, which is evolving at breakneck speed. Known as the European AI Act, the law is the first of its kind in the West to target AI systems. China has drawn up draft rules aimed at governing how companies develop generative AI products such as ChatGPT.

The law takes a risk-based approach to regulating AI, where the obligations of a system are proportional to the level of risk it poses.

The rules also set out requirements for providers of so-called “foundation models” such as ChatGPT, which have become a key concern for regulators given how advanced they are becoming and fears that even skilled workers will be displaced.

What do the rules say?

The AI Act divides applications of artificial intelligence into four risk levels: unacceptable risk, high risk, limited risk, and minimal or no risk.

Applications posing an unacceptable risk are banned by default and cannot be deployed within the bloc.

They include:

  • AI systems using subliminal techniques, or manipulative or deceptive techniques, to distort behavior
  • AI systems exploiting the vulnerabilities of individuals or specific groups
  • Biometric categorization systems based on sensitive attributes or characteristics
  • AI systems used for social scoring or evaluating trustworthiness
  • AI systems used for risk assessments predicting criminal or administrative offenses
  • AI systems creating or expanding facial recognition databases through untargeted scraping
  • AI systems inferring emotions in law enforcement, border management, the workplace and education

Several lawmakers have called for making these measures costlier to ensure they cover ChatGPT.

To that end, requirements have been imposed on “foundation models,” such as large language models and generative AI.

Developers of foundation models will be required to apply safety checks, data governance measures and risk mitigations before making their models public.

They will also be required to ensure that the training data used to inform their systems does not violate copyright law.

“Providers of such AI models will be required to take measures to assess and mitigate risks to fundamental rights, health and safety, the environment, democracy and the rule of law,” Ceyhun Pehlivan, counsel at Linklaters and co-lead of the law firm’s telecommunications, media and technology and intellectual property practice group in Madrid, told CNBC.

“They will also be subject to data governance requirements, such as checking data sources for suitability and possible bias.”

It should be emphasized that while lawmakers in the European Parliament have approved the legislation, it is still some way from becoming law.


Tech Industry Response

The rules have raised concerns in the tech industry.

The Computer and Communications Industry Association said it was concerned that the scope of the AI Act had been broadened so much that it could catch harmless forms of AI.

“It is worrying to see that broad categories of useful AI applications – which pose very limited or no risks – would now face strict requirements, and may even be banned in Europe,” Boniface de Champris, CCIA’s Europe policy manager, told CNBC via email.

“The European Commission’s original proposal for the AI Act took a risk-based approach, regulating specific AI systems that posed a clear risk,” de Champris added.

“Members of the European Parliament have now introduced various amendments that change the very nature of the AI Act, which now assumes that very broad categories of AI are inherently dangerous.”

What the experts say

Dessi Savova, continental Europe head of the technology group at law firm Clifford Chance, said the EU rules would set a “global standard” for AI regulation. However, other jurisdictions, including China, the US and the UK, are quickly developing their own responses, she added.

“The long arm of the proposed AI rules essentially means that AI players in every corner of the world need to care,” Savova told CNBC via email.

“The right question is whether the AI Act will set the single standard for AI. China, the US and the UK, among others, are developing their own approaches to AI policy and regulation. Admittedly, they will all be watching the AI Act negotiations closely as they tailor their own approaches.”

Savova added that Parliament’s latest draft AI bill would codify into law AI ethics principles that many organizations have been pushing.

Sarah Chander, senior policy advisor at European Digital Rights, a Brussels-based digital rights campaign group, said the law would require foundation models like ChatGPT to “be subject to testing, documentation and transparency requirements.”

“While these transparency requirements will not eliminate the infrastructural and economic concerns around developing these vast AI systems, they do require technology companies to disclose the amounts of computing power required to build them,” Chander told CNBC.

“There are currently several initiatives around the world to regulate generative AI, such as in China and the United States,” Pehlivan said.

“However, the EU’s AI Act is likely to play a key role in the development of such legislative initiatives around the world and position the EU once again as a standard-setter on the international stage, similar to what happened with the General Data Protection Regulation.”
