EU wants Google, Facebook to start labeling AI-generated content – POLITICO


BRUSSELS — In a bid to clamp down on disinformation online, the European Commission wants tech companies like Google, Facebook and TikTok to start labeling content created by artificial intelligence without waiting for digital laws to come into effect.

Text, video and audio created and manipulated by artificial intelligence tools like ChatGPT and DALL-E have been increasingly spreading online. The Commission is now calling on dozens of large tech companies that are part of its voluntary anti-disinformation charter to make it easier for people to distinguish facts from fiction.

“Signatories who have services with a potential to disseminate AI-generated disinformation should in turn put in place technology to recognize such content and clearly label this to users,” Vice President for Values and Transparency Věra Jourová said Monday, as previously reported in Brussels Playbook.

Very large online platforms and search engines like Meta, Twitter and TikTok will have to identify generated or manipulated images, audio and video known as deepfakes with “prominent markings” as soon as August 25 under the Digital Services Act (DSA) or face sweeping multimillion-euro fines. Meanwhile, the European Parliament is pushing for a similar rule to apply to all companies generating AI content, including text, as part of the Artificial Intelligence Act, which could come into force as soon as 2025.

Jourová also wants companies like Microsoft and Google to build safeguards into their services, including Bard and Bing Chat, so that bad actors can’t use so-called generative AI for harm. She said Google’s CEO Sundar Pichai told her his company was currently developing such technologies.

She added that the 44 participants of the Code of Practice on Disinformation — including social media companies, fact-checking groups and advertising associations — will start a new group to discuss how to best respond to the new technologies.

Jourová also slammed Twitter for leaving the voluntary code just a few months before the DSA comes into force.

“We believe this is a mistake from Twitter,” she said. “They chose confrontation, which was noticed very much in the Commission.”

Participants of the code will have to release reports in mid-July with detailed analyses about how they’ve stopped falsehoods from spreading on their networks and their plans to limit potential misinformation from generative AI.

Jakob Hanke Vela and Mark Scott contributed reporting.


