The bloc is weighing new restrictions for online material generated with artificial intelligence

FILE PHOTO: EU Vice President for Values and Transparency Vera Jourova addresses the plenary at the European Parliament in Brussels, March 25, 2021. © AP / Yves Herman
The European Union has called for new measures forcing big tech firms to clearly label any content generated with artificial intelligence, hoping to build on prior legislation related to manipulated videos, audio and photos, also known as ‘deep fakes.’
The EU’s vice president for values and transparency, Vera Jourova, advocated for stepped-up AI restrictions during a press briefing on Monday, arguing that technology companies that have integrated artificial intelligence must “build in necessary safeguards” to prevent abuses by “malicious actors.”
“Signatories who have services with a potential to disseminate AI-generated disinformation should in turn put in place technology to recognize such content and clearly label this to users,” she said, citing services offered by Microsoft and Google by name.
While EU lawmakers are now working to pass the Artificial Intelligence Act, which could impose new rules on all companies helping to create content with AI, another regulation with similar provisions has already been adopted. Passed last year, the Digital Services Act will soon force major search engines to identify any AI-manipulated material with “prominent markings,” a move aimed at cracking down on misinformation online.
Jourova went on to announce that 44 signatories to the EU’s 2022 Code of Practice on Disinformation will form a new association to consider how to address emerging technologies like AI. The code’s participants include a number of social media platforms and other tech firms, among them Google, Meta, Microsoft, TikTok, Twitch and Vimeo. Though Twitter previously took part, the company recently stepped away from the project, according to Politico, a decision slammed by the EU official.
“We believe this is a mistake from Twitter. They chose confrontation, which was noticed very much in the Commission,” Jourova continued, adding that the Elon Musk-owned platform should expect greater scrutiny from regulators.
The EU has repeatedly declared its misgivings about AI as programs like ChatGPT and DALL-E have quickly risen to prominence in recent years, with tools capable of creating highly realistic fakes now easily accessible to millions of netizens. The body has called for “tailor-made regimes” for services like OpenAI’s ChatGPT, and is now debating amendments to strengthen the Artificial Intelligence Act before it comes up for a general vote, including a classification scheme to label “high-risk” AI tools. (RT)