Angela Müller of Algorithm Watch called it a “valuable, charismatic signal” that the Council of Europe has recognized that rules on dealing with AI are essential to protecting human rights. But the convention leaves a “bitter aftertaste” because it does not live up to these goals. “Although there is no shortage of evidence about how technology companies influence the formation of public opinion through social media algorithms or deepfake generators, the Council of Europe leaves it up to countries whether they want to adopt soft measures or binding laws,” Müller told the German news agency DPA. “It trusts that pure self-regulation by companies will be sufficient to protect human rights and democracy.”
Data Protection Officer: Red lines are missing
The European Data Protection Supervisor had already warned in the final stage of negotiations in March that the agreement might become a “missed opportunity.” The main criticism was that the draft lacked red lines for certain AI applications. There are also concerns that the agreement is worded so broadly that it will be applied inconsistently.
In the coming years, Germany must implement both the EU AI Act and the Council of Europe AI Convention and transpose them into national law. “The federal government can, at least in part, make up for the omissions of the Council of Europe and the EU by banning some AI applications, such as facial recognition in public spaces,” says Algorithm Watch's Müller.