How do you check whether information is real and trustworthy – especially information posted online or on social media?
The ability to manipulate videos and images with the help of artificial intelligence (AI) makes clear answers more difficult. Researchers at the Karlsruhe Institute of Technology (KIT) examined the potential risks of deepfake technology on behalf of the European Parliament and developed options for better regulation. Together with partners from the Netherlands, the Czech Republic and Germany, they formally presented the results of their study to the members of the European Parliament.
Deepfakes are increasingly realistic images, audio recordings or videos in which people are placed into new contexts with the help of artificial intelligence techniques, or have words put into their mouths that were never said that way. “We are dealing with a new generation of digitally manipulated media content that has been cheaper and easier to create for several years now and, above all, can look deceptively real,” says Dr. Jutta Jahnel, who works at the Institute for Technology Assessment and Systems Analysis (ITAS) at KIT and is concerned with the social dimension of learning systems. The technology certainly opens up new possibilities for artists, for digital visualizations in schools or museums, and as an aid in medical research.
At the same time, however, deepfakes carry significant risks, as the international study now submitted to the European Parliament’s STOA (Science and Technology Options Assessment) panel shows. “The technology can be misused to spread fake news and disinformation very effectively,” said Jahnel, who coordinated ITAS’s contribution to the study. For example, forged audio recordings could be used to influence or discredit legal proceedings and ultimately threaten the judicial system. A faked video could not only harm a politician personally, but also damage his or her party’s chances of being elected and, ultimately, destroy trust in democratic institutions as a whole.
Critical handling of media content
Researchers from Germany, the Netherlands and the Czech Republic suggest specific solutions. Because of rapid technological progress, regulation should not be limited to the development of the technology itself. “In order to manipulate public opinion, counterfeits must not only be produced but, above all, distributed,” Jahnel explains. “When it comes to rules for dealing with deepfakes, we should therefore start with internet platforms and media companies.” However, AI-based deepfake techniques cannot be completely eliminated in this way. On the contrary, the researchers are convinced that individuals and societies will encounter more and more visual misinformation in the future. It is therefore imperative that we deal with such content more critically and develop skills that help us question the credibility of media content. On the German side, in addition to ITAS, the Fraunhofer Institute for Systems and Innovation Research ISI contributed to the study; in the Netherlands, the Rathenau Instituut acted as project coordinator, and in the Czech Republic the Technology Centre of the CAS took part.
Further empirical study at KIT on social responses to deepfakes
Building on the European study, an interdisciplinary project at KIT is currently investigating what effective social responses to deepfakes might look like. In addition to technology assessment, experts in computer science, communication studies, law and qualitative social research at KIT are working together. The goal is to combine knowledge and approaches from different disciplines. In particular, the pilot study aims to examine the users’ perspective more closely.