May 2, 2024

How Badische Zeitung wants to use artificial intelligence – and how not

Badische Zeitung has adopted rules for the use of artificial intelligence, because in addition to opportunities, the technology also carries risks. As before, people remain responsible for every line of text.

In the fall of 2022, the American company OpenAI launched its chatbot ChatGPT, sparking massive interest in generative artificial intelligence (AI). The term refers to computer programs that can create content such as text, images, videos and music in response to people's requests. The initial hype around the topic may have died down a bit by now. At the same time, it has become clear that the technology is not going away any time soon. Generative AI is poised to bring lasting change to the world of work, and therefore to journalism as well.

In 2023, we at Badische Zeitung not only reported on AI and its impact on society, the economy, education and culture, but also gathered our own experience behind the scenes. In an editorial AI lab, about 15 colleagues tested whether and how generative AI could be used in daily editorial work. The potential is great. In writing alone, the technology can help in a variety of ways, from brainstorming ideas and drafting outlines and headlines to suggesting improvements to finished texts.
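The article does not name the tools the AI lab tested, so the following is only a hypothetical sketch of this kind of writing assistance: asking a large language model for headline suggestions via the OpenAI Python client. The model name, prompt and draft text are illustrative assumptions, not BZ's actual setup.

import os
from openai import OpenAI

# Hypothetical example; the article does not say which tools or models BZ tested.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

draft = "Local council approves new bicycle lanes along the Dreisam river."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model choice, purely for illustration
    messages=[
        {"role": "system", "content": "You are an editorial assistant for a regional newspaper."},
        {"role": "user", "content": f"Suggest three concise headlines for this article summary:\n{draft}"},
    ],
)

# The suggestions are only a starting point; a journalist still reviews and rewrites them.
print(response.choices[0].message.content)

In such a workflow the model output is raw material, not finished copy: the editor picks, checks and rewrites, in line with the review rules described later in this article.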

Generative AI offers journalism not only opportunities but also risks. We have discussed these internally, and they will remain an issue as the technology advances. The result of our discussions so far is a set of guidelines we have given ourselves for using generative AI in the BZ editorial team. They are meant to help us take advantage of the opportunities the technology offers while reducing the risks. They should also give you an insight into how BZ, in print and digital, approaches generative AI.

Ethical principles remain unchanged

We did not have to reinvent the wheel for these guidelines. Ethical principles such as the Press Code already apply to journalists, who are committed to facts and honest reporting. Generative AI does not change that; on the contrary, journalistic diligence and reliable information gain value when AI texts can be generated in a few seconds – but not everything in them is true.

The problem: current language models are error-prone and have no intelligence in the human sense. They do not know anything themselves; rather, they have learned from huge amounts of training data how likely one word is to follow another in a given context. This is also how inhumane attitudes and prejudices can find their way into the systems during model training.
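To make the "probability that one word will follow another" concrete, here is a minimal sketch that prints a model's most likely next words for a short prompt. It assumes the Hugging Face transformers library and the freely available gpt2 model; both are illustrative choices only, as the article does not refer to any particular system.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model choice; the article does not name a specific system.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The newspaper published a new"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary entry, per position

# Turn the scores at the last position into probabilities for the next word.
next_word_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_word_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item()):>12}  p={prob.item():.3f}")

The model simply ranks continuations by probability; nothing in this loop checks whether a continuation is factually true, which is exactly the gap the review rules below are meant to close.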

AI content must be reviewed by humans before publication

To avoid publishing incorrect AI content, such material currently has to be checked by trained journalists before publication. They also remain responsible for the content, even if they used AI tools to create it. Articles that consist largely of AI-generated material must also be clearly labeled.

In principle, AI-generated graphics should be labeled – and there is one case we rule out regardless of the state of the art: we do not publish AI-generated images in the style of real photos or videos. The only exception is when the AI-generated material is itself the subject of reporting because it has been published elsewhere – such as the fake image of the Pope in a white puffer jacket.

What you see in Badische Zeitung must correspond to reality, just like our texts.

Apart from that, we are open to the technology. But one thing is also clear: it cannot replace journalism. An artificial intelligence that reliably researches on the ground in South Baden, listens to people and puts information into context is nowhere in sight.

Guidelines for using generative AI in the BZ editorial team

Opportunities and risks

Generative AI poses a huge challenge for us as an editorial team and publisher, just as it does for society as a whole. Used correctly, however, it offers many opportunities for modern, high-quality journalism. At the same time, risks – such as susceptibility to errors and the potential for manipulation – must be carefully considered.

Responsibility
Credibility is our greatest asset. According to the Press Law, we are committed to truth in our reporting, to preserving human dignity and to informing the public truthfully. We must check information carefully before publishing it and present it truthfully. The possible use of generative AI changes nothing about these principles. The journalists in the BZ editorial team are responsible for upholding them. Generative AI supports journalism, but does not replace it.

Control and transparency
If we publish articles containing AI-generated material, we do so only after verification by trained staff and with clear labeling.

Photos and videos
We do not publish AI-generated images that could be confused with real photos or videos, unless they have already been published elsewhere and are themselves the subject of reporting. We do not use AI image editing to alter photos or videos that depict reality. We use AI-generated illustrations, animations or other artificial representations only in exceptional cases and with appropriate labeling.

Data protection and laws
We do not enter non-public information and data, such as confidential source names, trade secrets or passwords, into cloud AI systems. All generative AI systems used by the BZ editorial team must comply with applicable legal requirements.
