November 5, 2024

TechNewsInsight


Integrity of science: “The human share is declining significantly.”


On the trail of the undeclared use of artificial intelligence and fake facts

"At the text level, detecting forgeries is much more difficult than detecting faked image, audio or video data. This is because texts have very low dimensionality: a full high-resolution image can contain more than 2 million pixels, while this sentence contains fewer than 100 characters. Recognizing AI-generated texts is therefore a particular challenge," Nicolas Müller of the Cognitive Security Technologies department at the Fraunhofer Institute for Applied and Integrated Security (AISEC) told Research & Teaching. The program "Alles Wissen" (March 7) by Hessischer Rundfunk showed how easily and quickly a fake post can be created using AI.

“Detecting counterfeits at the text level is much more difficult than detecting image, audio or video data.”
Dr. Nicolas Müller, Fraunhofer Institute for Applied and Integrated Security

Müller is critical of the fact that AI is currently being used in research and teaching in a very unstructured way and without sufficient background knowledge of how it works: "AI can, for example, provide support in spelling and phrasing, but also in translation or transcription. What is worrying, however, is the trend that homework or seminar papers are sometimes written by schoolchildren or students using ChatGPT, which undermines learning success. Generative AI can also 'hallucinate', i.e. freely invent facts and truths."

According to Müller, "supervised machine learning" can use pattern recognition to help identify fake content. But spotting "deepfakes", that is, images, audio recordings or videos created with the help of artificial intelligence (AI), is an ongoing race between those who create them and those who detect them, Müller explains to Research & Teaching. "Similar to a virus scanner, the attacker attempts to outmaneuver the defender using increasingly sophisticated methods. The defense side, in turn, seeks to identify attacks more effectively. Deepfake detection relies on 'supervised machine learning': the AI learns what is real and what is not from numerous examples of real and AI-generated content."
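The article does not describe any concrete implementation, but the supervised-learning idea Müller sketches, learning to separate real from AI-generated content from labeled examples, can be illustrated with a deliberately tiny toy classifier. Everything below (the word-count model, the example phrases, the labels) is hypothetical illustration, not Fraunhofer AISEC's method; real deepfake detectors use far richer features and models.

```python
# Toy illustration of supervised classification: the model is trained on
# labeled examples ("ai" vs. "human") and then labels unseen text.
# A simple Naive-Bayes-style word-count model; all data is made up.
from collections import Counter
import math

def train(samples):
    """samples: list of (text, label) pairs. Returns per-label word counts."""
    counts = {}
    for text, label in samples:
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts

def predict(counts, text):
    """Return the label whose training word distribution best fits the text."""
    best_label, best_score = None, float("-inf")
    for label, words in counts.items():
        total = sum(words.values())
        vocab = len(words)
        # Sum of log-probabilities, with add-one smoothing so that words
        # never seen during training do not zero out the whole score.
        score = sum(
            math.log((words[w] + 1) / (total + vocab + 1))
            for w in text.lower().split()
        )
        if score > best_score:
            best_label, best_score = label, score
    return best_label

training_data = [
    ("delve into the intricate tapestry of innovation", "ai"),
    ("in summary this groundbreaking paradigm underscores", "ai"),
    ("we measured the samples twice and got odd results", "human"),
    ("honestly the experiment failed on the first try", "human"),
]
model = train(training_data)
print(predict(model, "a tapestry of groundbreaking innovation"))  # prints "ai"
```

The "race" Müller describes maps directly onto this setup: as generators learn to produce text whose statistics match the "human" class, detectors must be retrained on fresh labeled examples, which is why text detection with so few characters of signal is so hard.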


Weßels sees a problem in the use of AI in process chains in which an AI-generated component is evaluated again by AI in a subsequent process step. This challenge can already be felt today when dealing with term papers and coursework: "If we have a closed process chain of AI use that starts with the production of AI-generated texts optimized for the subsequent evaluation criteria, and if teachers end up evaluating these texts with AI on the basis of exactly those criteria, the question arises whether we really want this or whether it reduces the entire process chain to absurdity. My plea is to take a holistic view of the process chain in these considerations and then have the courage to fundamentally question it." She considers AI detectors to be unreliable for textual work and advocates new examination methods and transparent labeling requirements for written work.

New concepts of quality assurance and evaluation

Sabel, himself an editor of a scientific journal, tells Research & Teaching that fundamental adjustments must be made in the scientific community in order to sustainably reduce fake publications: "First of all, it would be desirable for the 'publish or perish' culture itself to disappear and be replaced by modern reward systems. In addition, the production and distribution of fake publications should be classified as a criminal offence. This must be supplemented by the development of fake-detection methods, which also include the use of visual artificial intelligence, and by the regulation or quality inspection of scientific publishers and companies by an independent testing body, a kind of 'TÜV Science'. If made illegal, such companies would have to move to the dark web, which would be bad for their scientific clients and probably too risky."

“First of all, it would be desirable for the ‘publish or perish’ culture itself to disappear and be replaced by modern reward systems.”
Bernhard Sabel, neuropsychologist and editor-in-chief of a scientific journal

Professor Weßels has used what she calls the "3P evaluation system" with her students: "The 3P system is an extension of the usual process of evaluating written work, such as coursework, and stands for the triad of product, presentation and process." The approach is based on the assumption that in the age of artificial intelligence, a comprehensive course evaluation must take into account not only the final product, the written work itself, but also the process that led to that product, as well as the presentation of the work, including a defense of the results.


"The 'process' evaluation component is actually new. The focus there is on how the product is created," Weßels explains. Both the scientific and the technical design are evaluated. "In the technical design, students must demonstrate their digital competence in using IT and AI solutions," Weßels continues.

AI university guidelines are under development and change

Little by little, more and more universities are publishing their own AI guidelines containing internal recommendations and specifications for work. The "Guidelines for dealing with generative artificial intelligence" published in February by the "Hochschulforum Digitalisierung" (HFD) summarize the state of regulations at German universities and offer advice for further development. According to the guidelines, constructive approaches, blind spots, patterns and differences can already be identified among the 27 submissions.

"To this day, there are many German universities that have no regulations, or inconsistent regulations, on how to use AI tools, which causes concern for students and teachers alike," Weßels summarizes in Research & Teaching her findings from dialogue with institutions. Responsibility is often passed, with reference to the "freedom of research and teaching", from the university administration to the faculty administration, and from there to course directors or module coordinators.

“To this day, there are many German universities that have no regulations or inconsistent regulations on how to use AI tools.”
Doris Weßels, Professor of Business Informatics at Kiel University of Applied Sciences

A Bavarian bidt study on the spread and acceptance of generative AI in schools and universities found that 54% of schoolchildren and 41% of students have no guidelines at their educational institution. 44% of schoolchildren and 57% of students would like such guidelines. 47% of students also support more controlled exam formats.


"We are losing more and more time and face the risk of a digital divide in university circles between those who are digitally savvy and those who were already reluctant to work with digital tools and are now falling further behind," said Weßels. "I see the risk of a 'digital divide' not only socially, but also within institutions. From my point of view, bridging this gap is a management task. The first and most important measure is comprehensive AI qualification for teachers and learners" in order to reduce fears of contact, strengthen the institution's ability to discuss the use of artificial intelligence, and train people in the efficient use of the tools.

The HFD's "Guidelines for Generative AI" also note, somewhat soberly, that many AI guidelines are still in development or already out of date. There are still many open questions and only a few settled legal points. Examinations remain a "work in progress" in most universities' guidelines. According to the HFD, a third of the guidelines address relevant skills and qualifications: "Critical thinking is mentioned most often here. This is followed by concepts such as AI literacy. Less common are skills that become more important precisely because they cannot fully be replaced by AI tools (such as creativity, decision-making ability and teamwork). Application-related competencies such as interaction with AI technology and prompt engineering are also included." Publications from TU Hamburg, Goethe University Frankfurt, TU Dortmund, and Leuphana University Lüneburg are highlighted as good practice.