April 15, 2024

TechNewsInsight


AI Navigator #1: Artificial intelligence is much more than just technology


Welcome to the first edition of the DOAG KI community's AI Navigator column!


Oliver Szymanski is Head of Artificial Intelligence at DOAG e.V. A qualified computer scientist, a founding member of iJUG e.V., and an IT consultant with extensive experience ranging from software engineering to low-level coding, he has made a name for himself in both project and community work. As the person responsible for technology at the IT Systems House of the Federal Employment Agency, he helps develop innovative solutions and advance the technology landscape. He enjoys spending his free time with his huskies, writing novels, and finding time at night to code – or to talk to artificial intelligence.

The more intensively I work with AI, the more I learn about myself, my fellow humans, and huskies (some ignorant people say dogs, but anyone who knows huskies would certainly never confuse the two).

In this emerging age of artificial intelligence, the real challenges lie less in the technology than in self-awareness. The advancement of artificial intelligence offers deep insight not only into our thinking and decision-making processes, but also into the dynamics of our relationships with our fellow humans and even with our animal companions – in my case, huskies.

The DOAG AI Community (KIC) was founded in 2023 as part of the German Oracle User Community (DOAG) with the aim of creating a platform for exchanging knowledge, experiences and best practices in the field of artificial intelligence (AI). DOAG is specifically aimed at developers and deals with all the important topics in software development such as cloud native, infrastructure, and software architecture. The association organizes several events, including CloudLand and, in cooperation with the German-speaking Java User Group interest group iJUG, the JavaLand conferences. KIC provides a platform for discussions and collaboration and organizes events on current AI topics.

As part of KIC, the AI Navigator Conference was launched, which successfully debuted in November 2023 in collaboration with de'ge'pol and Heise. This conference focuses on the practical application of artificial intelligence in the fields of information technology, business and society and brings together experts from these different disciplines.

As DOAG's board member for AI, I am proud to launch this platform together with our committed members and partners – a platform that is establishing itself as a link between the fascinating worlds of AI, IT, business, and society.

Following the success of the AI Navigator Conference, KIC now opens another chapter with the introduction of the AI Navigator column. In an ongoing series, experts from the world of AI share ideas, perspectives, and innovations. We discuss how AI is lastingly shaping our daily lives, economy, and society, and invite you to join us on this fascinating journey through the new era of AI to explore its reality and its future.

Training an LLM (Large Language Model) is in some ways similar to training huskies over the years: it shapes their actions and reactions, strengthening their trust in their instincts or, in other areas, helping them overcome those instincts. Training essentially shapes neurons. It is how a husky learns that relieving itself outside is good and doing so inside is bad.

In addition, there is long-term memory, which stores the places where delicious treats are hidden in the kitchen and the times when the huskies usually receive their food. Likewise, an LLM is provided with data through training and can also draw on long-term memory backed by a wealth of information, usually in the form of a vector store.
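The vector-store idea can be sketched in a few lines: stored snippets are embedded as vectors, and a query retrieves the snippet whose embedding is most similar. Everything below is invented for illustration – the husky-themed snippets, the hand-made three-dimensional embeddings, and the `recall` helper; a real system would use model-generated embeddings with hundreds of dimensions.

```python
import math

# Toy "vector store": maps a memory snippet to a hand-made embedding.
# In a real system the embeddings would come from a trained model;
# these 3-dimensional vectors are purely illustrative.
MEMORIES = {
    "treats are hidden in the kitchen cupboard": [0.9, 0.1, 0.0],
    "feeding time is at six in the evening":     [0.1, 0.9, 0.0],
    "the leash hangs by the front door":         [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def recall(query_embedding):
    """Return the stored memory whose embedding is closest to the query."""
    return max(MEMORIES, key=lambda m: cosine(MEMORIES[m], query_embedding))

# A query embedding that "points toward" food-related memories:
print(recall([0.8, 0.3, 0.1]))
```

The lookup is just "find the nearest vector" – which is exactly why long-term memory of this kind retrieves related information rather than exact matches.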

There is also short-term memory, for example knowing what you just brought back from the kitchen. Likewise, an LLM processes the current context of the conversation to generate situational, timely responses.
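The limits of this short-term memory can be illustrated with a rolling window over conversation turns: only the most recent turns fit, and older ones simply fall out. The `ContextWindow` class and its two-turn limit are purely illustrative stand-ins for a real model's context length of thousands of tokens.

```python
from collections import deque

class ContextWindow:
    """Short-term memory as a fixed-size window of conversation turns."""

    def __init__(self, max_turns):
        # deque with maxlen drops the oldest entry automatically
        self.turns = deque(maxlen=max_turns)

    def add(self, speaker, text):
        self.turns.append(f"{speaker}: {text}")

    def render(self):
        """The prompt the model would actually 'see' for its next answer."""
        return "\n".join(self.turns)

window = ContextWindow(max_turns=2)
window.add("Human", "Where are the treats?")
window.add("AI", "In the kitchen cupboard.")
window.add("Human", "And the leash?")  # the first turn has now fallen out
print(window.render())
```

Once a turn slides out of the window, the model can only recover it from long-term memory – the same division of labor described above.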

You can also see how an LLM handles a query by looking at a dialogue between two people. Depending on the other person's words, we try to grasp the context. If it refers to what was said immediately before (short-term memory), we address that. Otherwise, we search our long-term memory and react based on the information we find. The reaction varies with how our neurons are connected: different people can react differently to the same facts.

You can also experience how training, memory, and context work together when you say “sit” to your husky and hope for a reaction. At home – knowing that a large stock of treats sits in the kitchen and that I have something in my pocket – the husky complies with the suggestion (suggestions and commands are often interpreted differently). In the woods, under different conditions and with more distractions, the husky weighs the human's suggestion more critically and makes a counteroffer.

Since LLMs also react to a variety of influences, they should not be confused with deterministic algorithms. Like huskies, they can give different responses under changing environmental factors. Neurons as well as long-term and short-term memory play a major role. On top of that comes the temperature hyperparameter, which grants the AI model more or less freedom. Neurons in an LLM behave similarly to connections in the brain: they are shaped by data and training.
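What the temperature hyperparameter does can be shown with a small sampling sketch. The token names and logit values below are made up for illustration; the softmax-with-temperature formula itself is the standard one: dividing the model's raw scores by a low temperature sharpens the distribution toward near-determinism, while a high temperature flattens it toward uniform randomness.

```python
import math
import random

def softmax_with_temperature(logits, temperature):
    """Convert raw scores into probabilities; low temperature sharpens
    the distribution, high temperature flattens it."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature, rng=random):
    """Draw one token according to the temperature-adjusted probabilities."""
    probs = softmax_with_temperature(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

# Made-up scores for three possible husky reactions; "sit" is preferred.
tokens = ["sit", "counteroffer", "stare"]
logits = [2.0, 0.5, 0.1]

cold = softmax_with_temperature(logits, 0.1)  # near-deterministic
hot = softmax_with_temperature(logits, 5.0)   # close to uniform
print(round(cold[0], 3), round(hot[0], 3))
```

At low temperature the model behaves almost like a deterministic algorithm; at high temperature the same neurons and the same memory can still produce a counteroffer.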

A common requirement in the context of AI is that we need to understand precisely why the model made a particular decision. But does it really help us to know which neurons fired or didn't fire?

The same applies to our own decisions. I often find plausible reasons for a sudden craving for chocolate, which may not always be the real reason. Perhaps we should focus more on understanding the data on which decisions are based, while accepting that AI can make “human” errors as well. Unlike deterministic algorithms, AI can produce false results just as a human can. We must check whether the success rate is sufficient, how we can verify and improve the results, and whether oversight makes a positive contribution.

As in other areas, insurance will play a role here. If the risks with an AI-based application are lower than without one, insurance coverage can be correspondingly better.

The real challenge facing our society is to understand these dynamics rather than prematurely calling for destructive regulation. Only through deep understanding can we jointly create AI applications that are not only useful but also socially beneficial. It is in this area of tension that our greatest task lies.


(Roman)
