May 9, 2024

TechNewsInsight

“We call on all technology laboratories to immediately stop developing artificial intelligence systems.”

More than 1,000 prominent figures have so far signed the call for a moratorium on giant AI experiments. The reason: as long as no one, not even their creators, understands these machines, the stakes are too high and the systems could spin out of control. We reproduce the call in translation. The editorial office.

AI systems with human-competitive intelligence can pose serious risks to society and humanity, as extensive research shows and leading AI labs acknowledge.

As described in the widely endorsed Asilomar AI Principles, advanced artificial intelligence could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources.

Unfortunately, no such planning and management is taking place, even though AI labs have spent the past few months in a runaway race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.

Today’s AI systems are becoming competitive with humans at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and lies? Should we automate away all jobs, including the fulfilling ones? Should we develop non-human minds that could eventually outnumber us, outsmart us, obsolete us, and replace us? Should we risk losing control of our civilization?

Such decisions should not be delegated to unelected technology leaders. Powerful AI systems should only be developed once we are confident that their effects will be positive and their risks manageable. This confidence must be well justified and grow with the potential impact of a system.

OpenAI’s recent statement on artificial general intelligence notes: “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.”

We agree. Now is the time.

Therefore, we call on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. This pause must be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a shared set of safety protocols for the design and development of advanced AI, rigorously audited and overseen by independent outside experts. These protocols must ensure that systems adhering to them are safe beyond reasonable doubt. This does not mean a general halt to AI development, merely a step back from the dangerous race toward ever larger, unpredictable black-box models with emergent capabilities.

AI research and development should refocus on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, reliable, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems.

At a minimum, these should include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computing power; provenance and watermarking systems that help distinguish real from synthetic data and track model leaks; a robust auditing and certification ecosystem; liability for harm caused by artificial intelligence; strong public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with artificial intelligence. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt.

Society has hit pause on other technologies with potentially catastrophic effects, and we can do the same here. Let’s enjoy a long AI summer, not rush unprepared into a fall.