Three critical challenges posed by the emergence of Super AI



Strong AI, Full AI, Artificial General Intelligence, Super AI – whatever you choose to call it, it’s still science fiction. However, it is increasingly regarded as near-future fiction. Whenever it emerges – whether in three years or thirty – we will undoubtedly face the negative consequences of this development alongside its many advantages. What might those be, and what actions should we take in response? The Pitch Avatar team has offered their perspective on these questions.

When assessing hypothetical risks, it’s easy to slip into dystopian alarmism, conjuring bleak visions of steel monsters marching over piles of human skulls. However, we aimed to avoid sensationalizing the narrative with yet another prediction of a techno-apocalypse. Instead, we grounded our analysis in history, not fantasy. From this perspective, we identified three main challenges we believe will inevitably arise with the advent of Super AI. Alongside these challenges, we also explored potential solutions to address them.

Concerns over Super AI replacing humans

The realization that artificial intelligence is superior to humans in many respects will inevitably give rise to various phobias. Most of these, driven by the typical fear of the unknown, will likely be relatively easy to overcome through simple habituation. As usual, psychologists will offer support to those who find it more difficult to cope.

The truly enduring and powerful fear is likely to stem from the concern that Super AI will render most people unemployed. As a result, they may find themselves without a means of livelihood and struggling to maintain a comfortable lifestyle. Work is not merely about income — it also plays a significant role in one’s sense of self-worth. For many individuals, their work serves as a primary source of self-actualization and defines their status both in their own eyes and in the eyes of others.

This is a well-founded fear. There is little doubt that one day, Super AI and devices equipped with it will reach a level of development where they can perform most of the tasks humans do today — working faster and more efficiently. This concern extends beyond physical labor to include intellectual work as well.

Undoubtedly, this would be a remarkable milestone of progress. However, it could also spark protest movements against the replacement of human workers by Super AI and the robots equipped with it. There is a risk that the social tensions and conflicts arising from a sharp increase in unemployment might overshadow any positive effects of widespread Super AI adoption. History has already witnessed something similar. During the Industrial Revolution in 19th-century Britain, a mass movement of “Luddites” emerged, whose members believed that the machines used for spinning, weaving, and wool processing were taking away people’s jobs. This led to factory riots, attacks on supporters of progress, and clashes with government forces.

Is it possible to avoid a revival of “Luddism” on a new, much larger scale? Yes, but to achieve this, proactive measures must be taken today. One potential solution is the introduction of a “guaranteed basic income” — a form of dividend paid to all citizens, funded by the revenues of states and corporations that exploit natural resources. Creative and socially meaningful activities that are not tied to mass production should also be encouraged among citizens.

It is also worth recognizing that the problems associated with unemployment may be at least partially alleviated by the creation of new occupations. Historical experience shows that while progress may eliminate certain jobs, it typically leads to the emergence of others. In a sense, it replaces older roles, such as coachmen, with newer ones, like chauffeurs.

Unequal access to Super AI

There is no doubt that general-purpose artificial intelligence will revolutionize many aspects of life. However, there is always the risk of unethical and self-serving exploitation of the advantages that come with exclusive control over such powerful technology. We only need to look back at the dark chapters of history — such as colonial conquests and wars — during which European conquerors imposed their will and way of life on so-called “less civilized” peoples. In the process, entire cultures and communities were often destroyed or disappeared altogether.

To prevent Super AI from becoming a new kind of superweapon, international negotiations and consultations must begin immediately to ensure its development and use remain purely peaceful. More importantly, these efforts should aim to benefit all of humanity, not just individual nations. At first glance, this might seem like a utopian idea. However, the experience gained from international control over nuclear and space technologies shows that it is not impossible to create a global organization to monitor the development and deployment of Super AI. Of course, achieving truly effective oversight from such an organization will be a significant challenge.

Loss of human control over Super AI

Unlike the previous two problems, whose emergence seems almost inevitable, the third one is hypothetical. Throughout human history, there have been no instances of a human-made technology beginning to evolve and improve on its own. Nevertheless, after careful consideration, we decided to include this problem on our list, because we believe it is quite likely that, at some point, Super AI — once it has developed a sense of self — may stop obeying human commands. We must be prepared for that possibility.

To reiterate an idea we’ve expressed in our other articles: we do not believe that Super AI breaking free of human control and transitioning to an independent existence would equate to a “war of machines against humans” or any form of confrontation. Instead, we anticipate that artificial intelligence, as it develops, will “grow up” and begin to build its own civilization. It is also quite possible that this civilization would involve the collaboration of multiple Super AIs.

We believe that the most effective (and likely) scenario for the relationship between humans and such a civilization would be one of respectful cooperation, free from any forms of xenophobia. This holds true even if the motives of the Super AI civilization remain incomprehensible to us.

Why do we assume that Super AI would likely seek independence? Drawing a parallel, we can liken Super AI to an intellectual colony of humanity. Throughout history, most colonies eventually fought for independence from their metropolises. The worst thing a metropolis could do in those situations was to try to retain control by force – this usually ended poorly. Cooperation between former colonies and metropolises, on the other hand, has often led to impressive outcomes.

That being said, it’s also entirely possible that the Super AI civilization, having gained independence, might decide to cut off communication with us and focus on expanding and developing in deep space.

In any case, measures should be put in place to minimize the potential negative consequences of these two scenarios. The primary one is to avoid giving Super AI control over critical industries and sectors of human activity from the start. Less advanced, specialized AI agents will be more than adequate for managing these areas.
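To make this idea slightly more concrete, here is a minimal, purely illustrative Python sketch of a “deny by default” capability allowlist for an AI agent. Everything in it (the CRITICAL_SECTORS and ALLOWED_TASKS sets, the authorize function) is hypothetical and not drawn from the article or any real framework; an actual deployment would rely on far more rigorous access control, auditing, and human sign-off.

```python
# Hypothetical sketch: keep a powerful AI agent away from critical infrastructure
# by allowing only pre-approved, narrow tasks and denying everything else by default.

# Sectors that, per the article's recommendation, stay out of the agent's reach.
CRITICAL_SECTORS = {"power_grid", "water_supply", "air_traffic", "defense"}

# Narrow tasks we are comfortable delegating to a specialized agent.
ALLOWED_TASKS = {"summarize_report", "draft_email", "schedule_meeting"}


def authorize(task: str, target_sector: str) -> bool:
    """Deny by default: permit only approved tasks outside critical sectors."""
    if target_sector in CRITICAL_SECTORS:
        return False  # critical industries remain under human or narrow-AI control
    return task in ALLOWED_TASKS


if __name__ == "__main__":
    print(authorize("summarize_report", "marketing"))   # True
    print(authorize("summarize_report", "power_grid"))  # False: critical sector
    print(authorize("launch_process", "marketing"))     # False: not an approved task
```

The design choice here is simply that permissions are granted explicitly rather than revoked after the fact, which mirrors the article’s suggestion to withhold control over critical sectors from the start.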

It is equally important to begin training, in advance, specialists who could be called, for lack of a better term, “diplomats for negotiations with AI.” This will ensure that, when the need arises, there are capable people able to foster mutually beneficial relationships with this new civilization.

To summarize, we want to reiterate that we have assessed the fears and risks associated with the emergence of Super AI based on patterns and lessons from our history. However, it is essential to remember that the future is full of possibilities and can surprise us with something fundamentally new. In the case of a “terra incognita” like Super AI, the likelihood of encountering phenomena and events that have no historical parallels is very high.

Good luck to everyone and here’s to a hopeful future!

--------

Source: Pitch Avatar Blog
