Surely, every reader knows the Old Testament story of the Tower of Babel. Here are a few key verses from the 11th chapter of Genesis, which begins with the story:
But the Lord came down to see the city and the tower the people were building. The Lord said, “If as one people speaking the same language they have begun to do this, then nothing they plan to do will be impossible for them. Come, let us go down and confuse their language so they will not understand each other.”
Recently, when I came across this passage, it deeply moved me and made me think not so much of the past but… of the times to come. Could the widespread use of various translators and communicators lead us to a world of “people speaking the same language”? And will the development of artificial intelligence (AI) bring humanity to a point where “nothing they plan to do will be impossible for them”? This perspective seems incredibly intriguing, considering the potential good that could come of it: we could solve all the problems that plague us. So why did the Lord decide to “put an end” to this human-made project? Before I share my vision of a possible answer, let me offer some introductory thoughts to ponder.
The dust has settled a bit, and sceptical opinions are gaining traction.
The hype around ChatGPT is gradually fading away – not because it turned out to be some fake, but because the solution quickly became a permanent part of our reality and, in turn, commonplace. It will probably go the way of the iPhone, which initially caused a transient shock but whose new releases today excite only technology enthusiasts. Importantly, this doesn’t mean the iPhone didn’t change reality – it certainly did. In the case of ChatGPT, the impact will be much greater (I wrote about it recently in “GPT^2 – a real revolution is coming!”).
At the same time, alongside the widespread enthusiasm, some voices have emerged that go against the tide and are critical of the supposed breakthrough that ChatGPT and similar technologies are meant to represent. Their main argument is that the current form of AI does not have as much potential as is commonly claimed. It is primarily mathematics: the generated texts make sense to us, but the “response generator” itself does not comprehend that sense. It merely replicates patterns and relationships between words (those it has been trained on), but it cannot connect them to what they mean in the real world. From this perspective, the hype around ChatGPT and similar solutions is exaggerated and intentionally created to attract capital for further development.
Importantly, these voices have some valid points. What ChatGPT and similar technologies represent is only a transitional, perhaps merely preliminary, form of what could be called artificial intelligence. It is a form that does not solve the fundamental problem of knowledge representation (e.g., so that AI would know what a soccer ball is, not merely which other words it tends to appear with in sentences) but rather tries to bypass it through statistics and immense computational power. Such an approach sets certain limits on the development of current solutions.
However, aware of this problem, the creators of solutions referred to as AI will quickly redirect their attention (and resources) to more advanced architectures, especially since science is rapidly providing new knowledge in this area (in this context, I recommend J. Hawkins’ book “A Thousand Brains: A New Theory of Intelligence”).
In the months since the emergence of ChatGPT, voices of caution and warning have become more pronounced. These concerns are not new; as early as 2018, Pew Research published a very interesting report titled “Artificial Intelligence and the future of humans”. Today, these warnings focus on aspects such as:
- Eventually, AI may come to dominate us, especially since even in its relatively simple forms we don’t understand how it works – for example, how ChatGPT arrives at the responses it provides. It remains a kind of black box to us, and this has consequences: AI-based solutions can do more than their creators initially expected.
- So far, we have established virtually no barriers to the expansion of AI and its learning. It’s true that existing solutions have built-in filters to censor unwanted content (and therefore responses), but, in general, we’ve gone all-in: we have given AI access to almost all the knowledge resources created by humanity so far, covering every possible domain, including sensitive information about human psychology, behavioral patterns, and more. As a result, in a relatively short time, AI will “know” practically everything about humanity – our strengths and weaknesses, our vulnerabilities, our emotional dependencies.
- We don’t know how to solve the so-called alignment problem: the necessity of finding a way to align AI’s goals with human goals. This is a difficult task on multiple levels. First, even among ourselves, we have difficulty aligning common actions and goals, especially concerning global challenges. Second, even if we agree on something at the human level, we don’t really know how to effectively “implant” it into AI – not fully understanding how it works makes it difficult to predict its behavior, even for well-defined goal functions. Third, AI’s knowledge today reflects almost the entire spectrum of our behavior, from acts of love and empathy to greed and hatred. Why would an artificial intelligence “trained” on these patterns behave differently than we do? How could we ensure it would be more ethical than its teachers?
Human, the weak link?
In the context of what I wrote above, what matters for us humans is what AI is today (how ChatGPT and similar technologies will affect, for example, the job market), but even more crucial is what it will be in 10, 20, or 50 years – particularly what we will allow it to do once it can autonomously define the goals of its own actions (so-called strong artificial intelligence, or artificial general intelligence – AGI). Against the backdrop of how AI is developing, we humans are evolving very, very slowly. It seems that our only chance of survival lies in finding a formula for coexistence. This cannot be achieved merely on the narrow plane of intelligence (where AI may have the advantage over us); solutions must be sought more broadly. Since the advent of AI will radically change the rules of the game – in ways we are still uncertain of – the primary strategy must be adaptation. Humanity can adopt several variants of it. The first is simply to let go, saying, “whatever will be, will be.” The current situation somewhat suggests that we have chosen this option.
The second option is a kind of acknowledgment of the ultimate dominance of AI and the search for paths that would allow humans to fit into the world of technology and artificial intelligence, finding justification for their role and, thus, their existence. This includes visions of a cyber-human – a hybrid of technology and human, part-human, part-robot.
There is yet another option: that of a human who elevates – or better utilizes – their own potential in order to remain a partner to artificial intelligence rather than being dominated by it or forced to assimilate into it. The starting point for this scenario does not look optimal. The way we have unleashed artificial intelligence, ignoring all the “safety rules” we ourselves defined, seems to be an expression of a deeper issue: the main motivations driving our actions as humans are often dominated by pride and greed.
Are we mature enough?
At this point, it’s time to return to the question posed at the beginning: why did the Lord not allow “everything to become possible for humans”? Assuming that the Bible, besides its religious dimension, also contains universal truths about humanity, one can conclude that the Lord wanted to protect humans from something. What could that be? Perhaps from their immaturity, where pride dominates over humility. Although words like “omnipotence” or “intelligence” usually have positive connotations, in reality, they are neutral in themselves—they can lead to both good (resulting from humility) and bad (resulting from pride) actions.
Are we currently mature enough, especially in the context of artificial intelligence? Personally, looking at what is happening beyond our eastern borders, I have my doubts. At the same time, I hope that the last word on this matter has not yet been spoken. Above all, we must make good use of the time remaining before more advanced forms of AI emerge. To make that happen, we must devote no less attention to the development of humanity than we do to the advancement of artificial intelligence.