01-05-2023
12 minutes

Don’t be like artificial intelligence!

Fourteen years ago, on a long May weekend in 2009, my adventure with writing a blog began. The May weekend of 2023 was supposed to end its history – I had decided that in times of “wisdom” generated on a mass scale by tools like ChatGPT, it was high time to give up. This was supposed to be the last text, almost entirely generated by artificial intelligence, under the working title “What does AI want to tell us?”. However, things turned out differently, and in the course of writing I convinced myself that it was still worth it. That there are important issues that ChatGPT does not necessarily understand. As a result, a text was created that, although it draws on AI knowledge in places, is actually very personal in nature.

—————————————————————————————————————————

Artificial intelligence (AI) will undoubtedly change our lives; indeed, it is already doing so. We don’t yet know how, but it will undoubtedly be a revolutionary change. Techno-optimists will say that thanks to AI life will be much better, that it will solve the challenges and problems that concern us and allow us to work shorter hours and more efficiently. Techno-pessimists, on the other hand, have already begun the countdown to the end of human civilization – in their view, we have let the genie out of the bottle without knowing how to handle it, how to “make wishes” that would create a better reality for us rather than (perhaps unintentionally) a world of ultimate destruction.

In this text, I would like to focus on the connections between artificial intelligence and ethics and values. It is a text in which I am not so much trying to provide answers – although certain hypotheses are contained within it – as seeking the right questions, the right direction for an important debate about the interactions between AI and humans. In my opinion, the answer to the question of what to do so that humanity benefits from artificial intelligence does not lie in the sphere of improving it and inventing new areas of possible application, but in the sphere of how we shape the relationship between us and AI.

The starting point for my considerations is the thesis that with the emergence of tools like ChatGPT, there will be a strong temptation to seek answers to the questions and issues that concern us humans, to an even greater extent than before, in the world of science and technology. To turn artificial intelligence into a “deity” (some even use the term “techno-religion”) and completely abandon everything that falls outside reason and rationality – everything relating to intuition, feeling, and faith. Meanwhile, it is my personal belief that if we want to ensure the survival of human civilization, or even its unprecedented flourishing, we must do something entirely different – we must open ourselves up like never before to wisdom that goes beyond rationality, wisdom whose source is “transrational understanding”. To make this happen, we need to reach the fullness of our being.

The human brain and AI

The creators of artificial intelligence solutions have modeled their designs on the principles by which the human brain works. Of course, AI is not the same as the human brain, but several significant similarities can be observed.

Firstly, both the human brain and most models of artificial intelligence learn from experience. In the human brain, this occurs through the reinforcement of connections between neurons, while in the case of neural network-based AI, learning involves updating connection weights to minimize error.

Secondly, both the human brain and AI are excellent at recognizing patterns and generalizing. Humans are able to quickly identify and learn new patterns in their surrounding environment. Similarly, AI processes vast amounts of data to detect hidden patterns within it.

Another common feature is the ability to adapt to changing conditions and situations. The human brain has a plasticity that allows changes in the structure and functioning of neurons in response to new experiences. Similarly, AI can adjust its algorithms to find better answers/solutions in changing situations.

Therefore, it can be said that AI is a tool that can mimic some key aspects of the functioning of the human brain. In practice, this means that it can also be susceptible to the same weaknesses that our brains exhibit.
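
To make the first of these similarities more concrete – “updating connection weights to minimize error” – below is a deliberately tiny, purely illustrative sketch in Python: a single artificial “neuron” learning a simple pattern from a handful of examples. Real systems such as ChatGPT do the same kind of thing with billions of weights; none of this code comes from any actual AI system.

```python
# A minimal, illustrative sketch (not a real AI system): a single "neuron"
# with one weight and one bias learns the pattern y = 2x + 1 from examples,
# by repeatedly nudging its parameters in the direction that reduces the error.

def train(samples, learning_rate=0.1, epochs=100):
    w, b = 0.0, 0.0                          # start with no "knowledge"
    for _ in range(epochs):
        for x, target in samples:
            prediction = w * x + b           # the current guess
            error = prediction - target      # how wrong the guess is
            w -= learning_rate * error * x   # update the weight to shrink the error
            b -= learning_rate * error       # update the bias the same way
    return w, b

# A handful of "experiences" sampled from y = 2x + 1
samples = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = train(samples)
print(f"learned: y ≈ {w:.2f}·x + {b:.2f}")   # prints values close to 2 and 1
```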

Imperfections of the human brain

There is no need to convince anyone of the benefits of using our brain. However, we rarely focus on the negative effects that result from the principles on which our brain functions.

Firstly, these principles cause our brain to have a tendency to create prejudices and stereotypes that help simplify and organize information about the surrounding world. Although these cognitive shortcuts can be useful in some situations, they often lead to unjustified and unfair judgments about other individuals or groups, which in turn leads to discrimination and marginalization.

The second phenomenon related to the way our brain works is its natural tendency towards conformity and imitation of the behavior of others (the so-called herd effect). This can lead to the spread of negative social phenomena, such as violence, discrimination, or disinformation, as people (usually) prefer to succumb to the influence of their surroundings rather than critically evaluate information and make their own independent decisions.

Due to the way it records experiences, the human brain also has a tendency to pay more attention to negative emotions and experiences than to positive ones. This can lead to the strengthening of negative social phenomena and the growth of fear, distrust, or hatred towards others.

The human brain is also susceptible to cognitive errors, leading to the development of false beliefs, prejudices, and disinformation. This is reinforced by the so-called confirmation bias – our brain more easily “digests” information that confirms existing beliefs, even if they are untrue. At the level of attitudes, this translates into a preference for what is familiar, routine actions, and patterns of behavior, which often closes the door to change.

Another significant aspect of our brain’s functioning is that in situations of instability, loss of control, or growing anxiety, it has a tendency to regress and rely more on its more primitive parts (including the so-called reptilian brain), which are focused on fighting for survival, even at the expense of others. We live in times where this phenomenon may be increasingly observed.

AI – compounded flaws

Since the principles of operation of AI and the human brain are similar, it is easy to imagine that using the former can – just like using the latter – lead to unfavorable phenomena, but on a compounded scale! For example, in the case of tools like ChatGPT, this can particularly apply to:

  • Perpetuating prejudice and discrimination – if the information and data “fueling” AI systems contain biases, these can be entrenched and reinforced by algorithms. For example, AI-based recruitment systems can discriminate against candidates based on their gender, race, or origin if they learn from historical data that contains such prejudices (a toy illustration of this mechanism follows this list).
  • Disinformation and manipulation – AI can be used to generate and disseminate false information and disinformation, which can lead to undermining social trust, worsening public debate, and increasing social divisions. An example of this is deepfakes – generating false but realistic-looking images and videos that can be used for manipulation and fraud.
  • Surveillance and loss of privacy – AI can lead to increased surveillance and violation of citizens’ privacy, both by governments and by corporations. This is not just a theoretical issue: in China, facial recognition systems, behavior analysis, and user profiling based on activity are already widely used to monitor and control society.
  • Further social polarization – by reinforcing the human tendency to close ourselves in “echo chambers,” AI can limit the diversity of opinions we encounter, thus contributing to deepening social divisions.
In addition to these, AI can compound the negative effects of the already visible and growing addiction to technology, such as: i) erosion of social relationships, ii) decreased ability for independent thinking and problem-solving, iii) threats to mental health (resulting from the erosion of social relationships, loss of self-worth, etc.).
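
To see how the entrenchment of prejudice mentioned above can happen in practice, here is a small sketch with entirely invented data (no real system or dataset is referenced): a toy “recruitment model” trained on fabricated historical decisions in which one group was favoured. After training, the model assigns a clearly non-zero weight to group membership itself, even though skill was the only legitimate signal.

```python
# A toy, entirely made-up example: a "recruitment model" trained on fabricated
# historical decisions in which group 1 was favoured at equal skill levels.
# The trained model ends up giving weight to group membership itself.

import math
import random

random.seed(0)

def make_history(n=1000):
    """Fabricated hiring records: (group, skill, hired)."""
    records = []
    for _ in range(n):
        group = random.randint(0, 1)      # e.g. a protected attribute encoded as 0/1
        skill = random.random()           # the only legitimately relevant feature
        # Biased past decisions: at the same skill level, group 1 gets better odds.
        hired = 1 if (skill + 0.3 * group) > 0.8 else 0
        records.append((group, skill, hired))
    return records

def train(records, lr=0.5, epochs=50):
    """Simple logistic regression trained with stochastic gradient descent."""
    w_group, w_skill, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for group, skill, hired in records:
            p = 1 / (1 + math.exp(-(w_group * group + w_skill * skill + b)))
            err = p - hired
            w_group -= lr * err * group
            w_skill -= lr * err * skill
            b       -= lr * err
    return w_group, w_skill, b

w_group, w_skill, b = train(make_history())
print(f"weight on group membership: {w_group:+.2f}")  # clearly positive: the bias was learned
print(f"weight on skill:            {w_skill:+.2f}")
```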

Intelligent suggestion from artificial intelligence

If we ask ChatGPT how to avoid these problems, its answer seems “well thought out”:

Technology itself is neither good nor bad. How it is used depends on human values, goals, and intentions. Therefore, the key to the positive impact of AI on society is a responsible and ethical approach to it by creators, users, and regulators. The impact of AI on ethics and values can be considered from three main aspects:

  • The consequences of decisions made by AI on people – this is about developing algorithms that are fair and honest, and creating systems that are transparent and understandable to users.
  • Responsibility for actions and decisions made based on AI – this requires determining in legal regulations and ethical codes who is responsible for actions/decisions made by AI.
  • Honesty – in the context of AI, it refers to equal treatment and opportunities for everyone, regardless of their origin, gender, age, or other characteristics.

That’s what ChatGPT says. It all seems to make sense and sound “rational”. But it’s… a trap! All of this is based on the assumption that the solution to the challenges facing humanity, which are a natural derivative of “our thinking”, can be… “even more of our thinking”, this time compounded by rules of artificial intelligence established by humans. I have no doubt that this is a blind alley!

I have repeatedly quoted Albert Einstein’s statement on my blog that “We cannot solve our problems with the same thinking we used when we created them.” Yet, that’s exactly where everything is headed: an “imperfect” version of a human, supported by artificial intelligence based on its “imperfect” ideas (what else are the knowledge bases that feed AI?), is to be our salvation. Artificial intelligence can help solve this or that single problem (and praise be to it for that), but it will not help us break free from the wrong course.

Moreover, with regard to ethical codes for AI, the key question today is: who would create them? We are widely contaminated with the “systemic” imperfections of our brain, and the vast majority of us are completely unaware of them! Furthermore, as Zygmunt Bauman correctly diagnoses in “Postmodern Ethics”, we live in times of the moral crisis of postmodernity. Religious revelations of the principles of life, in the form of sets of commandments, have been quite widely rejected. Attempts were made to replace them with philosophical ethics and the moral codes associated with it, but we are now witnessing the breakdown of the illusion that a universally accepted system can be built upon them. One important effect of this is that any authority can now be questioned: either we don’t trust at all, or we don’t trust fully, or if we do, it’s only for a while.

A fundamental misunderstanding

This leads us to what I believe is the fundamental problem: our civilization is built on Descartes’ statement “I think, therefore I am”, and as a result it puts the brain and its “creativity” on a pedestal. However, this statement has nothing to do with reality! Thinking is only a part of our being, and not the most important one. This is known to everyone who can “turn off” the nagging voice that keeps most people in a constant monologue in their heads.

The main change must be that we stop identifying ourselves with the brain/mind itself and become users of the brain! Just as AI is merely a tool and not a being, the same is true of our minds. Only then can we see that our mind – the knowledge accumulated in it – is on the one hand something amazing that creates new possibilities (relative to other earthly beings), but also, given its principles of functioning, something that strongly limits us.

Only by functioning based on the fullness of our being, our humanity, will we be able to direct the development of AI in such a way as to help us create a better world, ensuring its survival. When we become fully human, changing the world for the better, protecting nature, and considering ethical aspects will become our internal need, not an external command or compulsion. Also, the AI that reflects us will become more human thanks to this.

How can we achieve the fullness of our being? Through self-discovery and working on ourselves, which includes integrating the different elements of ourselves (the subconscious and the rational) and paying attention to what we feel. All of this leads to finding oneself and opening up to the superconscious. The latter comes from something much greater than us, known in different cultures and religions as God, the Self, the Source, the Absolute, Consciousness, etc. In each case, it is identified with a Higher Intelligence: an Intelligence that knew the laws of classical mechanics (because it created them) before Newton discovered them for us humans; an Intelligence that easily understands the laws of quantum mechanics that we humans have been trying to unravel for over 100 years; an Intelligence that knows the principles and laws we will only discover in 100 or 200 years.

As humanity, we are somehow trapped in a race between SELF-DISCOVERY and SELF-DESTRUCTION. The dilemmas related to the functioning of AI fit squarely into this race: if we do not focus on self-discovery, on reaching the essence of our being, AI will accelerate our self-destruction. So do not be like artificial intelligence – become your fullness by reaching the essence of yourself!
