Today artificial intelligence is applied in many situations where human beings must make complex decisions, and where errors can cause serious harm to people (as with self-driving cars).
It is therefore necessary to ask whether artificial intelligence is really an advantage for man, or whether it could instead represent a problem. In other words, it is necessary to ask
WHETHER A “MACHINE” (A COMPUTER)
CAN ACTUALLY MAKE CORRECT DECISIONS.
In this article, following the indications of science and carefully analyzing the problems created so far by existing forms of AI, we argue that AI is not able to deliver the results announced by its supporters.
Two levels of analysis of the question then follow:
a. an analysis of what science tells us about artificial forms of knowledge, and about the elaboration of models of reality;
b. a “thought experiment” (following the criteria of science) to examine the question: if artificial intelligence existed, could it be managed in a useful way by man?
We are not dealing here with machine learning, or with other similar technologies that are actually possible (and desirable). Those technologies are in fact totally different from the artificial intelligence that is promoted with the claim of making “autonomous” decisions.
A – What Science tells us about AI
Official science, with three fundamental statements (two of which come from Nobel laureates), tells us that artificial intelligence cannot exist.
1) The “incompleteness theorem” of logical systems (Gödel) is one of the statements of official science that explains why artificial intelligence cannot exist.
Gödel, with an impeccable demonstration (which has become a fundamental theorem of science), showed that any description of reality expressed in mathematical terms (such as the one used by AI devices to evaluate reality and make decisions based on it) is “incomplete,” i.e., it does not describe reality as it is.
A corollary of the theorem also states that the more one tries to deepen the description of reality with mathematical formulas, the further one moves away from actual reality.
This, as Gödel showed, is a problem that rules out the possibility of creating “machines” that can understand the reality they see (this is the explanation of many fatal accidents, such as those of self-driving cars and of the Boeing 737 MAX).
One of Gödel’s considerations with respect to his discovery was: “Either mathematics is too big for the human mind, or the human mind is more than a machine.”
2) The indeterminacy principle (also called the uncertainty principle) of Heisenberg (with some suggestions by Einstein) also tells us that science is not able to describe reality in rational terms.
Heisenberg (together with Einstein and Bohr, one of the fathers of atomic physics) in fact discovered that, independently of any equipment used, science is not able to understand reality (that is, it is not able to describe reality in rational terms).
(It is important to understand that the principle holds no matter how sophisticated the system used: however advanced the technologies employed to observe reality, one cannot arrive at a correct description of it.)
3) The butterfly effect: the evolution of a complex system cannot be predicted (meteorologist Edward Lorenz).
Another case in which science tells us that it is not able to understand the functioning of reality (so as to make scientifically correct decisions) is weather forecasting. The problem is known to science as the butterfly effect.
The discovery of the butterfly effect introduces another important aspect of the applications of artificial intelligence:
THE APPLICATIONS OF AI
HAVE TO DO WITH A DYNAMIC REALITY
(in continuous transformation).
Edward Lorenz’s discovery shows us that reality is a complex system developing a process whose evolution cannot be foreseen.
This discovery concerns the functioning of AI systems such as those installed on self-driving cars: they must be able not only to “understand” the scenario they are facing, but also to predict how things and people will move.
To understand the evolution of the entire system, one would have to create a numerical model able not only to analyze, moment by moment, the evolution of each single element based on its own “predisposition,” but also to take into account the part of that element’s change induced by its interaction with all the other elements (which, being subject to the same modes of interactive evolution, themselves change constantly).
That is, science shows us that it is not possible to make predictions on the evolution of a complex system based on numerical data (mathematical models, statistics, etc.).
To understand why it is really impossible to describe reality with mathematical models, we must remember that Heisenberg, with the uncertainty principle, tells us that it is not a matter of building more powerful computers: reality is in itself not describable with mathematical models.
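The sensitivity described above can be illustrated with Lorenz’s own equations. The sketch below is illustrative only: it uses the textbook Lorenz system (parameters sigma=10, rho=28, beta=8/3, which are the classic values, and a simple Euler integration step chosen here for brevity) to show how two starting points that differ by one part in a billion end up far apart.

```python
# Illustrative sketch of the butterfly effect using the Lorenz system.
# Parameters and step size are the classic textbook choices, not taken
# from the article.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one Euler step."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

# Two trajectories whose starting points differ by one part in a billion.
a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)

for _ in range(5000):  # 50 units of simulated time
    a = lorenz_step(a)
    b = lorenz_step(b)

# The microscopic initial difference has been amplified to the size of
# the whole attractor: long-range prediction is impossible in practice.
gap = max(abs(p - q) for p, q in zip(a, b))
print(f"separation after 5000 steps: {gap:.3f}")
```

No amount of extra decimal places in the starting measurement removes the problem; it only delays the moment at which the two trajectories diverge.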
So how is it possible that today we still talk about artificial intelligence (about the ability of devices to make correct decisions in man’s place)?
The fact is that today the issue is simply not treated in a scientific way. The news on AI is never supported with rational arguments: it remains in the field of narrative.
B – A thought experiment: if AI exists, can it be managed by man?
The thought experiment is a decisive tool for understanding how things that cannot be seen work (the discovery of nuclear energy came about thanks to thought experiments).
Thus, admitting that a form of artificial intelligence may exist (as a “theoretical hypothesis,” given that it contrasts with the principles of official science), let us see how it could be used by man.
In this case we must therefore also set aside the accidents of aircraft (the Boeing 737 MAX) and of self-driving cars, pretending that the artificial intelligence with which they are equipped does not “get confused” and cause accidents.
Today artificial intelligence is described as something that can solve some human problems.
So let us take one of the simplest cases of decisions entrusted today to AI systems: the “algorithms” that universities use to evaluate the admission of students.
In this case, even if the system worked perfectly for its intended purposes, a question must be asked:
who programs the criteria of the decisions
taken by the algorithm?
That is: how can we be sure that an AI system can really solve problems “correctly”?
It should therefore be taken into consideration what is normally overlooked:
the functioning of the algorithms
that are the basis of AI systems
rests on
DECISIONS MADE
BY THE “PROGRAMMERS” WHO DEVELOP THE ALGORITHM.
The example of choosing which students to admit to the university is useful for understanding the question: there are at least two contrasting visions of how a correct choice should be made.
Incidentally, the two visions are:
1. it is necessary to give priority to needy people, for a moral reason: since a boy from a well-off family will have no problem finding a good job in life, it is necessary to favor the boy from a “weak” family (this is the predominant vision today).
2. it is necessary to choose the best student, because in that case the university will be able to obtain the best results: after leaving the university, that student will be able to produce ideas and solutions useful to society (this is the “traditional” vision).
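The two visions above can be made concrete with a toy scoring function. Everything in this sketch is invented for illustration (the applicants, the scales, the weights): the point is only that the “decision” is fully determined by weights a programmer chose, not by any intelligence in the algorithm.

```python
# Hypothetical admission scoring: the same applicant data ranked under
# two sets of programmer-chosen weights. All names and numbers invented.

applicants = [
    # (name, grade average on a 0-100 scale, family income in thousands)
    ("A", 95, 120),   # strong student, well-off family
    ("B", 78, 15),    # weaker student, "weak" family
]

def score(applicant, w_grades, w_need):
    """Weighted score: the weights ARE the programmer's value judgment."""
    _, grades, income = applicant
    need = 100 - min(income, 100)   # lower income -> higher need
    return w_grades * grades + w_need * need

# Vision 1: give weight to social need.
admit_v1 = max(applicants, key=lambda a: score(a, w_grades=0.5, w_need=0.5))
# Vision 2: choose on academic merit alone.
admit_v2 = max(applicants, key=lambda a: score(a, w_grades=1.0, w_need=0.0))

# Same data, same algorithm, different admitted student.
print(admit_v1[0], admit_v2[0])
```

Nothing in the data dictates which weighting is “correct”; that choice is a social and moral judgment made before the algorithm ever runs.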
It is therefore clear that in choices of a social nature the role of the algorithm is marginal:
THE “INTELLIGENCE” APPLIED IN THE AI PROCESS
IS THE INTELLIGENCE OF THOSE WHO PROGRAM THE SYSTEM.
A problem of this kind occurs today in judges’ rulings: in some states of the United States, judges can use AI systems to arrive at a sentence.
Consider, in these cases, how much the algorithms (and hence the sentences) can be influenced by the beliefs and prejudices of those who program them.