The implementation of the positivists' "great program" for humanity in the 21st century: what results should we expect?

The "great program" of positivism of the last century - to reduce all philosophical discussions to formulas and proven statements - is ready to be completed. An attempt to turn a philosophical discourse into a competition of proven formulas is precisely an attempt to anthropologically reduce the essence of a person to some specific permanent formula expressed symbolically.

Machine algorithms - positivism of the 21st century

To carry out this ontological program, a fundamental problem must be solved: how AI makes decisions. AI is not always predictable. It is one thing when machine algorithms process large arrays of data: that requires no decision-making in the human sense of the word, and the algorithm exists for exactly this. It is quite another thing to make a decision. The basic operating principle of AI algorithms, in the so-called strong version, is to take data as input and produce a result at the output. But what exactly influences the AI's decision in this case remains beyond human control, and there is no guarantee that the decisions made are correct from the point of view of human morality. It can depend on how oppositely charged quanta of energy affect decision-making. The machine algorithm is guided only by the principles embedded in it by its developer.
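The point that a machine algorithm follows only the principles its developer embedded in it can be sketched in a few lines. This is a hypothetical illustration, not any real system: the features, weights, and threshold are invented for the sketch.

```python
# A toy "decision" algorithm: data in, verdict out.
# The developer-chosen weights and threshold fully determine what counts
# as "approve"; the input data itself has no say in that principle.

def decide(features, weights, threshold):
    """Compare a weighted sum of input features against a fixed threshold."""
    score = sum(f * w for f, w in zip(features, weights))
    return "approve" if score >= threshold else "reject"

# Identical input data...
applicant = [0.9, 0.4, 0.7]

# ...but two developers embed different principles and get opposite verdicts:
print(decide(applicant, weights=[1.0, 1.0, 1.0], threshold=1.5))  # approve
print(decide(applicant, weights=[0.1, 0.1, 0.1], threshold=1.5))  # reject
```

The same input yields opposite decisions; nothing inside the data determines which principle governs.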

However, AI carries out the main tasks of the positivists. Yet its predictability rests on an algorithm with unpredictable internal processes. Their creations, machine algorithms, will do what humans and their biological brains could not.

An informational, or cybernetic, obstacle standing in the way of executing this program is directly related to the philosophical principle of simplicity and complexity. Everything that does not fit into the formula of 1 and 0 is an information barrier (IB).

IB in social relations makes us who we are. This individual perception of the environment is as unique as a fingerprint. The task of machine algorithms is either to identify these obstacles and eliminate them, or to bypass IB altogether and pretend it does not exist.

Rejecting IB: what it means for a human

Some kind of simplification has to occur in the IB system. IB is part of the structure of the following system: system structure + IB, where IB plays the role of an undefined element. IB creates epistemological noise that is unacceptable to AI.

According to the intention of the creators of the machine algorithm, this element is a complication of the structure of the system under the general name "human-product". Here, the role of the product can be played by anything from a toothbrush to a presidential candidate.

For example, consider the system of a family picnic trip. The concept of such a system is, of course, its ultimate goal: a picnic and a trip with the whole family. The substrate is the family itself, the car, the route, the food supplies, the dog, and so on. The structure of this system is simple for the machine algorithm: build the fastest and shortest route, preferably avoiding traffic. But what about the small lake the family always used to see on the way out of town, the one tied to pleasant memories, which for some reason, since a map application appeared on the smartphone, they have recently begun to bypass? The reason is that the algorithm is not configured for this. It is tuned for simpler optimization, which is not always close to such a purely human problem as choice. In the cybernetic sense, choice is not a problem for an algorithm at all.
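The picnic example can be sketched as a tiny routing problem. The road graph, travel times, and "scenic bonus" below are invented for the illustration; the point is that a shortest-path algorithm minimizes whatever cost it is given, so a purely time-based cost never sees the lake:

```python
import heapq

# Hypothetical road graph: destination -> minutes of driving.
# The road past the lake is slightly slower than the highway.
roads = {
    "home":    {"highway": 10, "lake": 12},
    "highway": {"picnic": 10},
    "lake":    {"picnic": 10},
    "picnic":  {},
}

def best_route(cost_of):
    """Dijkstra's shortest path from 'home' to 'picnic' under a given edge cost."""
    queue = [(0, "home", ["home"])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == "picnic":
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes in roads[node].items():
            heapq.heappush(queue, (cost + cost_of(node, nxt, minutes), nxt, path + [nxt]))

# Optimizing for time alone bypasses the lake...
print(best_route(lambda a, b, m: m))                               # ['home', 'highway', 'picnic']
# ...while a cost that values the lake, a human preference the default
# objective never encodes, restores it:
print(best_route(lambda a, b, m: m - (5 if b == "lake" else 0)))   # ['home', 'lake', 'picnic']
```

The algorithm is not "wrong" in either case; it faithfully optimizes the objective it was given, and the human meaning of the lake simply is not part of that objective.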

This is a structural simplification: in this case the AI "disables" the IB within the system structure. Modern algorithms not only "help" us choose; they begin to control our choice.

A quantum computer isn't an option either—it's not even positivism

That should be kept in mind when it comes to the much-anticipated, but still mostly theoretical, quantum computer. Its algorithms can speed up certain calculations many times over compared with the ordinary computer on your desk, thanks to the fact that scientists have learned to operate not only with 0 or 1 but with the superposition of states that lies between 0 and 1. But here arises a question as old as the world: who can verify the result? Only in practice can people test the results of their theories, through trial and error or preliminary modeling. Only practical, causal results become the basis for concluding that a hypothesis or theory has the right to exist, and only within the limits of verified results. What about "quantum" results? We can hardly imagine how to verify them at all. And what do we do when we rely on them for complex calculations of long-distance space travel, or for modeling dynamically evolving systems with several constantly changing parameters, such as present-day climate change on Earth?

In such cases a person will have to decide on their own, just as we did in past centuries. Undoubtedly, it is possible to develop and test reliable AI-based operational tools that speed up human decision-making. That is a feature to look forward to, but in reality we will wait a long time before using it in everyday life. Of course, it must be assumed that such mechanisms already work today in large companies or in some government agencies. The amount of information grows every day. Merely processing a database of customers or citizens for further work is no longer sufficient: decisions must be made on the basis of those conclusions, and someone has to make them. That is why such decisions sometimes look artificial, or completely unacceptable, from the point of view of the "everyday person", even though it is the "everyday person" who must implement them. In the future this may lead to a paradoxical situation: simplifying and speeding up life through decisions based on the conclusions of AI, that is, of machine algorithms, will lead to the social and emotional degradation of the very humans those decisions are meant to serve.

It is IB that makes us human. This element is unpredictable, and, most importantly, it does not matter to the machine algorithm. Yet such "machine simplification" of human virtual existence nullifies centuries of social evolution and millennia of biological development. A rational algorithm loads the human brain around the clock but forgets about emotions: the emotional state is not taken into account, because the machine algorithm treats it as information noise. The simpler the system, the easier it is to manage. Without proper engagement of the emotional part of the human psyche (which does not mean the absence of video or audio materials, whose perception requires only passive consumption), the denial of this undefined element of the psyche, the one that does not fit the positivist formula of 1 and 0, oppresses a person. The result is unexpected and uncontrolled emotional actions. We can no longer bear to wait, even when we are in no real hurry: any delay, however objective, in a person's decision or in the satisfaction of a whim is perceived as deliberate. AI itself, in this sense, remains detached and objective.


What a machine algorithm makes easy to grasp can become difficult for the emotional part of the psyche, and vice versa: for example, when the weather forecast and the wishes for the day generated by popular applications do not correspond to the real state of affairs. The algorithm cannot generate contextual meaning it does not understand.

This situation leads to the problem of decision-making and its positivist verification. Relying on a machine algorithm for decisions, and the existential significance of that reliance for a person, becomes a fundamental problem. If simplification of the "Human-AI-Environment" structure is to happen at all, it should not happen at the structural or conceptual levels.

Although maybe we are just not ready for this yet...
