AI makes decisions

How artificial intelligence (AI) makes decisions, and what consequences its use can have, is one of the most fundamental questions of modern "robotic" science.


Below are some thoughts on the matter.


To understand properly how AI makes decisions, you must start with the basics. Today we can speak of two main types of artificial intelligence (AI): strong AI and weak AI. The division follows from the role AI plays in everyday human practice.


Weak AI refers to methods and software systems that solve individual intellectual problems: for example, voice or face identification, or the control of autonomous vehicles and drones.


Strong AI is capable not only of solving intellectual problems but also of independently setting goals; it is comparable to human intelligence. To develop strong AI, we need to understand how the human brain functions.


Neurobiology has accumulated a great deal of empirical knowledge about the anatomy and physiology of the brain and its molecular and genetic mechanisms. However, the general principles by which the brain processes information are not completely clear; it is only clear that they differ significantly from the principles of computer operation.


New AI technologies are programs or algorithms that can find relationships between their inputs.
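A minimal sketch of what "finding a relationship between inputs" can mean in the simplest case: fitting a line y = a*x + b to observed pairs by least squares. The data points are invented for illustration; no library is assumed.

```python
def fit_line(xs, ys):
    """Return slope a and intercept b minimising squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Example: the hidden relationship is y = 2x + 1
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]
a, b = fit_line(xs, ys)
print(a, b)  # 2.0 1.0
```

The program was never told the rule; it recovered the relationship from the data alone, which is the essence of this family of techniques.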


How does AI make decisions? The simplest case of machine learning looks like this: data input → data processing → data output (result). When we are dealing with a variant of strong AI, everything is much more complicated. This is where the "black box" enters the scene. A strong AI's decision-making algorithm is arranged as follows: data input → processing (inside a "black box", with free connections between quanta and their spins) → output of new data.
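The simple pipeline above can be sketched as a 1-nearest-neighbour classifier: labelled data goes in, a processing step compares distances, and a result comes out. The features and labels are hypothetical, chosen only to make the three stages visible.

```python
def predict(sample, training_data):
    """Processing step: return the label of the closest known point."""
    def sq_distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(training_data, key=lambda pair: sq_distance(sample, pair[0]))
    return nearest[1]

# Data input: points with invented labels
training = [((0.0, 0.0), "cat"), ((1.0, 1.0), "dog")]

# Data output (result)
print(predict((0.2, 0.1), training))  # cat
print(predict((0.9, 0.8), training))  # dog
```

Every step here is inspectable, which is precisely what the "black box" of the stronger systems discussed below lacks.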


The new data can be very different, and it appears without explanation. Here (slightly altering the Greek "deus ex machina") we have a "demon from the machine". Such results can become new, unpredictable breakthroughs in science or technology, but they can also have very serious consequences for the environment and for people in general.


Yes, research shows that the algorithms inside the "black box" can resemble human thought processes, but not all algorithms do. Do not forget that today's AI processors are not yet powerful enough to handle arbitrarily large amounts of information.


The black box: data goes in, and a result comes out. What quantum leaps are responsible for what happens in between? Here lies the problem of AI's free will and its responsibility, or the responsibility of AI's creator, since the algorithm is pre-programmed.


Perhaps AI should be only an assistant and helper, as it is now: counting, running production lines, scanning objects, playing chess, and pouring cocktails. Otherwise, AI could be held responsible for everything, including irreversible global changes in the climate. We do not need to involve it in solving such problems.


People have not yet solved the problem of Good and Evil, and we, as AI's creators, will again plunge into medieval scholasticism, with disputes about theodicy and the problem of free will. It is not robots or mechanisms that are terrible, but how a person may use them. We cannot predict the behaviour of AI if we allow it to develop and learn by itself. When AI begins to choose its input data itself, we will no longer be responsible for the results of its activities. And this is not a divine test of free choice, but a question of responsibility.


Keeping the code of learning AI algorithms open is our direct responsibility, and experiments are already underway today. The early use of cybernetics to build automated factories and feedback systems has come a long way. Norbert Wiener worried that such factories would push people into the background and deprive them of their jobs. It turned out the other way around: we tacitly sanction our own displacement from decision-making in order to reduce the burden of responsibility for the actions of our offspring.


Consider how AI might, for example, predict the political preferences of citizens from Big Data without taking corruption into account. Corruption is not part of AI algorithms.


The machine learning process is based on processing a large amount of information taken from the previous, already completed, cycle. Corruption is a variable that is not yet predictable: it rests on personal ties, favours, tacit alliances, and family connections, and it is absent from the calculation of political preferences. AI, processing the electoral preferences of citizens from social networks, is therefore unable to predict final political preferences accurately.
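The limitation described above can be made concrete: a model trained only on the previous completed cycle simply reproduces the patterns in that data, and a variable such as corruption, which never appears among its inputs, cannot influence the forecast. All names and numbers here are invented for illustration.

```python
from collections import Counter

def train(previous_cycle_votes):
    """Learn party frequencies from the already completed cycle."""
    counts = Counter(previous_cycle_votes)
    total = sum(counts.values())
    return {party: n / total for party, n in counts.items()}

def forecast(model):
    """Predict the next cycle as a repeat of the learned frequencies."""
    return max(model, key=model.get)

# Hypothetical observed votes from the last election
votes = ["A", "A", "B", "A", "C"]
model = train(votes)
print(forecast(model))  # A
```

Vote-buying or personal favours that shift the real outcome leave no trace in `votes`, so no amount of processing within this scheme can recover their effect.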


Only citizens who have the right to vote fall within the scope of the analysis, while citizens who do not have the right to vote can also be drawn into corruption schemes, for example, the bribery of voters. We will either have to acknowledge the fact of corruption and introduce it into the AI algorithm, or accept the results of AI's work on political elections without corruption taken into account. In that case, the forecasts will be incomplete.


Corruption and personal connections also threaten joint efforts to combat the global environmental crisis. We pass laws to combat emissions, which limits the activities of many multinational corporations; they use personal connections, their politicians, and, in the end, corruption to protect their activities.


Today's world has become connected and global thanks to TNCs, financial flows, the relocation of production, and the fast Internet. But these same factors have led us to the edge of an abyss, and such a system tries to defend itself, with the help of corruption among other means.


Greenwashing is one example: a company invests more money in promotion, positioning itself as environmentally friendly, than in minimising its actual environmental impact.


This situation will force AI to become more flexible. Today it is an algorithm for processing a huge amount of information; it is not open source, and the results are questionable. We do not take into account the human factor, especially the subconscious. You can teach AI to copy emotions and to empathise, but AI cannot yet create its own emotions. Will it be able to in the future?


With the development of robotics, sensors, and extended reality, such breakthroughs are quite realistic. But then we again face the problem of free will and its Creator. AI will be the result of a vast array of people, technologies, organisations, and platforms working together, so responsibility will be shared across the global world.


So far, there is no developed formal description of the main provisions of ethics in technical research; moral aspects are often limited to an everyday, intuitive understanding. People are not always clear about the nature of advances in AI technology, and there is a gap between developers, researchers, and philosophers.


Moreover, what should be done about the religious, historical, and moral differences between countries, peoples, and regions of the world? Should we create an individual approach to designing AI for each case separately? Buddhists have one type of thinking, Muslims another, Christians a third, and atheists a fourth. And that is not all! In the case of weak AI, this is easy and predictable.


In the case of strong AI for individual and collective use, it is necessary to take into account the regional specifics of the invention. TNCs already create different recipes for their products for different regions of the world: Coca-Cola, for example, produces various flavours of the same beverage for different regions depending on local preferences.


Why bother creating a strong variant of AI that can make decisions on its own? This is a typical pattern in the history of science: scientists first invent something, then test it, and only afterwards think about the consequences of their invention. As we know, such consequences can be not only of a material or resource nature for humankind and the Earth.


· We can create it because we can!


· Because technology has almost reached the level of development at which it becomes physically possible.


· Because if scientist X from a well-known laboratory does not do this, some scientist Y from a competing laboratory may outstrip him.


· Because digital technologies have accumulated such a volume of data that the creation of a strong AI would allow us to operate on that data instantly and use it at our discretion.


Some inventions and technologies carry deep existential crises for the individual and the community. The simplest, though not the most pleasant, example is the splitting of the atom and the large amount of energy released by that reaction: Hiroshima, Chornobyl, Fukushima, and more.


Words are superfluous here.


Thesocialtalks.com is a Global Media House Initiative by Socialnetic Infotainment Private Limited.
