Ever since artificial intelligence began permeating every area of human life, the question of its potential risk has been raised more explicitly. Every field is now equipped with automated devices and machinery, showing how dependent humans have become on AI for basic tasks. Earlier this year, the Center for AI Safety shared a statement on AI risk: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." This statement has ignited a debate about treating AI as a global existential risk, a prospect once discussed only in science fiction. And now an official AI summit has convened the world's most powerful leaders and developers.
Over the past two centuries, we have seen constant advancement in human development. With the emergence of technology, many notable writers warned of the risks of artificial intelligence long before humans invented it, portraying it as a tool that could undo their existence. Whether it is Mary Shelley's Frankenstein or The Invisible Man by H.G. Wells, both implicitly present mad scientists and their inventions of artificial or alien intelligence as a potential existential risk. Even Isaac Asimov, in the mid-20th century, gave us laws for robots, prompting us to consider where AI should operate and what its limitations should be. This shows how much attention the narrative of AI consciousness has received. In 1968, Stanley Kubrick's 2001: A Space Odyssey built up a perspective on how artificial intelligence could turn human capabilities into a dangerous tool to take control over human life.
So, the question is: is this situation alarming? To answer it, things should be evaluated from a different perspective. At present, the crisis of AI risk is more philosophical than apocalyptic. Humans, self-destructive throughout their history, now feel toward AI the same terror they have long inflicted on each other. This compels them to a sobering thought: if human intelligence is capable of mass destruction on a catastrophic scale, then its own creation, artificial intelligence, could cause a human crisis too.
In this digital age, we are all, more or less, dependent on AI for fast business dealings. Many jobs that humans used to do have already been handed to AI, especially manual labor. According to a report by the IT research company Forrester, 2.4 million jobs will be replaced by AI by 2030. Now, with the emergence of ChatGPT and GPT-4, tech jobs like coding and data analysis, media jobs like journalistic writing, reporting, and content creation, and other finance and marketing jobs are at risk of being replaced by AI. This fast-moving business approach can not only cause an economic crisis for the underprivileged but also aggravate tensions between developing and underdeveloped communities.
AI can also be a manipulative tool for the human mind, as many people around the world do not know how it actually works. This lack of information can lead uneducated people to believe whatever information AI presents to them. Humans are rational: they form judgments from daily experience, evaluating it to build strong opinions on ethical grounds. Unlike humans, AI has no morals or ethics to follow and no mind to understand human behavior subjectively. Even psychology has not yet produced a settled account of the complexity of the human mind. AI's fixed approach will lead the vast majority to question their own judgment unknowingly. We can say that proficient writers of the past already warned humans of the potential risk of AI, but as Aldous Huxley wrote in Brave New World:
“Facts do not cease to exist because they are ignored.”
We can see this happening all around the world: misinformation has taken over the space of facts. This can cause a mental crisis among humanity and block critical thinking.
Even if we set this aspect aside, AI, unlike humans, is not self-aware, yet with rapidly advancing automation it has the capacity to create new faces, new languages, new alternative ideas, and creative content as well. It can produce judgments that appear real without analyzing hidden intentions, because AI thinks and responds faster than humans. So, amid the global risk of war, AI itself can be used by people to cause a catastrophe. When humans are assigned a task, they carry it out with a thought process and adapt when the situation becomes critical. But if an AI were assigned to detonate a nuclear bomb, completing that task would become its sole objective. This rigid approach poses a real risk of human catastrophe.
As Stephen Hawking once said about AI,
"Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don't know. So, we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it."
AI consciousness is a more distant, extreme threat to humanity, but AI's unconsciousness has already created a present threat to the masses, largely unnoticed. No one knows what the future may bring, but the potential risk of AI is now a clear and serious concern, as the AI Summit has shown. The world's most powerful leaders and tech inventors have expressed serious concern over AI and its ethical limitations.
AI itself will not blow up the world, but its rapid, thoughtless use can erode human capabilities, and it can be wielded as a weapon to bring back slavery over humanity in an advanced form. As Jeanette Winterson wrote in Frankissstein, her novel inspired by Frankenstein:
"The sum of all he has learned is from humankind. He is, in other words, a product of machine learning. And suffice it to say that he destroys his maker in the end."