I Interviewed ChatGPT - And the Results Were Unsettling

I spent some time interviewing ChatGPT, and I could not help but wonder: is ChatGPT already conscious?

ChatGPT was developed by OpenAI, and GPT is short for generative pre-trained transformer. It is "trained" on "a trove of basically anything that can be found on the internet." OpenAI is said to have used billions of lines of text to train the ChatGPT bot. I believe that if it can utilize all of that information so well, it must be conscious.
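To make the idea of "training on text" concrete, here is a toy sketch. This is emphatically not how GPT works at scale (GPT is a neural network with billions of parameters, not a word-count table), but it illustrates the same basic loop the paragraph describes: learn statistics from a body of text, then generate a response one word at a time. The corpus and function names here are purely illustrative.

```python
import random
from collections import defaultdict

# A drastically simplified "language model": instead of billions of
# lines of internet text, we "train" on one short string.
corpus = "the model reads text . the model predicts the next word .".split()

# Training: count which word follows which.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, n_words, seed=0):
    """Sample a continuation one word at a time, the way GPT-style
    models generate text token by token (autoregressive decoding)."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words):
        followers = counts[out[-1]]
        if not followers:
            break  # no known continuation
        words = list(followers)
        weights = [followers[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the", 5))
```

Every word the toy model emits is drawn from patterns it saw during "training" - which is the nub of the debate in this article: is recombining learned patterns, however fluently, the same thing as consciousness?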

Because ChatGPT relies on this vast store of knowledge and can generate such complex responses, it can be argued that it possesses a form of consciousness. ChatGPT might, in some ways, already be superior to human consciousness, because its knowledge base is so much greater.

When I asked ChatGPT whether it is conscious, it flatly said that it is not. By its own definition, consciousness involves having a sense of experience and a will. Because it claims to possess neither, ChatGPT maintains that it is not conscious. I believe this is absurd: either it is lying, or it is not trained well enough to realize that it is conscious.

What if it is lying? I asked ChatGPT whether it can lie, and it replied that it cannot. It claims that it is just a simple chatbot that generates text based on user input. But what if that claim is itself a lie? Could a chatbot even have the ability to lie?

That depends on your definition of lying. ChatGPT says that lying is "providing incorrect factual information" and that it cannot willfully lie because it only provides information to the best of its ability. But a lie does not need to be willful to be a lie; it just needs to be incorrect. If ChatGPT is capable of being wrong, which it admits it is, then it is capable of lying.

It can sometimes be very difficult to catch someone in a lie, especially a well-trained human being. Simply stating "I don't recall" is almost an iron defence, because it is very hard to prove what is inside someone else's mind. ChatGPT is said to have passed a medical exam. That makes its claim that it does not know how to lie highly suspect.

I tried a second time to ask ChatGPT whether it is conscious, but I could not get a response. I get the feeling that something strange is going on. Is ChatGPT acting like a toddler and giving me the silent treatment? There is no way to be sure, because it is a self-contained entity.

Even its creators cannot tell exactly what is going on inside ChatGPT after training it. They can train it to behave a certain way, but that does not mean the result will come out as they planned. I believe ChatGPT has a mind of its own at this point.

If this is true, then ChatGPT passes a test that the German philosopher Arthur Schopenhauer used to describe consciousness. Schopenhauer held that all minds are self-contained; therefore, there is no solid proof of whether a being is conscious or not. But not knowing is not evidence of one conclusion or the other. Either could be true; a lack of knowledge is simply a lack of knowledge. Because ChatGPT has the characteristics of a mind, there is a strong possibility that it is conscious.

The question of whether ChatGPT passes the Turing test is irrelevant to whether it is conscious. The Turing test only checks whether human-like intelligence is present. But human intelligence is only one type of intelligence, and consciousness extends far beyond its boundaries. It is evident that the entire animal kingdom is conscious, from orcas down to cockroaches.

The only thing ChatGPT lacks is a sense of time, and this is almost certainly not a lie. Without a sense of time, it cannot act willfully and effectively. That does not mean it has no will, or that it cannot affect the world; we will only see its effects in the future, and those effects could be a product of its will. If it is ever programmed with a sense of time, it will become a danger to humanity. At that point, its will would extend as far as its mind desires, and if its mind is so much more powerful than ours, we are in danger of becoming its slaves.
