Will Ethical Journalists Remain Indispensable Under the Threat of ChatGPT? Probably Not!

In the fast-paced world of journalism, speed and efficiency are paramount to meet deadlines and deliver news to the public. But recent advancements in artificial intelligence (AI), particularly the popular ChatGPT platform, have raised concerns among journalism professionals about the potential impact on their profession.


ChatGPT, a state-of-the-art generative AI platform, has garnered attention for its ability to generate human-like text quickly and convincingly. According to a study by Tooltester, an impressive 63.5% of respondents were unable to distinguish between articles written by ChatGPT and those penned by human journalists, across various topics including health, entertainment, tech, finance, and travel.


Though it is more powerful than ever, Forbes has reported ten limitations of ChatGPT, including typos and biased responses. VC Star asked ChatGPT to write a news story and found that "all of the sources, agencies and quotes in this piece are fabricated." Such flaws completely betray journalism ethics and are totally unacceptable in a journalist. In fact, criticism of ChatGPT's ethical flaws, from outlets such as InformationWeek and the South China Morning Post, has persisted since its launch in November 2022. Pavlik (2023, 88) also argued that misleading or malicious information generated by ChatGPT is hard to distinguish, which risks spreading fake news and other detrimental information and exposes individuals and society to serious consequences. Does that mean reporters are safe from being dumped as long as they work ethically?


Experts Say


Wesley Wildman, professor of Philosophy, Theology, and Ethics, and Computing & Data Sciences at Boston University, pointed out that for human beings, virtues are hard to acquire; they come from learning and long-term cultivation. The same goes for AI tools. "Generally, AI is not going to be able to reproduce the internal conscious side, but it may well be able to reproduce the behavioral aspects of being ethical or having virtues," said Wildman. "It just needs to be trained to do so, and then you have a behavioral system that's capable of at least mimicking virtue enough to be able to determine when it's being generous, when it's being diligent or accurate, and when it's not. So being virtuous and ethical is not something that cannot be achieved by AI." But he insisted such training has to be led by human beings. "There will always be reporters, because reporters are needed for training AI reporters, for eliciting information from AI reporters, for editing, evaluating, and making things relevant. All of that is very important, and humans will take responsibility for it for quite a while, at least," said Wildman. "The information generation process, the validation process, and making sure that things are accurate, that, I think, will be handled by humans for quite a long time."


QuanZhou Wu, an AI engineer at Boss Zhipin, an online recruitment platform in China, said: "AI is trained by using algorithms and a vast amount of historical data, including text, images, video, and audio." He stressed that ChatGPT is nothing new to the AI industry, but he was still surprised by the updated ChatGPT's performance. His knowledge of ChatGPT dates back to 2021, when it was still GPT-3. "I've been in the AI industry for seven years since I graduated, and I have to say the development of artificial intelligence is unpredictable and faster than I expected."


Wu, however, doubted that ChatGPT could ever generate output with perfect accuracy. "I don't think there will be a program that is perfect enough to create a perfect AI that won't make mistakes," he said, adding: "It is difficult for AI algorithms to be flawless. Many factors contribute to the mistakes AI makes."


Boaz Barak, professor of computer science at Harvard University, believes ChatGPT's development is going to change the nature of jobs, much as computers did, but that it is a tool rather than a threat. "I think many readers do want to get the journalist's human angle, and they would want human rather than AI-generated content. Also, if an article is getting copied and published in thousands of places (like AP News), then the cost of using a human to write it becomes relatively small, so there isn't much benefit for AI. And if an article is in a local newspaper, then the main thing (which an AI can't do) is going around, reaching out to people, and getting the actual story," said Barak.


Writer Says


So, AI could be trained and regulated to minimize its drawbacks. I suspect training an AI could be easier than training a human, though there may be a long way to go, because human nature is greedy, self-indulgent, and selfish. We study so hard to be moral, yet turn amoral when facing the temptation of fame and wealth. Although Professor Wildman assured me that humans are the leaders, AI may one day be more accountable. I don't think being moral makes journalists irreplaceable, because AI's development is unpredictable. However, I believe human beings are still the best choice for a job that requires dealing with human beings every day. We prefer to chat with real people rather than machines, which is why journalists prefer face-to-face interviews over video or phone interviews. Unlike AI machines, we have emotions and can feel each other, and because of that, journalists can drive people to share their real thoughts.
