#TrendingNews
AI and the Future of Journalism

Image via www.vpnsrus.com 

Earlier this week, photos of former President Donald Trump being arrested went viral online. 

All of the photos, however, were fake. 

They were AI-generated, created using artificial intelligence tools. One showed police chasing Trump, another showed officers lifting him off the ground, and a third showed him being walked to a police car. Many flocked to social media sites like Twitter to discuss the images. 

They struck some as genuine because Trump was facing a possible indictment over hush money paid to Stormy Daniels, an adult film actor, to cover up an alleged affair. In addition, the former president is known for theatrics - look at his rallies - so people assumed these photos were within the realm of possibility. However, a closer look at the images reveals hints that they were not real. 

One of the primary ways to identify an AI-generated image is to look at the hands of the people in the photo. According to a BBC article, AI expert Henry Ajder says the technology still cannot accurately render certain body parts, such as hands. The same article recommends checking reputable news sites as a way to avoid being tricked. Still, this raises the question of how much impact AI could have on the future of photojournalism and journalism more broadly.

While it may seem like a doom-and-gloom issue, AI technology is growing rapidly and affecting numerous fields. Artists have been warning about and fighting against AI art for months, and students have been using ChatGPT to write essays. On TikTok, there is a video trend of former presidents playing video games, with their voices entirely AI-generated. There is also the problem of deepfake pornography, which has been around for years.

Some argue that this is simply the next stage in humanity's technological advancement, and that AI should be used, not feared. But that position quickly raises ethical questions. AI art generators work by compiling the work of many artists, often without their permission. And with AI-generated images readily available, entire events could be fabricated. Even if everyone remains vigilant, it would take only one of these images slipping past an editor to destroy trust earned over years.

Photojournalism is often described as 70 percent being in the right place at the right time, 10 percent luck, and 20 percent skill. AI, however, has the power to change that. Feed a generator a few keywords, and out pops an image. Right now the results may not be perfect, but in a few months, who knows?

The future of many industries, journalism among them, looks bleak. While this is partly thanks to the antagonistic role of the previous presidency, artificial intelligence has now entered the ring. The Trump administration's efforts to paint the media as liars hurt journalism. Right-wing news organizations such as Fox reinforced that narrative; as Stephen Hayes of The Dispatch says in a New York Times article, Fox practices "affirmation journalism," despite the "good journalists" there. But now, anyone can push an agenda with AI. Even after these "photos" of his arrest went viral, Trump uploaded another AI-generated image of himself kneeling in prayer. 

The world is now in an unprecedented technological era. With so many accessible and convenient tools at our fingertips, everything is changing - even newsgathering. In the past, news companies did not exist to tell their audiences what they wanted to hear, and for the most part, they still do not. At the same time, neutrality in journalism is becoming a thing of the past, largely because of social issues such as racism, social inequality, and homophobia that have slowly been rising. So many lives are affected by these issues that it is becoming impossible to justify presenting both sides of an argument when one side bears the harm and the other is biased. Throwing AI into the mix makes an already tenuous situation tenser. Fake images and AI-generated voices have the power to push communities toward harmful actions. 

As journalists, it is our duty to the people to minimize harm. We need to make information accessible and readily available to prevent misinformation. As the New York Times article mentioned above says, one way to do this is to stop putting pieces behind a paywall, because that model is no longer working. The business model must also be tweaked to better present information to the public. This, of course, leads to the problem of how to engage the disengaged. 

Furthermore, the last point from that same NYT article is about supporting local journalism. This would help attract people who do not regularly consume the news, because people care about issues that affect them. Putting more reporters on the ground in neighborhoods, finding and reporting on what affects residents, is an excellent way to bring them to the table. 

In an age where technology has evolved so much and regulations seem inadequate, artificial intelligence needs to be regulated. Laws should be written with experts' advice to prevent the misuse of this technology. In the meantime, however, journalists must be extra vigilant and in touch with their local communities, ensuring information is accessible. Otherwise, attempts to prevent AI harm will overcorrect - as is happening with TikTok currently - while mostly floundering in the dark.


Edited by: Whitney Edna Ibe

