Written by Goldera Surles
Abraham Lincoln said it himself: don’t believe everything you read online just because there’s a picture with a quote next to it. Of course, our former president never said any such thing, but the point stands. Amid the world of honest journalism lies a dark web of fake news, and artificial intelligence sits at its forefront.
Fake news consists of false or misleading reports created to misinform or deceive readers. It comes in many forms, from fabricated news articles to political memes on social media platforms. While some of it can be laughable, it can also damage an organization’s or a person’s reputation. In a study conducted by the Pew Research Center, 94% of participating journalists said that made-up news and information are an enormous concern in America.
That concern extends beyond humans, because artificial intelligence (AI) can also contribute to misleading news. Francesco Marconi, co-founder of the real-time information company AppliedXL, noted in an article that many news publishers already automate content. According to one of its senior journalists, Reuters uses an AI filter called the Reuters News Tracer to verify information about potentially newsworthy events. Agence France-Presse, a French international news agency, uses the AFP Transcriber, which applies artificial intelligence to voice recognition.
While AI appears to be a potent tool for battling inaccurate information and enhancing local news, The Guardian editor Ian Tucker is more skeptical of chatbots, writing that they have a reputation for manufacturing truth and inventing sources. When I tried to fact-check a claim using ChatGPT, an AI chatbot developed by OpenAI, the bot offered inaccurate information. For reporters, AI can be an unreliable source of facts, and its integration into journalism risks confusing journalists worldwide.
There is also a darker side: AI is emerging as a formidable threat to truth in journalism. Its potential for misuse becomes more evident as individuals harness its capabilities for immoral and criminal ends. A CNN Business article reported the detention of a man in China who used AI to spread online rumors. The suspect used ChatGPT to fabricate a news report about a train crash, which he then posted online in hopes of generating revenue.
From an ethical standpoint, the man’s decision defied John Stuart Mill’s principle of utility. According to the book “Media Ethics: Cases and Moral Reasoning,” the crux of this principle is that the way to distinguish right from wrong is to produce the greatest amount of good or happiness (20). The fabricated report promoted no such happiness; it exploited a painful and sensitive subject. In 2011, a bullet train collision in Wenzhou killed 40 people, and authorities came under pressure to explain why state media had failed to respond appropriately. By using AI to create and disseminate misinformation about a train crash, the man immorally manipulated and harmed public opinion.
With this arrest being one of the nation’s earliest criminal cases involving an AI chatbot, a pressing question arises: Is AI a peril to journalism? The ethical integrity of the profession hangs in the balance as AI becomes a news supplement. The case of the man in China illustrates how the intricate web of AI-generated fake news undermines the credibility of journalism. Although AI is not inherently unethical, the decisions of those who use it unjustly can have significant consequences.
AI has had a profoundly transformative effect on the news industry, with capabilities that include large-scale data analysis, automated content creation, and personalized news distribution. To preserve the integrity of journalism, journalists, news organizations, and technology developers must all play a part. Reporters should continue developing news stories the old-fashioned way, because AI-generated stories are not reliably factual, and continued reliance on this software would dim journalism’s beacon of truth. Journalists should uphold honesty and transparency in their reports and consider the broader moral implications of their actions on society. Journalism must remain a force for truth, enlightenment, and human welfare in the face of technology’s dark side.