Can journalism assisted by artificial intelligence be trusted?

(Image: AI in journalism. Credit: Getty Images / SEAN GLADWELL)

When asked whether journalists' use of AI would reduce reader trust, Hilke Schellmann offers a blunt, three-word response.

Schellmann is an Emmy award-winning investigative reporter and the author of The Algorithm, a recently published book on how AI has taken over the world of work.

“Yes, it would [reduce reader trust],” Schellmann, a 2022 AI Accountability Fellow at the Pulitzer Center and a journalism professor at New York University, tells Laptop Mag.

Artificial intelligence’s hold on the technological world has never been stronger, with every industry searching for ways to incorporate it. Laptops are no exception: witness the recent launch of Copilot+ PCs and the growing focus on AI in processors from Intel, AMD, Qualcomm, and Apple.

“People turn to news and news organizations because they want the facts. They want verified information,” Schellmann says. “There would be a lot of misfires [and reporters relying on AI] would quote the wrong people.”

It’s no secret that this newfound fascination with AI has made an impact on journalism, from research and document summarization to story brainstorming.

Meanwhile, fears abound that chatbots and Google AI Overviews, by presenting original reporting without sourcing it, will leave news websites without the audience they need to keep running. And AI’s reach into the industry goes beyond that: journalists are now folding these tools into their own work.

PCMag recently highlighted this in an article discussing six ways artificial intelligence has entered journalism. Examples include using AI to find interview subjects, asking chatbots for information that couldn’t be found on Google, and brainstorming new article ideas.

As information sources have grown exponentially with blogs and later social media, it’s become more difficult than ever to discern real journalism from propaganda or, most recently, entire “news” websites created by AI for a few hundred dollars.

Should you trust journalism assisted by AI? 

“ChatGPT is a little bit more limiting because it gets you one result,” Schellmann says. “The problem with GPT is it’s really hard to trust the information because it has so many hallucinations in it.”

According to Schellmann, the most important tenets of journalism are accuracy and “factually correct content,” so “if we really want to employ AI tools, we have to measure how good these tools are at these really important criteria” and then “use these tools according to [those criteria].”


“If the tools aren't 100% accurate,” Schellmann says, then journalists must “understand what the limitations are and then have use cases.” As an example, she highlights Google Pinpoint, a research tool Google built for journalists that can surface key people and locations across large document collections. “If you have 30,000 pages that you got as a dataset, there's no way that you can go through all of them.”

If Pinpoint lists 12 police stations within a requested dataset, even though “you don’t actually know that it’s 100% a fact,” you can “think about the words you’re writing” and convey that it’s a “95% accurate tool.” Schellmann reiterates that “knowing the limitations of tools can be really helpful to understand the different use cases and how valid this information is.”
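To make that concrete, here is a minimal sketch of how a newsroom might put a number like “95% accurate” behind an extraction tool: hand-verify a random sample of its output and report the observed rate. The data below is simulated and the workflow is an assumption for illustration, not a description of how Pinpoint works.

```python
import random

# Hypothetical example: estimating how accurate an extraction tool is
# by hand-checking a random sample of what it pulled from a document set.

# Suppose the tool extracted 12,000 location mentions; nobody can verify
# all of them, so we verify a random sample and report the observed rate.
extracted = [f"location_{i}" for i in range(12_000)]  # stand-in for real output

sample = random.sample(extracted, 200)  # a manageable spot-check

# In practice, a human checks each sampled item against the source pages;
# here the verdicts are simulated to keep the sketch self-contained.
verified = {item: random.random() < 0.95 for item in sample}

accuracy = sum(verified.values()) / len(verified)
print(f"Spot-checked accuracy: {accuracy:.0%} "
      f"({sum(verified.values())}/{len(verified)} sampled items correct)")
```

A number arrived at this way describes the tool's observed error rate on one dataset, which is exactly the kind of limitation Schellmann says reporters should disclose in their phrasing.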

Schellmann doesn’t think a Google search results page is a perfect arbiter of data, either. 

“It’s interesting that we, these days, think Google search is sort of an objective way of researching. Because it feels like we, as humans, kind of control it; we control the inputs.

“But obviously, that’s also already an algorithmically sorted list of results. Instead of getting one answer, we get a bunch that we can choose from. But we don’t choose from all of them. We don’t go to the last page. We go to the top and look at the first few hits.”

Jonathan Soma, a data journalism professor at Columbia Journalism School who teaches about the responsible use of AI in the newsroom, has a slightly different perspective on whether AI use will erode trust in journalism.

“For all of the flaws that exist around AI, reader trust is pretty low on a totem pole,” Soma tells Laptop Mag, explaining that the “issues with reader trust that exist in journalism are not a result of AI” but rather of “social and societal issues.” Soma concedes that “it is possible that people would say, ‘Oh, journalists are just using AI. We can't trust them.’”

But that’s not the reason journalism has a trust problem. (Confidence in mass media matched a historic low in a 2023 Gallup poll.)


Soma understands the weaknesses of incorporating AI in the newsroom. “Anything involving truth, AI has no ability to make that sort of judgment call,” he says. Even when summarizing or searching a document, it’s “very easy for these [language] models to hallucinate and make statements that have no grounding in truth but may be statistically plausible.”

“All [these language models] are doing is predicting the next word, which becomes a sentence, a paragraph, a response, and a conversation. And it has nothing to do with the truth.” Soma explains that “if you are using AI tools to search through documentation in order to find an answer or marketing materials in order to find what is interesting,” then you have to “fact check like crazy because there's no ability to judge whether it is accurate or not.”
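Soma’s point about next-word prediction can be illustrated with a toy sketch. The bigram model below, built on a made-up three-sentence corpus, is an enormous simplification of real language models, but it shows the core mechanic: each next word is chosen by statistical likelihood, with no notion of truth anywhere in the loop.

```python
import random
from collections import Counter, defaultdict

# Toy illustration of "predicting the next word": a bigram model built
# from a tiny, invented corpus. Real language models are vastly larger,
# but the loop is the same: pick a likely next word, not a true one.
corpus = (
    "the reporter filed the story . "
    "the reporter checked the facts . "
    "the model invented the facts ."
).split()

# Count which words follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(word, length=8):
    out = [word]
    for _ in range(length):
        counts = following.get(out[-1])
        if not counts:
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
# The output reads fluently but is chosen purely by frequency, e.g.
# "the model invented the facts . the reporter ..." -- plausible, not true.
```

Nothing in that loop checks a fact; it only asks which word tends to follow the last one, which is why fact-checking the output falls entirely on the journalist.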

Soma provides an example of something he does during his talks: “I have a whole schtick where I'm like, ‘Here's what GPT says about me.’ And based on how you ask the question, it'll give different answers. It will talk about things like a master's degree that I do not have. You can ask a follow-up question about where my master's degree came from, and it's like ‘The University of Denver [or] Columbia Graduate School of Journalism.’ All of these places that I definitely don’t have a master’s degree from.”

What’s next 

Whether AI’s use in journalism will negatively affect reader trust seems to be up in the air at the moment. Both experts have doubts about using AI chatbots to gather information, and both say it would take heavy fact-checking to make them workable. Even then, the AI’s biases would still be present.

And even though Google has its own biases, Soma thinks ChatGPT is “much worse,” while Schellmann says, “It's really hard to say until we do large-scale studies and compare Google research to ChatGPT.”
