Librarians can’t keep up with bad AI

From false sources to hallucinations, it’s become a major problem.
One librarian estimated as much as 15 percent of their reference requests are AI-generated. Credit: Deposit Photos

Generative artificial intelligence continues to have a problem with hallucinations. Although many responses to user queries are largely accurate, programs like ChatGPT, Google Gemini, and Microsoft Copilot are still prone to offering fabricated information. As bad as that is on its own, the issue is further complicated by these AI programs' tendency to produce seemingly reputable, yet wholly imaginary, sources. And as annoying as that is for millions of users, it's becoming a major issue for the people trusted to provide reliable, real information: librarians.

“For our staff, it is much harder to prove that a unique record doesn’t exist,” Sarah Falls, a research engagement librarian at the Library of Virginia, told Scientific American.

Falls estimated that around 15 percent of all the reference questions received by her staff are written by generative AI, some of which include imaginary citations and sources. This increased burden placed on librarians and institutions is so bad that even organizations like the International Committee of the Red Cross are putting people on notice about the problem.

“A specific risk is that generative AI tools always produce an answer, even when the historical sources are incomplete or silent,” the ICRC cautioned in a public notice earlier this month. “Because their purpose is to generate content, they cannot indicate that no information exists; instead, they will invent details that appear plausible but have no basis in the archival record.”

Instead of asking a program like ChatGPT for a list of ICRC reports, the organization suggests engaging directly with its publicly available information catalogue and scholarly archives. The same strategy should be extended to any institution. Unfortunately, until more people understand the fallibility of generative AI, the burden will remain on human archivists.

“We’ll likely also be letting our users know that we must limit how much time we spend verifying information,” Falls warned.

There’s a good reason why librarians have remained an integral part of societies for thousands of years. Unlike generative AI, they’re trained to think critically, search for answers, and most importantly, admit when they’re wrong.

 

 

Andrew Paul

Staff Writer

Andrew Paul is a staff writer for Popular Science.