Raise your hand if you’ve ever stared at an academic paper, brows furrowed, wondering what the authors were trying to say. Bonus points if you’ve pulled up a separate tab to google any jargon that you came across.
The good news is that you’re not alone. A 2017 eLife study found that science papers from the mid-2010s were more difficult to read than papers from the 19th century. Poorly constructed sentences, questionable word choices, unnecessary jargon, and obscure acronyms plague many modern academic papers. All of this can make scientific knowledge hard to access, both for junior researchers in the field and for readers without a science background.
So can artificial intelligence help? A new AI project called tl;dr papers sought to tackle this challenge by using machine learning to comb through the abstracts of wordy research papers and spit out pithy summaries that even a 7-year-old can understand.
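tl;dr papers hasn’t published its code, but a summarizer in this spirit takes only a few lines against OpenAI’s GPT-3 API. Here’s a minimal sketch, assuming the Completion endpoint as it existed at the time; the model name, prompt wording, and settings are illustrative guesses, not the tool’s actual configuration:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def summarize_abstract(abstract: str) -> str:
    # Ask GPT-3 to rewrite an abstract at a child's reading level.
    prompt = (
        "Summarize the following paper abstract for a second grader:\n\n"
        f"{abstract}\n\nSummary:"
    )
    response = openai.Completion.create(
        engine="davinci",   # the original GPT-3 model
        prompt=prompt,
        max_tokens=80,      # keep the summary short and pithy
        temperature=0.5,    # some variety, without too much rambling
    )
    return response.choices[0].text.strip()
```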
The Verge reported that although this program was first created almost two years ago, it went viral over the weekend when academic researchers fed their articles into it and shared the summaries it generated on Twitter. Some of the summaries were shockingly accurate and simple, while others were laughably off the mark.
For example, the AI summarized the concept of “the glass cliff” as “a place where a lot of women get put” and a “bad place to be.” The paper’s author, Michelle Ryan, director of the Global Institute for Women’s Leadership at the Australian National University, told The Verge that while the summary was accurate, it didn’t offer a lot of nuance. Ultimately, she and other researchers The Verge contacted acknowledged that it was a “fun tool” that could show scientists what it looks like to “write in a way that is more reader-friendly.”
Despite its popularity, tl;dr papers’ creators told The Verge that they intend to sunset the product (the site is currently under maintenance) and pointed users toward alternative tools, like the AI summarizer created by the Allen Institute for Artificial Intelligence.
This is not the first time humans have turned to robots to synthesize briefs. In 2017, a company called Primer employed AI to create intelligence reports for spies based on incoming data and information. Two years later, The New York Times reported that AI-powered robot reporters were assisting a number of newsrooms around the world. A robot powered by the same neural network that tl;dr papers runs on even penned an op-ed in The Guardian about the threat of AI.
The neural network in question is called GPT-3, a language-writing AI tool created by OpenAI. It was trained on around 200 billion words, and it has learned to code, blog, and argue. The AI was so impressive that developers are trialing its use on legal documents, customer-service inquiries, text-based role-playing games, and more, Nature reported.
Although its output can at times be funny and uncanny, Vox noted that GPT-3 is not intelligent: it doesn’t really understand the world beyond the text it’s fed, and it works by parsing out the relationships between words and phrases. Nature reported last March that GPT-3’s creators were working on teaching the AI to search for concepts rather than specific words or phrases.
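You can see that word-prediction trick in action without GPT-3 itself, which isn’t publicly downloadable. Its openly released predecessor, GPT-2, works the same way and runs in a few lines with the Hugging Face transformers library; the prompt below is just an example:

```python
from transformers import pipeline

# GPT-2 continues text by repeatedly picking a likely next token; it has
# no model of the world, only of which words tend to follow which.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Scientific papers are hard to read because",
    max_length=40,            # total tokens, prompt included
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```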
But the AI still lacks common sense. That’s why, like most large language models, it can sometimes make mistakes, produce nonsense, give dangerous information, or reproduce harmful biases. Maybe plugging these models into a large database of facts is the solution, Nature proposed. Or perhaps these robo-writers could simply benefit from a human editor.
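What “plugging a model into a database of facts” might look like, very roughly: retrieve relevant reference text first, then hand it to the model alongside the question, so the answer is generated against real sources. A toy sketch, with an invented fact store standing in for a real database:

```python
# A deliberately simplistic fact store and matching rule, for illustration only.
FACTS = {
    "glass cliff": "Women are more likely to be appointed to leadership "
                   "roles when an organization is already in crisis.",
    "gpt-3": "GPT-3 is a large language model released by OpenAI in 2020.",
}

def retrieve(question: str) -> str:
    """Return any stored facts whose key appears in the question."""
    hits = [fact for key, fact in FACTS.items() if key in question.lower()]
    return " ".join(hits) or "No reference facts found."

def grounded_prompt(question: str) -> str:
    """Build a prompt that asks the model to answer from the facts given."""
    return (
        f"Using only these facts: {retrieve(question)}\n"
        f"Answer the question: {question}"
    )

print(grounded_prompt("What is the glass cliff?"))
```

A real system would pass that grounded prompt to the language model, giving it something checkable to work from rather than leaving it to improvise.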