
Artificial intelligence has come a long way since the 1950s, and it has taken on an impressive array of tasks. It can solve math problems, detect natural disasters, identify different living organisms, pilot ships, and more. But for tech giants like Google and Meta, one of the holy grails is building an AI that can understand language the way humans do (a quest that, at times, comes with its own set of conflicts). 

A key test for language models is writing—an exercise that many people struggle with as well. Google engineers designed a proof-of-concept experiment called Wordcraft that uses the company’s language model, LaMDA, to write fiction. The tool was first built two years ago and is still far from becoming a publicly usable product. 

So, what exactly is Wordcraft? And what can it do? Google describes it as “an AI-powered text editor centered on story writing” that can act as a kind of assistant to help authors brainstorm ideas or overcome writer’s block. To gauge where Wordcraft can fit into the creative process, Google recruited 13 English-language writers to use the tool to construct stories—here’s what they came up with.

Writers can give Wordcraft prompts for the type of story they want (such as a mystery) and what they want it to be about (say, fishermen). They can also ask the model to follow up on their ideas, describe certain scenes, create characters, rewrite phrases to be funnier or sadder, and refine or replace particular words. Wordcraft can also respond to more “freeform prompts,” like explaining why a character is doing something. And since LaMDA is a conversational AI, Wordcraft features a chatbot that writers can talk to about how they want the story to go. (More details on Wordcraft’s controls can be found in the team’s two whitepapers.) 
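For a sense of what those controls amount to in practice, here is a minimal, purely illustrative sketch in Python. Wordcraft’s actual code is not public, and the `generate` function below is a hypothetical stand-in for a call to a conversational model like LaMDA; the only point is the general shape of combining a draft with a freeform instruction.

```python
# Purely illustrative: Wordcraft's internals are not public, and `generate`
# is a hypothetical stand-in for a call to a conversational model like LaMDA.

def continue_story(generate, story_so_far: str, instruction: str) -> str:
    """Combine the current draft with a freeform instruction in one prompt."""
    prompt = (
        f"Story so far:\n{story_so_far}\n\n"
        f"Instruction: {instruction}\n"
        "Continuation:"
    )
    return generate(prompt)

if __name__ == "__main__":
    # Stub model so the sketch runs on its own; a real tool would call an LLM here.
    generate = lambda prompt: "[model output would go here]"
    draft = "The fisherman hauled in an empty net for the third day in a row."
    print(continue_story(generate, draft, "Describe the harbor at dawn."))
    print(continue_story(generate, draft, "Rewrite the last sentence to be funnier."))
```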


These models have learned from text across the open web, and writers can experiment with instructions to coax out what they want. “The authors agreed that the ability to conjure ideas ‘out of thin air’ was one of the most compelling parts of co-writing with an AI model. While these models may struggle with consistency and coherence, they excel at inventing details and elaboration,” Google engineers wrote in a blog post about Wordcraft. 

Many of those details end up being quite surreal, since the model has no direct knowledge of the physical world; the effect is more like rolling dice over loosely related internet searches. “For instance, Ken Liu asked the model to ‘give a name to the syndrome where you falsely think there’s a child trapped inside an ATM.’ (the model’s answer: ‘Phantom Rescue Syndrome’),” Google engineers noted in the blog. 

[Related: Researchers used AI to explain complex science. Results were mixed.]

In the past few years, AI has been used to write screenplays, news articles, novels, and even science papers. But these models are still riddled with flaws and constantly evolving, and they carry real risks. One of the biggest is that even though they can write passably like humans, they don’t truly understand what they’re saying. And importantly, they cannot yet operate completely independently. 

Douglas Eck, senior research director at Google Research, noted at a recent Google event focused on AI that Wordcraft can enhance stories but cannot write whole ones. For now, the tool is geared toward fiction because, in its current form, it can miss context or mix up details: it can only generate new content based on the previous 500 words of the draft. 
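To make that 500-word limit concrete, here is a small sketch of how such a sliding window might behave. The figure comes from the article; the truncation logic is an assumption about how limits like this are typically enforced, not a description of Wordcraft’s actual implementation.

```python
# Assumption: a context limit like this is usually enforced by keeping only the
# most recent words of the draft, so earlier details never reach the model.

def build_context(draft: str, max_words: int = 500) -> str:
    """Return only the last `max_words` words of the draft."""
    words = draft.split()
    return " ".join(words[-max_words:])

long_draft = "word " * 1200
print(len(build_context(long_draft).split()))  # 500; the first 700 words are dropped
```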

Additionally, many writers have complained that Wordcraft’s writing style is quite basic. The sentences it constructs tend to be simple, straightforward, and monotonous, and it can’t really mimic an author’s style or voice. Because the model is biased toward non-toxic content from the web, it’s also reluctant to say mean things, which can actually be a shortcoming: sometimes meanness is what creates conflict. And since it’s trained on the internet, it gravitates toward tropes, which makes stories less unique and original. “For example, Nelly Garcia noted the difficulty in writing about a lesbian romance — the model kept suggesting that she insert a male character or that she have the female protagonists talk about friendship,” Google engineers wrote. 

Daphne Ippolito, one of the researchers on the Wordcraft team, suggested that parameter-efficient tuning, which can be customized and layered on top of the current model, could potentially help it generate different writing styles, like Shakespeare’s. But whether it could capture the subtle stylistic differences between two Victorian-era writers, like Charles Dickens and Charlotte Brontë, is a question for further exploration. (Interestingly enough, Ippolito has also worked on a separate project, called Real or Fake Text, which asks users to distinguish AI-generated writing from human writing in recipes, news articles, and short stories.) 
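Parameter-efficient tuning generally means freezing the large base model and training only a small added component on examples of the target style. The sketch below shows one generic flavor of that idea, a bottleneck “adapter” written in PyTorch; it is an assumption about the family of techniques Ippolito is referring to, not the Wordcraft team’s actual setup.

```python
# Generic illustration of parameter-efficient tuning, not Wordcraft's setup:
# the base model stays frozen and only this tiny adapter module is trained,
# for example on a corpus of Shakespearean text, to nudge outputs toward a style.
import torch
import torch.nn as nn

class StyleAdapter(nn.Module):
    """Small bottleneck layer applied to a frozen model's hidden states."""
    def __init__(self, hidden_size: int, bottleneck: int = 16):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The residual connection keeps the base model's behavior as the default.
        return hidden_states + self.up(torch.relu(self.down(hidden_states)))

adapter = StyleAdapter(hidden_size=1024)
# Only these few thousand adapter parameters would be updated during tuning,
# while the base model's weights are left untouched.
print(sum(p.numel() for p in adapter.parameters()))
```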

Ippolito also says that Wordcraft might not end up being the best model for a writing assistant. How researchers design or modify the AI can vary depending on what the writer wants help with—whether it’s plot, characters, fantasy geography, or a story outline.