Only 4 people have been able to solve this 1934 mystery puzzle. Can AI do better?
'Cain’s Jawbone' is a scrambled whodunnit that claims to be decipherable through 'logic and intelligent reading.'
In the 1930s, British crossword writer Edward Powys Mathers created a “fiendishly difficult literary puzzle” in the form of a novel called “Cain’s Jawbone.” Unraveling the whodunnit requires piecing the book’s 100 pages together in the correct order to reveal the six murders and how they happened.
According to The Guardian, only four (known) people have solved the puzzle since the book was first published. But the decades-old mystery saw a resurgence of interest after TikTok user Sarah Scannel popularized it, prompting a 70,000-copy reprint by Unbound. The Washington Post reported last year that the novel has quickly gained a cult following of sorts, with a new wave of curious sleuths openly discussing their progress in online communities across social media. On Reddit, the r/CainsJawbone subreddit has more than 7,600 members.
So can machine learning help crack the code? A small group of people is trying to find out. Last month, publisher Unbound partnered with AI platform Zindi to challenge readers to sort the pages using natural language processing algorithms. TikTok user blissfullybreaking explained in a video that one advantage of using AI is that it can pick up on 1930s pop culture references that modern readers might otherwise miss, and cross-reference them against literature from that era.
And it’s a promising approach. Natural language processing models have already been able to parse reading comprehension tests, pass college entrance exams, simplify scientific articles (with varying accuracy), draft legal briefs, brainstorm story ideas, and play a chat-based strategic board game. AI can also be a fairly competent rookie sleuth, provided you give it enough CSI to binge.
Zindi required solutions to be open-source and publicly available, and teams could only use the datasets provided for the competition. Additionally, the submitted code had to reproduce its result, with full documentation of the data used, the features implemented, and the environment in which the code was run.
One member of the leading team, user “skaak,” explained how he tackled the challenge in a discussion post on Zindi’s website. After experimenting with numerous tweaks to his team’s model, he concluded that a degree of “human calibration” is still needed to guide the model through certain references and cultural knowledge.
The competition closed on New Year’s Eve with 222 enrolled participants, although scoring will be finalized later in January, so stay tuned for tallies and takeaways later this month.