
In recent years, psychedelics have made their way from spiritual ceremonies and music festivals to clinical trials for the treatment of addiction, PTSD, and depression. Oregon and Washington, D.C. have already taken steps to decriminalize certain psychedelics, and ketamine and psilocybin clinics are popping up across the US.

“It’s a little bit of the Wild West,” says Sam Freesun Friedman, a senior machine learning scientist at the Broad Institute of MIT and Harvard.

But psychedelics are still largely illegal in the US, in part because of how unpredictable they are. Reactions to different psychedelics vary widely: Some users experience healing or overwhelming euphoria, while others come away with scarring trauma or terror. These reasons, among others, make it difficult for these drugs to get approved by government agencies and make their way into doctors’ offices.

This week, Friedman and researchers from SUNY Downstate Health Sciences University and McGill University published a paper in the journal Science Advances proposing a unique method for better understanding the interaction between hallucinogenic drugs, people’s brains, and different types of psychedelic experiences. They did this by using artificial intelligence to compare real-life accounts of psychedelic experiences with how human brain chemistry engages with those drugs on a molecular level. However, while the researchers’ methods and goals push the envelope on understanding how psychedelics can help or harm individuals, the data they use could be unreliable.

To gather real people’s descriptions of psychedelic trips, the team used a nonprofit website called Erowid that hosts more than 40,000 anonymous, user-submitted accounts of taking psychoactive drugs. For the first dataset, the researchers mined almost 7,000 of Erowid’s written narratives about 27 drugs, including LSD, ketamine, MDMA (also known as molly or ecstasy), and psilocybin (the active compound in magic mushrooms). They then used a natural language processing tool to look for similarities in descriptive wording, both within experiences of the same drug and between different drugs, Friedman says.
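The article doesn’t name the exact language model the team used, but the basic idea of comparing descriptive wording can be sketched with an off-the-shelf approach. Below is a minimal, hypothetical Python example using TF-IDF vectors and cosine similarity; the report snippets are made up for illustration and are not the authors’ pipeline.

```python
# Minimal sketch (not the authors' actual pipeline): compare descriptive
# wording across trip reports using TF-IDF vectors and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for Erowid narratives; real reports run much longer
reports = [
    "waves of euphoria and bright geometric patterns",
    "deep relaxation, mild nausea, then dreamlike visuals",
    "overwhelming terror and a feeling of dissolving into the room",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(reports)   # one row of word weights per report

# Pairwise similarity of wording between reports (1.0 = identical wording)
print(cosine_similarity(X).round(2))
```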

For the second dataset, the authors tapped into past research on how each psychedelic interacts with human brains on a molecular level. Specifically, they looked at binding affinities, which quantify how strongly a drug molecule binds to a particular neurotransmitter receptor. They then used a form of machine learning to find connections and patterns between the neurotransmitter receptors associated with each drug and the sensations people described while taking the substance.
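The article doesn’t spell out the algorithm, but one standard way to link two such tables, per-drug receptor affinities on one side and per-drug word usage on the other, is canonical correlation analysis. The sketch below uses randomly generated numbers purely to show the shape of that kind of analysis; it is an assumption, not the authors’ model.

```python
# Hypothetical sketch: link per-drug receptor binding affinities to per-drug
# word frequencies with canonical correlation analysis (CCA). All values are
# randomly generated stand-ins, not data from the paper.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_drugs, n_receptors, n_words = 27, 40, 500

receptor_affinities = rng.random((n_drugs, n_receptors))  # drugs x receptors
word_frequencies = rng.random((n_drugs, n_words))         # drugs x words

cca = CCA(n_components=8)  # the number of shared factors is a modeling choice
drug_scores, word_scores = cca.fit_transform(receptor_affinities, word_frequencies)

# Each drug gets a score on each shared receptor-experience factor
print(drug_scores.shape)  # (27, 8)
```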

Based on this analysis, Friedman and his collaborators found eight categories of receptor-experience combinations, which he says can be thought of as something like the Big Five personality traits for psychedelic experiences. Just as an individual’s personality might be scored on openness to experience, conscientiousness, extroversion, agreeableness, and neuroticism, the researchers show how each drug or trip could rank on a spectrum of factors such as conceptual versus therapeutic, euphoria versus terror, and relaxation versus nausea.

[Related: What happens when psychedelics make you see God]

These findings point toward a future where scientists could chemically alter a drug to produce the desired experiential effects for patients. For instance, this approach could help preserve the therapeutic effects of a psychoactive drug while minimizing the terrifying experiences often associated with it, Friedman says.

“Finding a data-driven way to structure those experiences to maximize therapeutic benefit, I think is something we can all get excited about,” he adds. 

But the foundations on which the study was built are faulty, says Bryan Roth, a professor of pharmacology at the University of North Carolina School of Medicine and the director of the National Institute of Mental Health’s Psychoactive Drug Screening Program (NIMH-PDSP). While Roth thinks the paper’s methods pose an “interesting idea,” he says both the Erowid narratives and the biological data are unreliable, and so, by extension, are the paper’s conclusions.

To start, Roth says that Erowid doesn’t verify the chemical makeup of the drugs described in each narrative. “In a large number of cases, the drugs that are purchased from the street are not the drugs that the person thought they bought, particularly when it comes to psychedelic compounds and hallucinogens,” he explains. As an example, Roth points to how two US Military Academy cadets recently overdosed on cocaine that was actually laced with fentanyl.

This, according to Roth, presents a problem when trying to draw connections between the narrative data and how each drug behaves in the brain: the study could end up attributing the words of someone who took mislabeled MDMA to the effects of the actual, pure compound. Erowid itself even runs an independent lab program that analyzes samples of drugs bought on the street. In 2021, it analyzed 747 drug samples sold as MDMA; a quarter of those samples contained other compounds or no MDMA at all.

Friedman agrees that street drugs can be impure or mislabeled, but he says there’s no reason to believe the inaccuracies are prevalent enough to cast doubt on the paper’s findings. His team checked the narrative data by stratifying it by gender and age to see whether that skewed the results, and concluded that the findings from the subcategories were still highly consistent with those from the dataset as a whole.
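As a rough illustration of that kind of robustness check, the sketch below re-runs a placeholder analysis within demographic subgroups and compares each subgroup’s result to the full-dataset result. The fit_model function and the demographic columns are hypothetical stand-ins, not the team’s actual code.

```python
# Hypothetical robustness check: re-run the analysis within demographic
# subgroups and correlate each subgroup's factor scores with the full-sample
# result. `fit_model` is a placeholder, not the paper's actual pipeline.
import numpy as np
import pandas as pd

def fit_model(df: pd.DataFrame) -> np.ndarray:
    """Stand-in for the receptor-experience analysis; returns 8 toy factor scores."""
    rng = np.random.default_rng(len(df))
    return rng.random(8)

def check_consistency(reports: pd.DataFrame, column: str) -> None:
    full = fit_model(reports)
    for group, subset in reports.groupby(column):
        r = np.corrcoef(full, fit_model(subset))[0, 1]
        print(f"{column}={group}: correlation with full dataset = {r:.2f}")

# Toy dataset with hypothetical demographic labels
rng = np.random.default_rng(1)
reports = pd.DataFrame({
    "text": ["report text"] * 100,
    "gender": rng.choice(["female", "male"], size=100),
    "age_group": rng.choice(["18-29", "30-49", "50+"], size=100),
})

check_consistency(reports, "gender")
check_consistency(reports, "age_group")
```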

The second critique hits a little closer to home for Roth. The Science Advances paper cites a 2010 PLOS One publication by Thomas Ray as one of the two primary sources for its binding affinity data. Ray’s article relied on screening data from NIMH-PDSP, the lab Roth runs, but he says that information isn’t solid enough to support further drug research.

“What we tell [other scientists] is, if they want to publish the data, then we need to replicate it at least three times to make sure the values are correct,” Roth explains. He notes that he told Ray that NIMH-PDSP didn’t have the resources to replicate the data to confirm its accuracy. Roth had spotted several incorrect values himself, and therefore didn’t think the binding affinities should be accepted as fact.

“He published it anyway,” he says. Friedman responds that his team was not aware of Roth and Ray’s conversation, but points out that more than 200 other papers cite the same dataset.

But even if the binding affinity dataset were reliable, it’s the wrong metric to use for the new study, Roth says. Binding affinities don’t show how well a drug activates a neurotransmitter receptor, he explains, so a compound could rank as having a low affinity for a receptor yet still have very high potency. On the flip side, a psychedelic compound could have a high affinity for a certain receptor but end up blocking it, Friedman says.

[Related: The tasty chemicals flavoring the edible cannabis boom]

Friedman also agrees that binding affinities don’t tell the whole story, and that data more directly representing how a psychedelic compound interacts with receptors would be a huge advance for future research. However, he asserts that the paper’s findings are still relevant, and that the statistical and AI tools his team used were purposefully chosen to filter through the “noise,” or inconsistencies, in the data to find patterns.

“The study is motivated by the question of what we can find despite [the noise],” Friedman writes in an email. “The large number of confirmatory findings … convinced us that there is a signal to be found amidst the noise.”