Your gaming skills could help teach an AI to identify jellyfish and whales

Marine biologists have too many images and not enough time to hand-annotate them all. This new project wants to help.
A barreleye fish from deep-sea video captured by robotic ocean rovers in Monterey Bay. The barreleye is the living version of the galaxy-brain meme. MBARI


Today, there are more ways to photograph the underwater world than anyone could have imagined at the start of the millennium, thanks to ever-improving designs for aquatic cameras. On one hand, they have provided illuminating views of life in the seas. On the other hand, these devices have inundated marine biologists with mountains of visual data that are incredibly tedious and time-consuming to sort through.

The Monterey Bay Aquarium Research Institute in California has proposed a solution: a gamified machine-learning platform that can help process videos and images. It’s called Ocean Vision AI, and it works by combining human-made annotations with artificial intelligence. Think of it like the eBird or iNaturalist apps, but modified for marine life.

The project is a multidisciplinary collaboration between data scientists, oceanographers, game developers, and human-computer interaction experts. On Tuesday, the National Science Foundation showed its support for the two-year project by awarding it $5 million in funding.

“Only a fraction of the hundreds of thousands of hours of ocean video and imagery captured has been viewed and analyzed in its entirety and even less shared with the global scientific community,” Katy Croff Bell, founder and president of the Ocean Discovery League and a co-principal investigator for Ocean Vision AI, said in a press release. Analyzing images and videos in which organisms interact with their environment and with one another in complex ways often requires manual labeling by experts, a resource-intensive approach that is not easily scalable.

[Related: Why ocean researchers want to create a global library of undersea sounds]

“As more industries & institutions look to utilize the ocean, there is an increased need to understand the space in which their activities intersect. Growing the BlueEconomy requires understand[ing] its impact on the ocean environment, particularly the life that lives there,” Kakani Katija, a principal engineer at MBARI and the lead principal investigator for Ocean Vision AI, wrote in a Twitter post.


Here’s where artificial intelligence can come in. Marine biologists have already been experimenting with AI software to classify ocean sounds, like whale songs. The idea behind Ocean Vision AI is to create a central hub that collects new and existing underwater visuals from research groups, uses them to train an organism-identifying algorithm that can tell apart, say, the crab from the sponge in frame, and shares the annotated images with the public and the wider scientific community as open data.
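To give a rough sense of what that training step involves, here is a minimal sketch, not MBARI’s actual pipeline, of fine-tuning an off-the-shelf image classifier on annotated frames that have been sorted into folders by organism. The folder names and training settings are hypothetical.

```python
# Minimal sketch: fine-tune a pretrained classifier on annotated ocean frames.
# Not MBARI's pipeline; the folder layout ("crab", "sponge", ...) is hypothetical.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Each subfolder of annotated_frames/ holds images labeled with one organism.
dataset = datasets.ImageFolder("annotated_frames", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a model pretrained on everyday photos, swap in a new output layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```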

[Related: Jacques Cousteau’s grandson is building a network of ocean floor research stations]

A key part of the equation is an open-source image database called FathomNet. According to NSF’s 2022 Convergence Accelerator portfolio, “the data in FathomNet are being used to inform the design of the OVAI [Ocean Vision AI] Portal, our interface for ocean professionals to select concepts of interest, acquire relevant training data from FathomNet, and tune machine learning models. OVAI’s ultimate goal is to democratize access to ocean imagery and the infrastructure needed to analyze it.”
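FathomNet’s imagery can also be pulled programmatically. As a rough sketch, assuming the open-source fathomnet-py client and its images.find_by_concept helper (exact function names may differ between releases), fetching annotated imagery for a single concept might look like this:

```python
# Rough sketch of pulling training imagery for one concept from FathomNet.
# Assumes the fathomnet-py client (pip install fathomnet); names may vary by release.
from fathomnet.api import images

records = images.find_by_concept("Aurelia aurita")  # moon jelly, as an example concept
print(f"{len(records)} annotated images available")
for record in records[:5]:
    print(record.url)
```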

Ocean Vision AI will also have a video-game component that serves to engage the public in the project. The video game the team is developing “will educate players while generating new annotations” that can improve the accuracy of the AI models. 
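How those player answers might feed back into the models is easy to sketch. One simple, hypothetical aggregation scheme, not necessarily the one the team will use, keeps a crowd-generated label only when a clear majority of players agree on it and routes everything else to experts:

```python
# Hypothetical aggregation: accept a player-generated label only when a clear
# majority of independent answers agree; otherwise flag the image for experts.
from collections import Counter

def aggregate_votes(votes, threshold=0.8):
    """votes: list of answers for one image, e.g. ["jellyfish", "not jellyfish"]."""
    label, count = Counter(votes).most_common(1)[0]
    if count / len(votes) >= threshold:
        return label   # confident crowd consensus -> usable training label
    return None        # disagreement -> send to an expert reviewer

print(aggregate_votes(["jellyfish", "jellyfish", "jellyfish", "not jellyfish", "jellyfish"]))
```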

Although the game is still in prototype testing, a sneak peek appears in a video NSF posted to YouTube: an interface asks players whether the photo they just saw contained a jellyfish, with reference images of jellyfish shown at the top of the screen.

Here’s what the current timeline for the project looks like. By next summer, the team expects the first version of FathomNet (now in beta) to be live with a preliminary set of data. In 2024, the team will start exporting machine learning-labeled ecological survey data to repositories like the Global Biodiversity Information Facility and will explore a potential subscription model for institutions. During this time, modules of the video game will be integrated into other popular games as well as museum and aquarium experiences. After field testing different versions, the team will finalize the design and release a standalone, multiplatform game in late 2024.

“The ocean plays a vital role in the health of our planet, yet we have only observed a tiny fraction of it,” Katija said in a press release. “Together, we’re developing tools that are urgently needed to help us better understand and protect our blue planet.”
