Your Flickr photos could help scientists keep tabs on wildlife

A new method of “passive citizen science” is seeking to discover plants and animals that you may not have known were in your photos.
Amateur photographers could help citizen science. Alec Cooks / Unsplash


Have you ever had an animal, or even a plant, photobomb pictures that you then shared publicly on social media? Even if you didn’t mean to, you may be helping conservation scientists find out more about where the wild things are. As ecosystems rapidly change, scientists are scrambling to understand how different plants and animals are responding, and whether they’re staying put or popping up elsewhere.

A group of researchers from Cardiff University in Wales set out to see if an online community photography hub like Flickr could be used to track invasive and domestic species distribution throughout the United Kingdom. Their case study was published last week in the journal PLOS ONE.

Thomas Edwards, a computer science researcher at Cardiff University and an author on the paper, says that he and his colleagues became interested in using social media for “passive citizen science” upon learning that many conservation organizations struggled to get enough funding to hire experts to go and do extensive field observations. 

[Related: These free-floating robots can monitor the health of our oceans]

The project had unlikely roots: Some groups at Cardiff at the time had been using social media to detect riots in crowds. “We were talking about bringing a similar thing over, because there’s even more people taking photos of a house or cat or ivy. There’s lots of people who go on nature walks,” Edwards says. “This is all very good information, and with social media, it’s all geo-tagged up. You can pin it down almost better than [with] traditional studies.” And even if a picture is focused on a specific animal, like a pigeon, there might be other species lurking in the background that no one’s thought of tagging. 

Active citizen science has grown to become an incredible resource for gathering ecological information. “The downside of it is that it does rely on these drives, and someone’s being paid somewhere to set up a campaign to do it,” Edwards explains. “You get nugget groups who will give you very beautiful data, but you’re not getting Aunty Doris from down the road, taking a photo of a bird in her back garden.” 

[Related: A quarter of new invasive species were spotted by everyday citizen-scientists]

On the other hand, with passive citizen science, researchers can comb through public platforms like Flickr to collect as many occurrences of wildlife (both intentional and unintentional) as possible. “You will get an awful lot of garbage with it, but you’ll get a lot more data. That’s where approaches like this and classification steps in because then you can filter through it,” Edwards adds. “It may be not as nice as a perfect campaign where you’re vetting all the individuals doing it, but you still get a lot of data, and it’s still accurate enough that it gives you a benefit.” 

In 2021, there aren’t as many people using Flickr as there once were. But the platform provided Edwards and his team with a nice starter set to test their program on, because Flickr is focused on photography, and photographers tend to input a lot more detail along with their images, like tags of what appears in the photo and when and where it was taken. Flickr is also connected with the citizen science app iNaturalist, which many users have compared to Pokémon Go, but for real animals.
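To give a rough sense of the metadata involved, here is a minimal sketch of pulling geotagged, tagged photos through Flickr’s public API with the third-party flickrapi Python client. The API key, bounding box, and search parameters are placeholders for illustration, not details from the study.

```python
# Minimal sketch: pull geotagged, tagged photos from Flickr's public API.
# Uses the third-party "flickrapi" client; the key and search parameters
# below are placeholders, not values from the Cardiff study.
import flickrapi

flickr = flickrapi.FlickrAPI("YOUR_API_KEY", "YOUR_API_SECRET", format="parsed-json")

# Search for photos that carry geolocation data, asking Flickr to return
# each photo's tags, coordinates, and the date it was taken.
results = flickr.photos.search(
    bbox="-8.65,49.84,1.77,60.86",   # rough bounding box around the UK
    has_geo=1,
    extras="tags,geo,date_taken",
    per_page=100,
)

for photo in results["photos"]["photo"]:
    print(photo["id"], photo["latitude"], photo["longitude"], photo["tags"])
```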

For this study, Edwards and his team wanted to primarily focus on whether the algorithm could correctly detect if there was a wildlife observation in the photograph. “It doesn’t matter if it’s tagged wrong,” he says. “If we can get that, we can continue later on and work out what it is.” 

Combining the Google Cloud Vision API with the National Biodiversity Network Atlas

The researchers used the Google Cloud Vision API, which works like a reverse image search: it returns keywords, or labels, that describe the content of a photo. Google has previously worked with wildlife conservation organizations to create an AI-powered, cloud-based platform that helps classify animal species in images. 
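As an illustration of that step, the sketch below asks the Cloud Vision API for labels on a single image using Google’s Python client; the image URL and score threshold are assumptions for the example, not values from the paper.

```python
# Minimal sketch: ask the Google Cloud Vision API for labels describing an image.
# The image URL and the score cutoff are illustrative assumptions.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
image = vision.Image(source=vision.ImageSource(image_uri="https://example.com/photo.jpg"))

response = client.label_detection(image=image)

# Each returned label has a human-readable description and a confidence score.
labels = [
    (label.description, label.score)
    for label in response.label_annotations
    if label.score >= 0.5  # keep reasonably confident labels only
]
print(labels)  # e.g. [("Bird", 0.97), ("Beak", 0.91), ("Tree", 0.73)]
```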

The species names and the images are then matched against about 1,500 animals and plants in the National Biodiversity Network (NBN) Atlas, the largest dataset of species-distribution information in the UK. This collection centered on the best-represented species, as well as the most common invasive species. 

“We’re using Google to generate a bunch of tags, then our coding comes in and determines: is this wildlife or is it not?” Edwards explains. “We’re taking the tags and the locations and what our algorithm finds is the probability that this is going to be genuine or not genuine.” 
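The sketch below is only meant to illustrate that general idea: it matches Vision API labels against a list of species names and applies a simple confidence cutoff. The function, the scoring rule, and the threshold are stand-ins, not the authors’ actual model.

```python
# Illustrative sketch only: decide whether a photo's labels look like a genuine
# wildlife observation by matching them against a species list (e.g. names drawn
# from the NBN Atlas). The scoring rule here is a stand-in, not the paper's model.
def looks_like_wildlife(labels, species_names, threshold=0.6):
    """labels: list of (description, score) pairs from the Vision API.
    species_names: set of lower-cased species/common names to match against."""
    matches = [
        (desc, score)
        for desc, score in labels
        if desc.lower() in species_names
    ]
    if not matches:
        return False, None
    # Take the best-scoring species match as the candidate identification.
    best_name, best_score = max(matches, key=lambda m: m[1])
    return best_score >= threshold, best_name


species = {"osprey", "red kite", "grey squirrel", "japanese knotweed", "ivy"}
labels = [("Bird", 0.97), ("Osprey", 0.88), ("Tree", 0.73)]
print(looks_like_wildlife(labels, species))  # (True, "Osprey")
```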

Of course, Edwards and his co-authors checked the algorithm’s work, and sometimes excluded tags so the algorithm wouldn’t get too distracted by non-wildlife objects in the photo. They each went through thousands of images manually, but overall, what they verified seemed to line up with what the classifier said. The Google Cloud Vision API itself is not perfect: for example, it could not tell the difference between a 10-spot ladybug and a 22-spot ladybug, and it could not distinguish between certain look-alike species, such as cuckoos and sparrowhawks. 

[Related: Birders behold: Cornell’s Merlin app is now a one-stop shop for bird identification]

Also, some weird mysteries had to be solved by human brains. “There’s a rugby team in Swansea called the Ospreys. Swansea is a region where an osprey has never been seen before, and suddenly, we’re getting all these tags for ospreys, and it’s like oh no!” says Edwards. But the researchers can filter out those tags as junk rather than genuine wildlife occurrences, and the algorithm will learn from that.
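One simple way to encode that kind of human correction, shown purely as an illustration and not as the team’s implementation, is a per-region exclusion list that strips known junk tags before classification.

```python
# Illustrative sketch: drop tags that humans have flagged as junk for a region,
# such as "ospreys" in Swansea referring to the rugby team rather than the bird.
# The data structure and region names are assumptions for the example.
JUNK_TAGS_BY_REGION = {
    "swansea": {"ospreys"},   # rugby team, not a wildlife sighting
}

def strip_junk_tags(region, tags):
    junk = JUNK_TAGS_BY_REGION.get(region.lower(), set())
    return [t for t in tags if t.lower() not in junk]


print(strip_junk_tags("Swansea", ["Ospreys", "stadium", "seagull"]))
# ['stadium', 'seagull']
```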

In general, people most liked to take pictures of birds that are active during the daytime, and those photos had the best matches with the collections in the NBN Atlas. The species the algorithm had the most trouble identifying were invasive plants like ivies, which usually cling to houses or walls. Even when the researchers altered the tags, the algorithm couldn’t seem to divert its attention from these large structures to the plants growing on them. 

After proving that the algorithm could go through public Flickr photos to correctly identify ambient wildlife with accuracies in the 70-80 percent range, Edwards and his team are thinking of expanding its use to other platforms like Twitter and Facebook, but he imagines that it will be a little challenging to carry out as “people are starting to become a lot more data conscious.” 

The other objective he wants to test is whether this algorithm can be used to track identified species over time. “You can start looking for migration patterns, and project trajectories for them,” Edwards says. “There’s a lot of interest in it, because you can start predicting climate change through where animals are moving before we can see it.” 

 
