Google Earth Engine is taking its closest look yet at how landscapes are changing

The Dynamic World project can provide details on how land use is affected by climate change and human activity.
Iceland topography from above
Google's new AI can take an image like this and break it down by land cover type. Robert Bye / Unsplash

Today, Google Earth Engine is launching Dynamic World, a project that pairs near-real-time maps with a novel deep learning AI model. The model can classify land cover by type (water, urban, forest, crops) at a resolution of 10 meters, or about 33 feet, meaning each pixel represents a 10-by-10-meter square of land. For comparison, the previous state of the art worked at a 100-meter resolution (about 330 feet). 
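The jump from 100-meter to 10-meter resolution means far more pixels describing the same patch of ground. A quick back-of-the-envelope calculation (illustrative only, not from Google's materials) shows the difference:

```python
# Each pixel covers a square `resolution_m` meters on a side.
def pixels_per_sq_km(resolution_m):
    """Number of pixels needed to cover one square kilometer."""
    per_side = 1000 / resolution_m
    return per_side * per_side

old = pixels_per_sq_km(100)  # earlier 100-meter maps
new = pixels_per_sq_km(10)   # Dynamic World's 10-meter maps
print(old, new, new / old)   # 100.0 10000.0 100.0
```

In other words, the new maps pack 100 times as many pixels into every square kilometer, which is what makes it possible to resolve individual fields, city blocks, and river channels.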

Dynamic World is a way for people to observe from space the myriad ways land cover changes on Earth, whether it’s from natural seasonal changes, climate change-exacerbated storms and disasters, or long-term changes that are caused by human activity such as clearing of wild habitats for crops, cattle, or logging. Experts and researchers can use this new project to understand how land cover changes naturally, and flag when some unexpected changes appear to be taking place. 

Users can go to Google’s Dynamic World website to peruse the various datasets and see what the marked-up maps look like. For example, one map shows how water and greenery in Botswana’s Okavango Delta bloom and recede between the rainy season and the dry season.

The map model, which draws satellite imagery from the European Space Agency’s Sentinel-2 mission, can update its global land cover data every 2 to 5 days. Around 12 terabytes of data arrive from the Sentinel-2 satellites every day. From there, the imagery flows into Google’s data centers and Google Earth Engine, a cloud platform built for organizing and relaying Earth observations and environmental analytics. Earth Engine is connected to tens of thousands of computers that process the information and derive insights with computer models before the results become available in the Earth Engine Data Catalog. 

To automatically label how the land in all those satellite images is used, Google needed the help of artificial intelligence. The land cover labeling AI developed for this project was trained on 5 billion pixels annotated by human experts (and some non-experts). In the training data, annotators identified pixels in Sentinel-2 images and assigned each a land cover class (water, trees, grass, flooded vegetation, built-up areas like cities, crops, bare ground, shrub, snow). The team would then present the model with an image that wasn’t in the training set and ask it to classify the land cover types. On the resulting maps, color distinguishes the different land types, while shading conveys probability: the brighter the color, the more confident the model is in its classification. This creates a textural effect where the landscape transitions from land to forest, or land to water. 

[Related: Google Street View just unveiled its new camera—and it looks like an owl]

A detailed description of their dataset has been published in the journal Nature Scientific Data.

“We are making it all available under a free and open license,” Rebecca Moore, director of Google Earth, said in a press call ahead of the announcement. “The datasets are free and open. The AI model is open source.” 

About 10 years ago, Google and the World Resources Institute collaborated on Global Forest Watch, a project aimed at monitoring forest cover to protect these areas while looking for changes from illegal activities such as logging or mining. Now, they’re trying to expand their efforts beyond just protecting and observing one land cover type. 

The idea is to help make sense of the available data out there. “We’ve heard from a number of governments, [and] researchers that they are committed to taking action, but they are lacking environmental monitoring information about what’s happening on the ground so they can create science-based data-informed policies, track the results of their actions, [and] communicate with stakeholders,” Moore said. “The irony isn’t that there isn’t a ton of data. But they’re thirsty for insights. They’re looking for actionable guidance to support the decisions they need to make. And dealing with the raw data in many cases is overwhelming.” 

Google sees Dynamic World as a way to fill that data gap around land use and land cover, describing where fundamental land cover types such as forests, water resources, agriculture, and urban development are located. This type of information, Moore said, can be useful for guiding decisions about sustainable management of scarce natural resources, food, and water. It can also help with questions about how to build disaster resilience, how to deal with sea level rise, where to create protected areas, where to build dams, and what tradeoffs might be required, to name a few examples.