Computer Flood Models Could Save Cities, But They’re Not There Yet
Computer models of flooding can already show the weak spots in a city's defenses after a storm hits. The bigger problem is doing it in advance.
When Hurricane Sandy made landfall this time last year, it trampled through the small city of Hoboken, New Jersey, 70 percent of which is in a flood plain. You’d think anyone could predict what happened next: 90 percent of the city lost power; people were trapped in their homes for days, with downed power lines preventing them from leaving even after the storm subsided; and the mayor, appearing on Anderson Cooper 360, soon made a plea for the National Guard to intervene.
I watched the city flood from safety. Union City, a nearby town, sits on a hill overlooking Hoboken; out in the distance, across the Hudson, you can spy Manhattan. From a cliff and five floors up, you’d think the view of the city below filling with water would be dramatic, but the truth is it’s still abstract; you see the rain falling, and as the storm keeps coming, the water rising above streets and along buildings. You don’t see what’s really happening: that the water is pushing in through the exposed parts of the city.
Alan Blumberg, director of Hoboken’s Davidson Laboratory at the Stevens Institute of Technology, remedied that lack of perspective. Blumberg is a stout, affable man with a knack for the clever metaphor. When we first met in Hoboken last week for a chat about the legacy of Sandy, he spread his arm toward the Hudson and said, “Welcome to my laboratory.”
He tracks storms through sensors and animates the results. Here’s Hoboken in the days before and after Sandy:
You can see the ebb and flow of the tide, the slowly rising water, and, maybe most importantly, the exposed and low-lying areas where water is gathering. Data like that could be a boon to urban planners and other officials hoping to storm-proof their cities against the next inevitable storm. Even if it doesn’t give officials a blueprint for a stormwall against unwelcome weather, as the New York government has suggested, it can tell a city which citizens are most likely to be hit hardest by a storm. And officials could, maybe, even mitigate the damage by prognosticating the effects ahead of time.
Historically, there are three ways computer models like this help researchers. First, there’s the retrospective, which is what Blumberg did with his map of Hoboken. Researchers can model what happened after the fact, giving a “coherent picture” of the event, says Rick Luettich, director of the University of North Carolina’s Institute of Marine Sciences. That means researchers can prep a city for the next storm.
Second, models can be used, without a real storm at all, to show where the most vulnerable areas are. Throw in the statistics for a once-in-100-year storm, and you get at least a general idea of which areas are going to be hit hardest by most storms.
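A “once-in-100-year” storm, it’s worth noting, is a statistical statement, not a schedule: it means a storm with a 1 percent chance of occurring in any given year. A quick back-of-the-envelope calculation (my arithmetic, not the researchers’) shows why planners take those odds seriously over the lifetime of a building:

```python
# Probability that at least one "100-year" storm (1% annual chance)
# hits within a given planning horizon. Illustrative arithmetic only.

def chance_of_storm(annual_prob: float, years: int) -> float:
    """P(at least one event in `years` independent years)."""
    return 1 - (1 - annual_prob) ** years

# Over a 30-year mortgage, a 1-in-100-year storm is far from unlikely:
print(f"{chance_of_storm(0.01, 30):.0%}")  # roughly a 26% chance
```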
The third, and potentially most valuable use for the models, is determining exactly what will happen when a _specific_ storm on the horizon reaches land. Say you’re a researcher who gets word from the National Weather Service about a storm predicted to hit the coast in five days. Using the characteristics of the storm and the city, you could know in advance which areas are going to be hit hardest by that storm, and also how much of a beating the city will take as a whole. You could know which areas to evacuate–maybe, with the right data, you could even figure out if a specific light post or tree would get knocked over. But researchers are still fighting an uphill battle to make those kinds of models accurate.
Storm forecasting is still a messy field, and without meteorologists being able to predict exactly where a storm is going to hit, it’s hard for computer researchers to determine what the storm will do to a city when it hits. Even as more money and better data-gathering techniques are employed to track and predict hurricanes, deciding when and where a storm will hit is still a woefully imprecise science. Luettich points out that in a city like, say, New Orleans, officials would want to know the impact of a storm days–or even weeks–in advance. There are plans to be made: evacuation buses to be lined up, people to be prepped. But although storms can be predicted with more certainty the closer they move to the mainland, at five days out, it’s still only a best guess. “Truthfully, at five days out, you could go from a situation where you had major impacts to a situation where you had no impact at all,” Luettich says.
That’s not what disaster managers or flood-modelers are hoping to hear. “The emergency response folks and the managers and all the decision-makers truly have to be very good at their jobs, and a bit prophetic, to consistently make the right calls,” Luettich says. Worse, the storm predictions aren’t consistent across agencies. A lab in Florida might come up with something that varies considerably from what the National Oceanic and Atmospheric Administration is forecasting, so researchers working with models have to account for a wide range of potential outcomes.
Better flood predictions can come from a mix of those weather forecasts. “But you don’t want to just average them,” Blumberg, whose models have been used for all three of those main purposes, says. The best predictions require a weighted “ensemble”–Blumberg has worked his way up to combining eight forecasts–with different predictions given probabilistic importance in the model. It’s an improvement, definitely, but still reliant on some potentially off forecasts. The best researchers can give is a range of potential outcomes–and as both weather forecasting and computer modeling technology get better, we’ll narrow that range. The Cone of Uncertainty, which shows how our predictions get better as we near the end of something, like a football game or the landfall of a hurricane, will be pushed back farther and farther. We’re just not quite there yet.
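The weighted-ensemble idea is simple enough to sketch in a few lines. This is a minimal illustration of the general technique, not Blumberg’s actual model: each forecast source gets a weight reflecting its assumed track record, the weights are normalized so they act like probabilities, and the combined prediction is the weighted sum. All the source names and numbers below are made up for the example.

```python
# Minimal sketch of a weighted forecast ensemble. Each source's surge
# prediction (in feet) is weighted by its assumed historical skill;
# weights are normalized to sum to 1, so better-performing forecasts
# carry more probabilistic importance in the combined estimate.

def ensemble_surge(forecasts: dict, skill: dict) -> float:
    """Weighted average of surge forecasts using normalized skill weights."""
    total = sum(skill[src] for src in forecasts)
    return sum(forecasts[src] * skill[src] / total for src in forecasts)

# Hypothetical peak-surge forecasts from three (fictional) sources:
forecasts = {"noaa": 9.5, "lab_a": 11.0, "lab_b": 8.0}
# Assumed skill weights: higher means a better record on this coastline.
skill = {"noaa": 0.5, "lab_a": 0.3, "lab_b": 0.2}

print(f"{ensemble_surge(forecasts, skill):.2f} ft")  # 9.65 ft
# The honest answer is still a range, bounded by the spread of inputs:
print(f"range: {min(forecasts.values())}-{max(forecasts.values())} ft")
```

Averaging would treat a chronically biased forecast the same as a reliable one; weighting by demonstrated skill is what lets the ensemble beat any single source while the underlying forecasts remain imperfect.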
“Have you seen The Day After Tomorrow?” Blumberg asked when I visited the Stevens Institute.
Yes, I told him.
“That’s what I’m trying to do,” he said, meaning with his computer animations. He joked about eventually getting an Academy Award for special effects.
He took me up to the Immersion Lab, a conference room-like space with gigantic, touch-screen computer monitors, then pulled up the animation of Hoboken during Sandy.
So I watched the city flood again. It’s not flawless; it still looks very digital. But watching the water rush in from above felt better, or at least more empowering, than doing it through a rain-flecked window.