
A snapshot from a model simulation of CO2 traveling through Earth’s atmosphere over the course of a year. IMAGE: NASA’S GODDARD SPACE FLIGHT CENTER
This spring, the DOE unveiled a new climate model designed to achieve a number of the agency’s mission priorities by not only predicting how climate will change over time, but also determining how those changes might burden the nation’s energy infrastructure. Experts from Berkeley Lab’s Climate and Ecosystem Sciences Division co-led efforts to improve the land component of this new Energy Exascale Earth System Model (E3SM), a product nearly four years in the making by scientists across eight national labs. The collaborators tailor-made E3SM to run on the world-leading supercomputers of today and tomorrow, with the goal of substantially improving climate predictions by simulating the myriad factors that shape climate change in ultra-fine detail.
For that to happen, however, there needs to be a shift in a fundamental component not just of E3SM, but of every climate model in existence. Bill Collins, director of Berkeley Lab’s Climate and Ecosystem Sciences Division, and CESD research scientist Dan Feldman said as much in an article they authored with Brian Soden of the Rosenstiel School of Marine and Atmospheric Science at the University of Miami, Florida, for the July 27 issue of Science. They argue that it’s time for climate models, which have been in use since the 1960s, to adopt a consistent method for calculating radiative forcing by CO2.
Radiative forcing quantifies the extent to which human activities and natural events alter the flow of energy into and out of Earth’s climate system. Global climate models are mathematical representations of this system: based on the laws of physics, they represent fundamental physical processes in the atmosphere, ocean, land surface, and cryosphere across different time and space scales, and they describe the Earth system’s response to the added energy from rising CO2.
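To give a sense of scale for this quantity, a widely used simplified expression (from Myhre et al., 1998, not from the Science article itself) approximates the radiative forcing from a CO2 increase as a logarithm of the concentration ratio:

```latex
% Simplified CO2 radiative forcing fit (Myhre et al., 1998).
% C is the current CO2 concentration; C_0 is a preindustrial reference (~280 ppm).
\Delta F \approx 5.35 \,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}}
```

Doubling CO2 (C/C0 = 2) yields roughly 5.35 × ln 2 ≈ 3.7 watts per square meter; the authors’ point is that models should agree on such numbers far more closely than they currently do.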
To calculate CO2 radiative forcing rigorously, a model would have to perform hundreds of thousands of calculations at each grid box for each model time-step, which would make it impractical to run the diverse experiments needed to understand Earth’s response to rising CO2. To avoid this cost, models use a parameterization of the radiative forcing and perform only a few hundred calculations instead. For at least 25 years, the authors argue, the climate modeling community has needed to synchronize how it implements this parameterization.
“There are about 30 different large climate models. Each one operates using its own parameterization of radiative forcing, which is essentially a look-up table that tells the model how to calculate CO2 radiative forcing for a given set of atmospheric conditions,” Feldman says. “Inconsistencies in these parameterizations create a lot of uncertainties in model projections of climate change.”
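To make the look-up table idea concrete, here is a minimal, hypothetical sketch (not code from E3SM or any production model; the table values are generated from the simplified Myhre et al. fit purely for illustration) of how a cheap parameterization interpolates precomputed forcing values instead of redoing the expensive calculation:

```python
import numpy as np

# Hypothetical look-up table: CO2 radiative forcing (W/m^2) precomputed
# offline for a few CO2 concentrations (ppm). Real tables are also indexed
# by temperature, humidity, clouds, etc.; the values here come from the
# simplified Myhre et al. (1998) fit, 5.35 * ln(C / 280), for illustration only.
CO2_PPM = np.array([280.0, 400.0, 560.0, 800.0, 1120.0])
FORCING = 5.35 * np.log(CO2_PPM / 280.0)

def forcing_lookup(co2_ppm: float) -> float:
    """Cheap parameterization: linear interpolation in the look-up table."""
    return float(np.interp(co2_ppm, CO2_PPM, FORCING))

def forcing_direct(co2_ppm: float) -> float:
    """Stand-in for the expensive rigorous calculation (here, the same
    logarithmic fit evaluated directly at the requested concentration)."""
    return 5.35 * np.log(co2_ppm / 280.0)

if __name__ == "__main__":
    for c in (350.0, 415.0, 700.0):
        print(f"{c:6.0f} ppm: table {forcing_lookup(c):5.2f} W/m^2, "
              f"direct {forcing_direct(c):5.2f} W/m^2")
```

Even in this toy version, the interpolated and directly computed values disagree slightly between table entries; each modeling center building its own table and interpolation scheme is precisely how the intermodel spread the authors describe arises.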
The gravity of the problem first came to light 25 years ago, when the first comprehensive assessment of radiative forcing calculations (Cess et al., 1993) found a wide spread in the radiative forcing produced by doubled CO2 across 15 different global climate models. In their paper for Science, Soden, Collins, and Feldman blame the discrepancies on intermodel differences in the parameterization of infrared absorption by CO2. Thirteen years later, a more extensive intermodel comparison of radiative forcing using a newer generation of climate models arrived at a similar conclusion (Collins et al., 2006).
These discrepancies have potentially grave consequences. As the authors write in their Science article, “The contributions of erroneous CO2 forcing to the persistent spread in climate projections undermines the utility of these models to answer fundamental questions of central societal importance.”
They go on to advocate two immediate solutions. First, the authors insist that radiative forcing be routinely computed and reported for models participating in the Coupled Model Intercomparison Project (CMIP), a series of coordinated experiments performed in support of the Intergovernmental Panel on Climate Change assessments. Second, they call for reducing the inconsistencies among the radiative forcing parameterizations used in global climate models across the board.
Feldman says that while he and his co-authors are not the first to call for this kind of action, they’re thrilled to have the opportunity to make the case before the readers of Science.
“It’s not so often in science that it is clear what needs to be done, but this is one of those cases,” Feldman says. “To have truly reliable climate predictions, it’s critical that climate modelers all be working with roughly the same parameters.
“Once we do, we can move on to the larger issues climate scientists are hoping to resolve: like how clouds respond to climate change; how ecosystems are affected; or how these changes feed back on the Earth’s climate. Only when we reduce the uncertainties in climate models posed by disparate parameterizations will we be able to take full advantage of the simulation capabilities of super-sophisticated climate models like E3SM.”