Science

Scientists created climate models to better understand how the climate would change in response to changes in the composition of the atmosphere. At their most basic level, these models are based on fundamental laws of physics, simulate proven physical dynamics that govern atmospheric behavior, and reproduce patterns in observed data. Climate models simulate properties and processes across the full Earth system, including land, glaciers, forests, oceans, and the atmosphere. 

Climate models can be a useful tool for real-world decision-making. Before using them in this way, however, it is important to understand how scientists make the models, how they are meant to be used, and their strengths and weaknesses. This section will illuminate these topics, and empower you to use data produced by climate models in an informed and risk-aware manner. 

It’s helpful to remember the difference between climate and weather. The climate is a set of conditions that hold over seasons and years, while weather is made up of short-term atmospheric outcomes that can change within a day. Climate is expressed in ranges and averages, while weather is a precise phenomenon at a specific time in a specific place. Climate models thus offer helpful information about the trajectory and magnitude of climate outcomes for given changes in the atmosphere, but they are not meant to make precise predictions.

Who makes global climate models?

Climate models are created and maintained by scientific institutions around the world. Since models bring together many different forces that produce our complex atmosphere, the models require teams of experts working together. Most commonly, this happens in national laboratories and large research institutes.

CMIP5 Modeling Centers

There are more than two dozen global models that comprise CMIP5. Scientists develop these models at research institutions, universities, and national laboratories around the world.

Most of the institutions that create these models share them (or their results), and the most widely used ones are made publicly available through the Coupled Model Intercomparison Project, or CMIP (pronounced see-mip). This modeling framework was first organized in 1995 by the World Climate Research Programme (WCRP), which continues to coordinate CMIPs for new generations of models. When we compare the output of these models with our own experience of the earth, we can further our understanding of our climate and how human activity affects it. 

What are global climate models?

The models that are commonly referred to as climate models (and sometimes “Earth system models”) are technically general circulation models (GCMs), because the circulation of energy, carbon, water, and other components of Earth’s systems is what drives the climate. 

Illustration of grid cells in a model

To build a model of Earth’s near-surface atmosphere, scientists divide that atmosphere into boxes called grid cells. The size of a model’s grid cells determines how detailed the results of the model will be. Most GCMs have a grid cell size of approximately 250 km on a side. Regional climate model (RCM) grid cells commonly range from 10 to 50 km on a side. Credit: Berke Yazicioglu

The models reproduce a range of physical dynamics, including ocean circulation patterns, the annual cycle of the seasons, and the flows of carbon between the land surface and the atmosphere. While weather forecasts focus on detailed local changes over hours and days, climate models project large-scale forces covering continents and spanning decades to centuries.

Every model is distinct for three reasons. 

  1. The earth, its systems, and its weather are imperfectly and inconsistently measured. There are long, detailed, accurate data series for a few locations on Earth, but we have limited data for most of the earth even today. Each modeling team must address this incompleteness, and they may choose different ways to do so. 
  2. Not all climate phenomena are equally well understood. Some things, like the amount of water vapor that air can hold at a given temperature, are well-known and are identical in all models. Others, including ocean currents, phenomena like El Niño and La Niña, and changes in sea ice in relation to temperature changes are subjects of ongoing research and are represented somewhat differently in different models. 
  3. Each research group has a somewhat different area of focus. One research group might focus more effort on its modeling of glaciers, while another group might focus more on the carbon cycle of tropical forests.

The differences in these models are actually a great advantage: Since we don’t know exactly how the system works, it is better to have a range of well-informed attempts to understand it rather than only one. A few decades after scientists created the first climate models, the results confirmed this approach and have continued to do so: The average of all of the models has consistently been the most accurate in predicting how the climate would respond to changes in the atmosphere. In other words, there is no “one best model” but rather a best process for using many models.

What results do climate models produce?

GCMs generate numerical output for weather variables including temperature, rainfall, relative humidity, and other phenomena, for each day of a simulated year in each grid cell. GCMs themselves are not forecasts but rather simulations of outcomes given a set of conditions. Scientists can choose scenarios and run multiple models to simulate multiple years of daily conditions such as temperature and precipitation. Using modeled output, they can then calculate metrics to examine impacts, such as the number of days in a year above 32°C (90°F) or the average temperature of the 10 hottest days of the year that would occur under the specified conditions. From the outset, CMIP models have been run for multiple emissions scenarios. While no single year will perfectly match any model’s projection, the observed ranges and changes in these variables over the decades since the first set of models was created have closely tracked the aggregate model results of the highest emissions scenario, which assumes ongoing fossil fuel use with little-to-no mitigation. The technical term for this emissions scenario is Representative Concentration Pathway 8.5, or RCP 8.5. 
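
To make this concrete, here is a minimal sketch (in Python, using made-up numbers rather than actual model output) of how such metrics can be computed from one grid cell’s simulated daily maximum temperatures.

```python
import numpy as np

# Hypothetical daily maximum temperatures (°C) for one grid cell over one
# simulated year; real values would come from GCM or RCM output.
rng = np.random.default_rng(0)
daily_tmax = 18 + 12 * np.sin(np.linspace(0, 2 * np.pi, 365)) + rng.normal(0, 3, 365)

# Metric 1: number of days in the year above 32°C (90°F)
days_over_32 = int((daily_tmax > 32.0).sum())

# Metric 2: average temperature of the 10 hottest days of the year
ten_hottest_avg = float(np.sort(daily_tmax)[-10:].mean())

print(f"Days over 32°C: {days_over_32}")
print(f"Average of 10 hottest days: {ten_hottest_avg:.1f}°C")
```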

A GCM typically contains enough computer code to fill 18,000 pages of printed text. It takes hundreds of scientists many years to build and improve, and it requires an enormous supercomputer to run. These requirements are partly why GCMs rely on grid cells that are so large: Using smaller cells would increase modeling and computing requirements enormously. 
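
As a back-of-the-envelope illustration of why finer grids are so costly (illustrative arithmetic only, not how any modeling center budgets its computing), halving the cell width roughly quadruples the number of horizontal cells, and stability constraints typically shorten the time step as well:

```python
# Back-of-the-envelope comparison of grid resolutions (illustrative only).
EARTH_SURFACE_KM2 = 510e6  # approximate surface area of Earth in km^2

def horizontal_cells(cell_km: float) -> float:
    """Approximate number of grid cells needed to tile Earth's surface."""
    return EARTH_SURFACE_KM2 / cell_km**2

for cell_km in (250, 100, 25):
    cells = horizontal_cells(cell_km)
    # Finer grids also require shorter time steps (roughly proportional to
    # cell width), so cost grows faster than the horizontal cell count alone.
    relative_cost = (250 / cell_km) ** 3
    print(f"{cell_km:>4} km cells: ~{cells:,.0f} columns, "
          f"~{relative_cost:,.0f}x the cost of a 250 km grid")
```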

That means that many GCM grid cells can contain very different places within them. For example, a grid cell along a coast might contain locations relatively far inland, locations right on the sea, and locations so far from shore that sailors there would be unable to spy land. GCMs only produce one value for every measure in such a cell, averaging across the various locations within it. These average results sometimes do not match the experience of any specific location that falls within the cell.

Can the results of GCMs be useful on the scale of a region, city, or town?

GCMs have a strong track record of portraying global and continental phenomena, but all people, animals, plants, and organisms live in specific places. Scientists use a technique called downscaling to see how different changes in the atmosphere would affect local weather. Downscaling can be approached in a few different ways, each of which carries advantages and disadvantages. 

Downscaling GCMs

Scientists can downscale GCM results by focusing on a specific region and breaking the relevant grid cells into smaller cells. The region of focus is limited in either space or time or number of quantities simulated (e.g., daily precipitation in California); otherwise, the computational burden would be unmanageable. 

  1. Dynamical downscaling takes the large-scale results from GCMs and uses them as boundary conditions for smaller-scale weather models. This allows scientists to better represent local topography and smaller-scale (known as mesoscale) atmospheric processes.

     Dynamical downscaling improves the representation of weather dynamics, particularly in topographically diverse regions. Since this type of downscaling entails running both global and regional models, it requires enormous computing power.
  2. Statistical downscaling breaks larger grid cells into smaller cells and uses past data for the many locations inside each grid cell to create a statistical pattern consistent with the past (a minimal sketch follows this list). 

     For example, summer daytime temperatures tend to be cooler right on the water than a few kilometers inland, while places at higher elevation cool off more at night; statistical downscaling accounts for these patterns. This method requires a large amount of high-quality observational data, which is not available for much of the earth. Projections generated using statistical downscaling techniques implicitly assume that the historical relationships used to train the statistical models will remain unchanged in the future.
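
Here is a minimal sketch of the idea behind statistical downscaling, using a simple linear transfer function and entirely synthetic data; real methods (e.g., quantile mapping) are more sophisticated, but the underlying assumption about historical relationships is the same.

```python
import numpy as np

# Hypothetical data: coarse GCM grid-cell temperatures and observed station
# temperatures for the same historical days (°C). In practice these would be
# decades of paired observations.
rng = np.random.default_rng(1)
gcm_hist = rng.normal(20, 6, 2000)                               # coarse-cell values
station_hist = 0.9 * gcm_hist - 2.5 + rng.normal(0, 1.5, 2000)   # local values

# Fit a simple linear transfer function: station ≈ a * grid_cell + b.
a, b = np.polyfit(gcm_hist, station_hist, 1)

# Apply the same relationship to future coarse-model output. This step embodies
# the key assumption noted above: the historical relationship between grid cell
# and station is assumed to hold in a changed climate.
gcm_future = gcm_hist + 2.0          # stand-in for a warmer scenario
station_future_estimate = a * gcm_future + b

print(f"Transfer function: local ≈ {a:.2f} * cell + {b:.2f}")
print(f"Estimated local warming: "
      f"{station_future_estimate.mean() - station_hist.mean():.2f}°C")
```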

What are regional climate models (RCMs)?

Probable Futures uses both GCMs and regional climate models. RCMs are commonly used in dynamical downscaling and are much more granular than GCMs, with grid cells that range from 10 to 50 km on a side. Given the computational burden of running these detailed models, they are broken up into regions that can be run separately (e.g., South Asia, East Asia, Southeast Asia, Australasia, Africa, Europe, South America, Central America, and North America). 

Days over 32°C (90°F) at 1°C of warming

Map legend (number of days over 32°C per year): 0, 1-7, 8-30, 31-90, 91-180, 181-365

These two maps depict the difference in data resolution between general circulation models (GCMs) and regional climate models (RCMs). Most GCMs have a grid cell size of approximately 250 km on a side. RCMs use GCM data as an input and then downscale that data using regionally specific dynamics, resulting in model output with a higher resolution. RCM grid cells commonly range from 10 to 50 km on a side. Data source: CMIP5, CORDEX-CORE, REMO2015, and RegCM4. Processed by Woodwell Climate Research Center.

Can the results of RCMs be useful on a global scale?

RCMs can be used effectively with GCMs. Regional models can take multiple GCM simulations of large-scale climate properties to generate finer-scale local outcomes. 

While most GCMs were created using common standards through the CMIP process, RCMs were not. Research groups created RCMs and sometimes chose different GCMs to drive them, generated different outputs, and used different resolutions. The World Climate Research Programme set out to coordinate this inconsistent RCM output through the Coordinated Regional Climate Downscaling Experiment (CORDEX); the resulting coordinated set of simulations, CORDEX-CORE, was made public in late 2019. Since regional climate modeling is so computationally intensive, the minimum number of GCMs required for downscaling per RCM was set to three, chosen to represent the range of climate sensitivities exhibited by the full set of GCMs.
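
As a rough illustration of how a small subset of GCMs might be chosen to span a range of climate sensitivities (a hypothetical sketch with made-up model names and values; the actual CORDEX selection criteria are more involved), the code below picks the lowest-, middle-, and highest-sensitivity models from a set of candidates.

```python
# Hypothetical equilibrium climate sensitivity (ECS, °C per doubling of CO2)
# values for candidate GCMs; real selections use published estimates and
# additional criteria such as data availability and model performance.
candidate_ecs = {
    "GCM-A": 2.6,
    "GCM-B": 3.0,
    "GCM-C": 3.4,
    "GCM-D": 4.1,
    "GCM-E": 4.6,
}

# Sort by sensitivity and keep the low, middle, and high models so the
# downscaled ensemble brackets the range of plausible responses.
ranked = sorted(candidate_ecs, key=candidate_ecs.get)
selected = [ranked[0], ranked[len(ranked) // 2], ranked[-1]]
print("Selected GCMs:", selected)  # -> ['GCM-A', 'GCM-C', 'GCM-E']
```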

Which downscaling approach is better for climate modeling?

Downscaling is an active area of research, and different methods are being developed and used by different teams of scientists. Which approach is best depends on the intended use of the model results, the availability of high-quality historical observation data from weather stations, the time horizon of interest, and budget. 

Many factors have to align for GCM downscaling to paint an accurate picture of climate outcomes. Consider a group interested in one specific geographic area with a long history of accurate and complete weather observation. If researchers have access to vast amounts of computing power, they may combine dynamical and statistical downscaling of many different GCMs over many different scenarios. This strategy is particularly attractive for assessing the likelihood of extreme events. 

Unfortunately, those conditions exist only in a very small number of places (e.g., California). Historical weather records are most numerous and reliable in densely populated sections of high-income countries. Data is less available in less-populated and less-developed places, and financial, scientific, and computing resources are fewer. It is thus much harder to downscale GCMs with any confidence in the results in most locations around the world. This is why RCMs were developed.

What downscaling approach does Probable Futures use and why?

Our goal is to provide high-quality climate scenarios at different levels of warming for every place on every populated continent. With this in mind, our science partner, the Woodwell Climate Research Center (Woodwell), advised using the downscaled output from the CORDEX-CORE framework, in which two regional climate models downscale three GCMs each, across nine domains, to cover almost all of the populated world. The Arctic, Antarctica, and some Pacific Islands fall outside the domains covered by this simulated data. 

What should I know about the strengths and limitations of climate models?

  1. No single location has its own climate. Scientists designed climate models to project global or continental climate trends and changes. Advances including RCMs have helped to increase their resolution, but forecasting weather at the level of a specific structure or small area under a substantially different global climate not only requires myriad extra unproven assumptions, but is unlikely to be useful in strategic decisions because every structure, ecosystem, community, etc., is connected to others nearby. It is usually best to first consider model results for large areas and then zoom in to regions.
  2. Rising greenhouse gases impact the accuracy of models. Scientists have access to a limited amount of data to build their models, and the overwhelming majority of this data is recent. For example, consistent measures of temperature began in the mid-1800s, and satellites that collect global information first began orbiting the earth in the 1970s. As a result, climate models are particularly attuned to climatic conditions of the recent past. It is encouraging that climate models have been accurate in their predictions over the last forty years. However, the greater the concentration of greenhouse gases in the atmosphere, the further the climate gets from those foundational conditions. 

    In other words, the higher the warming scenario, the more uncertainty there is in the results. That is why the highest warming scenario presented on our platform is 3°C, even though we risk surpassing that level of warming without a dramatic reduction in emissions. We strongly recommend that when looking at results for 2.5°C and 3.0°C of warming, you consider the risk that the results in the models are too mild. There is inherent conservatism in climate modeling, and society faces asymmetric risks: If the models turn out to be too aggressive, it will be a nice surprise; if they turn out to be conservative, there will be much more suffering. We need to prepare for—and try to avoid—outcomes with a low probability but very high costs.
  3. They don’t accurately model rarer phenomena. Infrequent and complex climate events are hard to model accurately. If an event happens infrequently, scientists have limited observations to work from in creating their models. Complexity poses a related challenge. While the physical laws that govern the weather are well understood, it is impossible to model every molecule. Indeed, even if scientists could model every molecule, there are still forms of complexity and randomness that influence outcomes (you may have heard of “the butterfly effect” wherein a small perturbation such as the flap of a butterfly’s wings sets off a chain reaction that leads to a much bigger result somewhere else in the world). That is why, for example, hurricanes and typhoons are a challenge for the modeling community, while heat is much easier to model. Scientists can draw on decades of temperature observations and have been able to use ice and sediment records to reconstruct millions of years of historical temperature data, but we only know about the hurricanes and typhoons that have been recorded, and even those are infrequent in any given location. Researchers exploring multiple approaches and methodologies for complex phenomena like storms and shifts in ocean currents such as the Gulf Stream will have more varied results.
  4. They don’t account for all biotic feedbacks. Biotic feedbacks are looped responses between terrestrial stores of carbon and the atmosphere. For example, the release of methane from organic matter in thawing permafrost and the release of CO2 from burning forests are both caused by warming and in turn cause more warming. 

    It is crucial to understand that the models included in CMIP do not include large-scale atmospheric change from biotic sources. They were designed to answer questions about how human-induced atmospheric change would affect the climate. The lack of feedback loops is a critical shortcoming of the models. 
  5. They cannot project the Hothouse Earth tipping point. The lack of these feedbacks in the models causes the simulated climate to stop warming a few decades after humans stop emitting carbon. The current ensemble of models effectively assumes that we will reach a global equilibrium temperature at any given level of carbon in the atmosphere. We now know that this is unlikely even at current levels of warming and extremely unlikely at temperatures that may not sound much warmer than now. The inability of these models to project or estimate the probability of human emissions triggering natural tipping points like runaway permafrost thaw, forest fires, and glacial melting, which could propel the planet into Hothouse Earth, is likely their biggest shortcoming.

    Although biotic feedbacks are not in the historical data on which scientists have built climate models, they are a critical area of active scientific research, and one in which the Woodwell Climate Research Center is a recognized leader. The next round of models (which are now being released for testing and comparison) will include some biotic feedbacks.

Probable Futures believes that humanity’s understanding of our planet and its systems, and our ability to understand how we can influence those systems, represent perhaps the greatest human achievement. Some of this understanding requires deep expertise, but we have found that everyone can grasp and relate to the most important features of our climate and climate change when given the tools. While supercomputers are necessary to run giant simulations of the earth, a much simpler tool can convey those results meaningfully and usefully: maps. If you can read a map, you can learn most of what you need to know about climate change. We have worked to make our maps clear and easy to use. We have also tried to make them beautiful, because seeing the world in this way is a wonder. We find that feeling wonder as you explore even difficult probable futures is helpful.

This section outlines key aspects of our process and provides you with guidance on how the data and the maps on this platform should (and should not) be used. 

What is the source of the data in Probable Futures’s maps?

All data on the interactive maps are sourced from the CORDEX-CORE framework, a standardization for regional climate model output. We use the REMO2015 and RegCM4 regional climate models, the two RCMs that have participated in the CORDEX-CORE effort, which downscale data from multiple GCMs in the CMIP5 ensemble of global climate models. 

Why were these climate measures chosen for the maps?

We chose these measures of climate phenomena for their potential relevance to people and the systems we rely on, both at global and regional scales. We include several different datasets for each measure (e.g., days over 32, 35, and 38°C) so that you can explore thresholds that are important to your region, industry, or interests. For example, a particular temperature threshold may matter more in eastern China than in southern Australia. Similarly, a particular measure might have meaning for a utility manager, but not for a coffee farmer in the same region.

Why are the Probable Futures maps organized around warming scenarios instead of specific dates? 

The physical impacts of climate change, such as extreme heat, drought, and precipitation, are directly linked to global average temperature, not dates or timelines. In other words, the dates that the earth may reach certain warming scenarios are not predetermined but depend on our emissions.

You may see other maps that project specific dates. The people who created the data for those maps had to forecast an emissions scenario: a specific, complex combination of human activities. Our approach has fewer assumptions. In addition, we believe that people may interpret date-based maps as final predictions that we can do nothing about. Using warming scenarios, on the other hand, communicates that the scale and timing of climate change are within our control. Finally, comparing different models at the same warming level removes sources of bias that arise from forcing the models to line up in time rather than in atmospheric conditions.
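
To illustrate the warming-scenario approach (a hedged sketch with synthetic numbers, not our exact method), the code below finds the first 21-year window in which a model’s global mean temperature anomaly reaches a given warming level; results from that window can then be attributed to the warming level rather than to particular dates.

```python
import numpy as np

def warming_level_window(years, anomaly, level, window=21):
    """Return the (start, end) years of the first `window`-year period whose
    mean global temperature anomaly reaches `level` °C, or None if never."""
    for i in range(len(years) - window + 1):
        if anomaly[i:i + window].mean() >= level:
            return years[i], years[i + window - 1]
    return None

# Hypothetical global-mean anomaly series for one model run (°C above
# pre-industrial), rising roughly linearly; real series come from GCM output.
years = np.arange(2000, 2101)
anomaly = 1.0 + 0.03 * (years - 2000)

for level in (1.5, 2.0, 3.0):
    print(level, "°C window:", warming_level_window(years, anomaly, level))
```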

How might the data on this platform differ from other climate data I’ve seen? 

Whether you are looking at climate projections on a publicly available map or one commissioned from a climate data service provider, it is important to understand who is producing the data, what models they are using, and what methodological choices they are making. These factors will vary depending on the intended use and the skill sets of those analyzing and presenting the data. 

In general, temperature data will be the most consistent of all climate phenomena across models because heat is fundamental. Every part of our climate is caused or influenced by it. As such, scientists design climate models with temperature at their core. That is why the Probable Futures platform begins with heat and presents maps of many different temperature phenomena.

How do you test the results on these maps and compare them to observed outcomes?

Our colleagues at Woodwell analyze climate model output that simulates possible outcomes. These simulations produce temperature, relative humidity, rainfall, winds, atmospheric pressure, and other aspects of local weather. For every simulated day in every grid cell we can look at high and low temperatures, quantity of precipitation, etc. The richness of the data allows us both to investigate what conditions would be like under different climate scenarios and to check how “good” the models are. Scientists refer to the ability of the models to produce realistic results as “model skill.” 
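
One simple way to check model skill, sketched below with synthetic numbers standing in for real model output and station observations, is to compare a model’s historical simulation against observed records for the same grid cell and period.

```python
import numpy as np

# Hypothetical arrays of daily maximum temperature (°C) for the grid cell
# containing a place we know well: one from the model's historical run,
# one from local observations over the same period.
rng = np.random.default_rng(2)
model_hist = rng.normal(21, 7, 3650)     # ~10 simulated years
observed = rng.normal(20, 6.5, 3650)     # ~10 observed years

# Simple skill checks: mean bias and agreement in the warm tail.
mean_bias = model_hist.mean() - observed.mean()
p95_model, p95_obs = np.percentile(model_hist, 95), np.percentile(observed, 95)

print(f"Mean bias: {mean_bias:+.2f}°C")
print(f"95th percentile: model {p95_model:.1f}°C vs observed {p95_obs:.1f}°C")
```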

Many researchers have worked with these models in recent years, so there are academic studies of model skill, but we wanted to see if the models “felt right” to us. Since the Probable Futures team is spread out, and became even more so during the pandemic, we looked at places we knew well. Early in 2021, a key member of our team named Isabelle decided to move to Portland, Oregon, a city in the Northwest of the USA. She had never even visited there. No one else on the team knew it well, but we all thought of it as pretty mild. Together we looked at the model results for the grid cell that included Portland. The slide show below provides a good case study in the model results and in the way to apply them.

How is Probable Futures’s data and methodology reviewed and vetted? 

The data and educational content we publish on our maps go through a rigorous review and validation process. 

We have a thorough internal quality control process that includes review cycles within Woodwell and consultation with other leading climate scientists. The internal review takes place early in the data development process, after making and testing initial methodology choices. The external review takes place when the data are in final draft form. 

Note that we don’t intend for the review process to exactly replicate an academic publication peer review. We ask external reviewers to assess the dataset from a holistic perspective, confirming that chosen methodologies are used in the scientific community, that assumptions made are defensible, and that documentation is sufficient for replication. We also ask reviewers to pose questions and make recommendations more broadly about the methodologies and associated documentation. We are grateful to many scientists who have volunteered their time and expertise. Probable Futures will continue to evolve and update.

What are some best practices when applying this data in the real world?

  1. Treat the data as a tool. It is important that anyone using Probable Futures maps understands that while the maps offer specific values, the results represent a range of possible outcomes within a computer program that models the earth. In other words, you should not take results as a specific prediction, but rather as a tool that can give a sense of the scale and likelihood of certain climate outcomes and how that scale and likelihood change as the atmosphere changes.
  2. Investigate how the models were made. We encourage anyone producing or applying climate data to understand which models, downscaling approaches, and assumptions are being used, and to disclose those details to relevant audiences and stakeholders. 
  3. Disclose how you are using the data. Climate disclosure frameworks such as CDP or the Task Force on Climate-related Financial Disclosures (TCFD) can be helpful guides and are rapidly advancing to provide standards for physical climate risk disclosure. We encourage such standards and are hopeful that Probable Futures can be a helpful resource. Woodwell and Wellington Management have also recently published frameworks for disclosure as a complement to these sources.
  4. Shared information makes us stronger. The open scientific community has thousands of dedicated, skilled practitioners collaborating and sharing their results. We believe that communities from the local to global can better address climate change with shared information than if only a small number of people have “their own” climate information.

Which climate phenomena will Probable Futures publish via the interactive maps in the future and why?

Heat, precipitation, and drought are the primary phenomena we will portray on our platform. Why just these three? Probable Futures aims to serve as a resource for well-established, well-understood, and well-vetted climate science. The specific climate phenomena we choose to include on our platform must meet rigorous scientific standards and be available at a near-global scale. Heat, drought, and precipitation are all first-order, fundamental climate phenomena whose physics are well understood and that can be modeled relatively accurately on a global scale. In addition, they are relevant everywhere.

Other climate phenomena, such as wildfires, flooding, and storms, are second-order climate impacts because they are caused by combinations of heat, drought, and precipitation. We will reference and show second-order phenomena in some places on the platform. In addition, you can often identify (for example) places with large or emerging fire problems on heat and drought maps. 

Furthermore, human activity plays a significant role in the occurrence of some climate phenomena like wildfires and flooding. For example, humans often ignite wildfires and live in wildfire-prone areas called the wildland urban interface (WUI), and flood-control infrastructure in a given location can determine whether or not that location floods. These factors make modeling much more complex and more speculative, especially on a global scale. 

Recommended resources for those who want to learn more

  • CORDEX CORE Simulation Framework   /   A deep dive on the regional climate framework employed in Probable Futures maps.
  • McKinsey climate risk and response report   /   Primer on the fundamentals of climate risk with illustrative in-depth case studies.
  • Trajectories of the Earth System in the Anthropocene   /   Planetary context for climate risk. This academic publication explores the risk of how near-term warming could create the conditions that prevent stabilization of the climate, ultimately leading to Hothouse Earth.
  • Q&A on how climate models work   /   Primer from Carbon Brief on global climate models that includes links to basic climate model education and other resources
  • Business risk and the emergence of climate analytics   /  This paper describes some of the limitations of climate models when applying them in a business context.
  • TCFD   /   The 2017 TCFD recommendations report outlines a framework for reporting climate-related financial information.
  • P-ROCC 2.0   /   Wellington Management and Woodwell Climate Research Center report, extending their 2019 framework designed to help companies disclose the potential effects of physical risks of climate change on their business. The new framework outlines how companies can share location data that will enhance transparency and help investors make more informed investment decisions.