As we saw in a previous post, accessing climate projections, like those used in the IPCC's reports, is fairly simple. However, these data cannot be used directly for climate risk assessment. To study local climate evolution and anticipate human, economic, or environmental impacts, preliminary processing is essential. Often left to the end user, these steps are frequently misunderstood and can distort results.
Downscaling, regionalization, bias correction... In this article, we will clarify the various treatments that can be applied to climate projections and their purposes.
To keep the article concise, the details and implementation of the mentioned methodologies will not be covered. The advantages and limitations of each method will also not be detailed.
What Are Climate Projections? And Why Do They Need Post-Processing? A Quick Reminder.
A climate projection is a simulation of future weather conditions, for example daily series of temperature, rainfall, wind speed and so on. These simulations are performed by close relatives of classical meteorological models: General Circulation Models (GCMs).
A climate projection describes a possible state of the atmosphere at a given time. It is crucial to understand that projections are not forecasts! However, by analyzing a sufficient number of realistic simulations, it is possible to gain precious insight into future meteorological conditions: for example, what is the most likely temperature during the summer? What is the likelihood of surpassing a given threshold? What is the distribution of extreme values?
The problem is that raw climate projections are usually not accurate. They suffer from two main issues that generally prevent their direct use:
Limited Spatial Resolution: As global, long-term simulations, GCM outputs have limited spatial resolution, typically a few hundred kilometers. A resolution of 10 km or less is necessary to reliably represent certain phenomena, such as land-sea interfaces or precipitation extremes.
Bias: Projections are generally biased, meaning their results contain systematic (in other words: time-independent) errors. For example, a climate model might overestimate winter precipitation or underestimate temperatures on the hottest days.
Post-processing aims to correct these issues as much as possible and deliver reliable and actionable climate projections.
How Can We “Enhance” Climate Model Outputs?
Downscaling is a process aimed at enhancing the spatial resolution of projections. It can be applied to a specific point (e.g., a city or weather station) or to a regular grid covering a larger region, in which case it is referred to as regionalization.
How can a climate projection be processed to reveal local details? There are two types of methods: dynamic and statistical.
Dynamic downscaling uses a Regional Climate Model (RCM) to refine the results of the global model. Since the covered area is smaller, the regional model can account for more details (topography, coastlines, albedo, etc.) and provide finer simulations. The regional model is forced at its boundaries by the global model: the global model supplies the initial and ongoing large-scale conditions that drive the RCM, ensuring that the regional simulation remains consistent with the global projection outside the regional domain.
This operation requires a regional weather model and considerable computing power. As a result, dynamic downscaling is almost exclusively performed by research institutes or national meteorological services. At the international level, the CORDEX project coordinates this work.
Statistical downscaling uses a statistical model linking the large-scale projection to local values. The statistical model is fitted by comparing past weather data with the values given by the projection over the same period. The assumption at the heart of this approach is that the relationship between large-scale weather parameters and their local values is stable over time.
Compared with the dynamic method, statistical downscaling is simpler to implement and much less computationally intensive.
Some Statistical Downscaling Methods
The difficulty with this approach lies in the choice of method. There are many statistical downscaling methods, and the “right” one depends on various factors: the variable being processed (for example, the distributions of temperature and rainfall are inherently different), its intended use (trend detection, extreme value evaluation, multivariable indicator calculation...), as well as the time, resources and expertise available for the project.
Spatial downscaling methods all aim to enhance climate model details. But with so many techniques available, only a skilled climate scientist can select the optimal method for your project.
To illustrate this variety, here are a few frequently used and relatively simple statistical downscaling methods:
Delta Method: The simplest approach, calculating the change in the studied variable from the projection and adding it to local observations. Say the average temperature observed locally is 15°C during the reference period. If the difference between the simulated mean temperature over the projection period and the simulated mean temperature over the reference period is 2°C, we estimate that the local mean temperature over the projection period will be 15 + 2 = 17°C.
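The delta method above can be sketched in a few lines. This is a minimal illustration with made-up numbers reproducing the 15 + 2 = 17°C example, not a production implementation:

```python
import numpy as np

# Illustrative daily mean temperatures (°C); all values are made up.
obs_ref = np.array([14.0, 15.0, 16.0])        # local observations, reference period
model_ref = np.array([12.5, 13.0, 13.5])      # model output, reference period
model_future = np.array([14.5, 15.0, 15.5])   # model output, projection period

# Delta = change simulated by the model between the two periods.
delta = model_future.mean() - model_ref.mean()   # 15.0 - 13.0 = 2.0 °C

# Apply that change to the locally observed mean.
local_future_mean = obs_ref.mean() + delta       # 15.0 + 2.0 = 17.0 °C
print(local_future_mean)  # 17.0
```

Note that the delta method only shifts the observed climate; it preserves the observed variability and corrects nothing else.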
Linear Regression: Establishes a linear relationship between large-scale and local variables.
For example, if Tl is the local temperature and Tg the large-scale temperature, we are looking for a and b so that Tl = a.Tg + b over the reference period then use the same formula over the projection period.
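A minimal sketch of this regression approach, with made-up reference data: fit the linear relationship over the reference period, then reuse the same coefficients on the projection.

```python
import numpy as np

# Illustrative large-scale (T_g) and local (T_l) temperatures (°C)
# over the reference period; all values are made up.
t_g_ref = np.array([10.0, 12.0, 14.0, 16.0, 18.0])
t_l_ref = np.array([11.5, 13.2, 15.1, 17.0, 18.8])

# Fit T_l = a * T_g + b by least squares over the reference period.
a, b = np.polyfit(t_g_ref, t_l_ref, 1)

# Apply the same relationship to the projection period.
t_g_future = np.array([13.0, 15.0, 19.0])
t_l_future = a * t_g_future + b
```

The key assumption is the one stated above: the fitted relationship between large-scale and local values is taken to remain stable in the future.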
Quantile-Quantile Method: We look for a function that brings the quantiles of the projection distribution during the reference period closer to the quantiles of the observations, and then apply the same transformation to future projections. The function used is often linear and can be differentiated by season or month.
Analogues method: the idea is that the same weather conditions on a large scale lead to similar weather conditions on a local scale. These correspondences are established over the reference period and then transposed to the projections.
For example, if over the reference period we observe that the local temperature is 17°C when the large-scale temperature is 15°C, we will consider that this analogy still holds over the projection period.
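A toy version of this analogue lookup, with made-up values: for each projected large-scale state, find the closest large-scale state in the reference period and reuse its local value.

```python
import numpy as np

# Reference-period pairs of large-scale and local temperatures (°C);
# all values are illustrative.
large_ref = np.array([10.0, 12.0, 15.0, 18.0])
local_ref = np.array([11.0, 13.5, 17.0, 20.0])

def analogue(large_future_value):
    """Return the local value of the closest large-scale analogue
    found in the reference period."""
    i = np.argmin(np.abs(large_ref - large_future_value))
    return local_ref[i]

print(analogue(15.2))  # closest large-scale analogue is 15.0 -> local 17.0
```

Real analogue methods compare whole weather patterns (pressure fields, multiple variables) rather than a single scalar, but the matching principle is the same.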
Each of these approaches itself has many variations, giving rise to a very large number of methods: CDF-t, Adamont, ARRM, R²D², dOTC, MBCn, MRec, DSClim, and so on.
Bias Correction
When we compare climate projections with actual observed values, we see that climate models tend to be wrong. But always in the same way...
Because they are systematic, climate model errors can be identified and corrected. This is the role of bias-correction.
The principle consists in comparing projections and observed values in the past, and defining a transformation to limit the error between the two. The aim is not, of course, to obtain the same values for the same day - even in the past, a projection remains a simulation - but to obtain similar statistical characteristics (e.g. mean, standard deviation or moments).
Sound familiar? That's normal: bias-correction and downscaling are often similar. So it is not always easy to tell the difference, but the role of the two operations is quite different:
Downscaling aims to refine a projection by providing information, in some form or other, on local features,
Bias-correction aims to correct the errors of a projection without providing any new information.
In other words, some downscaling methods can also provide bias correction, but bias correction cannot generally be considered as downscaling.
Bias-correction methods generally fall into one of three categories: subtraction of a constant, multiplication by a constant, or quantile-quantile mapping.
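The first two categories can be sketched in a few lines, using made-up rainfall and a mean-based correction (real implementations typically work per month or season):

```python
import numpy as np

# Illustrative daily rainfall (mm); all values are made up.
obs_ref = np.array([2.0, 0.0, 5.0, 3.0])       # observations, reference period
model_ref = np.array([3.0, 1.0, 6.0, 5.0])     # model, reference period
model_future = np.array([4.0, 2.0, 7.0, 6.0])  # model, projection period

# Additive correction: subtract the mean bias
# (commonly used for temperature).
bias = model_ref.mean() - obs_ref.mean()       # 3.75 - 2.5 = 1.25
corrected_add = model_future - bias

# Multiplicative correction: rescale by the mean ratio
# (suited to positive variables such as rainfall).
scale = obs_ref.mean() / model_ref.mean()      # 2.5 / 3.75
corrected_mul = model_future * scale
```

The quantile-quantile category works as described in the downscaling section, which is precisely why the two operations are so easy to confuse.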
Typical Processing Chain
In practice, how does it work? The typical process is as follows:
Production of a global projection by a specialized research institute.
Regionalization using dynamic downscaling, typically performed by another research center.
Statistical downscaling, usually by the end-user.
Bias correction, if not already addressed by the downscaling method.
If all methods are correctly chosen and applied, the climate data are now ready for accurate local climate impact studies.
👋If you've come this far, it's probably because you have a project that requires climate projections... and you must have understood that handling this data is tricky. When you call on Callendar, you're guaranteed up-to-date results in line with good scientific practice. Contact us to discuss your project!
Callendar specializes in developing accessible solutions for local climate risk assessment. We have already helped numerous companies and over 230,000 individuals to anticipate the effects of climate change.