An Introduction To The Science Behind Climateprediction.net
The aim of the Climateprediction.net (CPDN) project is to investigate the approximations that have to be made in state-of-the-art climate models (see "Modelling The Climate"). By running the model thousands of times (a 'large Ensemble') we hope to find out how the model responds to slight tweaks to these approximations - slight enough not to make the approximations any less realistic. This will improve our understanding of how sensitive our models are to small changes, and also to things like changes in carbon dioxide and the Sulphur Cycle, and will let us explore how climate may change in the next century under a wide range of different Scenarios. In the past, estimates of climate change have had to be made using one or, at best, a very small Ensemble (tens rather than thousands!) of model runs. By using your computers, we will be able to improve our understanding of, and confidence in, climate change predictions more than would ever be possible using the supercomputers currently available to scientists.
The Climateprediction.net (CPDN) experiment should help to "improve methods to quantify uncertainties of climate projections and Scenarios, including long-term Ensemble simulations using complex models", a task identified by the Intergovernmental Panel on Climate Change (IPCC) in 2001 as a high priority. Hopefully, the experiment will give decision makers a better scientific basis for addressing one of the biggest potential global problems of the 21st century. See Why Global Warming is considered to be one of the biggest potential global problems for more on this.
As shown in the graph above, the various model versions produce a fairly wide distribution of results over time. For each curve, the bar on the far right shows the final temperature range for the corresponding model version. As you can see, and would expect, the further into the future the model is extended, the wider the spread between versions. Roughly half of the variation depends on the future Forcing Scenario rather than on uncertainties in the model. Any reduction in that spread, whether from better Scenarios or from improvements in the models, is welcome. Climateprediction.net (CPDN) is working on the model uncertainties, not the Scenarios.
The crux of the problem is that we can run the model and see that x% of the models warm y degrees in response to z Forcings, but how do we know that x% is a good representation of the probability of that happening in the real world? The answer is that we are uncertain about this and want to improve the level of confidence that can be achieved. Some models will be good and some poor at reproducing past climate when given past Forcings and initial conditions (a Hindcast). It makes sense to trust the models that do well at recreating the past more than those that do poorly, so models that do poorly will be downweighted.
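As a minimal sketch of this downweighting idea (all model names, hindcast errors and projections below are invented for illustration, not real CPDN data or methodology):

```python
import math

# Hypothetical hindcast errors (RMS difference from observed past
# climate, in degrees C) for three model versions.
hindcast_errors = {"model_a": 0.2, "model_b": 0.5, "model_c": 1.4}

# Each version's projected future warming in degrees C (also invented).
projected_warming = {"model_a": 2.1, "model_b": 3.0, "model_c": 5.5}

# Weight falls off smoothly with hindcast error, so versions that do
# poorly at recreating the past count for less (a Gaussian-style
# kernel; the 0.5 C scale is an arbitrary choice for this sketch).
weights = {m: math.exp(-((e / 0.5) ** 2)) for m, e in hindcast_errors.items()}
total = sum(weights.values())

# Weighted ensemble estimate of future warming.
estimate = sum(weights[m] * projected_warming[m] for m in weights) / total
print(f"Weighted ensemble estimate: {estimate:.2f} C")
```

Here model_c, with the worst hindcast, contributes almost nothing to the final estimate, which is the intended effect.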
To develop better probabilistic forecasts, the project uses a Climate Model developed by the Hadley Centre.
 What does the Hadley Centre do?
The Hadley Centre for Climate Prediction and Research, part of the Met Office, is the UK's main climate change research centre. The main aims of the Hadley Centre are:
- To understand physical, chemical and biological processes within the climate system and develop state-of-the-art climate models which represent them;
- To use climate models to simulate global and regional climate variability and change over the last 100 years and to predict changes over the next 100 years;
- To monitor global and national climate variability and change;
- To attribute recent changes in climate to specific factors;
- To understand, with the aim of predicting, the natural interannual to decadal variability of climate.
The Hadley Centre employs around 100 staff. Most of its funding comes from contracts with the Department for Environment, Food and Rural Affairs (Defra), other United Kingdom Government departments and the European Commission.
The Hadley Centre has provided the base models for the climateprediction.net experiment (specifically atmospheric and ocean general circulation models), as well as expert opinion on aspects of model uncertainty. Through the Quantifying Uncertainty in Model Prediction (QUMP) project, the Hadley Centre has developed a sister project which complements climateprediction.net by providing another perturbed-physics ensemble that, though of much smaller size, is considerably less data-constrained.
One important aspect of model development concerns the ways in which the large-scale effects of small-scale processes are treated by the model (such as how waves caused by wind blowing over a rough mountain range affect the large scales that the model can resolve). To do this, the model incorporates Parameters: settings whose exact values are not known with certainty. Trying different combinations of Parameters allows lots of different models to be generated.
Some of these Parameters are well constrained by observations but others are not. The Hadley Centre works on constraining the Parameters as much as possible. It can also work on improving a parameterisation scheme by finding a different scheme that works better.
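A sketch of how trying different Parameter combinations multiplies into many model versions (the parameter names and values below are illustrative only, not the actual CPDN parameter set):

```python
from itertools import product

# Each uncertain parameter gets a low / default / high setting within
# its plausible range (hypothetical names and values).
parameter_ranges = {
    "cloud_ice_fall_speed": [0.5, 1.0, 2.0],
    "entrainment_rate": [0.6, 3.0, 9.0],
    "critical_humidity": [0.7, 0.8, 0.9],
}

# Every combination of settings defines a distinct model version.
names = list(parameter_ranges)
versions = [dict(zip(names, combo)) for combo in product(*parameter_ranges.values())]
print(f"{len(versions)} model versions from just {len(names)} parameters")  # 3**3 = 27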
Model development also means improving the models as the latest supercomputers become available. This allows model resolution to be increased, so that modellers can explicitly resolve smaller and smaller scales. The model used in climateprediction.net represents New Zealand, for example, by only three grid boxes, giving quite a coarse representation of the rough terrain of the Southern Alps. As models develop, the resolution gets higher, enabling a more detailed representation of small-scale features. This is important because one of the properties of the climate system is that small-scale effects can significantly affect larger climatic scales. Improving the resolution of the model allows scientists to explicitly resolve some of these smaller-scale effects, improving the overall climate simulation.
 How does Climateprediction.net (CPDN) differ from what the Hadley Centre does?
Climateprediction.net (CPDN) does not develop models; it only uses them. There is considerable overlap, though. The Hadley Centre has its own sister project, QUMP (Quantifying Uncertainty in Model Prediction), to what Climateprediction.net (CPDN) does. Their research is slightly different in that they vary more Parameters (29 vs 21) but manage with just 128 models instead of the hundreds of thousands that Climateprediction.net (CPDN) uses. Both projects aim at "probabilistic forecasting", although they use different methodologies to make the best use of the information available to them.
The Climateprediction.net (CPDN) approach is to carry out large numbers of model runs in which the Parameters are varied within their current range of uncertainty. Models that fail to model past climate successfully will be rejected or downweighted and the remainder will be used to study future climate.
Climateprediction.net (CPDN) runs hundreds of thousands of state-of-the-art climate models with slightly different physics (achieved by changing the Parameters) in order to represent the whole range of uncertainties in all the parameterizations. This technique, known as Ensemble forecasting, requires an enormous amount of computing power, far beyond the currently available resources of cutting-edge supercomputers. The only practical solution is to appeal to distributed computing, which combines the power of thousands of ordinary computers, each tackling one small but key part of the global problem.
 What Does This Mean For The Participant?
You, as a Participant, download one of the models, together with a set of Parameters, and run the model through a period of time. Each of the Timesteps shows how the weather is predicted to evolve based on the prior conditions and the natural evolution of the model. Quite simply, the weather changes ...
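To give a feel for timestepping, here is a toy zero-dimensional energy-balance model (vastly simpler than the full 3-D model CPDN actually runs; all values below are illustrative assumptions):

```python
# Each timestep computes the next state from the previous one.
solar_in = 342.0        # globally averaged incoming solar flux, W/m^2
albedo = 0.3            # fraction of sunlight reflected back to space
emissivity = 0.61       # crude stand-in for the greenhouse effect
sigma = 5.67e-8         # Stefan-Boltzmann constant, W/(m^2 K^4)
heat_capacity = 4.0e8   # effective heat capacity, J/(m^2 K)
dt = 86400.0            # one timestep = one day, in seconds

temperature = 288.0     # initial global-mean temperature, K
for day in range(360):  # one simplified model year (see the calendar note below)
    # Net flux: absorbed sunlight minus outgoing thermal radiation.
    net_flux = solar_in * (1 - albedo) - emissivity * sigma * temperature ** 4
    temperature += net_flux * dt / heat_capacity
print(f"Temperature after one model year: {temperature:.2f} K")
```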
When your model run completes, the data generated are returned to the scientists. They look at your data alongside the data returned by the other Participants from their models, and they can see whether the models make sense.
If the models do not make sense, it could be for any number of reasons. For example:
- The model was unstable.
- The model predicted impossible weather patterns.
- Your calculations were wrong.
For help understanding how your work is progressing, see the BOINC Documentation.
 No set of calculations is going to be right, so how does this help?
By running many of these models, the scientists can evaluate which Parameters are important and which appear to make little difference. Unfortunately, it is a whole lot more complicated than that, because a Parameter may appear to make little difference in one model, yet change a different Parameter and it may suddenly make a great deal of difference. This is a result of non-linearities in the response. While climate scientists believed there were non-linearities in the response, Climateprediction.net (CPDN) has for the first time been able to show that this is the case.
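A toy illustration of such an interaction (the response function below is invented for the sketch, not real model output):

```python
# Parameter a looks unimportant when b is small, but matters a great
# deal when b is large: a purely non-linear interaction effect.
def toy_response(a, b):
    # The effect of a is "switched on" only once b exceeds 1.0.
    return 1.0 + a * max(0.0, b - 1.0)

for b in (0.5, 2.0):
    effect_of_a = toy_response(1.0, b) - toy_response(0.0, b)
    print(f"b = {b}: changing a shifts the response by {effect_of_a:.1f}")
# b = 0.5: changing a shifts the response by 0.0
# b = 2.0: changing a shifts the response by 1.0
```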
The aim isn't to get one 'correct' forecast; it is to gain better probability information. For this, Ensembles do help.
For more information see Nature Journal - First CPDN Results or the Explanation of the Nature Journal - First CPDN Results.
 Things that are NOT being evaluated include:
- If we add factor "x", does the model predict next week's weather better?
It is known that the weather is chaotic. It is impossible to predict weather more than about two weeks into the future: to do so would require impossibly accurate information about the current state (or initial conditions) of the atmosphere - so accurate that a butterfly's wing flap could change the way the weather would evolve.
So should the model make a great effort to use initial conditions that are as accurate as possible? No, it isn't worth the effort. It might help improve the weather forecast for the next week, but it can be shown that the initial conditions make little difference to the climate that develops in models. Such accuracy would also require much smaller and many more cells, which would be computationally very expensive.
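A toy demonstration of this distinction, using the chaotic logistic map as a stand-in for the atmosphere (an analogy, not anything from the CPDN model itself):

```python
# Step-by-step behaviour ("weather") depends sensitively on the initial
# condition, while long-run statistics ("climate") do not.
def trajectory(x, steps):
    values = []
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)  # chaotic logistic map
        values.append(x)
    return values

run_a = trajectory(0.400000, 100_000)
run_b = trajectory(0.400001, 100_000)  # nearly identical start

# "Weather": after a few dozen steps the runs disagree completely.
print("step 50:", round(run_a[49], 3), "vs", round(run_b[49], 3))

# "Climate": the long-run means are nevertheless almost the same.
print("mean:", round(sum(run_a) / len(run_a), 4),
      "vs", round(sum(run_b) / len(run_b), 4))
```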
So the model doesn't try for that accuracy. Instead it makes simplifying assumptions, such as a calendar year of 12 months of 30 days each.
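A small sketch of how trivial date arithmetic becomes under such a 360-day calendar (the function below is illustrative, not actual model code):

```python
def model_date(day_of_run):
    """Convert a run's day number into (year, month, day) under a
    simplified calendar of 12 months of 30 days each."""
    year, remainder = divmod(day_of_run, 360)
    month, day = divmod(remainder, 30)
    return year + 1, month + 1, day + 1

print(model_date(0))    # (1, 1, 1): first day of the run
print(model_date(359))  # (1, 12, 30): last day of model year 1
print(model_date(360))  # (2, 1, 1): every year is exactly 360 days
```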
 Things that are being evaluated include:
- If we remove factor "y" does the model become unstable?
- If we keep Parameter "z" constant does this change the probability distribution of climate sensitivity, other global averages and/or regional effects?
- Does Parameter combination a, b, and c cause unrealistic regional climate patterns?
 Why Do We Run So Many Models?
People who are used to sampling (quality-control staff, auditors, etc.) may expect strongly diminishing returns from larger sample sizes. However, the data here are multidimensional and, as mentioned earlier, there are non-linearities in the response. There will be diminishing returns with larger sample sizes, but only after very large sample sizes have been reached.
Probabilistic forecasting is also not an easy aim. (More about this is in the Ensemble article.)
If all the models "blow up" when we set certain Parameter combinations, perhaps there are boundaries on the allowable values. Such hard boundaries may already have been found by the Hadley Centre. However, there may also be softer boundaries where the model just starts behaving more unrealistically. These boundaries are harder to find and need exploring in many different directions. (With 21 Parameters, the parameter space can be considered to have 21 dimensions; fortunately you don't need to be able to visualise this.) In practice, the interest lies as much in finding interesting regions of this parameter space as in finding its boundaries.
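To see why exhaustive coverage of such a space is out of reach, consider the raw combinatorics (a back-of-the-envelope sketch):

```python
# How combinations explode with the number of uncertain parameters
# (the "curse of dimensionality").
n_parameters = 21

# Even a crude probe of just the low/high extremes of each parameter:
corners = 2 ** n_parameters
print(f"{corners:,} corner combinations")  # 2,097,152

# Three settings per parameter (low/default/high) is already hopeless
# to cover exhaustively:
grid = 3 ** n_parameters
print(f"{grid:,} low/default/high combinations")  # 10,460,353,203
```

Hence the need for very many runs, and for careful choices about which corners of the space to sample.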
 See Also
- Details of the various stages of the experimental design of the project are available on the Strategy page.
- The current releases are the Transient Coupled Model (Experiment 2) and a re-release of the Slab Model (Experiment 1) on the Strategy page.
- More on the Project Background here.
- More on the Climateprediction.net Models.
- A group of models is called an Ensemble, and that article contains more detail about how such Ensembles help and why probabilistic forecasting is still problematic.
- Some of the first results are on the website.
- The following are articles, events and webcasts relating to climateprediction.net. For press coverage, please see the In The News page.
- Interview with Professor Bob Spicer from the Open University, describing the project (with thanks to The Technology Channel).
- The climateprediction.net team in action at the Royal Society Summer Science Exhibition, 2005.
- An article about BOINC and climateprediction.net by students at Cornell University, USA.
- For much more in-depth information on the project you could try the Nature First Results paper or the Explanation of the Nature Journal - First CPDN Results.
- Other Scientific Papers, including The Design Papers.
- There are also Public Presentations here, though these tend to show the slides without any notes about what was said.
- Climateprediction.net (CPDN) held an open day on 30 July 2004. You can see details of talks and videos here.