Module D - Summary of the first phase of MiKlip

The main task of the two Module D projects, FLEXFORDEC and INTEGRATION, is to construct the central decadal prediction and evaluation systems, to provide them to all other MiKlip projects, and to improve them with those projects' help.

The central decadal prediction system

The building blocks of the prediction system
The prediction system is built around the Earth System Model of the Max Planck Institute for Meteorology (MPI-ESM), in its low-resolution (LR) and mixed-resolution (MR) configurations, together with an initialisation procedure. The system is used to produce ensembles of hindcasts (retrospective forecasts) and forecasts. Both are produced in an identical way; the only difference is that forecasts are initialised from the most recent point in the observational datasets, whereas hindcasts are initialised from observations in the past. The hindcasts are used to assess the performance of the system and have been the central simulations used and analysed by the whole project.

The different generations of the central prediction system

During the course of MiKlip, three generations of the prediction system have been produced: Baseline 0, Baseline 1 and the prototype. To move from one generation to the next, suggestions for improvement were collected from all MiKlip modules; a subset of these suggestions was tested in a suite of test experiments before the final choice was made on what to change with respect to the previous generation. The systems differ in the initialisation procedure and in the number of ensemble members, with all other settings remaining the same (see Table 1).

The MiKlip Server

The prediction system creates vast amounts of data; the full set of hindcasts for one generation of the system amounts to 6000 simulated years or more. To facilitate the use of the data, the hindcasts were standardised according to the CMOR standard as used for CMIP5. At the request of the project partners, the CMIP5 variable list was extended with some MiKlip-specific variables. The hindcasts, together with further simulations and the observational datasets needed for their evaluation, are all hosted in this common standardised format on a dedicated server – the MiKlip server – accessible to all project members. The server now has around 140 registered users, of whom some 50 are actively using it. The MiKlip server is also home to the central evaluation system (CES), giving the CES and its users easy access to all hindcasts and observational data.

Module D Summary - Table with generations of the prediction system
Table 1. The experimental set-up for the three generations of the MiKlip decadal prediction system

The central evaluation system 

In order to make any statement about the performance of the system, it needs to be evaluated. This is done by comparing the hindcasts to observational data and to other potential prediction systems, such as the uninitialised simulations of the CMIP5 project (historical simulations), climatology or persistence. This is the main task of module E, but evaluation is also done in the other modules. The different ways of evaluating the model, both the different tools and the datasets, are synthesised into the central evaluation system (CES) by the INTEGRATION project.
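To illustrate what such a comparison involves, the following Python sketch (not part of the CES; all data synthetic) builds climatology and persistence reference forecasts from a toy observed series and scores a toy hindcast against them with a mean-squared-error skill score:

```python
import numpy as np

rng = np.random.default_rng(0)
obs = 0.02 * np.arange(50) + rng.normal(0.0, 0.1, 50)  # toy observed anomalies
train, verif = obs[:30], obs[30:]

# Two simple reference prediction systems:
clim = np.full(verif.size, train.mean())  # climatology: training-period mean
pers = np.full(verif.size, train[-1])     # persistence: last observed value

# A toy "initialized" hindcast: the truth plus small errors
hindcast = verif + rng.normal(0.0, 0.05, verif.size)

def msess(fcst, obs, ref):
    """Mean-squared-error skill score of fcst relative to a reference forecast.
    1 means perfect, 0 means no better than the reference, negative means worse."""
    return 1.0 - np.mean((fcst - obs) ** 2) / np.mean((ref - obs) ** 2)

print(msess(hindcast, verif, clim) > 0)  # True: beats climatology
print(msess(hindcast, verif, pers) > 0)  # True: beats persistence
```

A real evaluation would of course use gridded model output and observational datasets rather than a scalar toy series, but the scoring logic is the same.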

The evaluation system can be accessed either via the shell or via a web interface, allowing for a flexible, user-oriented utilisation of the system. At the core of the evaluation system is the plug-in MurCSS (Illing et al., 2014), which computes the Murphy-Epstein decomposition of the MSESS as well as the CRPSS. The tool uses and extends the verification framework for decadal predictions suggested by Goddard et al. (2013), and it is the recommended first step when analysing the hindcasts. The evaluation system now contains some 20 further plug-ins, ranging from pure plotting routines to tools for further statistical analysis. All these tools allow for a standardised comparison between the generations of the decadal prediction system, and also between these and alternative hindcasts proposed within the project or by other modelling institutions.
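MurCSS itself is not reproduced here, but the decomposition it applies can be sketched in a few lines. The following illustrative Python snippet (synthetic data; population statistics; climatological reference forecast) splits the MSESS into a correlation term, a conditional-bias term and an unconditional-bias term, and checks that the decomposition matches the directly computed score:

```python
import numpy as np

def msess_decomposition(fcst, obs):
    """Murphy-Epstein-style decomposition of the MSESS with the observed
    climatology as reference forecast:
        MSESS = r**2 - (r - s)**2 - b**2
    where r is the correlation, s the ratio of the standard deviations and
    b the normalised mean bias (all with population statistics, ddof=0)."""
    r = np.corrcoef(fcst, obs)[0, 1]
    s = fcst.std() / obs.std()
    b = (fcst.mean() - obs.mean()) / obs.std()
    return r**2 - (r - s)**2 - b**2, (r, s, b)

rng = np.random.default_rng(1)
obs = rng.normal(0.0, 1.0, 200)               # toy observations
fcst = 0.8 * obs + rng.normal(0.0, 0.5, 200)  # toy hindcast

score, (r, s, b) = msess_decomposition(fcst, obs)

# The decomposition reproduces the directly computed skill score exactly:
direct = 1.0 - np.mean((fcst - obs) ** 2) / obs.var()
print(abs(score - direct) < 1e-9)  # True
```

The value of the decomposition is diagnostic: it shows whether a poor score stems from low correlation, from a conditional bias (wrong amplitude of variability) or from a mean bias.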

Module D Summary - Fig. 1
Figure 1. (INTEGRATION) The central evaluation system (CES) and its three-tier software model, depicting the basic components and the interfaces to standardised data and tools.

Catering for project partners’ needs

Thanks to the two interfaces of the CES, users have easy access to both data and tools. To highlight some features: the “Data Browser” in the web interface allows for an easy search of all available datasets and variables, and the history function embedded in all tools allows users to revisit and retrace all analyses they have performed, providing an easy way to adjust the settings for further analyses. Users can also analyse their own data via the “project data” functionality. In this way their data are added to the search routines of the CES and can be analysed and compared with the central hindcasts and other simulations. To get an impression of what the evaluation system offers, visit the web interface, click the “Guest?” button and you will be given a tour of the system!

Module D Summary - Fig. 2
Figure 2.(INTEGRATION/FLEXFORDEC) Usage of tools on the MiKlip server.

Some results from the past few years

Both projects in Module D have been analysing the hindcasts of the different generations resulting in quite a number of publications, both individually and in cooperation with other MiKlip projects, in particular with DroughtClip, AODA-PENG, MODINI, EnsDiVal and ALARM.

Module D Summary - Fig. 3
Figure 3. (INTEGRATION) Differences of anomaly correlation skill: (a, b) b1-LR minus b0-LR; (c, d) b1-MR minus b1-LR. Year 1 in panels (a) and (c), years 2–5 in panels (b) and (d). Crosses denote differences exceeding the 5–95% confidence level. Reproduced from Figure 2 of Pohlmann et al. (2013).

Baseline 0

The hindcasts were compared with uninitialized experiments which consider aerosol and greenhouse gas concentrations for the period 1850-2005 and the RCP4.5 scenario thereafter. Müller et al. (2012) analysed the predictive skill of the Baseline-0 system. They show that the initialization of MPI-ESM-LR improves forecast skill with respect to the uninitialized experiments predominantly over the North Atlantic for all lead times. Moreover, based on the recognition that the variability of key parameters, such as surface air temperature (SAT) and sea level pressure, and their underlying processes may change considerably in space and time as a function of season, Müller et al. (2012) identified considerably varying forecast skill of, e.g., SAT for the different seasons. For multi-year winter means, positive skill scores are predominantly located in northern Europe; for multi-year spring to autumn means, they appear in central and south-eastern Europe. However, negative skill scores over the tropical Pacific reflect a systematic error in the initialization of Baseline 0. As a consequence, the overall skill, for example in terms of global mean temperature, is lower than that of other systems (Bellucci et al. 2015).
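The gain from initialization described above is typically measured with the anomaly correlation coefficient (ACC). The following toy Python sketch (synthetic series, not actual MPI-ESM output) computes the ACC of an "initialized" and an "uninitialized" toy forecast against the same observations:

```python
import numpy as np

def acc(fcst, obs):
    """Anomaly correlation coefficient: Pearson correlation of the two
    series after removing their respective time means."""
    fa, oa = fcst - fcst.mean(), obs - obs.mean()
    return float(np.sum(fa * oa) / np.sqrt(np.sum(fa**2) * np.sum(oa**2)))

rng = np.random.default_rng(2)
obs = rng.normal(0.0, 1.0, 100)                      # toy observed anomalies
initialized = 0.7 * obs + rng.normal(0.0, 0.5, 100)  # tracks part of the observed state
uninitialized = rng.normal(0.0, 1.0, 100)            # no knowledge of the observed state

print(acc(initialized, obs))    # clearly positive
print(acc(uninitialized, obs))  # close to zero
```

In the actual evaluation this is computed per grid point and lead time, and the difference in ACC between two systems is tested for statistical significance (the crosses in Figure 3).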

Baseline 1

The results show that the new oceanic initialization considerably improves the performance in terms of surface air temperature over the tropical oceans at the 1-year and 2–5-year time scales (Fig. 3; Pohlmann et al. 2013). This also helps to improve the predictive skill of global mean surface air temperature on these time scales. The higher model resolution improves the predictive skill of surface air temperature over the tropical Pacific even further. Moreover, due to the higher vertical resolution of ECHAM6, the climate variability in the stratosphere, such as the quasi-biennial oscillation (QBO), is better represented in MPI-ESM-MR. With the newly introduced atmospheric initialization, the QBO exhibits predictive skill of up to 4 years when a sufficiently high vertical atmospheric resolution is used (Pohlmann et al. 2013, Scaife et al. 2014). Another advantage of the Baseline-1 system becomes apparent when analysing the Atlantic meridional overturning circulation, which has a higher predictive skill in the Baseline-1 system than in the Baseline-0 system (Müller et al. 2015). The two projects worked together on a recent publication (Kadow et al. 2015), in which MurCSS was used to investigate the skill and ensemble spread of surface temperature and precipitation in the Baseline 1 LR system and in the forecasts started at the beginning of 2014. They show that the ensemble spread is a good measure for estimating the forecast uncertainty in many regions, and that by combining this measure with the skill measure it is possible to identify regions for which one can have some confidence in the forecast.
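The idea of using ensemble spread as an estimate of forecast uncertainty, as in Kadow et al. (2015), rests on a spread-error relation: where the ensemble disagrees more, the ensemble-mean error tends to be larger. This can be sketched with synthetic data (an illustration of the concept only, not the MiKlip analysis):

```python
import numpy as np

rng = np.random.default_rng(3)
n_members, n_starts = 10, 60
truth = rng.normal(0.0, 1.0, n_starts)  # toy verifying observations

# Toy ensemble: each member sees the truth plus independent errors whose
# size varies from start date to start date, so the uncertainty varies too
err_scale = rng.uniform(0.2, 1.0, n_starts)
ens = truth[None, :] + rng.normal(0.0, 1.0, (n_members, n_starts)) * err_scale

ens_mean = ens.mean(axis=0)
spread = ens.std(axis=0, ddof=1)    # ensemble spread per start date
abs_err = np.abs(ens_mean - truth)  # ensemble-mean error per start date

# In a well-calibrated system, larger spread goes with larger typical error:
print(np.corrcoef(spread, abs_err)[0, 1])  # positive spread-error relation
```

The correlation is well below one even in this idealised setting, because the error of any single forecast is only one random draw from the distribution the spread describes; the spread predicts the typical error, not the actual one.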

The prototype

For the prototype system the ensemble size was increased to a total of 30 members. In a joint publication between FLEXFORDEC and DroughtClip it was shown that a large ensemble size can be crucial for the detection of prediction skill (Sienz et al., 2015).
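The effect reported by Sienz et al. (2015) can be illustrated conceptually: subsampling a synthetic 30-member ensemble shows how the skill estimate of the ensemble mean grows with ensemble size (toy data, not the prototype hindcasts):

```python
import numpy as np

rng = np.random.default_rng(4)
n_members, n_starts = 30, 50
truth = rng.normal(0.0, 1.0, n_starts)
# Weakly skilful members: a small common signal buried in large member noise
members = 0.3 * truth[None, :] + rng.normal(0.0, 1.0, (n_members, n_starts))

def mean_corr(k, n_draws=500):
    """Average correlation with the truth of the mean of k randomly
    drawn members (drawn without replacement)."""
    rs = []
    for _ in range(n_draws):
        pick = rng.choice(n_members, size=k, replace=False)
        rs.append(np.corrcoef(members[pick].mean(axis=0), truth)[0, 1])
    return float(np.mean(rs))

print(mean_corr(3), mean_corr(30))  # skill estimate grows with ensemble size
```

With few members, the noise in the ensemble mean can mask a weak predictable signal entirely, which is why a small ensemble may fail to detect skill that a large one reveals.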

Provision of MiKlip hindcasts and forecasts

Outside MiKlip, there is an active contribution to the EU-FP7 project SPECS, for which hindcasts of the Baseline-1 system (LR and MR) are provided. Moreover, actual forecasts of the central prediction system are provided to a real-time multi-model forecast system led by the UK Met Office (Smith et al. 2013).

Module D and MiKlip II

For MiKlip II, Module D will consist of FLEXFORDEC, INTEGRATION and three new work packages based at the DWD and at GERICS. With this addition, increased attention will be given to the operational use of the decadal prediction and evaluation systems and to the direct involvement of potential users.


Bellucci, A., Haarsma, R., Gualdi, S., Athanasiadis, P., Caian, M., Cassou, C., Fernandez, E., Germe, A., Jungclaus, J. H., Kröger, J., Matei, D., Mueller, W. A., Pohlmann, H., Salas y Melia, D., Sanchez, E., Smith, D., Terray, L., Wyser, K., & Yang, S., 2015: An assessment of a multi-model ensemble of decadal climate predictions. Climate Dynamics, 44, 2787-2806. doi:10.1007/s00382-014-2164-y.

Goddard, L., et al., 2013: A verification framework for interannual-to-decadal predictions experiments. Climate Dynamics, 40, 245-272.

Illing, S., C. Kadow, O. Kunst, and U. Cubasch, 2014: MurCSS: A Tool for Standardized Evaluation of Decadal Hindcast Systems. Journal of Open Research Software, 2(1):e24, DOI:10.5334/

Kadow, C., S. Illing, O. Kunst, H.W. Rust, H. Pohlmann, W.A. Müller and U. Cubasch, 2015: Evaluation of forecasts by accuracy and spread in the MiKlip decadal climate prediction system, Meteorol. Z., doi:10.1127/metz/2015/0639.

Pohlmann, H., W. A. Müller, K. Kulkarni, M. Kameswarrao, D. Matei, F. S. E. Vamborg, C. Kadow, S. Illing, J. Marotzke, 2013: Improved forecast skill in the tropics in the new MiKlip decadal climate predictions. Geophys. Res. Lett., 40, 5798-5802. doi:10.1002/2013GL058051.

Scaife, A. A., M. Athanassiadou, M. Andrews, A. Arribas, M. Baldwin, N. Dunstone, J. Knight, C. MacLachlan, E. Manzini, W. A. Müller, H. Pohlmann, D. Smith., T. Stockdale, and A. Williams, 2014: Predictability of the Quasi-Biennial Oscillation and its Northern Winter Teleconnection on Seasonal to Decadal Timescales. Geophys. Res. Lett., 41(5), pp. 1752–1758, doi:10.1002/2013GL059160.

Sienz, F., H. Pohlmann, and W.A. Müller, 2015: Ensemble size impact on the decadal predictive skill assessment, under revision.

Smith, D. M., A. A. Scaife, G. J. Boer, M. Caian, F. J. Doblas-Reyes, V. Guemas, E. Hawkins, W. Hazeleger, L. Hermanson, C. K. Ho, M. Ishii, V. Kharin, M. Kimoto, B. Kirtman, J. Lean, D. Matei, W. A. Müller, H. Pohlmann, A. Rosati, B. Wouters, and K. Wyser, 2013: Real-time multi-model decadal predictions. Clim. Dyn., 41, 2875-2888. doi:10.1007/s00382-013-1600-z.

Müller, V., H. Pohlmann, H. Haak, J. H. Jungclaus, D. Matei, J. Marotzke, W. A. Müller, and J. Baehr, 2014: Predictions of the Atlantic meridional overturning circulation at 26.5°N within two MPI-ESM decadal climate prediction systems. Clim. Dyn., submitted.
