About


climateprediction.net was the largest climate modelling experiment ever undertaken, building on hundreds of millions of dollars of international research effort. It used a state-of-the-art simulation of the climate system, which ran on thousands of users’ PCs. Project participants could download and run different parameterisations of a climate simulation program, thereby providing data for studying climate change.

Challenge for KMi:
KMi’s role in the project was to develop a robust semantic web portal for the community of participants. A mixture of technologies was used to support rich interactive facilities (including discussion forums, a news service, conferencing and instant messaging). The portal used state-of-the-art semantic web technology (developed in KMi) to provide fully customisable ‘semantic filters’ that could be placed over any web-based document (whether local or remote) and gave the user contextual information about identified ‘concepts of interest’.

Overview

The partners won a £400,000 grant from the Natural Environment Research Council’s e-Science initiative and a further £350,000 from Oxford e-Science Centre to test and improve the leading climate prediction model, using hundreds of thousands of personal computers.

With climate, there were many variables and parameters involved; thus, it was necessary to run the models over and over again, slightly varying the starting conditions each time and observing the spread of results at the end. The models simulated the changes occurring in the Earth’s atmosphere and oceans every few hours, and because so much data was involved, they were usually run on supercomputers, consuming significant computer time and financial resources. By porting the most advanced one, the Hadley Centre model, onto a PC platform, climateprediction.net aimed to harness the idle capacity of numerous PCs around the world.
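The ensemble approach described here can be illustrated with a short sketch. The toy model, parameter ranges and perturbation scheme below are purely illustrative assumptions, not the Hadley Centre model; they show only the principle of running the same simulation many times with slightly varied inputs and then examining the spread of outcomes.

```python
import random
import statistics

def toy_climate_model(sensitivity, initial_temp, years=100):
    """Illustrative stand-in for a climate simulation; not the real model."""
    temp = initial_temp
    for _ in range(years):
        # Extremely simplified warming response controlled by one parameter.
        temp += 0.01 * sensitivity + random.gauss(0, 0.02)
    return temp

# Run an ensemble: each member slightly perturbs the parameters and starting state.
ensemble_results = []
for member in range(1000):
    sensitivity = random.uniform(1.5, 4.5)      # assumed plausible parameter range
    initial_temp = 14.0 + random.gauss(0, 0.1)  # perturbed starting condition
    ensemble_results.append(toy_climate_model(sensitivity, initial_temp))

# The spread of outcomes, not any single run, is the object of study.
print("mean final temperature:", statistics.mean(ensemble_results))
print("spread (standard deviation):", statistics.stdev(ensemble_results))
```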

A single model run could take up to six months. The results were sent back, and the aggregated model runs generated an enormous database containing petabytes (a petabyte is a billion megabytes) of data. This database was then ‘mined’ to search for shortcomings and anomalies in the model. From the climatology research perspective, the goal was to improve the model and make climate predictions more reliable. However, there were also enormous opportunities in the area of advanced web and communication technologies.

Objectives

The project aimed to provide participants with various ‘non-invasive’ methods for understanding and learning about climate science in cooperation with other members of a large community. Particular attention was focused on supporting the interpretation, sharing, and comparative analysis of data generated from the large-scale distributed experiment.

Categories of Objectives

In order to achieve the main aim and make the specific topic of the climateprediction.net project more accessible to the general public, schools, universities, and various special interest groups, we tackled the following categories of objectives:

  • Making sense of a specific climate model or a model accessible from the web, including its partial results and visualisations;
  • Interaction with and accessibility to the rich domain resources (scientific papers, news, etc.);
  • Enabling members of the community to present and publish partial results and ‘project proposals’ (‘forensic evidence gathering’);
  • Encouraging interactive participation in community events (discussions, webcasts, ‘popular science’ newsletters, etc.).

Semantic tools

The Semantic Web

The success of the internet radically transformed the way people worked, studied, shopped, and communicated. But the next-generation web, which was being developed at the time, had the potential to perform many more tasks for users, thanks to intelligent software capable of interpreting meaning to understand their needs better. For instance, future search engines were expected to go beyond merely looking for keywords and picking out all the websites that contained them; instead, they would be able to interpret the meaning of the questions they were given, using a new generation of ‘semantic mark-up’ language that was being developed by researchers across the world. More significantly, this also made possible the development of ‘smart agents’ that could collaborate with one another on behalf of users.
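As an illustration of what such machine-readable mark-up might look like, the sketch below uses the rdflib Python library to describe a web document in terms of a few typed concepts. The namespace, class names and document URL are invented for the example; they are not drawn from the project itself.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef

# Hypothetical vocabulary of climate concepts (not a real project namespace).
CLIM = Namespace("http://example.org/climate#")

g = Graph()
doc = URIRef("http://example.org/articles/hadley-model-overview")

# Assert that the document is an article mentioning two domain concepts.
g.add((doc, RDF.type, CLIM.Article))
g.add((doc, CLIM.mentions, CLIM.HadleyCentreModel))
g.add((doc, CLIM.mentions, CLIM.ClimateSensitivity))
g.add((CLIM.HadleyCentreModel, RDF.type, CLIM.ClimateModel))
g.add((CLIM.HadleyCentreModel, CLIM.label, Literal("Hadley Centre model")))

# A search engine or software agent can now query by meaning rather than keywords,
# e.g. "find every article that mentions a climate model".
print(g.serialize(format="turtle"))
```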

Magpie Semantic Engine

Magpie was an experimental Semantic Web ‘filter’ developed at The Open University’s Knowledge Media Institute. It worked as a streamlined toolbar that sat within a web browser and helped users find and further elaborate on topics of interest. In the context of climateprediction.net, it was a fair bet that users had some interest in climate prediction, but Magpie had broader applicability as well. Magpie automatically highlighted key items of interest within any web page visited, and for each highlighted term, it provided a set of ‘services’ (e.g., explanations, examples, further links) when users right-clicked on the item.
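The highlighting behaviour can be sketched in a few lines: a lexicon maps known terms to concept categories, and matching terms in a page are wrapped in mark-up to which browser-side services could later be attached. The lexicon entries and the mark-up are illustrative assumptions; this is not Magpie's actual implementation.

```python
import re

# Hypothetical lexicon: surface terms mapped to concept categories.
LEXICON = {
    "Hadley Centre model": "climate-model",
    "climate sensitivity": "parameter",
    "ensemble": "method",
}

def highlight(html_text, lexicon=LEXICON):
    """Wrap known terms in <span> tags so services can be attached to them later."""
    for term, category in lexicon.items():
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        html_text = pattern.sub(
            lambda m, c=category: f'<span class="magpie" data-concept="{c}">{m.group(0)}</span>',
            html_text,
        )
    return html_text

page = "<p>The Hadley Centre model explores climate sensitivity via an ensemble.</p>"
print(highlight(page))
```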

People

Martin Dzbor
Enrico Motta
John Domingue
Marc Eisenstadt
Elaine McPherson

Project partners

University of Oxford
The Open University
University of Reading
The Met Office
UGAMP
The British Atmospheric Data Centre
Tessella
Research Systems
Natural Environment Research Council

Publications

Semantic Layering with Magpie
(John Domingue, Martin Dzbor & Enrico Motta)

Short project overview for OU Council and research prospectus
(Bob Spicer & Martin Dzbor)

Draft scenario and vision of the project
(Martin Dzbor)