About
Overview
Motivation
The need for an improved cloud detection method for Proba-V products was raised during the first Proba-V Quality Working Group Meeting, held at ESTEC in March 2015. This need was also underlined during the Proba-V Symposium, held in Ghent in January 2016, as one of the major issues to be addressed to improve data quality. Several presenters at the Symposium reported under-detection of clouds by the current algorithm, with semi-transparent clouds representing a major concern for land cover applications as well as for surface property retrieval.
To answer this need, ESA, in collaboration with BELSPO, decided to organise a Round Robin exercise on Proba-V Cloud Detection. The Round Robin is open to any interested user, with the goal of providing recommendations for the definition of the future operational processor baseline.
Objectives
The objectives of the Round Robin exercise are:
- To inter-compare different cloud screening methodologies for Proba-V and learn about their advantages and drawbacks under various cloud and surface conditions.
- To provide final recommendations to ESA on the best candidate for implementation in the operational processing chain.
- To review and consolidate user requirements within the Proba-V community on cloud clearing and decide on the trade-off between under-detection of clouds and clear-pixel availability.
- To collect lessons learnt on cloud detection in the VNIR and SWIR domain for land and coastal water remote sensing and reuse them in the frame of Sentinel-2 and Sentinel-3 cloud detection.
- To increase awareness of the Proba-V mission by inviting new scientific teams to the Round Robin exercise and organising a final workshop on the project results.
Definitions
Term | Definition |
---|---|
Validation Dataset | The Validation Dataset is the "truth" data used to assess the quality of the different cloud detection algorithms. It consists of a relatively large ensemble of pixels (several tens of thousands) manually classified through visual inspection of the reference images. This dataset needs to be global and representative of different environmental conditions (cloud and surface types, different seasons). The validation dataset will be kept in a "vault" and used only at the end of the Round Robin exercise to perform the final quality assessment. |
Test Dataset | The Test Dataset is a representative sample of the validation dataset, provided to the Round Robin participants as an indicator of our pixel classification criteria and definitions. This dataset is a subset, randomly extracted from the validation dataset, in which all the relevant pixel classes are adequately represented. |
Training Dataset | The Training Dataset is a statistically significant ensemble of pixels used by the algorithm providers to train and "calibrate" their methods. Ideally, the training dataset should be much larger than the validation dataset so that it includes a sufficient ensemble of cases to train the method's prediction capability. The selection of a suitable training dataset is the responsibility of the algorithm providers. |
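As an illustration of the Test Dataset definition above, the sketch below draws a random, per-class sample from a set of manually classified pixels so that every class remains represented. The pixel records, class labels, and sampling fraction are hypothetical assumptions for illustration, not part of the Round Robin specification.

```python
import random
from collections import defaultdict

def extract_test_subset(validation_pixels, fraction=0.1, seed=42):
    """Randomly draw a per-class fraction of classified pixels, so that
    every class present in the validation set is represented."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for pixel in validation_pixels:
        by_class[pixel["label"]].append(pixel)
    subset = []
    for label, pixels in by_class.items():
        k = max(1, round(len(pixels) * fraction))  # at least one pixel per class
        subset.extend(rng.sample(pixels, k))
    return subset

# Hypothetical labelled pixels (class names are illustrative only)
validation = (
    [{"id": i, "label": "clear"} for i in range(100)]
    + [{"id": i, "label": "cloud"} for i in range(100, 160)]
    + [{"id": i, "label": "semi-transparent"} for i in range(160, 180)]
)
test = extract_test_subset(validation, fraction=0.1)
```

Sampling per class rather than over the whole pool mirrors the requirement that "all the relevant pixel classes are adequately represented" even when some classes, such as semi-transparent clouds, are rare.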
Round Robin Setup
A summary table providing the main requirements and settings for the Round Robin exercise:

Requirements | Settings |
---|---|
General Round Robin Settings | |
Algorithm Requirements | |
Input Reference Scenes | |
Validation Dataset | |
Test Dataset | |
Quality Assessment Metrics | |
Schedule
The Proba-V Cloud Detection Round Robin will be performed according to the following plan:
Activity | Deadline |
---|---|
Registration to the Round Robin | 13 April 2016 |
Delivery of Round Robin Protocols | 30 May 2016 |
Delivery of Round Robin input data (reference images, Test Dataset) | 15 June 2016 |
Preparation of the Validation and Test Dataset | 30 June 2016 |
Delivery of Round Robin output data (cloud masks, ATBD) | 1 November 2016 |
Quality Assessment Report | 15 January 2017 |
Final Workshop in ESRIN | 1 March 2017 |
Additional Information
A fixed-price reward is allocated to each algorithm provider. The reward will be granted upon provision of the following deliverables:
- Cloud masks for the reference images (four full days of Proba-V global data)
- High-level ATBD (Algorithm Theoretical Basis Document) of the algorithm used
- Short Technical Note on the computing resources required by the algorithm
The Quality Assessment will be performed by Brockmann Consult on the Validation Dataset and a final report will be prepared and reviewed among the Round Robin participants.
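The assessment metrics are not specified here; as one plausible sketch, each submitted binary cloud mask could be scored against the manually classified "truth" pixels of the Validation Dataset using standard confusion-matrix statistics (overall accuracy, hit rate, false alarm rate). The function and variable names below are illustrative assumptions, not the actual assessment procedure.

```python
def cloud_mask_scores(truth, predicted):
    """Confusion-matrix statistics for a binary cloud mask.

    truth, predicted: sequences of booleans (True = cloudy) over the
    same set of validation pixels.
    """
    tp = sum(t and p for t, p in zip(truth, predicted))          # cloud found
    tn = sum(not t and not p for t, p in zip(truth, predicted))  # clear found
    fp = sum(not t and p for t, p in zip(truth, predicted))      # clear flagged cloudy
    fn = sum(t and not p for t, p in zip(truth, predicted))      # cloud missed
    total = tp + tn + fp + fn
    return {
        "accuracy": (tp + tn) / total,
        "hit_rate": tp / (tp + fn) if tp + fn else 0.0,
        "false_alarm_rate": fp / (fp + tn) if fp + tn else 0.0,
    }

# Toy example: 8 pixels, one missed cloud and one false alarm
truth     = [True, True, True, False, False, False, False, False]
predicted = [True, True, False, False, False, False, True, False]
scores = cloud_mask_scores(truth, predicted)
```

A low hit rate would correspond to the cloud under-detection reported at the Symposium, while the false alarm rate captures the other side of the trade-off: clear pixels lost to over-flagging.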
Results will be further discussed during a dedicated one day workshop in ESRIN.
A final peer-reviewed paper will be prepared, summarising the results of the inter-comparison. Co-authorship of the paper will be granted to the Round Robin participants.