FAQ

Sentinel-3 OLCI Technical Guide - FAQ

OGVI questions


FAQ-OLCI-001 - How is the OGVI (OLCI's Level 2 product) obtained? I understand that the O3, O10 and O17 bands are used, but specifically how? I need to know the algorithm.

The OGVI is actually the Fraction of Absorbed Photosynthetically Active Radiation (FAPAR). For a more detailed description, see the OLCI Global Vegetation Index page and the Level 2 FAPAR ATBD.


FAQ-OLCI-002 - I have been using the EVI (Enhanced Vegetation Index) from MODIS and I would like to use OGVI instead of EVI. Is this possible?

EVI and OGVI are both correlated with vegetation status; however, they are differently defined quantities. You should also note that OLCI has different band definitions compared to MODIS. In order to decide whether OGVI would be suitable for your work, we recommend that you check the ATBD for a detailed description of the algorithm.


FAQ-OLCI-003 - Is atmospheric correction applied in the OGVI product?

FAPAR is generated from red and NIR rectified reflectances. These are virtual reflectances, largely decontaminated from atmospheric and angular effects, and a good proxy for top-of-canopy reflectances. For details, see the ATBD.


FAQ-OLCI-004 - Is there a tool (similar to Sen2Cor) that can be used for Sentinel-3 atmospheric correction using the TOA data?

Yes, iCor, developed by VITO, is available as a SNAP plug-in. The SYNERGY Level 2 product, based on some bands of OLCI and SLSTR, contains surface reflectance projected on a common grid.



WQSF questions


FAQ-OLCI-005 - How are the masks WQSF_lsb_CLOUD and WQSF_lsb_SNOW_ICE created (product S3A_OL_2_WFR)? Where can I find information about the processing chains as well as the algorithms used?

The WQSF_lsb_CLOUD and WQSF_lsb_SNOW_ICE flags are set within a first step of pixel classification. This step gathers the results of "classical" brightness and whiteness tests together with a cloudiness probability index computed via an artificial neural network (ANN) trained on manually classified data and using all the OLCI channels. The ANN output index is used to split pixels into several classes, among which two are directly linked to the flags you are interested in: the Snow/ice class and the Cloud sure class. Other classes include several ambiguous cases (semi-transparent cloud, turbid atmosphere and spatially mixed pixels) as well as the clear-sky cases without snow or ice.

The combination of test results and ANN outputs is rather complex and depends in particular on the underlying surface. Regarding WQSF_lsb_CLOUD, one can mention that the ANN output is consolidated by brightness and whiteness checks. Regarding WQSF_lsb_SNOW_ICE, the ANN output over water is consolidated against a sea ice climatology, i.e. it is not raised where and when sea ice is unlikely to occur; over land it is complemented with a Snow Differential Index (using the spectral slope in the NIR, which differs between snow and clouds).
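For reference, the resulting flags can be decoded from the Level 2 product following the usual CF flag conventions. Below is a minimal sketch, assuming the flag band is stored as a variable named WQSF in wqsf.nc and carries flag_masks/flag_meanings attributes; check these names against your own products before relying on them.

```python
# Hedged sketch (not the operational classification itself): read the CLOUD and
# SNOW_ICE flags from an OL_2_WFR product. Variable/file names are assumptions.
import numpy as np
from netCDF4 import Dataset

with Dataset("S3A_OL_2_WFR_.../wqsf.nc") as nc:   # path to the unzipped product folder
    wqsf = nc.variables["WQSF"]
    masks = dict(zip(wqsf.flag_meanings.split(),
                     np.asarray(wqsf.flag_masks).astype(np.uint64)))
    data = np.asarray(wqsf[:]).astype(np.uint64)

cloud = (data & masks["CLOUD"]) != 0        # pixels with the CLOUD flag raised
snow_ice = (data & masks["SNOW_ICE"]) != 0  # pixels with the SNOW_ICE flag raised
print("cloud pixels:", cloud.sum(), "| snow/ice pixels:", snow_ice.sum())
```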


FAQ-OLCI-006 - I have questions about OLCI radiometric calibration:

  1. What are the current calibration coefficients for OLCI?
  2. Which time period of the products (i.e. the specific time range) is subject to the radiometric calibration offset? And can these offsets be corrected with the provided coefficients?

The relevant official information is available in the Sentinel-3/OLCI Product Notices.

  1. The calibration coefficients for OLCI form a complex dataset: there is one coefficient for each instrument spatial pixel and each channel, as OLCI is an imaging spectrometer. In addition, there is a long-term trending correction term, also parameterised for each spatial pixel and channel.
  2. The whole OLCI dataset is affected by the radiometric bias, whatever the acquisition date. There are no correction coefficients available, as the source of the discrepancy is not fully understood at present, and the various methods used to assess the biases do not converge enough to provide reliable correction terms with the appropriate accuracy.


FAQ-OLCI-007 - I have read some ATBD documents on the website, however, I don't understand what criterion is used to distinguish between using BPAC (Bright Pixel Atmospheric Correction) and CWAC (Clean Water Atmospheric Correction) algorithms for different water pixels (i.e. turbid or clean water).

Also, I have seen that the OLCI L2 product contains the flag "WQSF_lsb.BPAC_ON". Does this mean that a pixel whose "BPAC_ON" flag is "True" was corrected by the BPAC algorithm?

The BPC (the new name for BPAC, see below) is applied to all water pixels, upstream of the CWAC. Its purpose is not atmospheric correction but rather to provide an estimate of the water-leaving reflectance in the NIR, as seen from Top Of Atmosphere (TOA), so that it can be subtracted from the TOA spectrum and allow the CWAC to properly identify the aerosol type and load. In the case of clear open-ocean waters, it returns the very weak signal of pure water.

The BPAC_ON flag reports on the convergence of the BPC iterative algorithm. A weakness in the current version causes a high number of non-convergence cases, for which the flag is set to FALSE. A corrected version is under validation, with which most water pixels are flagged BPAC_ON, except when turbidity is beyond the scope of the algorithm.


FAQ-OLCI-008 - BPAC and CWAC are respectively the standard OLCI algorithms for turbid and clean waters, as far as I know. Their correction results can be found in OLCI L2 products since 5 July 2017. However, when I want to apply BPAC to OLCI images before that date, I cannot find related plugins. What can I do if I want to apply BPAC to L1 products acquired before 5 July 2017, or to L1 products from before that date that cannot be processed by BPAC?

As mentioned in FAQ-OLCI-007, BPC is not an atmospheric correction by itself; it is only the means to allow CWAC (the atmospheric correction) to work over turbid waters that are significantly bright at the wavelengths at which CWAC assesses the atmospheric signal (779 and 865 nm).

Note that the Case 2 waters specific products (TSM_NN, CHL_NN and ADG_NN) are not derived from the water-leaving reflectances output by the BPC+CWAC chain (and provided in the OLCI Level 2 product) but from an independent TOA-to-water-properties chain.

BPC and CWAC are implemented in the operational processors, which are only operated in the Sentinel-3 ground segment. There is currently no way for a user to (re)process Level 1 products.


FAQ-OLCI-009 - Regarding the processing version in the metadata, I do not know the definition of the PUG and IPF versions. Moreover, will the difference between the different "processing versions" be large? If I want to do research with OLCI images, do I need to follow the latest processing version of the product (i.e., I have to re-process the images when the new version is released)?

First, the PUG and the IPF are two different processors with different functions. The IPF implements the algorithms at the different levels (L0 (not delivered to users), L1 and L2), and there is one IPF per sensor and per level (OL_1, SL_1, OL_2, etc.). The PUG is a post-processor which does not modify the physical content of the product. It generates what we call PDUs (Product Dissemination Units). The PUG simply aims at producing products whose spatial footprint is the same from one orbit to the next.

Yes, potentially, the differences between two processing baselines (PB) may be significant, depending on the number and importance of the bugs corrected and the evolutions (if any) implemented in the new version. This is detailed in each Product Notice issued when a new Processing Baseline is operationally deployed.

With OLCI (and with any other instrument), each processing baseline has been used during a certain period of time. You can find information in the Processing Baseline section of each instrument in the Technical Guide.

Please note that you cannot reprocess the data yourself, but it is in the agency's plans to reprocess the full datasets (from the beginning of the mission) with the latest PB in order to obtain a consistent time series.


FAQ-OLCI-010 - We have investigated several OLCI OL_1_EFR files where the global attribute 'resolution' is the same in the tie-point files and in the measurement files, even though the number of columns differs by a factor of 64 as expected from the xfdumanifest file and the Sentinel-3 User Handbook.

The resolution attribute applies to the whole product, not to a particular file. Hence the values are constant across all the files and in agreement with those of the manifest (in samplingParameters). In addition, the applicable sub-sampling factors are provided as global attributes in all netCDF files, allowing the two grids, including the resolution information, to be linked.

It should be noted that the Tie Points grid is actually the same grid as the pixel grid, but sub-sampled; in other words, each tie point corresponds to a pixel. For example, with an across-track sub-sampling factor of 64 (as in your products), tie-point column j corresponds to image column 64 × j, and the values for the image columns in between are obtained by interpolation.


FAQ-OLCI-011 - The application of the resolution attribute in the OLCI product differs from the SLSTR product, where the tie point files have a different resolution [16000 16000] than the measurement files [500 500].

Do we have to apply the resolution attribute to the whole product for OLCI, but to the single files for SLSTR?

The OLCI grids were defined and intended to be used as follows:

  1. Tie Points files contain annotation data (acquisition geometry, meteo, etc.), appended at Level 1, that are intended to be used by further processing, in particular the Level 2 processor. They are provided on a sub-sampled version of the product image grid, for storage and processing optimisation reasons, and because the sub-sampled resolution is compatible with the nature of the annotation data. In that sense, the resolution of the enclosed data is the product resolution times the relevant sub-sampling factor found in the samplingParameters metadata.
  2. Tie point data are nevertheless meant to be used at image pixel resolution, after interpolation to the image grid. In that sense, the resolution of the information borne by the tie-point annotations is the same as that of the product.

To summarise, the following rule may be applied to derive the content resolution of a given OLCI L1/L2 product netCDF file: if the file name starts with tie_, multiply the resolution provided in the file metadata by the relevant sub-sampling factor from samplingParameters; otherwise, use the resolution as provided.
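As an illustration of that rule, the sketch below derives the multiplying factors for one file with the netCDF4 library; the attribute names resolution, ac_subsampling_factor and al_subsampling_factor are assumptions based on typical OLCI files and should be checked (e.g. with ncdump -h) before use.

```python
# Hedged sketch of the rule above: tie_*.nc files hold annotation data on a
# sub-sampled grid, so their effective resolution is the product resolution times
# the sub-sampling factor(s). Attribute names are assumptions; verify with ncdump -h.
import os
from netCDF4 import Dataset

def subsampling_factors(path):
    """Return the (across-track, along-track) factors to apply to 'resolution'."""
    if not os.path.basename(path).startswith("tie_"):
        return 1, 1                      # measurement file: use resolution as provided
    with Dataset(path) as nc:
        return (nc.getncattr("ac_subsampling_factor"),
                nc.getncattr("al_subsampling_factor"))

# Example with a (hypothetical) tie-point geolocation file of an EFR product
ac, al = subsampling_factors("S3A_OL_1_EFR_.../tie_geo_coordinates.nc")
print("multiply the across/along-track resolution by", ac, "and", al)
```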

For SLSTR the various grids are not always linked by a regular sub-sampling rule, and the instrument measurements are not all on the same grid or at the same resolution, hence the various resolutions.


FAQ-OLCI-012 - I want to detect harmful algal blooms with Sentinel-3. Upon examining the user handbook and other documents, I could not locate this information. However, I did find the following information on your website, and it is listed below:

"Algal bloom detection has been the subject of a number of intensive research works during the last decade. The results of these works have been transformed into an operational capacity to trigger alerts for some invasive micro-algae. Within the framework of the GMES Service Element (e.g. Coastwatch and Marcoast) operational services have been set up and are still operating. The next scientific challenge is, whenever possible, to identify the type of algae and the harmfulness of the detected species together with the bloom strength and extent."

In reading the above text and the information that I have found, it seems that the satellite only identifies an algal bloom and not a "harmful" algal bloom. Does this mean that the satellite identifies the chlorophyll, temperature, and the colour of the water in determining a bloom? I am trying to verify this information so I can have a better understanding of the technology available and of when a "harmful" bloom may occur.

Indeed, the harmfulness of an algal bloom cannot be detected directly by satellite. However, some proxies such as the location, chlorophyll content and temperature can be used to infer a risk index, but this is still prospective work.

On board Sentinel-3, the OLCI instrument provides ocean colour information as well as chlorophyll content. The SLSTR instrument provides Sea Surface Temperature (SST) information.


FAQ-OLCI-013 - Sentinel-3 OLCI/SLSTR level-1 files have absolute/relative orbit numbers as well as absolute/relative pass numbers. And from two files I checked, the (abs/rel) pass number is twice the (abs/rel) orbit number. I tried to get information on the pass number from the Sentinel-3 handbook and from the level 1 ESA documents, but did not find any.

As far as I could see this field is not given for Sentinel-1 and Sentinel-2 data.

Could you clarify what the pass number is and/or what it is used for? Or a link/document where this is described?

A pass actually corresponds to half an orbit, and there are 770 passes per cycle (which is composed of 385 orbits). Whereas an orbit runs from equator to equator, a pass runs from pole to pole. The pass number is mostly used in altimetry, though the information is included in the manifest of OLCI or SLSTR products. There is no specific documentation.


FAQ-OLCI-014 - Could you tell me which reflectance bands are provided with the S3A_OL_2_LFR_ product?

The S3A_OL_2_LFR product does not provide surface reflectances. For that purpose, the user may be interested in the S3A_SY_2_SYN product, which contains surface reflectances over land surfaces.


FAQ-OLCI-015 - I am using Sentinel-3 OLCI data for chlorophyll mapping of submesoscale eddies, and I am in great need of detailed information on the Chlorophyll Neural Network and Chlorophyll OC4ME algorithms. All I have found on the website is the technical guide.

  1. Is the technical guide available in PDF format?
  2. Is there any literature concerning the development and validation of the two chlorophyll algorithms available with the OLCI data?

Please find below the ATBD describing the two algorithms:


FAQ-OLCI-016 - I'm using Satpy to process satellite data. When resampling to a given grid we use an AreaDefinition, and I'm curious if there is a way of setting up a grid specification based on the grid that you use when resampling OLCI data. According to the Product Grid page there is a resampling procedure performed on OLCI Level-1 data.

Is it possible to define an OLCI standard grid with some of the following parameters?

  • area_id (str) – Identifier for the area
  • description (str) – Human-readable description of the area
  • proj_id (str) – ID of projection
  • projection (dict or str) – Dictionary or string of Proj.4 parameters
  • width (int) – x dimension in number of pixels, aka number of grid columns
  • height (int) – y dimension in number of pixels, aka number of grid rows
  • rotation (float) – rotation in degrees (negative is cw)
  • area_extent (list) – Area extent as a list (lower_left_x, lower_left_y, upper_right_x, upper_right_y)
  • nprocs (int, optional) – Number of processor cores to be used for certain calculations

It is not possible to define such parameters, as the OLCI re-sampling process is not a geographical projection but a re-gridding from the actual instrument grid towards an "ideal instrument" grid that is spatially continuous and regular. It nevertheless remains a satellite-related grid.

Please see the Spatial Sampling page for more information.
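In practice, data on such a satellite-related grid is usually handled in Satpy/pyresample through the per-pixel geolocation (a SwathDefinition) and then resampled onto a user-defined AreaDefinition, rather than by describing the OLCI grid itself. A minimal sketch follows; the Satpy reader name "olci_l1b", the dataset name "Oa08" and the target area are assumptions or illustrative values to be checked against the Satpy documentation for your version.

```python
# Hedged sketch: resample OLCI L1 radiances from their swath geolocation onto a
# user-defined grid with Satpy/pyresample. Reader name, dataset name and the UTM
# target area below are assumptions/illustrative values only.
from glob import glob
from satpy import Scene
from pyresample.geometry import AreaDefinition

scn = Scene(reader="olci_l1b", filenames=glob("S3A_OL_1_EFR_.../*.nc"))
scn.load(["Oa08"])                         # geolocation is attached as a SwathDefinition

target = AreaDefinition(
    "utm33n_300m",                         # area_id
    "300 m grid over a UTM zone 33N box",  # description
    "utm33n",                              # proj_id
    {"proj": "utm", "zone": 33, "datum": "WGS84"},
    2000, 2000,                            # width, height
    (300000, 5000000, 900000, 5600000),    # area_extent (x_min, y_min, x_max, y_max)
)
resampled = scn.resample(target, resampler="nearest")
```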


FAQ-OLCI-017 - What is the spectral response function of OLCI/Sentinel-3, which I need in order to model my collected in-situ (radiometric) data?

On the Spectral Response Function Data page I saw that there are netCDF files containing the spectral response function parameters, but I would like to know how to apply them to my in-situ dataset (above-water measurements collected with a portable radiometer at 1 nm resolution).

The SRFs (spectral response functions) describe how the OLCI instrument responds to a given wavelength within each channel bandwidth. They are provided as response values over a wavelength range specific to each channel, with a given spectral step; both responses and wavelengths are provided in the files. To apply them to your in-situ spectral data, first interpolate your data onto the same spectral grid, then multiply the SRF by your radiance data element by element, sum up the resulting products, and finally normalise this sum by the sum of the SRF values.
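As a minimal numerical sketch of that procedure (variable names and the synthetic numbers below are purely illustrative; the actual SRF wavelength and response arrays should be read from the netCDF files described on the Spectral Response Function Data page):

```python
# Hedged sketch: band-average a 1 nm in-situ spectrum with one OLCI SRF.
import numpy as np

def band_average(insitu_wavelength, insitu_radiance, srf_wavelength, srf_response):
    # 1. interpolate the in-situ spectrum onto the SRF wavelength grid
    radiance_on_srf_grid = np.interp(srf_wavelength, insitu_wavelength, insitu_radiance)
    # 2./3. multiply element by element and sum, 4. normalise by the sum of the SRF
    return np.sum(srf_response * radiance_on_srf_grid) / np.sum(srf_response)

# Purely illustrative numbers: a 1 nm field spectrum and a hypothetical SRF near 560 nm
wl_insitu = np.arange(400.0, 800.0, 1.0)
rad_insitu = np.exp(-((wl_insitu - 560.0) / 80.0) ** 2)
wl_srf = np.arange(555.0, 565.5, 0.5)
resp_srf = np.exp(-((wl_srf - 560.0) / 3.0) ** 2)
print(band_average(wl_insitu, rad_insitu, wl_srf, resp_srf))
```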


FAQ-OLCI-018 - We are ingesting metadata for the Sentinel-3 OLCI instrument and are currently using the product type OL_1_EFR_. I have a query regarding the cloud cover attribute. As I understand it, this is not available via the API for this product type, although it does seem to be available for the L2 LFR__ and WFR___ products. Our goal is to use the true-colour bands from the OLCI instrument (currently using OL_1_EFR): is this not possible using the L2 products? If we need to continue to use the OL_1_EFR product, is there a derived L2 product, and if so, is it possible to use the cloud cover percentage from that?

On the use of true colours: OL_1_EFR is the only product providing radiance at every channel over every type of underlying surface. The OL_2_LFR product does not contain spectral information; the OL_2_WFR does (water-leaving reflectance), but only for clear-sky water pixels. So we recommend that you use OL_1_EFR.
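As an illustration, a simple quick-look composite can be assembled directly from three OL_1_EFR radiance channels. The band choice (Oa08 at about 665 nm for red, Oa06 at about 560 nm for green, Oa04 at about 490 nm for blue) and the file/variable naming below are assumptions, and the result is a crude display stretch of TOA radiances, not a colour-accurate, atmospherically corrected image.

```python
# Hedged sketch: quick-look RGB from OL_1_EFR TOA radiances (no atmospheric correction).
# Band choice and the Oa??_radiance.nc / Oa??_radiance naming are assumptions.
import numpy as np
from netCDF4 import Dataset

def read_radiance(product_dir, band):
    with Dataset(f"{product_dir}/{band}_radiance.nc") as nc:
        return np.ma.filled(nc.variables[f"{band}_radiance"][:].astype("float64"), np.nan)

product_dir = "S3A_OL_1_EFR_....SEN3"                 # unzipped product folder
rgb = np.dstack([read_radiance(product_dir, b) for b in ("Oa08", "Oa06", "Oa04")])
rgb = np.clip(rgb / np.nanpercentile(rgb, 99), 0.0, 1.0)   # crude stretch for display
```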

On the availability of a cloud cover attribute: indeed, there is no advanced cloud screening in the Level 1 processing; this is done in the Level 2 processors. And yes, for every OL_1_EFR product there is a corresponding OL_2_LFR and OL_2_WFR product with the same geographical coverage, from which you can extract the cloud cover information. On the other hand, there is a basic cloud screening within the Level 1 processing, focussed only on bright clouds and with some commission errors over very bright soils: the so-called Bright Pixels flag. It is reported in the metadata as a proxy for cloud coverage and may fulfil your needs, if limited to scene selection.


FAQ-OLCI-019 - I have observed that occasionally there are neighbouring pixels that have exactly the same values in both the "longitude" and "latitude" bands. The resolution of 300 m corresponds to about 0.0027 degrees, which means that the Int32 resolution with a scale of 1e-6 should be more than sufficient. The situation where two neighbouring pixels have the same coordinates was quite common, and sometimes even three such pixels occurred in a row.

I checked the data in the SNAP GUI. The problem is that the Long/Lat values in "Position" do not correspond to the lon/lat values in "Bands", nor to the TP_lon/TP_lat values.

I would like to know:

  1. How should I handle those identical pixels?
  2. How exactly does SNAP determine the pixel position?

In answer to your two points:

  1. The presence of identical pixels along a given image row is normal: it comes from the fact that the image grid has a constant spatial resolution in the across-track dimension, while that of the instrument degrades with increasing viewing angle (i.e. going away from nadir). These duplicated pixels are flagged as such by quality_flags.duplicated (see the sketch after this list).
  2. Position in SNAP: in your example, the pixel position displayed under the "Position" pane is consistent with that under the Tie-Point Grids pane; provided you convert the coordinates to decimal degrees before comparing, you will see differences well below the arc-second accuracy of the Position Lon/Lat display. Note that displaying tie-point values as the pixel position is a selectable option in SNAP: in the Tools/Options menu, the S3TBX tab allows you to select per-pixel geocoding for OLCI. If you do so, the pixel position lon/lat will match those of the longitude and latitude bands.
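Regarding point 1, the sketch below shows one way to mask out the duplicated pixels before analysis. The file name qualityFlags.nc, the variable name quality_flags and the flag_masks/flag_meanings attributes are assumptions to be checked against your products; only the "duplicated" flag name is taken from the description above.

```python
# Hedged sketch: mask the pixels flagged quality_flags.duplicated in an OL_1_EFR
# product before further processing. File/variable names are assumptions.
import numpy as np
from netCDF4 import Dataset

with Dataset("S3A_OL_1_EFR_.../qualityFlags.nc") as nc:
    qf = nc.variables["quality_flags"]
    masks = dict(zip(qf.flag_meanings.split(),
                     np.asarray(qf.flag_masks).astype(np.uint32)))
    duplicated = (np.asarray(qf[:]).astype(np.uint32) & masks["duplicated"]) != 0

with Dataset("S3A_OL_1_EFR_.../Oa08_radiance.nc") as nc:
    radiance = np.ma.filled(nc.variables["Oa08_radiance"][:].astype("float64"), np.nan)

radiance[duplicated] = np.nan   # exclude the duplicated columns from the analysis
```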
