Crop Type Mapping from Optical and Radar Time Series Using Attention-Based Deep Learning

Crop maps are key inputs for crop inventory production and yield estimation and can inform the implementation of effective farm management practices. Producing these maps at detailed scales requires exhaustive field surveys that can be laborious, time-consuming, and expensive to replicate. With a growing archive of remote sensing data, there are enormous opportunities to exploit dense satellite image time series (SITS), temporal sequences of images over the same area. Generally, crop type mapping relies on single-sensor inputs and is solved with traditional learning algorithms such as random forests or support vector machines. Deep learning techniques have since brought significant improvements by leveraging information in both the spatial and temporal dimensions, which are relevant in crop studies. The concurrent availability of Sentinel-1 (synthetic aperture radar) and Sentinel-2 (optical) data offers a great opportunity to use them jointly; however, optimizing their synergy with deep learning techniques has been understudied. In this work, we analyze and compare three fusion strategies (input, layer, and decision level) to identify the one that optimizes optical-radar classification performance. They are applied to a recent architecture, the pixel-set encoder–temporal attention encoder (PSE-TAE), developed specifically for object-based classification of SITS and based on self-attention mechanisms. Experiments are carried out in Brittany, in the northwest of France, with Sentinel-1 and Sentinel-2 time series. Input- and layer-level fusion competitively achieved the best overall F-scores, surpassing decision-level fusion by 2%. On a per-class basis, decision-level fusion increased the accuracy of dominant classes, whereas layer-level fusion improved minority classes by up to 13%. Against single-sensor baselines, multi-sensor fusion strategies identified crop types more accurately: for example, input-level fusion outperformed Sentinel-2 and Sentinel-1 by 3% and 9% in F-score, respectively. Additional experiments showed the importance of fusion for early time series classification and under high cloud cover conditions.
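The three fusion strategies compared in the abstract can be sketched in a few lines. This is a minimal toy illustration only: the random projections stand in for the PSE-TAE encoder and classifier head, and all names, shapes, and the probability-averaging rule are assumptions for illustration, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: each sensor yields a time series of T dates
# with a few features per date (shapes are illustrative).
T, C_OPT, C_SAR, N_CLASSES = 10, 10, 2, 5
s2 = rng.random((T, C_OPT))  # "Sentinel-2": optical features per date
s1 = rng.random((T, C_SAR))  # "Sentinel-1": radar features per date

def encode(x, dim=8):
    """Hypothetical per-sensor encoder: a fixed random projection
    standing in for the PSE-TAE embedding."""
    w = np.random.default_rng(x.shape[1]).random((x.size, dim))
    return x.reshape(-1) @ w

def classify(z, n=N_CLASSES):
    """Hypothetical classifier head: softmax over a random projection."""
    w = np.random.default_rng(z.shape[0]).random((z.shape[0], n))
    logits = z @ w
    e = np.exp(logits - logits.max())
    return e / e.sum()

# 1) Input-level fusion: concatenate sensor features date by date,
#    then run a single encoder + classifier.
p_input = classify(encode(np.concatenate([s2, s1], axis=1)))

# 2) Layer-level fusion: encode each sensor separately, concatenate
#    the embeddings, then classify once.
p_layer = classify(np.concatenate([encode(s2), encode(s1)]))

# 3) Decision-level fusion: run a full pipeline per sensor and
#    combine the per-sensor class probabilities (here, a plain average).
p_decision = (classify(encode(s2)) + classify(encode(s1))) / 2

for name, p in [("input", p_input), ("layer", p_layer), ("decision", p_decision)]:
    print(name, p.round(3))
```

Each strategy produces one probability vector over the crop classes; the strategies differ only in *where* the two sensor streams are merged, which is the design axis the paper evaluates.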

Saved in:
Bibliographic Details
Main Authors: Stella Ofori-Ampofo, Charlotte Pelletier, Stefan Lang
Format: article
Language: EN
Published: MDPI AG 2021
Subjects: Q (Science)
Online Access: https://doaj.org/article/b24982d8450a4ddfb4df3768aae07893
DOI: 10.3390/rs13224668
ISSN: 2072-4292
Article URL: https://www.mdpi.com/2072-4292/13/22/4668
Journal TOC: https://doaj.org/toc/2072-4292
Source: Remote Sensing, Vol 13, Iss 22, Art. 4668 (2021)
Published online: 2021-11-01
Keywords: fusion; satellite image time series; Sentinel-1; Sentinel-2; pixel-set encoder; temporal attention encoder; Science (Q)