Nowadays, the increasing amount of information provided by hyperspectral sensors requires optimal solutions to ease the subsequent analysis of the produced data. A common issue in this matter relates to the representation of hyperspectral data for classification tasks. Existing approaches address the data representation problem by performing dimensionality reduction over the original data. However, mining complementary features that reduce the redundancy across the multiple levels of hyperspectral images remains challenging. Thus, exploiting the representation power of neural network-based techniques becomes an attractive alternative in this matter. In this work, we propose a novel dimensionality reduction method for hyperspectral imaging based on autoencoders, enforcing orthogonality among features to reduce the redundancy in hyperspectral data. The experiments conducted on the Pavia University, Kennedy Space Center, and Botswana hyperspectral datasets evidence the representation power of our approach, leading to better classification performance compared to traditional hyperspectral dimensionality reduction algorithms.
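The abstract describes an autoencoder whose latent features are encouraged to be orthogonal, so that the reduced representation carries less redundancy. The paper's exact formulation is not given here, so the following is only a minimal sketch of that idea: a linear encoder/decoder with an added penalty that drives the Gram matrix of the latent features toward the identity. The function names, the linear form, and the weight `lam` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def orthogonality_penalty(Z):
    """Squared Frobenius distance between the (sample-normalized) Gram
    matrix of the latent features Z and the identity; it is zero exactly
    when the latent features are orthonormal, i.e. non-redundant."""
    n, k = Z.shape
    gram = (Z.T @ Z) / n          # k x k feature correlation matrix
    return np.sum((gram - np.eye(k)) ** 2)

def autoencoder_loss(X, W_enc, W_dec, lam=0.1):
    """Reconstruction error plus the orthogonality regularizer.
    Linear encoder/decoder for illustration only (a hypothetical setup)."""
    Z = X @ W_enc                 # encode: n x d  ->  n x k, with k < d
    X_hat = Z @ W_dec             # decode back to the original space
    recon = np.mean((X - X_hat) ** 2)
    return recon + lam * orthogonality_penalty(Z)
```

Minimizing such a loss over the encoder/decoder weights (e.g. by gradient descent) would yield a low-dimensional representation whose components are approximately decorrelated, which is the property the abstract attributes to the proposed method.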
|Number of pages||6|
|Publication||International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives|
|Status||Published - 21 Aug 2020|
|Published externally||Yes|
|Event||2020 24th ISPRS Congress - Technical Commission III - Nice, Virtual, France|
Duration: 31 Aug 2020 → 2 Sep 2020
Bibliographical note: Publisher Copyright:
© 2020 International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives.