Quality analysis in N-dimensional lossy compression of multispectral remote sensing time series images

L. Pesquer*, A. Zabala, X. Pons, J. Serra

*Corresponding author for this work

Research output: Other contribution

Abstract

This work aims to determine an efficient procedure, balanced between quality and compression ratio, for compressing multispectral remote sensing time series images in a 4-dimensional domain (2 spatial dimensions, 1 spectral and 1 temporal). The main factors studied were spectral and temporal aggregation, landscape type, compression ratio, cloud cover, thermal segregation and no-data regions. The authors used the three-dimensional Discrete Wavelet Transform (3D-DWT) as the compression methodology, implemented in the Kakadu software with the JPEG2000 standard. This methodology was applied to a series of 2008 Landsat-5 TM images covering three different landscapes, and to one scene (19-06-2007) from a hyperspectral CASI sensor. The results show that the 3D-DWT significantly improves quality for the hyperspectral images; for example, it achieves the same quality as band-independently compressed images at twice the compression ratio. In the Landsat spectral analysis the differences between the two compression methodologies are smaller than in the CASI analysis, and the results are more irregular, varying with the factor analyzed. The temporal analysis of the Landsat series shows that the 3D-DWT does not improve on band-independent compression.
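The comparison at the heart of the abstract, joint wavelet decorrelation across the spectral/temporal axis versus band-independent 2D compression, can be illustrated with a short sketch. The snippet below is not the authors' Kakadu/JPEG2000 pipeline: it uses PyWavelets, a synthetic 16-band cube, a bior4.4 wavelet and a crude coefficient-truncation step standing in for JPEG2000 rate allocation, all of which are assumptions for illustration only. It applies a 3D DWT over (band, row, column) and a per-band 2D DWT, discards the same fraction of coefficients in each case, and compares reconstruction quality with PSNR.

```python
# Minimal sketch (not the authors' Kakadu/JPEG2000 pipeline): compare a joint
# 3D DWT over (band, row, column) with band-independent 2D DWT compression,
# using coefficient truncation as a stand-in for lossy rate allocation.
import numpy as np
import pywt


def psnr(original, reconstructed, peak):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((original - reconstructed) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)


def lossy_dwt(cube, axes, keep_fraction=0.05, wavelet="bior4.4"):
    """Wavelet-transform `cube` over `axes`, keep only the largest
    `keep_fraction` of coefficients, and reconstruct."""
    coeffs = pywt.wavedecn(cube, wavelet=wavelet, axes=axes)
    arr, slices = pywt.coeffs_to_array(coeffs, axes=axes)
    threshold = np.quantile(np.abs(arr), 1.0 - keep_fraction)
    arr[np.abs(arr) < threshold] = 0.0  # crude coefficient truncation
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedecn")
    return pywt.waverecn(coeffs, wavelet=wavelet, axes=axes)


# Synthetic, spectrally correlated cube of shape (bands, rows, cols).
rng = np.random.default_rng(0)
base = rng.integers(0, 256, size=(256, 256)).astype(float)
cube = np.stack([base * (0.8 + 0.05 * b) + rng.normal(0, 5, base.shape)
                 for b in range(16)])
pk = float(cube.max())

# Joint 3D DWT: decorrelates the spectral/temporal axis together with space.
rec_3d = lossy_dwt(cube, axes=(0, 1, 2))
# Band-independent 2D DWT: each band compressed on its own.
rec_2d = lossy_dwt(cube, axes=(1, 2))

print(f"3D-DWT PSNR: {psnr(cube, rec_3d, pk):.2f} dB")
print(f"2D-DWT PSNR: {psnr(cube, rec_2d, pk):.2f} dB")
```

With strongly correlated bands, the joint transform typically concentrates energy into fewer coefficients, which is the mechanism behind the quality gain reported for the hyperspectral CASI scene; when the third axis is only weakly correlated, the advantage can disappear, consistent with the Landsat temporal result.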

Original language: American English
Type: Conference
DOIs
Publication status: Published - 24 Aug 2010

Publication series

Name: Proceedings of SPIE - The International Society for Optical Engineering
Volume: 7810
ISSN (Print): 0277-786X

Keywords

  • 3D lossy compression
  • Quality analysis
  • Remote sensing images
