Cost and scalability improvements to the Karhunen-Loève transform for remote-sensing image coding

Research output: Contribution to journal › Article › Research › peer-review

48 Citations (Scopus)


The Karhunen-Loève transform (KLT) is widely used in hyperspectral image compression because of its strong spectral decorrelation properties. However, its use entails a very high computational cost. To reduce this cost and to increase scalability, in this paper, we introduce a multilevel clustering approach for the KLT. As the set of possible multilevel clustering structures is very large, a two-stage process is used to carefully pick the best members for each specific situation. First, several candidate structures are generated through local search and eigenthresholding methods; then, the candidates are further screened to select the best clustering configuration. Two multilevel clustering combinations are proposed for hyperspectral image compression: one that matches the coding performance of the KLT with much lower computational requirements and increased scalability, and another that outperforms a lossy wavelet transform, used as a spectral decorrelator, in quality, cost, and scalability. Extensive experimental validation is performed with images from both the AVIRIS and Hyperion sets, and with JPEG2000, 3D-TCE, and the CCSDS Image Data Compression recommendation as image coders. Experiments also include classification-based results produced by k-means clustering and Reed-Xiaoli anomaly detection. © 2010 IEEE.
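The cost saving described in the abstract comes from applying the KLT independently within clusters of spectral bands rather than across all bands at once, since the eigendecomposition cost grows cubically with the number of bands involved. A minimal NumPy sketch of this per-cluster KLT idea follows; it is an illustration only, not the paper's implementation, and the function name, the fixed cluster split, and the single-level structure are assumptions (the paper uses multilevel structures selected by local search and eigenthresholding):

```python
import numpy as np

def clustered_klt(image, clusters):
    """Apply a KLT (eigendecomposition of the band covariance matrix)
    independently to each cluster of spectral bands.

    image    : array of shape (bands, pixels)
    clusters : iterable of index ranges, one per band cluster
    """
    out = np.empty(image.shape, dtype=float)
    for band_idx in clusters:
        x = image[band_idx].astype(float)
        x = x - x.mean(axis=1, keepdims=True)   # remove per-band mean
        cov = x @ x.T / x.shape[1]              # cluster covariance matrix
        _, vecs = np.linalg.eigh(cov)           # KLT basis = eigenvectors
        out[band_idx] = vecs.T[::-1] @ x        # largest-eigenvalue first
    return out

# Toy example: 8 bands, 100 pixels, split into two clusters of 4 bands.
# Each 4x4 eigendecomposition is far cheaper than one 8x8 (and the gap
# widens rapidly for the ~200 bands of AVIRIS or Hyperion imagery).
rng = np.random.default_rng(0)
img = rng.normal(size=(8, 100))
coeffs = clustered_klt(img, [range(0, 4), range(4, 8)])
```

Within each cluster the output coefficients are fully decorrelated; residual correlation between clusters is what the paper's multilevel structures are designed to mop up at higher levels.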
Original language: English
Article number: 5443569
Pages (from-to): 2854-2863
Journal: IEEE Transactions on Geoscience and Remote Sensing
Publication status: Published - 1 Jul 2010


  • Component scalability
  • Hyperspectral data coding
  • Karhunen-Loève Transform (KLT)
  • Low cost
  • Progressive lossy-to-lossless (PLL) and lossy compression


