TY - CHAP
T1 - Lossless coding of hyperspectral images with principal polynomial analysis
AU - Amrani, N.
AU - Laparra, V.
AU - Camps-Valls, G.
AU - Serra-Sagrista, J.
AU - Malo, J.
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2014/1/28
Y1 - 2014/1/28
N2 - The transform in image coding aims to remove redundancy among data coefficients so that they can be coded independently, and to capture most of the image information in few coefficients. While the second goal ensures that discarding coefficients will not lead to large errors, the first ensures that simple (point-wise) coding schemes can be applied to the retained coefficients with optimal results. Principal Component Analysis (PCA) provides the best independence and data compaction for Gaussian sources. However, non-linear generalizations of PCA may perform better for more realistic non-Gaussian sources. Principal Polynomial Analysis (PPA) generalizes PCA by removing the non-linear relations among components using regression, and has been analytically shown to outperform PCA in dimensionality reduction. Here we explore the suitability of reversible PPA for lossless compression of hyperspectral images. We find that reversible PPA performs worse than PCA owing to the high impact of rounding errors and the amount of side information. We then propose two generalizations: Backwards PPA, where polynomial estimations are performed in reverse order, and Double-Sided PPA, where more than a single dimension is used in the predictions. Both yield better coding performance than canonical PPA and are comparable to PCA.
AB - The transform in image coding aims to remove redundancy among data coefficients so that they can be coded independently, and to capture most of the image information in few coefficients. While the second goal ensures that discarding coefficients will not lead to large errors, the first ensures that simple (point-wise) coding schemes can be applied to the retained coefficients with optimal results. Principal Component Analysis (PCA) provides the best independence and data compaction for Gaussian sources. However, non-linear generalizations of PCA may perform better for more realistic non-Gaussian sources. Principal Polynomial Analysis (PPA) generalizes PCA by removing the non-linear relations among components using regression, and has been analytically shown to outperform PCA in dimensionality reduction. Here we explore the suitability of reversible PPA for lossless compression of hyperspectral images. We find that reversible PPA performs worse than PCA owing to the high impact of rounding errors and the amount of side information. We then propose two generalizations: Backwards PPA, where polynomial estimations are performed in reverse order, and Double-Sided PPA, where more than a single dimension is used in the predictions. Both yield better coding performance than canonical PPA and are comparable to PCA.
KW - decorrelation
KW - entropy
KW - hyperspectral image coding
KW - Principal Component Analysis
KW - Principal Polynomial Analysis
UR - http://www.scopus.com/inward/record.url?scp=84949928772&partnerID=8YFLogxK
U2 - 10.1109/ICIP.2014.7025817
DO - 10.1109/ICIP.2014.7025817
M3 - Chapter
AN - SCOPUS:84949928772
T3 - 2014 IEEE International Conference on Image Processing, ICIP 2014
SP - 4023
EP - 4026
BT - 2014 IEEE International Conference on Image Processing, ICIP 2014
ER -