Sensors (Basel). 2017 Oct 23;17(10):2421. doi: 10.3390/s17102421.

A Hyperspectral Image Classification Framework with Spatial Pixel Pair Features


Lingyan Ran et al. Sensors (Basel). 2017.

Abstract

During recent years, convolutional neural network (CNN)-based methods have been widely applied to hyperspectral image (HSI) classification, mostly by mining spectral variabilities. However, spatial consistency in HSI is rarely exploited beyond serving as an extra convolutional channel. Very recently, the development of pixel pair features (PPF) for HSI classification has offered a new way of incorporating spatial information. In this paper, we first propose an improved PPF-style feature, the spatial pixel pair feature (SPPF), that better exploits both spatial/contextual and spectral information. On top of the new SPPF, we further propose a flexible multi-stream CNN-based classification framework that is compatible with multiple in-stream sub-network designs. The proposed SPPF differs from the original PPF in its pairing-pixel selection strategy: only pixels immediately adjacent to the central one are eligible, thereby imposing stronger spatial regularization. Additionally, with off-the-shelf classification sub-network designs, the proposed multi-stream, late-fusion CNN-based framework outperforms competing approaches without requiring extensive network configuration tuning. Experimental results on three publicly available datasets demonstrate the performance of the proposed SPPF-based HSI classification framework.
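The SPPF construction described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the eight-neighbor pairing rule follows the paper, while the function name, array shapes, and the choice to concatenate the two spectral vectors are assumptions.

```python
import numpy as np

def sppf_pairs(cube, r, c):
    """Build spatial pixel pair features for the pixel at (r, c).

    The central spectral vector x0 is paired with each of its eight
    immediate neighbors x1..x8 (the set N(x0)), yielding up to eight
    pair vectors. `cube` is an H x W x B hyperspectral image with B
    spectral bands; border pixels simply get fewer pairs.
    """
    H, W, B = cube.shape
    x0 = cube[r, c]
    pairs = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue  # skip the central pixel itself
            rr, cc = r + dr, c + dc
            if 0 <= rr < H and 0 <= cc < W:
                # one pair = central spectrum concatenated with a neighbor
                pairs.append(np.concatenate([x0, cube[rr, cc]]))
    return np.stack(pairs)  # shape: (n_neighbors, 2 * B)

# Toy 5x5 scene with 10 bands: an interior pixel yields 8 pairs of length 20.
cube = np.random.rand(5, 5, 10)
pairs = sppf_pairs(cube, 2, 2)
```

Each pair vector would then be fed to one stream of the classification network; a corner pixel has only three valid neighbors and thus produces three pairs.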

Keywords: convolutional neural networks; hyperspectral image classification; spatial pixel pair features.


Conflict of interest statement

The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript; nor in the decision to publish the results.

Figures

Figure 1
Illustrations of popular features used in HSI classification. Early CNN-based HSI classification methods are based either on the raw spectral pixel feature [22] in (a) or on the spectral pixel patch feature [37] in (b). As shown in (c), [32] adopts a random sampling scheme across the entire training set to construct a large number of labeled pixel pair feature (PPF) pairs. The proposed spatial pixel pair feature (SPPF) instead chooses the tight eight-neighbor set as N(x0), from which SPPF pairs such as (x0, x1), (x0, x2), etc. are built. (a) Raw spectral pixel feature; (b) spectral pixel patch feature; (c) PPF feature; (d) proposed SPPF feature.
Figure 2
Proposed classification framework for HSI classification based on SPPF features.
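The multi-stream, late-fusion idea behind the framework can be sketched as below. This is a hypothetical illustration under stated assumptions: the fusion rule (averaging per-stream softmax probabilities) and the function name are ours for clarity; the paper's actual fusion layer and sub-network outputs may differ.

```python
import numpy as np

def late_fusion(stream_logits):
    """Fuse per-stream class scores by averaging softmax probabilities.

    `stream_logits` is a list of (n_classes,) logit vectors, one per
    SPPF stream (e.g., one sub-network output per pixel pair). The
    fused vector is a valid probability distribution over classes.
    """
    probs = []
    for logits in stream_logits:
        e = np.exp(logits - logits.max())  # stable softmax
        probs.append(e / e.sum())
    return np.mean(probs, axis=0)

# Two toy streams voting over three classes.
scores = [np.array([2.0, 0.5, 0.1]), np.array([1.5, 1.0, 0.2])]
fused = late_fusion(scores)
label = int(np.argmax(fused))
```

Fusing after each stream's sub-network (rather than concatenating inputs early) is what makes the framework compatible with off-the-shelf sub-network designs, as each stream can be trained with the same architecture.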
Figure 3
Stability comparison results on three datasets with an increasing number of training samples. Generally, more training samples yield better-performing models.
Figure 4
Batch size influence on training the SPPF framework (CNN2-lite) with different datasets.
Figure 5
Results on the Indian Pines dataset. (a) Pseudo color image of the scene. (b–h) Results from competing methods. (i) The ground truth. Our algorithm achieves the best classification accuracy among the competing methods. (Best viewed in color.)
Figure 6
Results on the University of Pavia dataset. (a) Pseudo color image of the scene. (b–h) Results from competing methods. (i) The ground truth. Our algorithm achieves the best classification accuracy among the competing methods. (Best viewed in color.)
Figure 7
Results on the Salinas dataset. (a) Pseudo color image of the scene. (b–h) Results from competing methods. (i) The ground truth. Our algorithm achieves the best classification accuracy among the competing methods. (Best viewed in color.)


References

1. Hughes G.P. On the mean accuracy of statistical pattern recognizers. IEEE Trans. Inf. Theory. 1968;14:55–63. doi: 10.1109/TIT.1968.1054102.
2. Fauvel M., Tarabalka Y., Benediktsson J.A., Chanussot J., Tilton J.C. Advances in spectral-spatial classification of hyperspectral images. Proc. IEEE. 2013;101:652–675. doi: 10.1109/JPROC.2012.2197589.
3. Ablin R., Sulochana C.H. A survey of hyperspectral image classification in remote sensing. Int. J. Adv. Res. Comput. Commun. Eng. 2013;2:2986–3000.
4. Roweis S.T., Saul L.K. Nonlinear dimensionality reduction by locally linear embedding. Science. 2000;290:2323–2326. doi: 10.1126/science.290.5500.2323.
5. Fan J., Chen T., Lu S. Superpixel guided deep-sparse-representation learning for hyperspectral image classification. IEEE Trans. Circuits Syst. Video Technol. 2017. doi: 10.1109/TCSVT.2017.2746684.
