Performance Evaluation of Pansharpening Algorithms for WorldView-3 Satellite Imagery

  • Kim, Gu Hyeok (Dept. of Civil Engineering, Chungbuk National University) ;
  • Park, Nyung Hee (Dept. of Civil Engineering, Chungbuk National University) ;
  • Choi, Seok Keun (School of Civil Engineering, Chungbuk National University) ;
  • Choi, Jae Wan (School of Civil Engineering, Chungbuk National University)
  • Received : 2016.07.26
  • Accepted : 2016.08.23
  • Published : 2016.08.31

Abstract

The Worldview-3 satellite sensor provides a panchromatic image with high spatial resolution and 8-band multispectral images. Therefore, an image-sharpening technique, which sharpens the spatial resolution of multispectral images by using the high-spatial-resolution panchromatic image, is essential for various applications of Worldview-3 images based on image interpretation and processing. Existing pansharpening algorithms tend to trade off spectral distortion against spatial enhancement. In this study, we applied six pansharpening algorithms to Worldview-3 satellite imagery and assessed the quality of the pansharpened images qualitatively and quantitatively. We also analyzed the effect of the time lag between multispectral bands during the pansharpening process. Quantitative assessment of the pansharpened images was performed by comparing ERGAS (Erreur Relative Globale Adimensionnelle de Synthèse), SAM (Spectral Angle Mapper), Q-index and sCC (spatial Correlation Coefficient) on a real data set. In the experiments, the quantitative results obtained by the MRA (Multi-Resolution Analysis)-based algorithms were better than those by the CS (Component Substitution)-based algorithms, although the qualitative quality of the spectral information was similar for both. In addition, images obtained by the CS-based algorithms, and by dividing the two multispectral sensors, were sharper in terms of spatial quality than those obtained by the other pansharpening algorithms. Therefore, a pansharpening method for Worldview-3 images should be selected according to the intended remote sensing application, such as spectral- or spatial-information-based applications.

1. Introduction

Various remotely sensed satellite sensors, such as Kompsat-2/3/3A, Geoeye-1, QuickBird and Worldview-2, provide a panchromatic image with high spatial resolution together with multispectral images. To acquire multispectral images with high spatial resolution, researchers have developed pansharpening, or image fusion, algorithms; these algorithms sharpen the spatial resolution of multispectral images by using high-spatial-resolution panchromatic images (Alparone et al., 2006). The pansharpening technique is essential to various applications of remotely sensed images based on image interpretation and processing. The two major issues with the pansharpening process are spectral distortion and loss of spatial information in the pansharpened image (Choi et al., 2011). The various algorithms proposed for solving these problems tend to trade off spectral distortion against spatial enhancement. In particular, the development of the FIHS (Fast Intensity-Hue-Saturation) fusion method, which can quickly merge huge volumes of satellite data and be extended from three to four or more bands, has accelerated technical advancement in the pansharpening field (Tu et al., 2004). Laben and Brower (2000) proposed the GS (Gram-Schmidt) pansharpening algorithm, which is implemented in the ENVI (ENvironment for Visualizing Images) software. Choi et al. (2011) noted that pansharpening algorithms can be divided into CS (Component Substitution)- and MRA (Multi-Resolution Analysis)-based methods depending on how the low-spatial-resolution intensity image is generated. Pansharpened images produced using CS-based methods tend to distort spectral information when compared with those produced using MRA-based methods. On the other hand, MRA-based pansharpened images show relatively reduced sharpness. Aiazzi et al. (2009) analyzed global and context-adaptive parameters of CS-based and MRA-based pansharpening algorithms, including the GS algorithm.
Choi (2011) proposed an image fusion methodology for minimizing the local displacement caused by spatial differences among multispectral bands in a Worldview-2 image. Recently, Vivone et al. (2015) conducted a quantitative comparison of about eighteen pansharpening algorithms. They also provided a MATLAB (MATrix LABoratory) toolbox to the remote sensing community for easy comparison with state-of-the-art pansharpening algorithms (Vivone et al., 2015).

In 2014, the Worldview-3 satellite sensor, which provides panchromatic images with 0.3 m spatial resolution, 1.2 m multispectral resolution (8 bands) and 3.7 m short-wave infrared resolution (8 bands), was launched (Kruse and Perry, 2013). Worldview-3 images have higher spatial resolution than those obtained with other commercial satellite sensors, and their characteristics are similar to those of Worldview-2 images. In addition, as mentioned by Choi (2011), the pansharpening process for 8-band multispectral data should differ from that for 4-band data because of the local displacement of spatial information caused by time lag. However, performance evaluation and related research on pansharpening of Worldview-3 imagery are less mature than those on pansharpening of basic 4-band multispectral images. In this study, various state-of-the-art pansharpening algorithms were applied to Worldview-3 satellite imagery, and the quality of the pansharpened images was assessed qualitatively and quantitatively. In addition, extending the experiments of Vivone et al. (2015), we analyzed the effect of the time lag among the eight bands of a Worldview-3 image during the pansharpening process. Quantitative assessment of the pansharpened images was performed by using a total of four measures on a real dataset. The organization of our manuscript is as follows. Section 2 presents an overview of pansharpening methods; Section 3 describes the workflow for evaluating pansharpening performance; Section 4 shows the study area and data set; Section 5 illustrates experimental results using real data; and finally, Section 6 concludes the study.

 

2. Overview of Pan-sharpening Algorithms

A general pansharpening algorithm can be defined as Eq. (1):

$$\widehat{MS}_i = \widetilde{MS}_i + w_i\,(P - I) \qquad (1)$$

where $\widehat{MS}_i$: pansharpened image with high spatial resolution, $\widetilde{MS}_i$: original multispectral image with low spatial resolution, $w_i$: the coefficient for pansharpening, $I$: an intensity image, and $P$: panchromatic image with high spatial resolution.
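As a concrete illustration, Eq. (1) amounts to adding a weighted detail plane to each resampled band. The sketch below assumes NumPy arrays; the function name and array shapes are illustrative, not from the paper:

```python
import numpy as np

def fuse(ms_up, pan, intensity, gains):
    """Generic pansharpening step of Eq. (1).

    ms_up     : (N, H, W) multispectral bands resampled to the pan grid
    pan       : (H, W) panchromatic image
    intensity : (H, W) intensity image (CS: band combination,
                MRA: degraded panchromatic image)
    gains     : (N,) injection coefficients w_i, one per band
    """
    detail = pan - intensity                      # high-frequency detail plane
    return ms_up + gains[:, None, None] * detail  # inject into every band
```

The choice of `intensity` and `gains` is exactly what distinguishes the algorithms described in the following subsections.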

The intensity image I differs depending on the type of pansharpening algorithm. In the CS-based method, the intensity image is obtained by combining each multispectral band, but, in the MRA-based method, the intensity image is generated by degrading the panchromatic image. In this study, we chose state-of-the-art algorithms based on the results of the comparative study by Vivone et al. (2015) for assessing pansharpening quality for Worldview-3 images. Detailed description is provided below.

2.1 BDSD (Band-Dependent Spatial-Detail with local parameter estimation)

The BDSD algorithm optimizes the fusion parameters for injecting the high-frequency information of the panchromatic image in an MMSE (Minimum Mean Square Error) sense. In BDSD, the fusion parameters are designed based on the MVU (Minimum-Variance-Unbiased) estimator using the panchromatic image, the resampled multispectral bands, and the spatially degraded multispectral bands (Garzelli et al., 2008). BDSD can be used with a local or a global injection model, similar to the CS-based pansharpening model. Many researchers have used this algorithm as a state-of-the-art baseline in the pansharpening field.
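A minimal global variant of the BDSD idea can be sketched as an ordinary least-squares fit at the degraded scale, where each band's missing detail is regressed on the degraded panchromatic and multispectral observations. The actual MVU estimator and the local windowed variant are more involved; function names and shapes below are assumptions:

```python
import numpy as np

def bdsd_global_gains(pan_deg, ms_deg, ms_ref):
    """Global MMSE-flavoured gain estimate in the spirit of BDSD (sketch).

    pan_deg : (H, W)    panchromatic image degraded to MS resolution
    ms_deg  : (N, H, W) MS bands degraded once more (the 'low' observations)
    ms_ref  : (N, H, W) original MS bands, acting as reference at this scale
    Returns (N, N+1) coefficients: one row of weights per band over
    [pan_deg, ms_deg_1, ..., ms_deg_N].
    """
    n = ms_deg.shape[0]
    A = np.column_stack([pan_deg.ravel()] + [b.ravel() for b in ms_deg])
    gains = np.empty((n, n + 1))
    for i in range(n):
        target = (ms_ref[i] - ms_deg[i]).ravel()  # missing detail of band i
        gains[i], *_ = np.linalg.lstsq(A, target, rcond=None)
    return gains
```

At full scale, the same coefficients would then weight the panchromatic image and resampled bands to synthesize the injected detail.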

2.2 GSA (Gram-Schmidt Adaptive) algorithm

The GSA pansharpening algorithm is an adaptive version of GS. The fusion parameters for GSA are determined using statistical characteristics of the images, such as the variance of the intensity image and the covariance between the intensity image and each multispectral band. An optimal intensity image is generated based on multiple linear regression between the multispectral bands and the degraded panchromatic image. Aiazzi et al. (2009) describe this method in detail.
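The regression step for the optimal intensity image can be sketched as follows, assuming NumPy arrays at the multispectral resolution (function names are illustrative, not from Aiazzi et al., 2009):

```python
import numpy as np

def gsa_weights(ms_lo, pan_deg):
    """Regression weights for a GSA-style intensity image (sketch).

    Fits pan_deg ~ b0 + sum_i b_i * ms_lo_i by least squares; the fitted
    combination of the multispectral bands then serves as intensity I.
    ms_lo   : (N, H, W) multispectral bands at their native resolution
    pan_deg : (H, W)    panchromatic image degraded to the MS grid
    """
    A = np.column_stack([np.ones(pan_deg.size)] +
                        [b.ravel() for b in ms_lo])
    w, *_ = np.linalg.lstsq(A, pan_deg.ravel(), rcond=None)
    return w  # w[0] is the offset, w[1:] the band weights

def intensity_from_weights(ms, w):
    """Apply the fitted weights to a band stack to form the intensity image."""
    return w[0] + np.tensordot(w[1:], ms, axes=1)
```

At full scale, the same weights would be applied to the resampled multispectral bands before the GS substitution step.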

2.3 PRACS (Partial Replacement Adaptive Component Substitution) algorithm

Spectral distortion in general CS-based pansharpening algorithms is caused by spectral dissimilarity between the panchromatic and multispectral bands. Choi et al. (2011) derived band-dependent panchromatic images by using a weighted summation of the original panchromatic image and the intensity image; they also proposed an optimal fusion parameter for minimizing the local spectral instability error. This algorithm has a framework similar to that of CS-based algorithms, but it minimizes spectral distortion more effectively than other CS- and MRA-based algorithms.

2.4 HPF (High Pass Filtering) algorithm

HPF is one of the simplest ways to sharpen a multispectral image. In the HPF algorithm, high-frequency information from the panchromatic image, extracted using a high-pass filter, is injected into the multispectral image (Chavez et al., 1991). The fusion parameter for injection is determined by a statistical model. It is implemented in some commercial remote sensing software, such as ERDAS Imagine.
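Because HPF only needs a low-pass filter and a statistical gain, it is easy to sketch with plain NumPy. The box filter and the standard-deviation-ratio gain below are common choices for this family of methods, not necessarily the exact model of Chavez et al. (1991):

```python
import numpy as np

def box_blur(img, size=5):
    """Naive box blur with edge padding; supplies the low-pass component."""
    r = size // 2
    padded = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (size * size)

def hpf_sharpen(ms_up, pan, size=5):
    """HPF pansharpening sketch: inject the high-pass residual of the pan
    image into each band, scaled by a band-to-pan std.-deviation ratio."""
    detail = pan - box_blur(pan, size)              # high-frequency residual
    gains = np.array([b.std() / pan.std() for b in ms_up])
    return ms_up + gains[:, None, None] * detail
```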

2.5 AWLP (Additive Wavelet Luminance Proportional) algorithm

In MRA-based pansharpening algorithms, wavelet transformation is a representative technique for extracting high-frequency information from panchromatic images. In wavelet transformation, a panchromatic image with high spatial resolution is decomposed into a set of low-spatial-resolution images with corresponding spatial details, that is, the wavelet coefficients. The extracted wavelet coefficients are then injected directly into each multispectral band. In particular, the AWLP algorithm uses injection parameters based on the proportion of each multispectral band to the average of the multispectral bands (Otazu et al., 2005).
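The proportional injection rule itself is simple to sketch once the wavelet detail plane is available (the wavelet decomposition is omitted here; names and shapes are illustrative):

```python
import numpy as np

def awlp_inject(ms_up, detail, eps=1e-12):
    """Band-proportional detail injection in the spirit of AWLP (sketch).

    Each band receives the wavelet-extracted pan detail scaled by its share
    of the per-pixel band average, which preserves the spectral ratios
    between bands.
    ms_up  : (N, H, W) resampled multispectral bands
    detail : (H, W)    high-frequency plane extracted from the pan image
    """
    mean = ms_up.mean(axis=0) + eps          # per-pixel average of the bands
    return ms_up + (ms_up / mean) * detail   # proportional injection
```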

2.6 GLP (Generalized Laplacian Pyramid) with MTF (Modulation Transfer Function)-matched filter and CBD (Context-Based Decision) model (referred to as MTF-GLP-CBD algorithm)

MTF-GLP-CBD is known as one of the most efficient and representative MRA-based pansharpening algorithms. In MTF-GLP-CBD, a low-spatial-resolution panchromatic image is generated using a GLP to extract high-frequency information at each image pyramid level (Vivone et al., 2015). In particular, the spatial filter of the GLP is designed by exploiting the MTF of the sensor. Meanwhile, the fusion parameter, known as the CBD model, is determined from the standard deviations of the multispectral and low-resolution panchromatic images and their correlation.
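A global simplification of the CBD gain reduces to a regression slope; the actual model computes this over local windows. The sketch below assumes NumPy arrays and is not the toolbox implementation:

```python
import numpy as np

def cbd_gain(ms_band, pan_lo):
    """Global CBD-style injection gain (sketch; the real model is local).

    g = cov(MS, P_L) / var(P_L), i.e. the regression slope of the
    multispectral band on the low-resolution panchromatic image.
    """
    cov = np.cov(ms_band.ravel(), pan_lo.ravel())
    return cov[0, 1] / cov[1, 1]
```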

 

3. Comparison Methodology for Evaluating the Pansharpening Quality of Worldview-3 Imagery

3.1 Quality estimation protocol of pansharpened images

Pansharpened images can be evaluated using the synthesis and consistency properties (Palsson et al., 2016). In general, pansharpened images obtained using the original panchromatic and multispectral images have no reference for comparison purposes. This is a critical limitation for evaluating pansharpened data, and each paradigm addresses it differently. First, the synthesis paradigm uses spatially degraded panchromatic and multispectral images. The amount of resolution reduction is defined by the ratio between the original panchromatic and multispectral images. The spatial resolution of a pansharpened image obtained from the degraded images is identical to that of the original multispectral image, so the original image can serve as the reference for estimating the quality of the pansharpened image. Fig. 1 represents the synthesis protocol. An original multispectral image and a panchromatic image with spatial resolutions of 1.2 m and 0.3 m, respectively, are spatially degraded at the resolution ratio using the MTF (Vivone et al., 2015), generating multispectral and panchromatic images with 4.8 m and 1.2 m spatial resolution, respectively. Finally, the pansharpened image produced from the degraded images has the same resolution as the original multispectral image.
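The MTF-matched degradation used in both protocols can be sketched as a Gaussian blur whose frequency response equals the sensor's gain at the Nyquist frequency of the coarse grid, followed by decimation. The Gaussian model and the 3-sigma kernel truncation below are common assumptions, not the exact toolbox filter:

```python
import numpy as np

def mtf_sigma(gnyq, ratio):
    """Std. dev. of a Gaussian whose frequency response equals `gnyq`
    at the Nyquist frequency of the decimated grid (a common MTF model)."""
    f_nyq = 1.0 / (2.0 * ratio)  # cycles per high-resolution pixel
    return np.sqrt(-np.log(gnyq) / (2.0 * np.pi ** 2)) / f_nyq

def degrade(img, gnyq, ratio):
    """Blur with the MTF-matched Gaussian, then decimate by `ratio`."""
    sigma = mtf_sigma(gnyq, ratio)
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    blur1d = lambda v: np.convolve(np.pad(v, r, mode='edge'),
                                   kernel, mode='valid')
    low = np.apply_along_axis(blur1d, 0, img)   # separable convolution
    low = np.apply_along_axis(blur1d, 1, low)
    return low[::ratio, ::ratio]                # decimate to the coarse grid
```

With the values used later in the paper (Nyquist gain 0.29, resolution ratio 4), this model gives a blur of roughly two high-resolution pixels.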

Fig. 1. Framework of the synthesis protocol

However, the synthesis protocol does not guarantee the quality of pansharpened images produced from real data, because most users do not pansharpen spatially degraded data. Therefore, some researchers proposed the consistency paradigm, which uses the original multispectral image as the reference data. Fig. 2 describes the consistency paradigm for evaluating pansharpened images. In the consistency protocol illustrated in Fig. 2, an original multispectral image and a panchromatic image with spatial resolutions of 1.2 m and 0.3 m, respectively, are fused, creating a pansharpened image with 0.3 m spatial resolution. The pansharpened image is then spatially degraded to 1.2 m resolution using the MTF. Thus, a degraded pansharpened image that can be compared with the original multispectral image is obtained.

Fig. 2. Framework of the consistency protocol

Meanwhile, QNR (Quality No Reference) metrics have been used in many studies for the evaluation of pansharpened images. However, Palsson et al. (2016) concluded that QNR metrics are not effective for estimating the quantitative quality of pansharpened images. In addition, because quantitative evaluations by the synthesis and consistency protocols show similar trends, researchers prefer the consistency protocol based on real datasets (Palsson et al., 2016). Therefore, the consistency protocol can be used as a reliable paradigm for estimating the performance of pansharpening algorithms.

3.2 Quality estimation indices for pansharpened images

To evaluate the quality of pansharpened images, various measurements based on the consistency and synthesis paradigms have been used. In this study, we applied four metrics: ERGAS (Erreur Relative Globale Adimensionnelle de Synthèse), SAM (Spectral Angle Mapper), Q-index and sCC (spatial Correlation Coefficient). Let us denote the fused and reference images by F and R.

1) ERGAS: It estimates the relative global error in the spectral information of the pansharpened image. The equation for ERGAS is as follows (Vivone et al., 2015):

$$ERGAS = 100\,\frac{h}{l}\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(\frac{RMSE(B_i)}{\mu(B_i)}\right)^{2}} \qquad (2)$$

where $h$ and $l$: the spatial resolutions of the panchromatic and multispectral images (e.g., $h$ and $l$ are 0.3 m and 1.2 m, respectively, for Worldview-3 imagery), $N$: the number of bands, $RMSE(B_i)$: the root mean square error between $F_i$ and $R_i$ at the $i$-th band, and $\mu(B_i)$: the mean of the $i$-th reference band. The lower the ERGAS index, the lower the spectral distortion of the pansharpened image.

2) SAM: It is used in various applications that measure the spectral difference among pixels, such as image classification, target detection and change detection. It quantifies the spectral angle between the corresponding pixels of F and R as per Eq. (3):

$$SAM(F, R) = \arccos\left(\frac{\langle F, R\rangle}{\lVert F\rVert\,\lVert R\rVert}\right) \qquad (3)$$

where $\langle A, B\rangle$: the dot product of A and B, and $\lVert A\rVert$: the norm of A. The SAM index is obtained by averaging the SAM values over all pixels (Vivone et al., 2015). A SAM value approaching zero means that the pansharpened image is spectrally undistorted.

3) Q-index: The Q-index, also known as UIQI (Universal Image Quality Index), was developed by Wang and Bovik (2002). It measures the spectral distortion of a pansharpened image through three factors: loss of correlation, luminance distortion and contrast distortion (Wang and Bovik, 2002). It is defined as Eq. (4):

$$Q = \frac{4\,\sigma_{F,R}\,\bar{F}\,\bar{R}}{(\sigma_F^2 + \sigma_R^2)(\bar{F}^2 + \bar{R}^2)} \qquad (4)$$

where $\sigma_A$: the standard deviation of A, $\sigma_{A,B}$: the covariance of A and B, and $\bar{A}$: the mean of A. The Q-index approaches one as the spectral information of the pansharpened image becomes more similar to that of the reference image.

4) sCC: sCC represents the spatial quality of a pansharpened image, in contrast to ERGAS, SAM and the Q-index, which evaluate spectral quality. sCC measures the similarity of the high-frequency information between the pansharpened and panchromatic images (Zhou et al., 1998). First, a Laplacian filter with a 3×3 window is applied to each image to extract the high-frequency information. Thereafter, the correlation coefficient between the extracted information of the pansharpened and panchromatic images is calculated. It has a range of [0, 1]. The higher the sCC, the higher the spatial quality of the pansharpened image.
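Global versions of the spectral metrics follow directly from the definitions above. The NumPy sketch below is illustrative; published implementations typically compute the Q-index over local sliding windows rather than globally:

```python
import numpy as np

def ergas(fused, ref, ratio):
    """ERGAS over (N, H, W) band stacks; `ratio` = l/h (4 for Worldview-3)."""
    terms = [((f - r) ** 2).mean() / r.mean() ** 2
             for f, r in zip(fused, ref)]           # RMSE_i^2 / mu_i^2
    return 100.0 / ratio * np.sqrt(np.mean(terms))

def sam(fused, ref, eps=1e-12):
    """Mean spectral angle (radians) between per-pixel spectra."""
    f = fused.reshape(fused.shape[0], -1)
    r = ref.reshape(ref.shape[0], -1)
    cos = (f * r).sum(axis=0) / (np.linalg.norm(f, axis=0) *
                                 np.linalg.norm(r, axis=0) + eps)
    return np.arccos(np.clip(cos, -1.0, 1.0)).mean()

def q_index(a, b):
    """Global universal image quality index of Wang and Bovik (2002)."""
    cov = ((a - a.mean()) * (b - b.mean())).mean()
    return (4 * cov * a.mean() * b.mean()) / \
           ((a.var() + b.var()) * (a.mean() ** 2 + b.mean() ** 2))
```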

3.3 Pansharpening process for Worldview-3 satellite imagery

The Worldview-3 satellite sensor includes two multispectral sensors (MS1 and MS2) along with one panchromatic sensor. The MS1 sensor provides the blue, green, red and NIR1 spectral channels, while the MS2 sensor provides coastal, yellow, red edge and NIR2. General high-spatial-resolution satellite sensors have a time lag between the multispectral and panchromatic sensors because of technical limitations. In Worldview-3, the time lag between MS1 and MS2 is 0.26 seconds, while that between each multispectral image and the panchromatic image is 0.13 seconds (Gao et al., 2014). Therefore, when a pansharpening algorithm is applied to images acquired at different times, fringes or artifacts around moving objects appear in the pansharpened image, as shown in Fig. 3.

Fig. 3. An example of spatial dissimilarity caused by the time lag among images

In this study, by applying the pansharpening process of Choi (2011), we analyzed the effect of the time lag between MS1 and MS2 for each pansharpening algorithm. First, the original pansharpened image was generated by fusing the panchromatic image with all eight multispectral bands (Fig. 4(a)). In addition, the MS1 and MS2 images were separated to account for the time lag between them, and each of the MS1 and MS2 images was pansharpened using the panchromatic image. Finally, the 8-band pansharpened image was obtained by layer-stacking the two pansharpened images (Fig. 4(b)).
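The division workflow of Fig. 4(b) can be sketched as below. The band-index assignment for MS1/MS2 is an assumption based on the usual coastal-to-NIR2 ordering of the 8-band stack and should be checked against the image metadata; `sharpen` stands for any 4-band pansharpening routine:

```python
import numpy as np

# Assumed band order of the 8-band stack (verify against sensor metadata):
# 0 coastal, 1 blue, 2 green, 3 yellow, 4 red, 5 red edge, 6 NIR1, 7 NIR2
MS1 = [1, 2, 4, 6]   # blue, green, red, NIR1
MS2 = [0, 3, 5, 7]   # coastal, yellow, red edge, NIR2

def pansharpen_by_division(ms8, pan, sharpen):
    """Pansharpen MS1 and MS2 separately, then layer-stack (Fig. 4(b) idea).

    ms8     : (8, H, W) resampled multispectral stack
    pan     : (H, W)    panchromatic image
    sharpen : callable (ms_subset, pan) -> sharpened subset, applied to
              each 4-band group independently
    """
    out = np.empty_like(ms8)
    out[MS1] = sharpen(ms8[MS1], pan)   # fuse the MS1 group alone
    out[MS2] = sharpen(ms8[MS2], pan)   # fuse the MS2 group alone
    return out                          # layer-stacked 8-band result
```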

Fig. 4. Pansharpening process according to time lag: (a) 8-band pansharpening process, (b) pansharpening based on MS1 and MS2 division

 

4. Study Site and Data

In this study, Worldview-3 satellite imagery obtained from DigitalGlobe was used to evaluate the quality of the pansharpened images. Table 1 describes the data specifications. As shown in Fig. 5, two sites in Korea were selected for the experiments. Beolgyo (site 1), acquired on 2015/07/26, is a complex region with urban and vegetated areas, and Jeongok (site 2), acquired on 2015/08/27, is a vegetated area.

Table 1. Specifications of Worldview-3 satellite imagery

Fig. 5. Experimental data

 

5. Experimental Results and Discussion

As mentioned in Section 3, the consistency paradigm was used to evaluate the quantitative quality of the pansharpened images. Quantitative assessment of each pansharpened image was performed by comparing ERGAS, SAM, Q-index and sCC. The MTF gains of the Worldview-3 sensor at the Nyquist frequency, used for degrading the spatial resolution of the pansharpened image, were assumed to be 0.29 and 0.15 for the multispectral and panchromatic images, respectively. In total, six pansharpening algorithms (BDSD, GSA, PRACS, HPF, AWLP and MTF-GLP-CBD) were selected, as described in Section 2. In addition, pansharpening based on the division of the MS1 and MS2 sensors was applied. Tables 2 and 3 present the quantitative evaluation results based on the quality indices of the pansharpened images.

Table 2. Quantitative results of pansharpened images (Site 1)

Table 3. Quantitative results of pansharpened images (Site 2)

As shown in Tables 2 and 3, the overall pansharpening results by the MRA-based algorithms represent better spectral quality than those by the CS-based algorithms. However, spatial quality showed no specific trend or difference between the MRA- and CS-based algorithms. In addition, the PRACS and HPF methods showed the best spectral quality among the CS- and MRA-based algorithms, respectively. Regarding spatial quality, BDSD, GSA and HPF showed higher sCC values than the other algorithms. The spectral quality of the pansharpened images obtained from BDSD was poor in our study, which suggests that even a state-of-the-art pansharpening algorithm can generate pansharpened images of low quality. Meanwhile, in the CS-based methods, the spectral quality of the 8-band pansharpening process was better than that of pansharpening based on MS1 and MS2 division, while the spatial quality showed the reverse. In the case of pansharpening based on MS1 and MS2 division, the intensity image contains the spatial details of only MS1 or MS2, whereas the intensity image of the 8-band pansharpening process includes the spatial characteristics of both MS1 and MS2, which differ because of the time lag. Therefore, during the injection of spatial details in the pansharpening based on MS1 and MS2 division, the spatial characteristics shared by the intensity and multispectral images can be more efficiently offset by the subtraction operation of Eq. (1) than in the 8-band pansharpening process. This means that the pansharpened image of Fig. 4(b) includes only the spatial details of the panchromatic image, while the result of Fig. 4(a) includes some artifacts caused by the time lag between multispectral bands. On the other hand, with the MRA-based methods, the 8-band pansharpening process and pansharpening based on the division of MS1 and MS2 afforded similar outcomes; this could be because the spatial characteristics of the multispectral bands do not affect the spatial quality of a pansharpened image produced by an MRA-based method.

Therefore, the MRA-based algorithms are independent of the MS1 and MS2 division during the pansharpening process. In particular, a pansharpened image produced by an MRA-based method appears to mix the spatial details of the panchromatic and multispectral bands. Fig. 6 presents details of the pansharpened images for evaluating the visual and qualitative quality in Beolgyo. Almost all pansharpening results show similar colors. However, the images from the BDSD and GSA algorithms are clearer than those from PRACS and the MRA-based algorithms, suggesting that quantitative analysis of pansharpened images may differ from visual inspection of a Worldview-3 image. This indicates that the CS-based algorithms are better suited to spatial-information-based applications. The difference in the spatial quality of pansharpening among algorithms is remarkable when the MS1 and MS2 division is applied. Fig. 7 illustrates details of the pansharpening results. As shown in Fig. 7(b), the pansharpened image obtained by the CS-based method with MS1 and MS2 division, compared with the other algorithms, does not contain any artifacts or blurring in the areas of moving objects; this indicates that pansharpening with MS1 and MS2 division can be effective for image interpretation or feature detection in Worldview-3 imagery.

Fig. 6. Pansharpening results according to each algorithm (R: red, G: green, B: blue)

Fig. 7. Detailed image of the pansharpening results according to each algorithm (R: red edge, G: yellow, B: green)

 

6. Conclusion

This study conducted a quantitative and qualitative comparison of pansharpened images produced by CS- and MRA-based algorithms for Worldview-3 satellite imagery. After comparing various paradigms for estimating pansharpened image quality, the consistency paradigm was selected as the experimental methodology. Thereafter, six pansharpening algorithms were applied to the Worldview-3 images. In addition, to analyze the effect of the time lag between multispectral images, the original 8-band pansharpening process and the pansharpening method based on MS1 and MS2 division were both applied. In the experiments, the qualitative quality of the spectral information was similar for both, while the quantitative results obtained by the MRA-based algorithms were better than those by the CS-based algorithms. In addition, images obtained by the CS-based algorithms were sharper in terms of spatial quality than those obtained by the MRA-based algorithms. In particular, images obtained by the division of MS1 and MS2 have the advantage of sharp rendering of moving targets. Therefore, for each remote sensing application (e.g., CS-based pansharpening for image interpretation and MRA-based pansharpening for land cover classification), an effective pansharpening algorithm should be selected for optimal utilization of Worldview-3 imagery.

References

  1. Aiazzi, B., Baronti, S., Lotti, F., and Selva, M. (2009), A comparison between global and context-adaptive pansharpening of multispectral images, IEEE Geoscience and Remote Sensing Letters, Vol. 6, No. 2, pp.302-306. https://doi.org/10.1109/LGRS.2008.2012003
  2. Alparone, L., Wald, L., Chanussot, J., Thomas, C., Gamba, P., and Bruce, L. M. (2006), Comparison of pansharpening algorithms: Outcome of the 2006 GRS-S data fusion contest, IEEE Transactions on Geoscience and Remote Sensing, Vol. 45, No. 10, pp.3012-3021. https://doi.org/10.1109/TGRS.2007.904923
  3. Chavez P. S. Jr., Sides, S. C., and Anderson, J. A. (1991), Comparison of three different methods to merge multiresolution and multispectral data: Landsat TM and SPOT panchromatic, Photogrammetric Engineering & Remote Sensing, Vol. 57, No. 3, pp.295-303.
  4. Choi, J. (2011), A Worldview-2 satellite imagery pansharpening algorithm for minimizing the effects of local displacement, Journal of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography, Vol. 29, No. 6, pp.577-582. (in Korean with English abstract) https://doi.org/10.7848/ksgpc.2011.29.6.577
  5. Choi, J., Yu, K., and Kim, Y. (2011), A new adaptive component-substitution based satellite image fusion by using partial replacement, IEEE Transactions on Geoscience and Remote Sensing, Vol. 49, No. 1, pp.295-309. https://doi.org/10.1109/TGRS.2010.2051674
  6. Garzelli, A., Nencini, F., and Capobianco, L. (2008), Optimal MMSE pan sharpening of very high resolution multispectral images, IEEE Transactions on Geoscience and Remote Sensing, Vol. 46, No. 1, pp.228-236. https://doi.org/10.1109/TGRS.2007.907604
  7. Gao, F., Li, B., Xu, Q., and Zhong, C. (2014), Moving vehicle information extraction from single-pass Worldview-2 imagery based on ERGAS-SNS analysis, Remote Sensing, Vol. 6, No. 7, pp.6500-6523. https://doi.org/10.3390/rs6076500
  8. Kruse, F. A. and Perry, S. L. (2013), Mineral mapping using simulated Worldview-3 Short-Wave-Infrared Imagery, Remote Sensing, Vol. 5, No. 6, pp.2688-2703. https://doi.org/10.3390/rs5062688
  9. Laben, C. A. and Brower, B. V. (2000), Process for enhancing the spatial resolution of multispectral imagery using pan-sharpening, U.S. Patent 6011875, Eastman Kodak Company.
  10. Otazu, X., González-Audícana, M., Fors, O., and Núñez, J. (2005), Introduction of sensor spectral response into image fusion methods, Application to wavelet-based methods, IEEE Transactions on Geoscience and Remote Sensing, Vol. 43, No. 10, pp.2376-2385. https://doi.org/10.1109/TGRS.2005.856106
  11. Palsson, F., Sveinsson, J. R., Ulfarsson, M. O., and Benediktsson, J. A. (2016), Quantitative quality evaluation of pansharpened imagery: consistency versus synthesis, IEEE Transactions on Geoscience and Remote Sensing, Vol. 54, NO. 3, pp.1247-1259. https://doi.org/10.1109/TGRS.2015.2476513
  12. Tu, T.M., Huang, P. S., Hung, C. L., and Chang, C. P. (2004), A fast intensity-hue-saturation fusion technique with spectral adjustment for IKONOS imagery, IEEE Geoscience and Remote Sensing Letters, Vol. 1, No. 4, pp.309-312. https://doi.org/10.1109/LGRS.2004.834804
  13. Vivone, G., Alparone, L., Chanussot, J., Dalla Mura, M., Garzelli, A., Licciardi, G. A., Restaino, R., and Wald, L. (2015), A critical comparison among pansharpening algorithms, IEEE Transactions on Geoscience and Remote Sensing, Vol. 53, No. 5, pp.2565-2586. https://doi.org/10.1109/TGRS.2014.2361734
  14. Wang, Z. and Bovik, A. C. (2002), A universal image quality index, IEEE Signal Processing Letters, Vol. 9, No. 3, pp.81-84. https://doi.org/10.1109/97.995823
  15. Zhou, J., Civco, D.L., and Silander, J.A. (1998), A wavelet transform method to merge Landsat TM and SPOT panchromatic data, International Journal of Remote Sensing, Vol. 19, No. 4, pp.743–757. https://doi.org/10.1080/014311698215973
