
Robust Radiometric and Geometric Correction Methods for Drone-Based Hyperspectral Imaging in Agricultural Applications

  • Hyoung-Sub Shin (ERI Inc.) ;
  • Seung-Hwan Go (Department of Agricultural and Rural Engineering, Chungbuk National University) ;
  • Jong-Hwa Park (Department of Agricultural and Rural Engineering, Chungbuk National University)
  • Received : 2024.05.10
  • Accepted : 2024.06.12
  • Published : 2024.06.30

Abstract

Drone-mounted hyperspectral sensors (DHSs) have revolutionized remote sensing in agriculture by offering a cost-effective and flexible platform for high-resolution spectral data acquisition. Their ability to capture data at low altitudes minimizes atmospheric interference, enhancing their utility in agricultural monitoring and management. This study focused on addressing the challenges of radiometric and geometric distortions in preprocessing drone-acquired hyperspectral data. Radiometric correction, using the empirical line method (ELM) and spectral reference panels, effectively removed sensor noise and variations in solar irradiance, resulting in accurate surface reflectance values. Notably, the ELM correction improved reflectance accuracy for the measured reference panels (nominal reflectance of 5-55%), resulting in a more uniform spectral profile across wavelengths, further validated by high correlations (0.97-0.99), despite minor deviations observed at specific wavelengths for some reflectors. Geometric correction, utilizing a rubber sheet transformation with ground control points, successfully rectified distortions caused by sensor orientation and flight path variations, ensuring accurate spatial representation within the image. The effectiveness of geometric correction was assessed using root mean square error (RMSE) analysis, revealing minimal errors in both the east-west (0.00 to 0.081 m) and north-south (0.00 to 0.076 m) directions. The overall position RMSE of 0.031 m across 100 points demonstrates high geometric accuracy, exceeding industry standards. Additionally, image mosaicking was performed to create a comprehensive representation of the study area. These results demonstrate the effectiveness of the applied preprocessing techniques and highlight the potential of DHSs for precise crop health monitoring and management in smart agriculture.
However, further research is needed to address challenges related to data dimensionality, sensor calibration, and reference data availability, as well as exploring alternative correction methods and evaluating their performance in diverse environmental conditions to enhance the robustness and applicability of hyperspectral data processing in agriculture.

1. Introduction

Hyperspectral sensors, capable of capturing high-resolution spectral data across numerous bands, have emerged as a transformative tool for precision agriculture. This rich spectral information provides a detailed “fingerprint” of crop health, enabling the early identification of nutrient deficiencies, water stress, pests, diseases, and quality issues (Asaari et al., 2018; Lu et al., 2020; Liu et al., 2021; Nguyen et al., 2021). While hyperspectral data have traditionally been acquired from satellites, aircraft, or ground-based platforms, these platforms often face limitations in spatial and temporal resolution, cost, and susceptibility to weather conditions.

Drone-mounted hyperspectral sensors (DHSs) offer a compelling alternative, revolutionizing remote sensing by providing a cost-effective and flexible platform for acquiring high-resolution spectral data. Their ability to capture data at low altitudes mitigates cloud and atmospheric interference, enhancing their utility in agricultural monitoring and management (Tian et al., 2021). Furthermore, DHSs effectively address the trade-off between spatial and spectral resolution often encountered with other platforms.

However, pre-processing drone-acquired hyperspectral data presents unique challenges. Sensor noise, atmospheric effects, and variations in solar radiation necessitate careful correction (Liu and Xiao, 2014). Geometric correction, crucial for aligning hyperspectral data with other geospatial datasets, ensures accurate crop health mapping and yield estimation (Nex and Remondino, 2014; Mulla, 2013). Variations in sensor orientation during flight can lead to misalignment of spectral and spatial information, hindering these tasks (Zeng et al., 2017; Bhojaraja et al., 2015). While inertial measurement unit (IMU) data can aid in geometric correction, complex flight maneuvers or challenging environmental conditions can limit its effectiveness (Turner et al., 2017; Ekaso et al., 2020).

In addition to geometric correction, other pre-processing challenges include data dimensionality, sensor calibration, and reference data availability. The vast number of spectral bands in hyperspectral data can create computational difficulties and visualization challenges (Wei et al., 2018). Inconsistent sensor calibration can lead to errors in data analysis (Poddar et al., 2017). Moreover, limited access to reference data, such as ground truth measurements or atmospheric profiles, and the computational demands of processing large datasets can further hinder the widespread adoption of hyperspectral imaging in agriculture (Banerjee et al., 2020; Suomalainen et al., 2021).

Radiometric correction, which addresses variations in pixel intensities caused by sensor noise, atmospheric effects, and illumination conditions, is equally important (Yu et al., 2022). These variations can obscure true spectral signatures, leading to misclassification and inaccurate quantification in applications like vegetation health mapping and water quality assessment (Thomas et al., 2018).

In South Korea, the adoption of drone-acquired hyperspectral data faces additional limitations due to the complexity of data processing and the high cost of sensors and analysis tools, potentially hindering accessibility for many farms (Sishodia et al., 2020; Cucho Padin et al., 2020). These challenges are amplified in the context of DHSs, as accurate ground control points or high-resolution digital elevation models (DEMs) required for geometric correction may be scarce in remote areas (Aasen et al., 2018), and the dynamic nature of drones can introduce complex distortions that are difficult to model. Radiometric correction is further complicated by varying atmospheric effects and changes in sensor response over time (Gao et al., 2009; Li et al., 2015).

Despite these challenges, significant advancements have been made in geometric and radiometric correction techniques using approaches like structure from motion (SfM), multi-view stereo (MVS), radiative transfer models, and empirical line calibration (Turner et al., 2014; Gao et al., 2009). However, further research is needed to develop more robust, efficient, and user-friendly tools to effectively address the complexities of drone-based hyperspectral imaging. By investigating methods to mitigate these distortions, this study aims to ensure accurate spatial representation and reliable spectral information, ultimately unlocking the full potential of hyperspectral data for optimizing crop health monitoring and management practices.

2. Materials and Methods

2.1. Study Area

This study focuses on the preprocessing of drone-based hyperspectral images, emphasizing geometric and radiometric correction. A managed field (35°48′29″N, 127°02′50″E) within the National Institute of Agricultural Sciences in Iseo-myeon, Wanju-gun, Jeollabuk-do, South Korea (Fig. 1) was selected as the test site. This 2,700 m² field was divided into parcels planted with Kimchi cabbage and soybeans (the latter serving as separation zones). Kimchi cabbage was the primary crop of interest for evaluating the impact of preprocessing techniques on hyperspectral data analysis.


Fig. 1. Study area. (a) Location of the test field (35°48′29″N, 127°02′50″E) within the National Institute of Agricultural Sciences. (b) Cultivation status of the Kimchi cabbage.

2.2. Drone Selection

To mitigate image distortion and address limitations of traditional platforms, a Matrice 600 Pro (DJI, Shenzhen, China) drone equipped with a hyperspectral sensor was chosen (Fig. 2). This drone’s substantial payload capacity and precise data collection capabilities make it well-suited for acquiring high-resolution, high-quality remote sensing data over agricultural fields.


Fig. 2. The appearance of the drone remote sensing system used in this study. (a) Matrice 600 Pro drone with hyperspectral system. (b) Detailed equipment of the hyperspectral system.

2.3. Hyperspectral Sensor

Data acquisition was conducted using a microHSI™ 410 SHARK hyperspectral sensor (Corning Inc., New York, NY, USA) mounted on the Matrice 600 Pro drone (Table 1). Operating in the VNIR range (400–1000 nm), this sensor captures 150 bands at 4 nm intervals, ideal for detailed crop health analysis (Fig. 3). Its push-broom type line scan design and 25-minute maximum flight time necessitated careful flight planning, resulting in 12 established flight lines for comprehensive coverage (Figs. 3a, b).


Fig. 3. Hyperspectral data acquisition. (a) Image capture lines. (b) Flight path setting.

Table 1. Specifications of the microHSI™ 410 SHARK hyper-spectral sensor


2.4. Pre-Processing Hyperspectral Imagery for Orthoimage Generation

Raw drone-acquired hyperspectral data underwent several preprocessing steps to transform it into accurate orthoimages suitable for analysis (Fig. 4). Initially, radiometric correction addressed sensor noise and variations in solar irradiance by converting raw digital numbers (DN) to radiance values. This crucial step for hyperspectral sensors ensures consistent spectral information by removing atmospheric effects and converting radiance values to percentage surface reflectance. While both absolute and relative correction methods exist, this study employed absolute correction due to the consistent observation period, utilizing ground truth values collected concurrently with sensor measurements. Atmospheric correction, typically accounting for atmospheric absorption and scattering effects, was not performed due to stable weather conditions during data acquisition.


Fig. 4. Flowchart of hyperspectral image pre-processing for orthoimage generation. This flowchart summarizes the key steps involved in pre-processing drone-acquired hyperspectral imagery to generate orthoimages suitable for further analysis. (a) Data collection. (b) Geometric calibration. (c) Mosaicking. (d) Radiometric calibration.

Geometric correction, essential for orthoimage generation, rectified distortions in the spatial arrangement of pixels caused by variations in sensor orientation during flight. Integrating IMU data with ground control points (GCPs) facilitated image georeferencing, enabling geometric correction algorithms to transform the image into a geographically accurate representation. Geometric inaccuracies significantly impact spatial information in hyperspectral data, necessitating the minimization of distortion and accurate alignment of each pixel to its true plane coordinates (x, y).

Drone-based hyperspectral imaging, which utilizes line scanning, can introduce grid deformation due to location inaccuracies. The high volume of spectral information in hyperspectral images, combined with potential wind and light variations during capture, can also introduce local distortions. To address these challenges, this study employed the rubber-sheet transformation method, a non-linear approach that allows for pixel-wise coordinate transformation based on varying degrees of movement (Fig. 5). This method is particularly effective for correcting local distortions, minimizing deformation in areas distant from the reference points. High-resolution RGB images captured during ortho-correction were used to establish GCPs as reference points for the subsequent geometric correction of each hyperspectral image. The Transverse Mercator (TM) projection and the WGS84 UTM Zone 52N coordinate system were adopted, ensuring data compatibility and usability.
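As a rough illustration of the rubber-sheet idea, the sketch below blends GCP displacements with inverse-distance weighting so that nearby GCPs dominate and corrections stay local; the processing software used in the study may implement the transformation differently, and all coordinates here are hypothetical.

```python
import numpy as np

def rubber_sheet_shift(points, gcp_img, gcp_map, power=2.0):
    """Map image points to ground coordinates with a simple rubber-sheet
    warp: each point's shift is an inverse-distance-weighted blend of the
    GCP displacements, so the correction varies pixel by pixel.

    points:  (m, 2) image coordinates to transform.
    gcp_img: (n, 2) GCP positions in the raw image.
    gcp_map: (n, 2) corresponding surveyed positions.
    """
    points = np.asarray(points, dtype=float)
    gcp_img = np.asarray(gcp_img, dtype=float)
    shifts = np.asarray(gcp_map, dtype=float) - gcp_img  # displacement at each GCP
    out = np.empty_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(gcp_img - p, axis=1)
        if d.min() < 1e-9:                     # point coincides with a GCP
            out[i] = p + shifts[d.argmin()]
            continue
        w = 1.0 / d ** power                   # inverse-distance weights
        out[i] = p + (w[:, None] * shifts).sum(axis=0) / w.sum()
    return out
```

Because the weights decay with distance, a GCP displacement mostly affects its own neighborhood, matching the method's strength at correcting local distortions.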


Fig. 5. Geometric correction with the rubber-sheet method: a conceptual diagram.

Finally, image mosaicking was employed to combine the multiple hyperspectral image tiles captured during the flight into a seamless orthoimage. This step provided complete coverage of the study area and facilitated large-scale analysis. Due to the limited field of view (FOV) of drone imagery, capturing and stitching multiple images was necessary. Image registration in this study prioritized four criteria: targeting Kimchi cabbages, selecting an image with a large portion of the target area, minimizing overlap in crop areas during matching, and maximizing overlap in non-observable objects.
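A minimal sketch of the tile-pasting step, assuming the corrected tiles already share a common pixel grid; real mosaicking software applies the overlap-selection criteria above, whereas this simplification just lets later tiles overwrite earlier ones:

```python
import numpy as np

def mosaic_tiles(tiles, origins, bands):
    """Paste georeferenced tiles into one canvas; in overlap zones the
    later tile wins (a simplification of the study's matching criteria).

    tiles:   list of (rows, cols, bands) reflectance arrays.
    origins: list of (row_off, col_off) offsets of each tile in the mosaic.
    """
    rows = max(r + t.shape[0] for t, (r, c) in zip(tiles, origins))
    cols = max(c + t.shape[1] for t, (r, c) in zip(tiles, origins))
    canvas = np.full((rows, cols, bands), np.nan)  # NaN marks uncovered pixels
    for t, (r, c) in zip(tiles, origins):
        canvas[r:r + t.shape[0], c:c + t.shape[1], :] = t
    return canvas
```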

2.5. Accuracy Evaluation

Geometric correction accuracy was assessed by comparing the corrected image to field survey data at GCPs and calculating the root mean square error (RMSE) for east-west and north-south directions. RMSE quantifies differences between predicted and reference coordinates, calculated as:

\(\begin{align}RMSE=\sqrt{\frac{1}{n} \sum_{i=1}^{n}\left[\left(x_{i, \text {predicted}}-x_{i, \text {reference}}\right)^{2}+\left(y_{i, \text {predicted}}-y_{i, \text {reference}}\right)^{2}\right]}\end{align}\)       (1)

where n is the number of GCPs, and (x, y) are the coordinates of each GCP.
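Eq. (1) translates directly into a few lines of NumPy; the GCP residuals in the example are hypothetical:

```python
import numpy as np

def gcp_rmse(predicted, reference):
    """Overall positional RMSE between corrected-image and field-survey
    GCP coordinates, following Eq. (1).

    predicted, reference: (n, 2) arrays of (x, y) coordinates in meters.
    """
    diff = np.asarray(predicted, dtype=float) - np.asarray(reference, dtype=float)
    # Sum squared errors over both axes, average over the n GCPs, take the root.
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))

# Hypothetical residuals (meters) at three GCPs:
pred = [(0.02, -0.01), (0.00, 0.03), (-0.01, 0.02)]
ref = [(0.0, 0.0)] * 3
print(round(gcp_rmse(pred, ref), 4))  # → 0.0252
```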

3. Results and Discussion

3.1. Image Pre-Processing: Segmentation

The drone-acquired hyperspectral image, captured using the line scanning method (Figs. 3 and 6), was obtained with the Corning® microHSI™ 410 SHARK sensor, which has a fixed storage capacity of 682 rows and 40,000 columns. Due to this limitation, the sensor automatically generates a new file whenever this capacity is reached, ensuring continuous data capture. The resulting hyperspectral images are stored in band interleaved by pixel (BIP) format with the HSI extension (Fig. 6). Simultaneously, the drone’s embedded global positioning system (GPS) receiver records the latitude and longitude of each pixel, saving this geospatial information in a separate IGM file.
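Reading such a raw BIP cube is straightforward with NumPy: in BIP order all bands of one pixel are stored consecutively, so the band axis varies fastest. The uint16 data type below is an assumption and should be checked against the sensor's header files.

```python
import numpy as np

# Per-file capacity stated in the text; dtype is an assumption,
# not a documented property of the microHSI 410 SHARK files.
ROWS, COLS, BANDS = 682, 40000, 150

def read_bip(path, rows=ROWS, cols=COLS, bands=BANDS, dtype=np.uint16):
    """Read a raw BIP cube into a (row, col, band) array."""
    data = np.fromfile(path, dtype=dtype)
    return data.reshape(rows, cols, bands)  # band axis varies fastest
```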


Fig. 6. Raw hyperspectral data structure and format from the drone-mounted microHSI™ 410 SHARK sensor. (a) Rotated section. (b) HSI (band interleaved by pixel) file. (c) IGM (latitude and longitude) file.

As an initial pre-processing step, the acquired hyperspectral image was segmented into individual lines to create images containing only spatial information (Fig. 7). Two flight lines were necessary to cover the entire test field due to the sensor’s storage limitations (Figs. 7a, b). Image segmentation, guided by the GPS receiver’s position file, isolated the desired flight lines. Areas outside the designated flight path, such as the rotated section at the bottom of Fig. 7(b), were excluded during segmentation to streamline further analysis. This process resulted in a segmented image composed of nine distinct scenes, each representing a specific portion of the test field captured by the drone sensor. These nine scenes were then used for subsequent analysis.


Fig. 7. Segmented hyperspectral image lines from drone-acquired data showing nine sections of the test field. (a) Line 1 (Sections 1–6). (b) Line 2 (Sections 7–9).

3.2. Radiometric Correction Using the Empirical Line Method

Radiometric correction, a crucial pre-processing step, was employed to eliminate unwanted variations in drone-acquired hyperspectral data caused by sensor noise, fluctuations in solar irradiance, and atmospheric effects. The empirical line method (ELM), chosen for its simplicity and effectiveness, was utilized for this purpose. ELM relies on strategically placed spectral reference panels (SRPs) with known, uniform spectral reflectance properties across the measured wavelength range. Typically made of Spectralon®, these SRPs offer high and uniform reflectance in the VNIR region (Fig. 8). In this study, four calibration targets with varying reflectance values (5%, 22%, 44%, and 55%) served as SRPs, evenly distributed within the drone sensor’s FOV throughout the image, avoiding shadowed or obstructed areas.


Fig. 8. Radiometric calibration of drone-acquired hyperspectral imagery using spectral reflectance panels: relationship between reflectance and digital number (DN) for Band 1.

For each SRP, a region of interest (ROI) was defined, encompassing a representative area excluding background elements. The average spectral reflectance for each band within these ROIs was calculated across all SRPs, representing the reference material’s average spectral response. This established a linear relationship between the sensor’s DN values and the average spectral reflectance values, visualized as a scatter plot (DN on the x-axis, reflectance on the y-axis). Ideally, a strong linear correlation between these values is observed for accurate correction.

Based on this relationship, an empirical equation of the form Reflectance(λ) = a(λ) × DN(λ) + b(λ) was derived to convert the DN value of each pixel to surface reflectance. Here, λ represents the wavelength, and a(λ) and b(λ) are the slope and intercept coefficients, respectively, obtained from linear regression analysis. Applying this equation to all pixels effectively removes sensor-specific biases and solar irradiance variations, yielding a radiometrically corrected image that more accurately represents true surface reflectance.
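The per-band regression can be sketched as follows. Only the four nominal panel reflectances (5%, 22%, 44%, 55%) come from the study; the DN values and two-band layout are hypothetical.

```python
import numpy as np

def fit_elm(dn_panels, refl_panels):
    """Least-squares fit of per-band ELM coefficients a(λ), b(λ).

    dn_panels:   (n_panels, n_bands) mean DN inside each panel ROI.
    refl_panels: (n_panels,) known panel reflectances.
    Returns a, b, each of shape (n_bands,).
    """
    dn = np.asarray(dn_panels, dtype=float)
    refl = np.asarray(refl_panels, dtype=float)
    n_bands = dn.shape[1]
    a, b = np.empty(n_bands), np.empty(n_bands)
    for band in range(n_bands):
        # reflectance = a * DN + b, fitted over the reference panels
        a[band], b[band] = np.polyfit(dn[:, band], refl, 1)
    return a, b

def apply_elm(cube, a, b):
    """Convert a (..., n_bands) DN array to reflectance (broadcast over bands)."""
    return cube * a + b

# Hypothetical mean panel DNs for two bands:
dn = np.array([[100., 120.], [440., 470.], [880., 930.], [1100., 1160.]])
refl = np.array([0.05, 0.22, 0.44, 0.55])
a, b = fit_elm(dn, refl)
```

In this study the fit would be repeated for all 150 bands, giving the coefficient table shown in Table 2.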

Table 2 presents a sample of the reflectance conversion equation coefficients calculated for the 150 spectral bands acquired by the sensor using ELM. Fig. 9 illustrates the impact of radiometric correction on reflectance values across a subset of wavelengths. Uncorrected data (Fig. 9a) may not accurately represent the true spectral properties of the crops due to sensor response variations and atmospheric effects. After correction (Fig. 9b), sensor noise and atmospheric effects are removed, resulting in smoother curves with more consistent reflectance values across wavelengths. This correction is crucial in hyperspectral image pre-processing, ensuring that the captured reflectance data accurately reflects the inherent spectral characteristics of the crops.

Table 2. Radiometric calibration coefficients and regression equations for drone-acquired hyperspectral imagery


Fig. 9. Effect of radiometric correction on reflectance values across a subset of wavelengths from drone-acquired hyperspectral imagery: (a) before correction and (b) after correction.

3.3. Geometric Correction

Geometric correction is a critical pre-processing step to rectify distortions in hyperspectral imagery introduced during data acquisition or processing. These distortions, stemming from variations in sensor orientation, platform motion, terrain variations, or image processing steps, can alter the appearance of target objects, making them appear stretched, compressed, or uneven. Accurate location information is essential for integrating hyperspectral imagery into geographic information system (GIS) for further analysis.

3.3.1. Rubber Sheet Transformation and GCPs

To achieve accurate geometric correction, this study employed a rubber sheet transformation method on each of the nine segmented hyperspectral images representing smaller regions of the entire test field (Figs. 5 and 7). This segmentation likely enhanced the precision of the correction process. The rubber sheet transformation requires a set of GCPs with well-defined locations. In this study, GCPs were established based on reference imagery, with a minimum of 21 and a maximum of 72 points used per scene, totaling 287 GCPs across all nine segmented images (Table 3). The distribution of these GCPs within each scene is visualized in Fig. 10, where yellow markers overlaid on the segmented hyperspectral image represent the reference locations with known coordinates used to rectify distortions.


Fig. 10. Segmented hyperspectral imagery with ground control points (GCPs) for geometric correction, divided into nine sub-sections for individual processing. (a) Line 1 (Sections 1–6). (b) Line 2 (Sections 7–9).

Table 3. Number of GCPs used for geometric correction of segmented hyperspectral imagery scenes


3.3.2. Impact on Spectral Reflectance

Fig. 11 presents a comparison of reflectance values for a selected ROI within the test field before and after geometric correction. The graph displays reflectance values at four different percentages (5%, 22%, 44%, and 55%) across a range of wavelengths. The dotted lines represent reflectance values before correction, potentially exhibiting variations due to geometric distortions in the image, which could impact the accuracy of spectral reflectance measurements.


Fig. 11. Comparison of reflectance values before and after geometric correction using spectral reference panels with varying reflectance: (a) 5%, (b) 22%, (c) 44%, and (d) 55%.

In contrast, the solid lines represent reflectance values after geometric correction. The correction process rectifies distortions, resulting in smoother and more consistent curves across wavelengths for all four reflectance percentages, enhancing the reliability of spectral reflectance data for further analysis. Geometric correction is thus crucial for ensuring the accuracy of spectral reflectance measurements in hyperspectral imagery, enabling researchers to obtain reliable spectral information from the ROI and perform accurate measurements of distances, areas, and other spatial features within the crop field.

3.4. Generation of Orthoimage Mosaic

Fig. 12 presents the mosaicked orthoimage generated from the geometrically corrected hyperspectral imagery. This orthoimage, a georeferenced representation of the Earth’s surface in a true planimetric (top-down) perspective, eliminates distortions caused by sensor tilt or terrain variations, providing a comprehensive and accurate representation of the entire test field. The mosaic incorporates multiple orthoimages generated from the individual segmented hyperspectral scenes (Fig. 10) following geometric correction. This seamless orthoimage, potentially including a reference grid with latitude and longitude coordinates and a scale bar for spatial referencing, is suitable for further analysis using GIS or other spatial analysis tools. The generation of this precise and geometrically accurate representation of the study area is essential for subsequent analyses and applications.


Fig. 12. Mosaicked orthoimage of the study area showing Kimchi cabbage and soybean fields.

3.5. Evaluation of Geometric Correction Accuracy

To assess the accuracy of the geometric correction, 100 reference points were strategically and evenly distributed across the test field. These points, marked by manually setting the center point of clearly visible Kimchi cabbages in the original images (100 yellow triangles in Fig. 13), were then compared to their corresponding locations in the geometrically corrected orthoimage. Fig. 13 illustrates this evaluation process by comparing the easting (x) and northing (y) coordinates of these reference points, measured with a high-accuracy GPS unit, with their corresponding coordinates extracted from the orthoimage. Ideally, the differences between these coordinate pairs should be minimal, indicating high geometric correction accuracy.


Fig. 13. Evaluation of geometric correction accuracy: comparison of RGB and hyperspectral images with tie points highlighting alignment.

Table 4 details the RMSE values in both x and y directions for each segmented scene. RMSE serves as a statistical measure to quantify the overall accuracy of the geometric correction process. Ideally, these values should be low, typically below 0.5 meters for good accuracy. The analysis revealed errors in the east-west direction ranging from 0.0 m to 0.081 m, and errors in the north-south direction ranging from 0.0 m to 0.076 m, all well below the 0.5-meter threshold. Moreover, the overall position RMSE across all 100 points was 0.031 meters, confirming a high degree of geometric correction accuracy, likely due to the sufficient number of GCPs used per scene (Table 3). This value also satisfies the 1/500 plane description tolerance of 0.28 meters specified in the “Aerial Photogrammetry Work Regulations,” indicating successful geometric correction according to industry standards.

Table 4. Assessment of geometric correction accuracy using ground control points (GCPs): reference and hyperspectral image coordinates with positional errors and root mean square error (RMSE)


4. Conclusions

This study investigated methods to improve the accuracy of drone-acquired hyperspectral data for field smart agriculture applications in South Korea, focusing on addressing the challenges of radiometric and geometric distortions in pre-processing. While hyperspectral sensors hold immense potential for crop health monitoring due to their ability to capture detailed spectral information, challenges remain in noise reduction, geometric correction, and the complexity and cost of data processing techniques. Our findings demonstrate the successful implementation of radiometric correction using the ELM, effectively removing sensor noise, solar irradiance variations, and atmospheric effects, resulting in a more accurate representation of surface reflectance. Geometric correction, utilizing a rubber sheet transformation with GCPs, successfully rectified distortions caused by sensor orientation and flight path variations, ensuring accurate spatial representation within the image. The effectiveness of geometric correction was validated by achieving an overall positional RMSE of 0.031 meters, well below the industry standard of 0.28 meters.

These results underscore the importance of accurate preprocessing for unlocking the full potential of drone-acquired hyperspectral data in smart agriculture. By mitigating radiometric and geometric distortions, reliable spectral information can be obtained for precise crop state assessment and other applications requiring accurate geospatial data. This study contributes to the field by showcasing successful pre-processing techniques applicable to hyperspectral data from drone platforms.

However, this study did not address the remaining challenges of data dimensionality, sensor calibration, and reference data availability. Future research should focus on developing efficient methods to tackle these limitations, further enhancing the usability and accessibility of hyperspectral data for a wider range of smart agriculture applications. Additionally, exploring alternative correction methods and evaluating their performance in various environmental conditions would contribute to the robustness and applicability of hyperspectral data processing in diverse agricultural settings.

Acknowledgments

We are very grateful to the experts for their appropriate and constructive suggestions to improve this paper.

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

References

  1. Aasen, H., Honkavaara, E., Lucieer, A., and Zarco-Tejada, P. J., 2018. Quantitative remote sensing at ultra-high resolution with UAV spectroscopy: A review of sensor technology, measurement procedures, and data correction workflows. Remote Sensing, 10(7), 1091. https://doi.org/10.3390/rs10071091 
  2. Asaari, M. S. M., Mishra, P., Mertens, S., Dhondt, S., Inze, D., Wuyts, N., and Scheunders, P., 2018. Close-range hyperspectral image analysis for the early detection of stress responses in individual plants in a high-throughput phenotyping platform. ISPRS Journal of Photogrammetry and Remote Sensing, 138, 121-138. https://doi.org/10.1016/j.isprsjprs.2018.02.003 
  3. Banerjee, B. P., Raval, S., and Cullen, P. J., 2020. UAV-hyperspectral imaging of spectrally complex environments. International Journal of Remote Sensing, 41(11), 4136-4159. https://doi.org/10.1080/01431161.2020.1714771 
  4. Bhojaraja, B. E., Hegde, G., Pruthviraj, U., Shetty, A. B., and Nagaraj, M.K., 2015. Mapping agewise discrimination of arecanut crop water requirement using hyperspectral remote sensing. Aquatic Procedia, 4, 1437-1444. https://doi.org/10.1016/j.aqpro.2015.02.186 
  5. Cucho Padin, G., Loayza, H., Palacios, S., Balcazar, M., Carbajal, M., and Quiroz, R., 2020. Development of low-cost remote sensing tools and methods for supporting smallholder agriculture. Applied Geomatics, 12, 247-263. https://doi.org/10.1007/s12518-019-00292-5 
  6. Ekaso, D., Nex, F., and Kerle, N., 2020. Accuracy assessment of real-time kinematics(RTK) measurements on unmanned aerial vehicles (UAV) for direct geo-referencing. Geospatial Information Science, 23(2), 165-181. https://doi.org/10.1080/10095020.2019.1710437 
  7. Gao, B. C., Montes, M. J., Davis, C. O., and Goetz, A. F., 2009. Atmospheric correction algorithms for hyperspectral remote sensing data of land and ocean. Remote Sensing of Environment, 113, S17-S24. https://doi.org/10.1016/j.rse.2007.12.015 
  8. Li, H. W., Zhang, H., Chen, Z. C., and Zhang, B., 2015. A method suitable for vicarious calibration of a UAV hyperspectral remote sensor. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 8(6), 3209-3223. https://doi.org/10.1109/JSTARS.2015.2416213 
  9. Liu, N., Townsend, P. A., Naber, M. R., Bethke, P. C., Hills, W. B., and Wang, Y., 2021. Hyperspectral imagery to monitor crop nutrient status within and across growing seasons. Remote Sensing of Environment, 255, 112303. https://doi.org/10.1016/j.rse.2021.112303 
  10. Liu, Q., and Xiao, S., 2014. Effects of spectral resolution and signal-to-noise ratio of hyperspectral sensors on retrieving atmospheric parameters. Optics Letters, 39(1), 60-63. https://doi.org/10.1364/OL.39.000060 
  11. Lu, B., Dao, P. D., Liu, J., He, Y., and Shang, J., 2020. Recent advances of hyperspectral imaging technology and applications in agriculture. Remote Sensing, 12(16), 2659. https://doi.org/10.3390/rs12162659 
  12. Mulla, D. J., 2013. Twenty five years of remote sensing in precision agriculture: Key advances and remaining knowledge gaps. Biosystems Engineering, 114(4), 358-371. https://doi.org/10.1016/j.biosystemseng.2012.08.009 
  13. Nex, F., and Remondino, F., 2014. UAV for 3D mapping applications: A review. Applied Geomatics, 6, 1-15. https://doi.org/10.1007/s12518-013-0120-x 
  14. Nguyen, C., Sagan, V., Maimaitiyiming, M., Maimaitijiang, M., Bhadra, S., and Kwasniewski, M. T., 2021. Early detection of plant viral disease using hyperspectral imaging and deep learning. Sensors, 21(3), 742. https://doi.org/10.3390/s21030742 
  15. Poddar, S., Kumar, V., and Kumar, A., 2017. A comprehensive overview of inertial sensor calibration techniques. Journal of Dynamic Systems, Measurement, and Control, 139(1), 011006. https://doi.org/10.1115/1.4034419 
  16. Sishodia, R. P., Ray, R. L., and Singh, S. K., 2020. Applications of remote sensing in precision agriculture: A review. Remote Sensing, 12(19), 3136. https://doi.org/10.3390/rs12193136 
  17. Suomalainen, J., Oliveira, R. A., Hakala, T., Koivumaki, N., Markelin, L., Nasi, R., and Honkavaara, E., 2021. Direct reflectance transformation methodology for drone-based hyperspectral imaging. Remote Sensing of Environment, 266, 112691. https://doi.org/10.1016/j.rse.2021.112691 
  18. Thomas, S., Kuska, M. T., Bohnenkamp, D., Brugger, A., Alisaac, E., Wahabzada, M., Behmann, J., and Mahlein, A.-K., 2018. Benefits of hyperspectral imaging for plant disease detection and plant protection: A technical perspective. Journal of Plant Diseases and Protection, 125, 5-20. https://doi.org/10.1007/s41348-017-0124-6 
  19. Tian, Z., Wang, J. W., Li, J., and Han, B., 2021. Designing future crops: Challenges and strategies for sustainable agriculture. The Plant Journal, 105(5), 1165-1178. https://doi.org/10.1111/tpj.15107
  20. Turner, D., Lucieer, A., McCabe, M., Parkes, S., and Clarke, I., 2017. Pushbroom hyperspectral imaging from an unmanned aircraft system (UAS) - Geometric processing workflow and accuracy assessment. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 42, 379-384. https://doi.org/10.5194/isprs-archives-XLII-2-W6-379-2017 
  21. Turner, D., Lucieer, A., and Wallace, L., 2014. Direct georeferencing of ultrahigh-resolution UAV imagery. IEEE Transactions on Geoscience and Remote Sensing, 52(5), 2738-2745. https://doi.org/10.1109/TGRS.2013.2265295 
  22. Wei, L., Feng, F., Li, H., and Du, Q., 2018. Discriminant analysis-based dimension reduction for hyperspectral image classification: A survey of the most recent advances and an experimental comparison of different techniques. IEEE Geoscience and Remote Sensing Magazine, 6(1), 15-34. https://doi.org/10.1109/MGRS.2018.2793873 
  23. Yu, H., Kong, B., Hou, Y., Xu, X., Chen, T., and Liu, X., 2022. A critical review on applications of hyperspectral remote sensing in crop monitoring. Experimental Agriculture, 58, e26. https://doi.org/10.1017/S0014479722000278 
  24. Zeng, C., King, D. J., Richardson, M., and Shan, B., 2017. Fusion of multispectral imagery and spectrometer data in UAV remote sensing. Remote Sensing, 9(7), 696. https://doi.org/10.3390/rs9070696