Korean J. Remote Sens. 2025; 41(1): 87-100
Published online: February 28, 2025
https://doi.org/10.7780/kjrs.2025.41.1.8
© Korean Society of Remote Sensing
Correspondence to : Chuluong Choi
E-mail: cuchoi@pknu.ac.kr
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Unmanned Aerial Vehicle (UAV) orthomosaics are widely used in various fields, including construction, environmental monitoring, and real estate, and their quality is influenced by the accuracy of interior orientation. In this study, the modes were divided into “All mode” and “Part mode” based on the interior orientation parameter settings, and lens distortion was compared between the two modes. Additionally, the effects of the number of Ground Control Points (GCPs) and their positional accuracy on the positional accuracy of UAV-based orthomosaics were evaluated for each mode. The Part mode, which applies only a subset of interior orientation parameters, exhibited greater Lat Lon and XY errors compared to the All mode, which applies all parameters. Using custom Python scripts, lens distortion was compared between the two modes, and the image coordinate deviations were found to be 0.160±1.347 pixels in u (image x) and 0.076±0.991 pixels in v (image y), both of which were below 2 pixels. As the number of GCPs decreased, both modes exhibited an increasing trend in GCP positional error. In terms of GCP and CP pixel errors, the All mode demonstrated a lower and more consistent error compared to the Part mode, as it was less sensitive to changes in the number of GCPs. The addition of a random offset to the GCP coordinates to vary the GCP positional accuracy showed that as the magnitude of the added offset increased, the GCP positional error exhibited a linear increase. These findings suggest that the setting of interior orientation parameters and GCP management are critical factors in determining the accuracy of UAV orthomosaics. This study is expected to provide valuable foundational data for analyzing error factors in the UAV-based orthomosaic creation process, which can be utilized in both practical and research settings.
Keywords Unmanned aerial vehicle, Orthomosaic, Ground control point, Lens distortion, Radial distortion, Tangential distortion, Positional error
Orthomosaics are an integral part of geospatial information acquisition and analysis using Unmanned Aerial Vehicles (UAVs). They are widely used in a variety of fields, including construction, environmental management, and real estate (National Geographic Information Institute, 2025), and have established themselves as indispensable tools, particularly in applications requiring high-precision spatial data.
However, during the production of orthomosaics, various factors such as interior orientation parameters, the number of Ground Control Points (GCPs), and the positional accuracy of the GCPs can introduce X, Y, and pixel errors that impact the positional accuracy of the orthomosaics. These errors not only degrade the quality of the seamlines in the orthomosaics but also negatively impact the reliability of the analysis results based on them (Harwin and Lucieer, 2012).
Previous studies have examined the influence of GCP placement patterns and flight altitude on orthomosaic accuracy (Kim et al., 2018; Kim and Hong, 2020). Additionally, research has investigated the effects of image overlap and the number of GCPs on orthomosaic quality (Yoo et al., 2016). Furthermore, previous findings indicate that the positioning of GCPs and the distance between GCPs and Check Points (CPs) significantly affect accuracy (Yun and Yoon, 2018; Lee, 2021).
One of the key factors affecting the accuracy of orthomosaics is interior orientation errors, which distort the precise correspondence between images and the actual terrain, leading to a decrease in the positional accuracy of the orthomosaic. This issue becomes more pronounced in areas with significant terrain distortion. Interior orientation errors cause misalignment between aerial images during the orthomosaic generation process, resulting in visible seamlines that degrade image quality. These visual discrepancies along the seamlines can confuse users, making the interpretation and utilization of the orthomosaic more challenging.
However, studies that systematically and quantitatively analyze the impact of interior orientation parameters on orthomosaic positional accuracy are still limited. Furthermore, research proposing efficient and optimized methodologies that can be applied in practical settings is scarce.
To address this limitation, this study analyzes error factors from three main aspects. First, to evaluate the impact of interior orientation parameters, the lens distortion in the All mode, which applies all interior orientation parameters, and the Part mode, which applies only a subset, was compared. For this analysis, a custom Python script was used. Second, to assess the effect of the number of GCPs, the number of GCPs was gradually reduced from 22 to 4, and the reduced GCPs were converted into CPs for comparison of positional error and pixel error. Third, to analyze the effect of GCP positional accuracy, random offsets ranging from 0.03 m to 1.00 m were generated in 7 steps and added to each GCP coordinate. The positional errors of the GCPs were then compared and analyzed for each mode.
This study systematically analyzes the error factors that arise during the orthomosaic generation process using UAVs, taking into account the settings of interior orientation parameters. In doing so, it aims to propose a method that improves both the precision and efficiency of orthomosaics. The findings of this study are expected to enhance the applicability of UAV imagery data and provide a foundation for improving the reliability and efficiency of orthomosaics in both practical and research settings.
This paper is structured as follows. Chapter 2 discusses the theoretical background and methodology of the study. Chapter 3 analyzes the errors based on the settings of interior orientation parameters, the number of GCPs, and GCP positional accuracy, and presents the key findings derived from this analysis. Finally, Chapter 4 discusses the conclusions and implications of the study.
The study was conducted on 11 November 2024 at the main sports field of Pukyong National University’s Yongdang Campus, located at 365, Sinseon-ro, Nam-gu, Busan, Republic of Korea. The workflow of this study is presented in Fig. 1. Two flight paths were planned using DJI Flighthub2, one in the north-south (N–S) direction and the other in the east-west (E–W) direction. The flight altitude was maintained at 70±0.04 m, with both overlap and sidelap set to 90%. The interval between shots was between 5.1 and 5.3 seconds. The total flight distances were 843.0 m for the N-S direction and 895.0 m for the E-W direction.
The flights lasted 557 and 575 seconds, respectively, capturing 105 photos in the north-south direction and 111 photos in the east-west direction. The flight speed was maintained between 1.47 and 1.61 m/s, significantly lower than the maximum flight speed of the DJI Mavic 3 Enterprise (M3E), which is 15 m/s (Lee et al., 2018; DJI Mavic 3 Enterprise, 2024), to ensure precision photography. The study area covered 7,578.8 m², with an input Ground Sampling Distance (GSD) of 2.00 cm/pixel and an output GSD of 2.09 cm/pixel; the input value is consistent with the sensor’s nominal GSD of 2.86/100 × flight height (m) cm from Table 2, which gives 2.86 × 70 / 100 ≈ 2.00 cm/pixel at 70 m. Detailed flight information is provided in Table 1.
Table 1 Comparison of UAV flight data and camera settings by flight path
Parameter | N-S Direction | E-W Direction |
---|---|---|
Start time | 15:33:31 | 15:45:17 |
End time | 15:42:48 | 15:54:52 |
Flight time | 00:09:17 | 00:09:35 |
Photo count | 105 | 111 |
Flight height (m) | 70 | 70 |
Number of strips | 6 | 12 |
F-stop | 3.2–4.5 | 2.8–4.0 |
Overlap (%) | 90 | 90 |
Sidelap (%) | 90 | 90 |
ISO | 100–110 | 100–110 |
Shutter speed | 1/400 | 1/400 |
Image quality (Mean±SD) | 0.855±0.023 | 0.863±0.019 |
ISO: International Organization for Standardization, SD: Standard Deviation.
A total of 23 grid-shaped GCPs, each measuring 50 × 50 cm, were evenly distributed within the study area. However, due to damage to GCP No. 3 during the experiment, only 22 GCPs were ultimately used. Image acquisition was carried out using the DJI M3E, as shown in Fig. 2(a), by uploading the pre-designed flight path to its controller.
The DJI M3E is a high-performance compact UAV designed for professional applications such as surveying and mapping. Equipped with a 20 MP high-resolution camera and a Real-Time Kinematic (RTK) module, it is capable of collecting spatial data with centimeter-level precision. Notably, it offers a long flight time of up to 45 minutes and a maximum transmission range of 15 km, ensuring high operational efficiency over large areas. The main specifications of the M3E are shown in Table 2.
Table 2 Specification of Mavic 3 Enterprise (M3E)
General Specification | Value | Camera Specification | Value |
---|---|---|---|
Max speed (m/s) | 15 | Focal length (mm) | 12.29 (24 mm in 35 mm equivalent) |
Max flight time (min) | 45 | F-stop | f/2.8–f/11 |
GSD (cm) | 2.86/100 × FH (m) | ISO | 100–6400 |
Image size | 5280 × 3956 (20 MP) | Shutter speed | 8–1/2000 s |
Field of view (°) | 84 | Shutter type | Mechanical |
Sensor | 4/3 CMOS | Sensor size (mm) | 17.73 × 13.29 |
The weather conditions during image acquisition were clear and sunny, with an average temperature of 17.3°C, a maximum of 23.1°C, a minimum of 13.0°C, and an average cloud cover of 1.3, with no precipitation (Korea Meteorological Administration, 2024).
GCP coordinate surveying was performed using the iM-55 total station and GRX2 Global Navigation Satellite System (GNSS) receiver from Sokkia, as shown in Figs. 2(b, c). The main specifications of the GRX2 GNSS receiver are presented in Table 3. A prism was installed at the center of each GCP, and the distance and angles from the Base GCP were measured using a total station. To ensure the reliability of the measured data, total station surveys for GCPs No. 1 to No. 20 were conducted twice, and the average values of the measurements were utilized. In addition, the coordinates of 14 GCPs, including the Base GCP and GCP No. 0, were measured using a GNSS receiver. The Base GCP coordinates were designated as the reference for calculating GCP coordinates, and the remaining 21 GCP coordinates were determined using the distance and angle data obtained from the total station.
Table 3 Specification of GRX2
Specification | Value | ||
---|---|---|---|
Tracked Signals | GPS, GLONASS, SBAS | ||
Number of Channels | 226 | ||
Positioning Accuracy (L1+L2) | Type | Horizontal | Vertical |
Static | 3 mm + 0.5 ppm | 5 mm + 0.5 ppm | |
Fast Static | 3 mm + 0.5 ppm | 5 mm + 0.5 ppm | |
Kinematic | 10 mm + 1 ppm | 15 mm + 1 ppm | |
RTK | 10 mm + 1 ppm | 15 mm + 1 ppm | |
Positioning Accuracy | DGPS: < 0.5 m | | |
Update/Output rate | 1 Hz, 5 Hz, 10 Hz, 20 Hz (10 Hz RTK standard) | | |
Physical Specifications | Size: Dia. 184 mm × H 95 mm; Weight: 1.0 kg (2.20 lb) |
The GCP coordinates calculated using the data measured with the total station and GNSS receiver are presented in Table 4. The iM-55 is a high-precision surveying instrument with an angular measurement accuracy of 5 seconds and a distance measurement accuracy of (1.5 + 2 ppm × measurement distance) mm. As shown in Table 5, the survey results indicated that the relative positions of the GCPs had standard deviations of X: 2 mm, Y: 4 mm, and Z: 2 mm. The GRX2 is a 226-channel receiver capable of receiving GPS, GLONASS, and SBAS signals. The GCP coordinates surveyed using the GRX2 showed absolute position standard deviations of X: 33 mm, Y: 66 mm, and Z: 22 mm relative to the baseline.
Table 4 Measurements count by total station and GNSS receiver, GCP coordinates, and standard deviations (Unit: m)
No. | Total Station | GNSS | Easting (X) | Northing (Y) | Height (Z) | SD (X) | SD (Y) | SD (Z) |
---|---|---|---|---|---|---|---|---|
Base | 1 | 1 | 208,213.370 | 280,231.334 | 110.427 | |||
0 | 1 | 1 | 208,171.243 | 280,161.693 | 110.419 | |||
1 | 2 | 208,186.660 | 280,131.747 | 110.345 | 0.004 | 0.003 | ||
2 | 2 | 1 | 208,167.470 | 280,143.805 | 110.382 | 0.023 | 0.019 | 0.008 |
4 | 2 | 208,134.579 | 280,164.153 | 110.383 | 0.004 | 0.002 | ||
5 | 2 | 208,153.061 | 280,193.443 | 110.360 | 0.001 | |||
6 | 2 | 1 | 208,172.131 | 280,181.181 | 110.507 | 0.013 | 0.013 | 0.013 |
7 | 2 | 1 | 208,188.867 | 280,171.596 | 110.454 | 0.006 | 0.005 | 0.009 |
8 | 2 | 208,205.180 | 280,160.774 | 110.332 | 0.003 | 0.012 | 0.003 | |
9 | 2 | 208,216.832 | 280,180.520 | 110.347 | 0.004 | 0.002 | ||
10 | 2 | 1 | 208,209.429 | 280,182.120 | 110.401 | 0.013 | 0.013 | 0.016 |
11 | 2 | 1 | 208,200.048 | 280,189.952 | 110.462 | 0.014 | 0.012 | 0.040 |
12 | 2 | 1 | 208,184.312 | 280,202.069 | 110.527 | 0.012 | 0.006 | 0.013 |
13 | 2 | 208,165.218 | 280,212.792 | 110.363 | 0.004 | |||
14 | 2 | 208,172.761 | 280,225.059 | 110.367 | 0.003 | 0.007 | 0.002 | |
15 | 2 | 1 | 208,193.366 | 280,216.145 | 110.512 | 0.015 | 0.006 | 0.009 |
16 | 2 | 1 | 208,210.515 | 280,207.497 | 110.464 | 0.004 | 0.013 | 0.035 |
17 | 2 | 1 | 208,228.174 | 280,197.557 | 110.332 | 0.004 | 0.024 | 0.034 |
18 | 2 | 208,242.606 | 280,221.518 | 110.400 | 0.006 | 0.000 | ||
19 | 2 | 1 | 208,224.028 | 280,230.563 | 110.416 | 0.008 | 0.004 | 0.022 |
20 | 2 | 1 | 208,208.466 | 280,240.887 | 110.418 | 0.022 | 0.004 | 0.028 |
21 | 2 | 1 | 208,191.048 | 280,253.412 | 110.390 | 0.023 | 0.010 | 0.031 |
Table 5 Standard deviation of GCP coordinates measured using total station and GNSS receiver (Unit: m)
Instrument | Count | SD (X) | SD (Y) | SD (Z) |
---|---|---|---|---|
Total Station (GCP) | 42 | 0.002 | 0.004 | 0.002 |
GNSS Receiver (Baseline) | 14 | 0.033 | 0.066 | 0.022 |
The captured UAV images were processed in the photogrammetry software Metashape, undergoing tie point generation and alignment procedures. Image quality was assessed using Metashape’s image quality estimation function, which scores each image based on its sharpness. The quality values of the images used in this study ranged from 0.81 to 0.92. Generally, images with quality values below 0.5 are recommended for exclusion from processing (Agisoft LLC, 2025). All 216 images used in this study had quality values of 0.8 or higher, indicating high quality, so no images needed to be excluded.
During the image alignment process, the interior and exterior orientation parameters were calculated, and the relative positional relationships between images were optimized through bundle adjustment. Errors were then analyzed based on the interior orientation parameter settings, the number of GCPs, and the accuracy of GCP coordinates. This analysis aimed to quantitatively assess the error factors that may arise during the production of UAV-based orthomosaics and to provide foundational data for enhancing the accuracy of the results.
In UAV photogrammetry, lens distortion correction algorithms exhibit similar characteristics across different software packages, but they are not entirely identical. All models in Metashape assume a central projection camera, and nonlinear distortions are modeled using the Brown–Conrady lens distortion model.
The local camera coordinate system is defined with the camera’s projection center as the origin. In this system, the Z-axis points in the direction of the camera’s line of sight, the X-axis points horizontally to the right, and the Y-axis points vertically downward. In contrast, the image coordinate system uses the center of the top-left pixel of the image frame as its origin, with coordinates expressed in pixel units. These coordinate system definitions are essential for accurately modeling the camera’s interior orientation parameters and distortion corrections.
In this study, after generating tie points, the interior orientation parameters were calculated using the Agisoft Metashape software, and the interior orientation simulation was performed using a custom Python script. Interior orientation refers to the process of defining and calibrating the internal parameters of the optical system (camera), a critical step for establishing the precise relationship between digital images and real-world space. The interior orientation process involves determining the internal parameters of the optical system and correcting distortions. These parameters include the focal length (f), principal point offset (cx, cy), radial distortion coefficients (k1, k2, k3, k4), tangential distortion coefficients (p1, p2), and skew coefficients (affinity and non-orthogonality: b1, b2).
The Brown–Conrady model allows for the correction of both radial and tangential distortions (Kang et al., 2008). Radial distortion refers to the distortion that occurs radially outward from the lens center, while tangential distortion refers to distortion occurring in the tangential direction, perpendicular to the radial lines from the lens center (Kang et al., 2009). Generally, tangential distortion is significantly smaller compared to radial distortion (Beauchemin and Bajcsy, 2001). Accurate camera calibration is essential for transforming image coordinates into physical 3D spatial coordinates. Interior orientation parameters provide the foundational data required for exterior orientation, facilitating precise alignment and positional calculations between images.
Eq. (1) is a formula used to calculate the radius (r) from the principal point for each pixel. The radius (r) serves as a variable in the radial and tangential distortion correction equations, Eqs. (2) and (3).
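The typeset equations are not reproduced here; based on the Brown–Conrady formulation of Metashape’s frame camera model and the variable definitions in Table 6, Eqs. (1)–(3) can be reconstructed as follows, with x = X/Z and y = Y/Z denoting the normalized image plane coordinates:

\[ r = \sqrt{x^2 + y^2} \tag{1} \]

\[ \Delta x_{radial} = x\,(k_1 r^2 + k_2 r^4 + k_3 r^6 + k_4 r^8), \qquad \Delta y_{radial} = y\,(k_1 r^2 + k_2 r^4 + k_3 r^6 + k_4 r^8) \tag{2} \]

\[ \Delta x_{tangential} = p_1 (r^2 + 2x^2) + 2 p_2 x y, \qquad \Delta y_{tangential} = p_2 (r^2 + 2y^2) + 2 p_1 x y \tag{3} \]

so that the distortion-corrected coordinates are x′ = x + Δx_radial + Δx_tangential and y′ = y + Δy_radial + Δy_tangential.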
Here, r², r⁴, r⁶, and r⁸ are the squared, fourth, sixth, and eighth powers of the radius, and correspond to the distortion correction terms of each order. Typically, only k1 and k2 are used for radial distortion correction; however, depending on the degree of distortion, k3 and k4 may also be applied. The distortion-corrected coordinates (x′, y′) are transformed from the camera image plane to the final projected point coordinates in the image coordinate system (in pixels: u, v). The conversion formulas, Eqs. (4) and (5), account for the affinity and non-orthogonality (skew) coefficients (b1, b2) and the principal point offset (cx, cy).
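Under the same Metashape frame camera convention, the conversion to pixel coordinates can be reconstructed as:

\[ u = 0.5\,w + c_x + x' f + x' b_1 + y' b_2 \tag{4} \]

\[ v = 0.5\,h + c_y + y' f \tag{5} \]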
Descriptions of the variables used in the equations are provided in Table 6. Metashape utilizes the precise coordinates of GCPs to transform camera position and rotation information into the ground coordinate system. The positional error is calculated based on the differences between the actual coordinates of the GCPs and CPs (Xmeasured,i, Ymeasured,i, Zmeasured,i) and the predicted coordinates (Xpredicted,i, Ypredicted,i, Zpredicted,i). Eq. (6) represents the Root Mean Square Error (RMSE) formula used to calculate the positional error for GCPs and CPs, where n denotes the total number of GCPs and CPs.
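With the measured and predicted coordinates defined as above, the RMSE takes the standard form (reconstructed):

\[ RMSE = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} \left[ (X_{measured,i} - X_{predicted,i})^2 + (Y_{measured,i} - Y_{predicted,i})^2 + (Z_{measured,i} - Z_{predicted,i})^2 \right] } \tag{6} \]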
Table 6 Description of the camera calibration parameters
Parameter | Description | Parameter | Description |
---|---|---|---|
X, Y, Z | Point coordinates in the local camera coordinate system | w, h | Image width and height (in pixels) |
x, y | X/Z, Y/Z (normalized image plane coordinates) | x′, y′ | Image plane coordinates corrected for lens distortion |
X0, Y0, Z0 | Camera center coordinates | rij | Rotation matrix elements (ω, φ, κ) |
Metashape fundamentally calculates the camera’s exterior orientation parameters based on the collinearity equation. The collinearity condition states that a point on the ground, its corresponding point on the image, and the camera’s projection center must all lie in the same straight line. Eqs. (7) and (8) define the relationship between image coordinates and ground coordinates based on the interior and exterior orientation parameters (Kim et al., 2004).
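Using the rotation matrix elements rij and the camera center (X0, Y0, Z0) from Table 6, the collinearity equations can be reconstructed in their standard form:

\[ x = -f\,\frac{ r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}(Z - Z_0) }{ r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0) } \tag{7} \]

\[ y = -f\,\frac{ r_{21}(X - X_0) + r_{22}(Y - Y_0) + r_{23}(Z - Z_0) }{ r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0) } \tag{8} \]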
Bundle adjustment is an optimization algorithm used in photogrammetry to simultaneously adjust camera parameters (both interior and exterior) and GCP coordinates to achieve optimal results (Moore et al., 2009; Triggs et al., 2000). The algorithm minimizes errors by utilizing multiple images simultaneously. Optimization is performed using the nonlinear least squares method, specifically the Levenberg–Marquardt algorithm, based on image feature-matching data that includes ground control points (Levenberg, 1944). The bundle adjustment objective is presented in Eq. (9), where measured_ij represents the observed image coordinates of point i in image j, and projected_ij represents the corresponding image coordinates estimated by the model.
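The objective function can be reconstructed in the standard reprojection-error form, summing over points i and images j:

\[ E = \sum_{i} \sum_{j} \left\| measured_{ij} - projected_{ij} \right\|^{2} \tag{9} \]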
The amount of lens distortion varies depending on the field of view. Especially when wide-angle lenses are used for UAV aerial imaging, lens distortion significantly affects the accuracy of the image, making the proper configuration of interior orientation parameters essential (Alemán-Flores et al., 2013). In this study, experiments were conducted in two modes, All mode and Part mode, to analyze the errors due to different interior orientation parameter settings.
The All mode optimizes all interior orientation parameters to achieve the highest possible accuracy. In contrast, the Part mode selectively optimizes only certain parameters, excluding k4, b1, and b2. This mode corresponds to the default setting in UAV photogrammetry software such as Agisoft Metashape and Pix4D Mapper, aiming to enhance data processing efficiency while focusing only on essential parameters for modeling. The interior orientation parameter values for each mode are presented in Table 7.
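For reference, the two modes can be expressed with Metashape’s Python API roughly as follows. This is a minimal sketch assuming the standard optimizeCameras() keyword flags, not the authors’ actual processing script:

```python
import Metashape  # Agisoft Metashape Professional Python module (assumed available)

chunk = Metashape.app.document.chunk

# All mode: fit every interior orientation parameter during optimization.
chunk.optimizeCameras(
    fit_f=True, fit_cx=True, fit_cy=True,
    fit_b1=True, fit_b2=True,
    fit_k1=True, fit_k2=True, fit_k3=True, fit_k4=True,
    fit_p1=True, fit_p2=True,
)

# Part mode: the default-like subset, excluding k4, b1, and b2.
chunk.optimizeCameras(
    fit_f=True, fit_cx=True, fit_cy=True,
    fit_b1=False, fit_b2=False,
    fit_k1=True, fit_k2=True, fit_k3=True, fit_k4=False,
    fit_p1=True, fit_p2=True,
)
```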
Table 7 Comparison of camera calibration parameters and Lat Lon/XY errors for All and Part modes
Parameter | All | Part | |||
---|---|---|---|---|---|
Value | Error | Value | Error | ||
Focal length | f | 3702.09875 | 0.062302 | 3705.2321 | 0.06365 |
Principal point offset | cx | 27.0052 | 0.01989 | 26.876 | 0.017711 |
cy | -3.63647 | 0.020407 | -3.71158 | 0.017975 | |
Skew coefficients | b1 | 0.0640683 | 0.00229 | ||
b2 | -0.178395 | 0.0023 | |||
Radial distortion coefficients | k1 | -0.0751382 | 0.0000267 | -0.0932475 | 0.0000165 |
k2 | -0.156228 | 0.000125 | -0.0522055 | 0.0000538 | |
k3 | 0.259075 | 0.00023 | 0.0368672 | 0.0000559 | |
k4 | -0.156688 | 0.000142 | |||
Tangential distortion coefficients | p1 | 0.0000219464 | 0.000000744 | 0.00000853919 | 0.000000656 |
p2 | -0.0000554343 | 0.000000712 | -0.0000562457 | 0.000000622 | |
Lat Lon error (m) | | 0.051 | | 0.078 | |
XY error (m) | | 0.035 | | 0.051 | |
Previous studies have analyzed RMSE values under different interior orientation parameter settings (Nho et al., 2020). Comparing Case 1 (Exterior Orientation Parameter [EOP], f, cx, cy, k1, k2) and Case 2 (EOP, f, cx, cy) showed that Case 2 had a lower RMSE by 0.077 m on the x-axis and 0.063 m on the y-axis. These results suggest that excluding radial distortion coefficients may improve accuracy. However, this effect is primarily observed in cameras with minimal radial distortion or when in-camera distortion correction is applied. Furthermore, the difference was not statistically significant.
Based on this, this study conducted a more detailed comparison of errors according to interior orientation parameter settings. In All mode, the Lat Lon error was 0.051 m and the XY error was 0.035 m, showing a low level of error. In contrast, the Part mode showed higher errors, with a Lat Lon error of 0.078 m and an XY error of 0.051 m. These results indicate that the range of interior orientation parameter settings affects the accuracy of the data.
To analyze the differences between All mode and Part mode in more detail, a custom Lens Distortion Simulation Python script was used to calculate the radial distortion, tangential distortion, and total lens distortion for each mode. These values were calculated based on the interior orientation parameters. The calculated results are presented in Table 8, while the differences in lens distortion and final coordinates (u, v) between the two modes are shown in Table 9.
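The custom script itself is not reproduced in the paper. The following NumPy sketch illustrates the kind of computation involved, evaluating the reconstructed Eqs. (1)–(5) on a grid of normalized coordinates using the Table 7 parameter values; the grid resolution and the exact statistics accumulated for Tables 8 and 9 are assumptions:

```python
import numpy as np

# Interior orientation parameters from Table 7; in Part mode k4, b1, and b2
# are not estimated, so they are treated as zero here.
MODES = {
    "All":  dict(f=3702.09875, cx=27.0052, cy=-3.63647,
                 k1=-0.0751382, k2=-0.156228, k3=0.259075, k4=-0.156688,
                 p1=2.19464e-05, p2=-5.54343e-05, b1=0.0640683, b2=-0.178395),
    "Part": dict(f=3705.2321, cx=26.876, cy=-3.71158,
                 k1=-0.0932475, k2=-0.0522055, k3=0.0368672, k4=0.0,
                 p1=8.53919e-06, p2=-5.62457e-05, b1=0.0, b2=0.0),
}
W, H = 5280, 3956  # image size in pixels (Table 2)

def project(mode, n=100):
    """Project an n x n grid of normalized camera coordinates (x = X/Z,
    y = Y/Z) to pixel coordinates (u, v) via the reconstructed Eqs. (1)-(5)."""
    p = MODES[mode]
    span_x, span_y = (W / 2) / p["f"], (H / 2) / p["f"]
    x, y = np.meshgrid(np.linspace(-span_x, span_x, n),
                       np.linspace(-span_y, span_y, n))
    r2 = x**2 + y**2                                    # Eq. (1), squared
    radial = p["k1"]*r2 + p["k2"]*r2**2 + p["k3"]*r2**3 + p["k4"]*r2**4
    dx_tan = p["p1"]*(r2 + 2*x**2) + 2*p["p2"]*x*y      # Eq. (3), x component
    dy_tan = p["p2"]*(r2 + 2*y**2) + 2*p["p1"]*x*y      # Eq. (3), y component
    xp = x*(1 + radial) + dx_tan                        # corrected x'
    yp = y*(1 + radial) + dy_tan                        # corrected y'
    u = W*0.5 + p["cx"] + xp*p["f"] + xp*p["b1"] + yp*p["b2"]  # Eq. (4)
    v = H*0.5 + p["cy"] + yp*p["f"]                            # Eq. (5)
    return u, v

(u_all, v_all), (u_part, v_part) = project("All"), project("Part")
du, dv = u_all - u_part, v_all - v_part
print(f"u difference: {du.mean():.3f} +/- {du.std():.3f} px")
print(f"v difference: {dv.mean():.3f} +/- {dv.std():.3f} px")
```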
Table 8 Comparison of lens, radial, and tangential distortion between All mode and Part mode
Xradial | Yradial | Xtangential | Ytangential | u | v | Lens distortion | Radial distortion | Tangential distortion | ||
---|---|---|---|---|---|---|---|---|---|---|
All | Min | 0.000 | 0.000 | -0.026 | -0.342 | 172.335 | 53.934 | 0.000 | 0.000 | 0.000 |
Max | 239.558 | 179.487 | 0.304 | 0.000 | 5161.923 | 3894.441 | 299.787 | 299.339 | 0.457 | |
Average | 50.633 | 33.591 | 0.050 | -0.096 | 2667.055 | 1974.268 | 64.064 | 64.064 | 0.115 | |
Standard deviation | 53.289 | 34.878 | 0.066 | 0.066 | 1475.948 | 1113.929 | 60.367 | 60.367 | 0.084 | |
Part | Min | 0.000 | 0.000 | -0.101 | -0.308 | 174.684 | 55.373 | 0.000 | 0.000 | 0.000 |
Max | 233.251 | 174.761 | 0.216 | 0.000 | 5159.165 | 3892.848 | 291.815 | 291.457 | 0.376 | |
Average | 51.708 | 34.400 | 0.020 | -0.097 | 2666.896 | 1974.191 | 65.519 | 65.518 | 0.108 | |
Standard deviation | 53.734 | 35.127 | 0.056 | 0.064 | 1474.696 | 1113.006 | 60.710 | 60.710 | 0.073 |
Table 9 Differences in lens, radial, and tangential distortion between All mode and Part mode
Xradial | Yradial | Xtangential | Ytangential | u | v | Lens distortion | Radial distortion | Tangential distortion | |
---|---|---|---|---|---|---|---|---|---|
Min | –2.704 | –2.025 | 0.000 | –0.034 | –6.131 | –4.685 | –2.958 | –2.875 | –0.061 |
Max | 6.307 | 4.726 | 0.092 | 0.042 | 6.574 | 4.843 | 7.980 | 7.881 | 0.081 |
Average | –1.075 | –0.808 | 0.031 | 0.001 | 0.160 | 0.076 | –1.455 | –1.455 | 0.008 |
Standard deviation | 0.769 | 0.574 | 0.024 | 0.013 | 1.347 | 0.991 | 0.783 | 0.782 | 0.026 |
The equations for calculating radial and tangential distortion in the X and Y directions are presented in Eqs. (2) and (3). The average values of Xradial and Yradial were 50.633 and 33.591, respectively, in All mode, while in Part mode they were 51.708 and 34.400, slightly higher than in All mode.
The standard deviations were ±53.289, ±34.878 in All mode and ±53.734, ±35.127 in Part mode, indicating slightly greater variability in Part mode. This suggests that lens distortion correction in the Part mode may be somewhat less consistent.
These results indicate that setting the radial distortion coefficient affects both the lens distortion correction and its variability. Both modes exhibited an increasing radial distortion trend with increasing distance from the center, suggesting that the configuration of the interior orientation parameters has little effect on the overall radial distortion pattern.
In the All mode, the mean values of Xtangential and Ytangential were 0.050 and –0.096, respectively, while in the Part mode, they were 0.020 and –0.097. In both modes, tangential distortion was significantly smaller than radial distortion.
The skew coefficients adjust for the correlation between the coordinate axes and enter the calculation of the image coordinate u. It was therefore expected that they would have a significant effect on the image coordinates; however, the results of this study indicate that their effect is minimal. The mean values and standard deviations of the image coordinates (u, v) in the All mode were slightly higher than in the Part mode, but this difference is neither statistically nor practically significant, and omitting the skew coefficients is judged to have no substantial impact on the results when the camera’s tilt is small. As shown in Table 9, the difference between the All mode and the Part mode was 0.160±1.347 pixels for the u coordinate and 0.076±0.991 pixels for the v coordinate, both well below 2 pixels. This suggests that the skew coefficients have a very limited effect on the calculation of the image coordinates.
Lens distortion includes both radial and tangential distortion, and the average lens distortion values for the All mode and the Part mode were 64.064 and 65.519 pixels, respectively, indicating similar levels. In terms of maximum values, however, the All mode showed 299.787 pixels and the Part mode 291.815 pixels, a difference of 7.972 pixels. Converted using the output GSD of 2.09 cm/pixel, the maximum error in the Part mode corresponds to approximately 16.66 cm. However, this error occurs only in a few pixels at the four corners of the image, and most pixels have an error of less than 3 pixels.
The distribution of lens distortion and the difference between the two modes in the All mode and the Part mode are shown in Fig. 3. The central part of the difference graph shows values close to 0, indicating that there is almost no difference in lens distortion between the two modes in the central region. In contrast, a significant difference was observed in the blue region at the edges. Assuming that the lens distortion correction in the All mode is accurate, it is likely that the Part mode failed to correct the lens distortion at the edges, resulting in errors of more than 7 pixels.
In this study, to analyze the effect of the number of GCPs on the accuracy of orthomosaic generation, the experiment was conducted by gradually reducing the number of GCPs from 22 to 4. The final four retained GCPs were placed at the outer corners of the playground. This arrangement was designed based on a previous study, which suggested that placing 3 to 4 GCPs outside the target area and adding one at the center to form a centric polygonal network configuration is an effective strategy for improving accuracy (Kim et al., 2018). Additionally, this arrangement was made with the intention of maintaining the accuracy of the orthomosaic’s outer region while using the minimum number of GCPs. The final four retained GCPs are Nos. 1, 4, 18, and 21, as shown in Fig. 4.
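In Metashape’s Python API, this GCP-to-CP conversion can be scripted by disabling a marker’s reference coordinates, which excludes it from the adjustment and reports it as a check point. A minimal sketch follows; the marker labels are hypothetical and depend on how the project names its markers:

```python
import Metashape

chunk = Metashape.app.document.chunk
keep = {"GCP_01", "GCP_04", "GCP_18", "GCP_21"}  # hypothetical labels for the
                                                 # four outer-corner GCPs

for marker in chunk.markers:
    # A marker with its reference disabled no longer constrains the
    # adjustment, so Metashape treats it as a check point (CP).
    marker.reference.enabled = marker.label in keep

chunk.optimizeCameras()  # re-run the adjustment with the reduced GCP set
```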
According to previous studies, as the number of GCPs increases, positional accuracy improves (Shylesh et al., 2023). In particular, the number of GCPs has been reported to have a greater impact on vertical positional accuracy than on planar positional accuracy (Yun and Sung, 2018).
In this study, the error analysis was performed by gradually reducing the number of GCPs and converting the reduced GCPs into CPs. The analysis of positional and pixel errors according to the number of GCPs and CPs was performed in both the All and Part modes, and the results are shown in Fig. 5. In Fig. 5(a), an increasing trend in error was observed in both All mode and Part mode as the number of GCPs decreased. In All mode, the positional error of GCPs increased gradually, whereas Part mode exhibited slight variations. However, regardless of the number of GCPs, the positional error in All mode remained consistently lower than in Part mode, with an average difference of approximately 0.015 m.
In Fig. 5(b), Part mode exhibits a clear pattern of rapidly increasing pixel error as the number of GCPs decreases. In contrast, in All mode, the pixel error remains relatively constant, with a more gradual increase in error. This indicates that Part mode, which has limited interior orientation parameters, is more sensitive to changes in the number of GCPs. The pixel error difference between All mode and Part mode averaged approximately 2.746 pixels.
In Fig. 5(c), the positional error of CPs shows significant variability in Part mode. While All mode also exhibited an increasing trend in error, the magnitude of the increase was notably smaller compared to Part mode. The CP positional error between All mode and Part mode ranged from 0.13 to 0.30 m, with an average difference of 0.22 m.
In Fig. 5(d), Part mode shows a tendency for CP pixel error to increase sharply as the number of GCPs decreases. In particular, the CP pixel error continued to increase significantly until the number of GCPs decreased to 15. However, when the number of GCPs decreased to 14 or fewer, a decreasing trend in error was observed. In contrast, in All mode, the CP pixel error remained relatively constant, showing a stable pattern that was not highly sensitive to changes in the number of GCPs. The difference in CP pixel error between All mode and Part mode averaged approximately 2.330 pixels.
In all the graphs, as the number of GCPs decreased, the increase in error in Part mode became more pronounced. This indicates that a reduction in the number of GCPs has a negative impact on orthomosaic accuracy. However, in All mode, the error either increased gradually or remained consistent, showing relatively stable accuracy. This suggests that All mode is less sensitive to changes in the number of GCPs compared to Part mode and can maintain stable accuracy even with fewer GCPs.
Previous studies have shown that the quality of the GCP affects the positional accuracy of the orthomosaic (Lee et al., 2020). Furthermore, other studies have reported that RMSE values in the X, Y, and Z axes are lower when using high-accuracy GCPs compared to low-accuracy GCPs (Shylesh et al., 2023). These findings suggest that the accuracy of GCPs plays a critical role in determining the positional accuracy of UAV images.
Based on these research findings, this study analyzed the impact of GCP positional accuracy on UAV-based orthomosaic generation. To this end, the positional errors were compared between the precise coordinates of GCPs and the inaccurate coordinates generated by adding random offsets. The random offsets were set at seven levels: 0.03 m, 0.05 m, 0.07 m, 0.10 m, 0.20 m, 0.50 m, and 1.00 m. These values were randomly generated within a range of –n to +n (where n represents the offset level) using Kutools in Excel.
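The offsets were generated with Kutools in Excel; an equivalent NumPy version is sketched below. The uniform distribution over [–n, +n] follows the description above, and the seed and example coordinates are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)  # arbitrary seed for reproducibility
levels = [0.03, 0.05, 0.07, 0.10, 0.20, 0.50, 1.00]  # offset levels (m)

def perturb(gcps, n):
    """Add a uniform random offset in [-n, +n] m to each E/N/H coordinate."""
    return gcps + rng.uniform(-n, n, size=gcps.shape)

# Example with two GCP coordinates from Table 4 (Easting, Northing, Height).
gcps = np.array([[208186.660, 280131.747, 110.345],
                 [208134.579, 280164.153, 110.383]])
perturbed = {n: perturb(gcps, n) for n in levels}
```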
The analysis results showed that as the range of random offsets added to the GCP coordinates increased, the positional error of the GCPs increased linearly. This trend is clearly shown in the graph in Fig. 6. In particular, the R² values for the All mode and the Part mode were 0.9995 and 0.9988, respectively, demonstrating a nearly perfect linear relationship.
The regression equation for the All mode was calculated as y = 1.0653x, and for the Part mode, it was y = 1.072x. The slope difference between the two modes was 0.0067, which is very small, indicating almost no difference in GCP errors and showing similar trends. Additionally, it was observed that as the positional accuracy of the GCPs increased, the total error decreased, highlighting that precise management of GCP coordinates is essential for improving orthomosaic accuracy.
The directional differences in GCP positional errors between the All mode and the Part mode are shown in Table 10. The error differences in both the East and North directions were all below 0.003 m and remained very small. In the North direction, the error was particularly close to 0, indicating that the differences between the two modes were minimal. In the altitude direction, a maximum difference of 0.020 m was observed, but this difference is considered negligible in UAV-based data analysis.
Table 10 Difference of GCP positional errors for All and Part mode
Offset level (m) | East err (m) | North err (m) | Alt err (m) | Error (m) | Error (pix) |
---|---|---|---|---|---|
Original | 0.003 | 0.001 | 0.020 | 0.016 | 2.616 |
0.03 | 0.001 | - | 0.018 | 0.014 | 2.616 |
0.05 | 0.002 | - | 0.002 | 0.001 | 2.616 |
0.07 | 0.002 | 0.002 | 0.016 | 0.012 | 2.616 |
0.10 | –0.001 | - | –0.003 | –0.003 | 2.616 |
0.20 | 0.001 | - | 0.015 | 0.008 | 2.615 |
0.50 | 0.002 | 0.001 | 0.008 | 0.006 | 2.611 |
1.00 | 0.002 | 0.001 | 0.001 | 0.002 | 2.616 |
The difference in total error was very low, with a maximum value of 0.016 m, while the pixel error difference remained relatively constant at approximately 2.616 pixels in most cases. In all cases, the pixel error remained within 3 pixels, demonstrating stable accuracy.
In particular, the interior orientation parameter settings were found to have minimal impact on GCP positional errors, suggesting that in situations requiring rapid processing, omitting certain interior orientation parameters can still maintain a sufficient level of accuracy. This provides a significant foundation for generating reliable orthophotos even when processing time is reduced. Therefore, ensuring the accuracy of GCP coordinates is a crucial factor in UAV-based spatial data analysis and applications, and it is expected to enable more precise data-driven decision-making and applications.
In this study, modes were distinguished based on the interior orientation parameter settings to analyze lens distortion, and the impact of the number of GCPs and their positional accuracy on the accuracy of UAV orthophotos was evaluated.
The results showed that the All mode exhibited low errors, with Lat Lon and XY errors of 0.051 m and 0.035 m, respectively, whereas the Part mode showed relatively higher errors of 0.078 m for Lat Lon and 0.051 m for XY. This suggests that the settings of interior orientation parameters can influence the positional accuracy of orthomosaics. The mean and standard deviation of Xradial and Yradial were higher in the Part mode than in the All mode, indicating greater variability in the Part mode. Additionally, the difference in Xtangential and Ytangential between the two modes was minimal, within 0.031. In both modes, the radial distortion was significantly greater than the tangential distortion. The difference between the All and Part modes in image coordinates was 0.160±1.347 pixels in the u coordinate and 0.076±0.991 pixels in the v coordinate, both showing a small discrepancy of less than two pixels.
To analyze the performance differences between the two modes, experiments were conducted by adjusting the number and positional accuracy of the GCPs. As the number of GCPs decreased, both the positional and pixel errors of the GCPs increased. The pixel error difference of GCPs between the All and Part modes averaged 2.746 pixels, while the pixel error difference of CPs averaged 2.330 pixels. This was similar to the difference in image coordinates previously calculated between the two modes.
As the positional accuracy of the GCP coordinates decreased, the GCP positional errors showed a tendency to increase linearly. The R² values for the All and Part modes were 0.9995 and 0.9988, respectively, indicating a high degree of linearity, and the difference in slopes between the two modes was very small at 0.0067, resulting in nearly no difference in GCP errors. When comparing the positional errors of the GCPs by direction, except for the case where a random offset at the 0.10 m level was added, the positional errors in the All mode were smaller than those in the Part mode in all other cases. The difference in pixel errors between the two modes averaged 2.615 pixels, which is similar to the previously calculated difference in image coordinates between the two modes.
This study comprehensively analyzed the settings of interior orientation parameters, the number of GCPs, and the positional accuracy of GCPs, systematically evaluating the impact of the interactions between these factors on the quality of UAV-based orthomosaics. When setting interior orientation parameters, it is essential to carefully select between All mode and Part mode based on the number of GCPs, project accuracy requirements, and processing time constraints. The results demonstrated that these appropriate interior orientation parameter settings, along with precise management of GCPs, play a crucial role in improving the positional accuracy of UAV orthomosaics. This approach distinguishes itself from previous studies, which primarily focused on the analysis of individual factors. This research is expected to serve as a valuable reference for future UAV-based surveying and remote sensing applications.
This study was conducted under a specific experimental environment (flight altitude of 70 m, DJI M3E sensor, and a main sports field), which may limit the generalization of the results to various sensors and environments. Future studies could expand experiments by considering factors such as aperture size, scene types (e.g., urban, forest, marine), and Instantaneous Field of View (IFOV). Additionally, since this study was conducted in a relatively flat area, the impact of GCP elevation differences was considered negligible. However, in areas with significant terrain variations, further investigation into the effects of GCP elevation changes on orthomosaic accuracy would be valuable.
This work was supported by a Research Grant of Pukyong National University (2023).
No potential conflict of interest relevant to this article was reported.
Chansol Kim1, Seungchan Lim1, Donggyu Kim2, Hohyun Jeong3, Chuluong Choi4*
1Master Student, Major of Spatial Information Engineering, Division of Earth and Environmental System Sciences, Pukyong National University, Busan, Republic of Korea
2Undergraduate Student, Major of Geomatics Engineering, Division of Earth and Environmental System Sciences, Pukyong National University, Busan, Republic of Korea
3Senior Researcher, Spatial Information Research Institute, LX, Wanju, Republic of Korea
4Professor, Major of Spatial Information Engineering, Division of Earth and Environmental System Sciences, Pukyong National University, Busan, Republic of Korea
Correspondence to:Chuluong Choi
E-mail: cuchoi@pknu.ac.kr
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Unmanned Aerial Vehicle (UAV) orthomosaics are widely used in various fields, including construction, environmental monitoring, and real estate, and their quality is influenced by the accuracy of interior orientation. In this study, the modes were divided into “All mode” and “Part mode” based on the interior orientation parameter settings, and lens distortion was compared between the two modes. Additionally, the effects of the number of Ground Control Points (GCPs) and their positional accuracy on the positional accuracy of UAV-based orthomosaics were evaluated for each mode. The Part mode, which applies only a subset of interior orientation parameters, exhibited greater LatLon and XY errors compared to the All mode, which applies all parameters. Using custom Python scripts, lens distortion was compared between the two modes, and the image coordinate deviations were found to be 0.160±1.347 pixels in u (image x) and 0.076±0.991 pixels in v (image y), both of which were below 2 pixels. As the number of GCPs decreased, both modes exhibited an increasing trend in GCP positional error. In terms of GCP and CP pixel errors, the All mode demonstrated a lower and more consistent error compared to the Part mode, as it was less sensitive to changes in the number of GCPs. The addition of a random offset to the GCP coordinates to vary the GCP positional accuracy showed that as the magnitude of the added offset increased, the GCP positional error exhibited a linear increase. These findings suggest that the setting of interior orientation parameters and GCP management are critical factors in determining the accuracy of UAV orthomosaics. This study is expected to provide valuable foundational data for analyzing error factors in the UAV-based orthomosaic creation process, which can be utilized in both practical and research settings.
Keywords: Unmanned aerial vehicle, Orthomosaic, Ground control point, Lens distortion, Radial distortion, Tangential distortion, Positional error
Orthomosaics are an integral part of geospatial information acquisition and analysis using Unmanned Aerial Vehicles (UAVs). They are widely used in a variety of fields, including construction, environmental management, and real estate (National Geographic Information Institute, 2025), and have established themselves as indispensable tools, particularly in applications requiring highprecision spatial data.
However, during the production of orthomosaics, various factors such as interior orientation parameters, the number of Ground Control Points (GCPs), and the positional accuracy of the GCPs can introduce X, Y, and pixel errors that impact the positional accuracy of the orthomosaics. These errors not only degrade the quality of the seamlines in the orthomosaics but also negatively impact the reliability of the analysis results based on them (Harwin and Lucieer, 2012).
Previous studies have examined the influence of GCP placement patterns and flight altitude on orthomosaic accuracy (Kim et al., 2018; Kim and Hong, 2020). Additionally, research has investigated the effects of image overlap and the number of GCPs on orthomosaic quality (Yoo et al., 2016). Furthermore, previous findings indicate that the positioning of GCPs and the distance between GCPs and Check Points (CPs) significantly affect accuracy (Yun and Yoon, 2018; Lee, 2021).
One of the key factors affecting the accuracy of orthomosaics is interior orientation errors, which distort the precise correspondence between images and the actual terrain, leading to a decrease in the positional accuracy of the orthomosaic. This issue becomes more pronounced in areas with significant terrain distortion. Interior orientation errors cause misalignment between aerial images during the orthomosaic generation process, resulting in visible seamlines that degrade image quality. These visual discrepancies along the seamlines can confuse users, making the interpretation and utilization of the orthomosaic more challenging.
However, studies that systematically and quantitatively analyze the impact of interior orientation parameters on orthomosaic positional accuracy are still limited. Furthermore, research proposing efficient and optimized methodologies that can be applied in practical settings is scarce.
To address this limitation, this study analyzes error factors from three main aspects. First, to evaluate the impact of interior orientation parameters, the lens distortion in the All mode, which applies all interior orientation parameters, and the Part mode, which applies only a subset, was compared. For this analysis, a custom Python script was used. Second, to assess the effect of the number of GCPs, the number of GCPs was gradually reduced from 22 to 4, and the reduced GCPs were converted into CPs for comparison of positional error and pixel error. Third, to analyze the effect of GCP positional accuracy, random offsets ranging from 0.03 m to 1.00 m were generated in 7 steps and added to each GCP coordinate. The positional errors of the GCPs were then compared and analyzed for each mode.
This study systematically analyzes the error factors that arise during the orthomosaic generation process using UAVs, taking into account the settings of interior orientation parameters. In doing so, it aims to propose a method that improves both the precision and efficiency of orthomosaics. The findings of this study are expected to enhance the applicability of UAV imagery data and provide a foundation for improving the reliability and efficiency of orthomosaics in both practical and research settings.
This paper is structured as follows. Chapter 2 discusses the theoretical background and methodology of the study. Chapter 3 analyzes the errors based on the settings of interior orientation parameters, the number of GCPs, and GCP positional accuracy, and presents the key findings derived from this analysis. Finally, Chapter 4 discusses the conclusions and implications of the study.
The study was conducted on 11 November 2024 at the main sports field of Pukyong National University’s Yongdang Campus, located at 365, Sinseon-ro, Nam-gu, Busan, Republic of Korea. The workflow of this study is presented in Fig. 1. Two flight paths were planned using DJI Flighthub2, one in the north-south (N–S) direction and the other in the east-west (E–W) direction. The flight altitude was maintained at 70±0.04 m, with both overlap and sidelap set to 90%. The interval between shots was between 5.1 and 5.3 seconds. The total flight distances were 843.0 m for the N-S direction and 895.0 m for the E-W direction.
Each flight lasted 557 and 575 seconds, capturing 105 photos in the north-south direction and 111 photos in the east-west direction. The flight speed was maintained between 1.47 and 1.61 m/s, significantly lower than the maximum flight speed of the DJI Mavic 3 Enterprise (M3E), which is 15 m/s (Lee et al., 2018; DJI Mavic 3 Enterprise, 2024), to ensure precision photography. The study area covered 7,578.8 m2, with an input Ground Sampling Distance (GSD) of 2.00 cm/pixel and an output GSD of 2.09 cm/pixel. Detailed flight information is provided in Table 1.
Table 1 . Comparison of UAV flight data and camera settings by flight path.
Parameter | N-S Direction | E-W Direction |
---|---|---|
Start time | 15:33:31 | 15:45:17 |
End time | 15:42:48 | 15:54:52 |
Flight time | 00:09:17 | 00:09:35 |
Photo count | 105 | 111 |
Flight height (m) | 70 | 70 |
Number of strips | 6 | 12 |
F-stop | 3.2–4.5 | 2.8–4.0 |
Overlap (%) | 90 | 90 |
Sidelap (%) | 90 | 90 |
ISO | 100–110 | 100–110 |
Shutter speed | 1/400 | 1/400 |
Image quality (Mean±SD) | 0.855±0.023 | 0.863±0.019 |
ISO: International Standard Organization, SD: Standard Deviation..
A total of 23 grid-shaped GCPs, each measuring 50 × 50 cm, were evenly distributed within the study area. However, due to damage to GCP No. 3 during the experiment, only 22 GCPs were ultimately used. Image acquisition was carried out using the DJI M3E, as shown in Fig. 2(a), by uploading the pre-designed flight path to its controller.
The DJI M3E is a high-performance compact UAV designed for professional applications such as surveying and mapping. Equipped with a 20 MP high-resolution camera and a Real-Time Kinematic (RTK) module, it is capable of collecting spatial data with centimeter-level precision. Notably, it offers a long flight time of up to 45 minutes and a maximum transmission range of 15 km, ensuring high operational efficiency over large areas. The main specifications of the M3E are shown in Table 2.
Table 2 . Specification of Mavic 3 Enterprise (M3E).
General Specification | Value | Camera Specification | Value |
---|---|---|---|
Max speed (m/s) | 15 | Focal length (mm) | 12.29 (24/35 mm) |
Max flight Time (min) | 45 | F-stop | f/2.8–f/11 |
GSD (cm) | 2.86/100 × FH(m) | ISO | 100–6400 |
Image size | 5280 × 3956 (20MP) | Shutter speed | 1/2000-8 |
Field of view (°) | 84 | Shutter type | Mechanical |
Sensor | 4/3CMOS | CCD size (mm) | 17.73 × 13.29 |
The weather conditions during image acquisition were clear and sunny, with an average temperature of 17.3°C, a maximum of 23.1°C, a minimum of 13.0°C, and an average cloud cover of 1.3, with no precipitation (Korea Meteorological Administration, 2024).
GCP coordinate surveying was performed using the iM-55 total station and GRX2 Global Navigation Satellite System (GNSS) receiver from Sokkia, as shown in Figs. 2(b, c). The main specifications of the GRX2 GNSS receiver are presented in Table 3. A prism was installed at the center of each GCP, and the distance and angles from the Base GCP were measured using a total station. To ensure the reliability of the measured data, total station surveys for GCPs No. 1 to No. 20 were conducted twice, and the average values of the measurements were utilized. In addition, the coordinates of 14 GCPs, including the Base GCP and GCP No. 0, were measured using a GNSS receiver. The Base GCP coordinates were designated as the reference for calculating GCP coordinates, and the remaining 21 GCP coordinates were determined using the distance and angle data obtained from the total station.
Table 3 . Specification of GRX2.
Specification | Value | ||
---|---|---|---|
Tracked Signals | GPS, GLONASS, SBAS | ||
Number of Channels | 226 | ||
Positioning Accuracy (L1+L2) | Type | Horizontal | Vertical |
Static | 3mm + 0.5 ppm | 5mm + 0.5 ppm | |
Fast Static | 3mm + 0.5 ppm | 5mm + 0.5 ppm | |
Kinematic | 10mm + 1 ppm | 15mm + 1 ppm | |
RTK | 10mm + 1 ppm | 15mm + 1 ppm | |
Positioning Accuracy | DGPS: < 0.5m | ||
Update/Output rate | 1Hz, 5Hz, 10Hz, 20Hz (10Hz RTK Standard) | ||
Physical Specifications | Size: Dia. 184 mm x H 95 mm, Weight: 1.0 kg (2.20 lb.) |
The GCP coordinates calculated using the data measured with the total station and GNSS receiver are presented in Table 4. The iM-55 is a high-precision surveying instrument with an angular measurement accuracy of 5 seconds and a distance measurement accuracy of (1.5 + 2 ppm × measurement distance) mm. As shown in Table 5, the survey results indicated that the relative positions of the GCPs had standard deviations of X: 2 mm, Y: 4 mm, and Z: 2 mm. The GRX2 is a 226-channel receiver capable of receiving GPS, GLONASS, and SBAS signals. The GCP coordinates surveyed using the GRX2 showed absolute position standard deviations of X: 33 mm, Y: 66 mm, and Z: 22 mm relative to the baseline.
Table 4 . Measurements count by total station and GNSS receiver, GCP coordinates, and standard deviations (Unit: m).
No. | Total Station | GNSS | Easting (X) | Northing (Y) | Height (Z) | SD (X) | SD (Y) | SD (Z) |
---|---|---|---|---|---|---|---|---|
Base | 1 | 1 | 208,213.370 | 280,231.334 | 110.427 | |||
0 | 1 | 1 | 208,171.243 | 280,161.693 | 110.419 | |||
1 | 2 | 208,186.660 | 280,131.747 | 110.345 | 0.004 | 0.003 | ||
2 | 2 | 1 | 208,167.470 | 280,143.805 | 110.382 | 0.023 | 0.019 | 0.008 |
4 | 2 | 208,134.579 | 280,164.153 | 110.383 | 0.004 | 0.002 | ||
5 | 2 | 208,153.061 | 280,193.443 | 110.360 | 0.001 | |||
6 | 2 | 1 | 208,172.131 | 280,181.181 | 110.507 | 0.013 | 0.013 | 0.013 |
7 | 2 | 1 | 208,188.867 | 280,171.596 | 110.454 | 0.006 | 0.005 | 0.009 |
8 | 2 | 208,205.180 | 280,160.774 | 110.332 | 0.003 | 0.012 | 0.003 | |
9 | 2 | 208,216.832 | 280,180.520 | 110.347 | 0.004 | 0.002 | ||
10 | 2 | 1 | 208,209.429 | 280,182.120 | 110.401 | 0.013 | 0.013 | 0.016 |
11 | 2 | 1 | 208,200.048 | 280,189.952 | 110.462 | 0.014 | 0.012 | 0.040 |
12 | 2 | 1 | 208,184.312 | 280,202.069 | 110.527 | 0.012 | 0.006 | 0.013 |
13 | 2 | 208,165.218 | 280,212.792 | 110.363 | 0.004 | |||
14 | 2 | 208,172.761 | 280,225.059 | 110.367 | 0.003 | 0.007 | 0.002 | |
15 | 2 | 1 | 208,193.366 | 280,216.145 | 110.512 | 0.015 | 0.006 | 0.009 |
16 | 2 | 1 | 208,210.515 | 280,207.497 | 110.464 | 0.004 | 0.013 | 0.035 |
17 | 2 | 1 | 208,228.174 | 280,197.557 | 110.332 | 0.004 | 0.024 | 0.034 |
18 | 2 | 208,242.606 | 280,221.518 | 110.400 | 0.006 | 0.000 | ||
19 | 2 | 1 | 208,224.028 | 280,230.563 | 110.416 | 0.008 | 0.004 | 0.022 |
20 | 2 | 1 | 208,208.466 | 280,240.887 | 110.418 | 0.022 | 0.004 | 0.028 |
21 | 2 | 1 | 208,191.048 | 280,253.412 | 110.390 | 0.023 | 0.010 | 0.031 |
Table 5 . Standard deviation of GCP coordinates measured using total station and GNSS receiver.
Instrument | Count | SD (X) | SD (Y) | SD (Z) |
---|---|---|---|---|
Total Station (GCP) | 42 | 0.002 | 0.004 | 0.002 |
GNSS Receiver (Baseline) | 14 | 0.033 | 0.066 | 0.022 |
The captured UAV images were processed in the photogrammetry software Metashape, undergoing tie point generation and alignment procedures. The image quality was assessed using Metashape’s image quality estimation function. Image quality was assessed using Metashape’s image quality estimation function. The quality values of the images used in this study were found to be between 0.81 and 0.92, which are calculated based on the sharpness of the images. Generally, images with quality values below 0.5 are recommended for exclusion from processing (Agisoft LLC, 2025). All 216 images used in this study had quality values of 0.8 or higher, indicating high quality, with no further exclusions required.
During the image alignment process, the interior and exterior orientation parameters were calculated, and the relative positional relationships between images were optimized through bundle adjustment. Errors were then analyzed based on the interior orientation parameter settings, the number of GCPs, and the accuracy of GCP coordinates. This analysis aimed to quantitatively assess the error factors that may arise during the production of UAV-based orthomosaic and to provide foundational data for enhancing the accuracy of the results.
In UAV photogrammetry, lens distortion correction algorithms exhibit similar characteristics across different software, but they are not entirely identical. All models in Metashape assume a central projection camera and nonlinear distortions are modeled based on the Brown-Conrady lens distortion model.
The local camera coordinate system is defined with the camera’s projection center as the origin. In this system, the Z-axis points in the direction of the camera’s line of sight, the X-axis points horizontally to the right, and the Y-axis points vertically downward. Conversely, the image coordinate system uses the pixel center at the top-left corner of the image frame as its origin, with coordinates expressed in pixel units. These coordinate system definitions are essential for accurately modeling the camera’s interior orientation parameters and distortion corrections.
In this study, after generating tie points, the interior orientation parameters were calculated using the Agisoft Metashape software, and the interior orientation simulation was performed using a custom Python script. Interior orientation refers to the process of defining and calibrating the internal parameters of the optical system (camera), a critical step for establishing the precise relationship between digital images and real-world space. The process involves determining the internal parameters of the optical system and correcting distortions. These parameters include the focal length (f), principal point offset (cx, cy), radial distortion coefficients (k1, k2, k3, k4), tangential distortion coefficients (p1, p2), and skew coefficients (affinity and non-orthogonality: b1, b2).
The Brown–Conrady model allows for the correction of both radial and tangential distortions (Kang et al., 2008). Radial distortion refers to the distortion that occurs radially outward from the lens center, while tangential distortion refers to distortion occurring in the tangential direction, perpendicular to the radial lines from the lens center (Kang et al., 2009). Generally, tangential distortion is significantly smaller compared to radial distortion (Beauchemin and Bajcsy, 2001). Accurate camera calibration is essential for transforming image coordinates into physical 3D spatial coordinates. Interior orientation parameters provide the foundational data required for exterior orientation, facilitating precise alignment and positional calculations between images.
Eq. (1) is a formula used to calculate the radius (r) from the principal point for each pixel. The radius (r) serves as a variable in the radial and tangential distortion correction equations, Eqs. (2) and (3).
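For reference, a sketch of Eqs. (1)-(3) in the standard Brown–Conrady form used by Metashape (the exact grouping of terms in the original typeset equations is assumed):

$$r = \sqrt{x^2 + y^2}, \qquad x = X/Z, \quad y = Y/Z \tag{1}$$

$$\Delta x_{radial} = x\left(k_1 r^2 + k_2 r^4 + k_3 r^6 + k_4 r^8\right), \qquad \Delta y_{radial} = y\left(k_1 r^2 + k_2 r^4 + k_3 r^6 + k_4 r^8\right) \tag{2}$$

$$\Delta x_{tangential} = p_1\left(r^2 + 2x^2\right) + 2p_2 xy, \qquad \Delta y_{tangential} = p_2\left(r^2 + 2y^2\right) + 2p_1 xy \tag{3}$$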
Here, r², r⁴, r⁶, and r⁸ correspond to the squared, fourth, sixth, and eighth powers of the radius, respectively, and represent the distortion correction terms for each order. Typically, only k1 and k2 are used for radial distortion correction; however, depending on the degree of distortion, k3 and k4 may also be applied. The distortion-corrected coordinates (x′, y′) are transformed from the camera image plane to the final projected point coordinates in the image coordinate system (in pixels: u, v). The conversion formulas, shown in Eqs. (4) and (5), account for distortions caused by irregular lens rotation (b1, b2) and the principal point offset (cx, cy).
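A sketch of Eqs. (4) and (5), assuming Metashape's projection model with x′ = x + Δx_radial + Δx_tangential and y′ = y + Δy_radial + Δy_tangential:

$$u = 0.5\,w + c_x + x' f + x' b_1 + y' b_2 \tag{4}$$

$$v = 0.5\,h + c_y + y' f \tag{5}$$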
Descriptions of the variables used in the equations are provided in Table 6. Metashape utilizes the precise coordinates of GCPs to transform camera position and rotation information into the ground coordinate system. The positional error is calculated from the differences between the measured coordinates of the GCPs and CPs (Xmeasured,i, Ymeasured,i, Zmeasured,i) and the predicted coordinates (Xpredicted,i, Ypredicted,i, Zpredicted,i). Eq. (6) represents the Root Mean Square Error (RMSE) formula used to calculate the positional error for GCPs and CPs, where n denotes the total number of GCPs and CPs.
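A restatement of Eq. (6) consistent with these definitions, assuming a combined three-axis RMSE:

$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[\left(X_{measured,i} - X_{predicted,i}\right)^2 + \left(Y_{measured,i} - Y_{predicted,i}\right)^2 + \left(Z_{measured,i} - Z_{predicted,i}\right)^2\right]} \tag{6}$$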
Table 6. Description of the camera calibration parameters.
Parameter | Description | Parameter | Description |
---|---|---|---|
X, Y, Z | Point coordinates in the local camera coordinate system | w, h | Image width and height (in pixels) |
x, y | X/Z, Y/Z | x′, y′ | Lens distortion-corrected coordinates in the image plane |
X0, Y0, Z0 | Camera center coordinates | rij | Rotation matrix elements (ω, φ, κ) |
Metashape fundamentally calculates the camera’s exterior orientation parameters based on the collinearity equation. The collinearity condition states that a point on the ground, its corresponding point on the image, and the camera’s projection center must all lie in the same straight line. Eqs. (7) and (8) define the relationship between image coordinates and ground coordinates based on the interior and exterior orientation parameters (Kim et al., 2004).
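A sketch of Eqs. (7) and (8) in the standard collinearity form, using the variables defined in Table 6 (the sign convention is assumed):

$$x = -f\,\frac{r_{11}(X - X_0) + r_{12}(Y - Y_0) + r_{13}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)} \tag{7}$$

$$y = -f\,\frac{r_{21}(X - X_0) + r_{22}(Y - Y_0) + r_{23}(Z - Z_0)}{r_{31}(X - X_0) + r_{32}(Y - Y_0) + r_{33}(Z - Z_0)} \tag{8}$$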
Bundle adjustment is an optimization algorithm used in photogrammetry to simultaneously adjust camera parameters (both interior and exterior) and GCP coordinates to achieve optimal results (Moore et al., 2009; Triggs et al., 2000). This algorithm minimizes errors by utilizing multiple images. Optimization is performed using the nonlinear least squares method, specifically the Levenberg-Marquardt algorithm, based on image feature matching data that includes ground control points (Levenberg, 1944). The bundle adjustment equation is presented in Eq. (9), where measured_ij represents the observed image coordinates and projected_ij represents the image coordinates estimated by the model.
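A sketch of the reprojection error minimized in Eq. (9), using the notation above:

$$E = \sum_{i}\sum_{j}\left\|\,measured_{ij} - projected_{ij}\,\right\|^2 \rightarrow \min \tag{9}$$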
The amount of lens distortion varies depending on the field of view. In particular, when wide-angle lenses are used for UAV aerial imaging, lens distortion significantly affects the accuracy of the imagery, making the proper configuration of interior orientation parameters essential (Alemán-Flores et al., 2013). In this study, experiments were conducted in two modes, All mode and Part mode, to analyze the errors arising from different interior orientation parameter settings.
The All mode optimizes all interior orientation parameters to achieve the highest possible accuracy. In contrast, the Part mode selectively optimizes only certain parameters, excluding k4, b1, and b2. This mode corresponds to the default setting in UAV photogrammetry software such as Agisoft Metashape and Pix4D Mapper, aiming to enhance data processing efficiency while focusing only on the parameters essential for modeling. The interior orientation parameter values for each mode are presented in Table 7, and a configuration sketch follows the table.
Table 7. Comparison of camera calibration parameters and Lat Lon/XY errors for All and Part modes.
Parameter | Symbol | All Value | All Error | Part Value | Part Error |
---|---|---|---|---|---|
Focal length | f | 3702.09875 | 0.062302 | 3705.2321 | 0.06365 |
Principal point offset | cx | 27.0052 | 0.01989 | 26.876 | 0.017711 |
Principal point offset | cy | -3.63647 | 0.020407 | -3.71158 | 0.017975 |
Skew coefficients | b1 | 0.0640683 | 0.00229 | | |
Skew coefficients | b2 | -0.178395 | 0.0023 | | |
Radial distortion coefficients | k1 | -0.0751382 | 0.0000267 | -0.0932475 | 0.0000165 |
Radial distortion coefficients | k2 | -0.156228 | 0.000125 | -0.0522055 | 0.0000538 |
Radial distortion coefficients | k3 | 0.259075 | 0.00023 | 0.0368672 | 0.0000559 |
Radial distortion coefficients | k4 | -0.156688 | 0.000142 | | |
Tangential distortion coefficients | p1 | 0.0000219464 | 0.000000744 | 8.53919e-06 | 0.000000656 |
Tangential distortion coefficients | p2 | -0.0000554343 | 0.000000712 | -0.0000562457 | 0.000000622 |
Lat Lon error (m) | | 0.051 | | 0.078 | |
XY error (m) | | 0.035 | | 0.051 | |
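As an illustration, the two modes could be configured through Metashape's Python API as in the sketch below; the fit_* flag names are assumptions based on the optimizeCameras() keyword arguments and should be verified against the installed API version.

```python
# Hypothetical configuration sketch using the Agisoft Metashape Python API.
import Metashape

chunk = Metashape.app.document.chunk

# All mode: fit every interior orientation parameter, including k4, b1, b2.
chunk.optimizeCameras(
    fit_f=True, fit_cx=True, fit_cy=True,
    fit_b1=True, fit_b2=True,
    fit_k1=True, fit_k2=True, fit_k3=True, fit_k4=True,
    fit_p1=True, fit_p2=True,
)

# Part mode: default-style subset that excludes k4, b1, and b2.
chunk.optimizeCameras(
    fit_f=True, fit_cx=True, fit_cy=True,
    fit_b1=False, fit_b2=False,
    fit_k1=True, fit_k2=True, fit_k3=True, fit_k4=False,
    fit_p1=True, fit_p2=True,
)
```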
Previous studies have analyzed RMSE values under different interior orientation parameter settings (Nho et al., 2020). Comparing Case 1 (Exterior Orientation Parameter [EOP], f, cx, cy, k1, k2) and Case 2 (EOP, f, cx, cy) showed that Case 2 had a lower RMSE by 0.077 m on the x-axis and 0.063 m on the y-axis. These results suggest that excluding radial distortion coefficients may improve accuracy. However, this effect is primarily observed in cameras with minimal radial distortion or when in-camera distortion correction is applied. Furthermore, the difference was not statistically significant.
Building on this, the present study conducted a more detailed comparison of errors according to the interior orientation parameter settings. In All mode, the Lat Lon error was 0.051 m and the XY error was 0.035 m, a low level of error. In contrast, Part mode showed higher errors, with a Lat Lon error of 0.078 m and an XY error of 0.051 m. These results indicate that the range of interior orientation parameter settings affects the accuracy of the data.
To analyze the differences between All mode and Part mode in more detail, a custom Lens Distortion Simulation Python script was used to calculate the radial distortion, tangential distortion, and total lens distortion for each mode. These values were calculated based on the interior orientation parameters. The calculated results are presented in Table 8, the differences in lens distortion and final coordinates (u, v) between the two modes are shown in Table 9, and a sketch of the computation follows the tables.
Table 8. Comparison of lens, radial, and tangential distortion between All mode and Part mode (Unit: pixel).
Mode | Statistic | Xradial | Yradial | Xtangential | Ytangential | u | v | Lens distortion | Radial distortion | Tangential distortion |
---|---|---|---|---|---|---|---|---|---|---|
All | Min | 0.000 | 0.000 | -0.026 | -0.342 | 172.335 | 53.934 | 0.000 | 0.000 | 0.000 |
All | Max | 239.558 | 179.487 | 0.304 | 0.000 | 5161.923 | 3894.441 | 299.787 | 299.339 | 0.457 |
All | Average | 50.633 | 33.591 | 0.050 | -0.096 | 2667.055 | 1974.268 | 64.064 | 64.064 | 0.115 |
All | Standard deviation | 53.289 | 34.878 | 0.066 | 0.066 | 1475.948 | 1113.929 | 60.367 | 60.367 | 0.084 |
Part | Min | 0.000 | 0.000 | -0.101 | -0.308 | 174.684 | 55.373 | 0.000 | 0.000 | 0.000 |
Part | Max | 233.251 | 174.761 | 0.216 | 0.000 | 5159.165 | 3892.848 | 291.815 | 291.457 | 0.376 |
Part | Average | 51.708 | 34.400 | 0.020 | -0.097 | 2666.896 | 1974.191 | 65.519 | 65.518 | 0.108 |
Part | Standard deviation | 53.734 | 35.127 | 0.056 | 0.064 | 1474.696 | 1113.006 | 60.710 | 60.710 | 0.073 |
Table 9. Differences in lens, radial, and tangential distortion between All mode and Part mode (Unit: pixel).
Statistic | Xradial | Yradial | Xtangential | Ytangential | u | v | Lens distortion | Radial distortion | Tangential distortion |
---|---|---|---|---|---|---|---|---|---|
Min | –2.704 | –2.025 | 0.000 | –0.034 | –6.131 | –4.685 | –2.958 | –2.875 | –0.061 |
Max | 6.307 | 4.726 | 0.092 | 0.042 | 6.574 | 4.843 | 7.980 | 7.881 | 0.081 |
Average | –1.075 | –0.808 | 0.031 | 0.001 | 0.160 | 0.076 | –1.455 | –1.455 | 0.008 |
Standard deviation | 0.769 | 0.574 | 0.024 | 0.013 | 1.347 | 0.991 | 0.783 | 0.782 | 0.026 |
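The simulation described above can be sketched as follows, assuming the Brown–Conrady/Metashape model of Eqs. (1)-(5) and the Table 7 calibration values; the grid sampling and the pixel-to-normalized-coordinate mapping are assumptions, so the printed statistics will only approximate Tables 8 and 9.

```python
# Minimal sketch of the lens distortion simulation under the assumptions above.
import numpy as np

W, H = 5280, 3956  # DJI M3E image width and height in pixels

def simulate(f, cx, cy, k, p, b1=0.0, b2=0.0, step=8):
    """Project an ideal normalized grid through the distortion model and
    return (u, v) plus radial/tangential distortion magnitudes in pixels."""
    xs = (np.arange(0, W, step) - 0.5 * W) / f
    ys = (np.arange(0, H, step) - 0.5 * H) / f
    x, y = np.meshgrid(xs, ys)
    r2 = x**2 + y**2                                    # r^2 from Eq. (1)
    radial = k[0]*r2 + k[1]*r2**2 + k[2]*r2**3 + k[3]*r2**4
    dx_rad, dy_rad = x * radial, y * radial             # Eq. (2)
    dx_tan = p[0] * (r2 + 2 * x**2) + 2 * p[1] * x * y  # Eq. (3)
    dy_tan = p[1] * (r2 + 2 * y**2) + 2 * p[0] * x * y
    xp = x + dx_rad + dx_tan
    yp = y + dy_rad + dy_tan
    u = 0.5 * W + cx + xp * f + xp * b1 + yp * b2       # Eq. (4)
    v = 0.5 * H + cy + yp * f                           # Eq. (5)
    return u, v, f * np.hypot(dx_rad, dy_rad), f * np.hypot(dx_tan, dy_tan)

# Parameter values from Table 7.
u_all, v_all, rad_all, tan_all = simulate(
    3702.09875, 27.0052, -3.63647,
    k=(-0.0751382, -0.156228, 0.259075, -0.156688),
    p=(0.0000219464, -0.0000554343), b1=0.0640683, b2=-0.178395)

u_part, v_part, rad_part, tan_part = simulate(
    3705.2321, 26.876, -3.71158,
    k=(-0.0932475, -0.0522055, 0.0368672, 0.0),
    p=(8.53919e-06, -0.0000562457))

du, dv = u_all - u_part, v_all - v_part
print(f"du = {du.mean():.3f} +/- {du.std():.3f} px, "
      f"dv = {dv.mean():.3f} +/- {dv.std():.3f} px")
```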
The equations for calculating radial and tangential distortion in the X and Y directions are presented in Eqs. (2) and (3). The average values of Xradial and Yradial were 50.633 and 33.591, respectively, in All mode, while in Part mode they were 51.708 and 34.400, slightly higher than in All mode.
The standard deviations were ±53.289, ±34.878 in All mode and ±53.734, ±35.127 in Part mode, indicating slightly greater variability in Part mode. This suggests that lens distortion correction in the Part mode may be somewhat less consistent.
These results indicate that the setting of the radial distortion coefficients affects both the lens distortion correction and its variability. In both modes, radial distortion increased with distance from the image center, suggesting that the configuration of the interior orientation parameters has little effect on the overall radial distortion pattern.
In the All mode, the mean values of Xtangential and Ytangential were 0.050 and –0.096, respectively, while in the Part mode, they were 0.020 and –0.097. In both modes, tangential distortion was significantly smaller than radial distortion.
The skew coefficients adjust the correlation of the slopes between coordinates and are used in calculating the image coordinate u. It was therefore expected that the skew coefficients would have a significant effect on the image coordinates; however, the results of this study indicate that their effect is minimal. The mean values and standard deviations of the image coordinates (u, v) in All mode were slightly higher than in Part mode, but this difference is neither statistically nor practically significant, and omitting the skew coefficients is judged to have no substantial impact on the results when the camera's tilt is small. As shown in Table 9, the difference between All mode and Part mode was 0.160±1.347 pixels for the u coordinate and 0.076±0.991 pixels for the v coordinate, both below 2 pixels. This suggests that the skew coefficients have a very limited effect on the calculation of the image coordinates.
Lens distortion includes both radial and tangential distortion, and the average lens distortion values for All mode and Part mode were 64.064 and 65.519, respectively, indicating similar levels. In terms of maximum values, however, All mode showed 299.787 pixels and Part mode 291.815 pixels, a difference of 7.972 pixels. Assuming the All-mode correction as the reference, this corresponds to a maximum Part-mode error of approximately 16.66 cm at the GSD of 2.09 cm/pixel. However, this discrepancy occurs only in a few pixels at the four corners of the image, and most pixels differ by less than 3 pixels.
The distribution of lens distortion in All mode and Part mode, together with the difference between the two modes, is shown in Fig. 3. The central part of the difference plot shows values close to 0, indicating almost no difference in lens distortion between the two modes in the central region. In contrast, a significant difference was observed in the blue region at the edges. Assuming that the lens distortion correction in All mode is accurate, Part mode likely failed to correct the lens distortion at the edges, resulting in errors of more than 7 pixels.
In this study, to analyze the effect of the number of GCPs on the accuracy of orthomosaic generation, the experiment was conducted by gradually reducing the number of GCPs from 22 to 4. The final four retained GCPs were placed at the outer corners of the playground. This arrangement was designed based on a previous study, which suggested that placing 3 to 4 GCPs outside the target area and adding one at the center to form a centric polygonal network configuration is an effective strategy for improving accuracy (Kim et al., 2018). Additionally, this arrangement was made with the intention of maintaining the accuracy of the orthomosaic’s outer region while using the minimum number of GCPs. The final four retained GCPs are Nos. 1, 4, 18, and 21, as shown in Fig. 4.
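A hypothetical sketch of this GCP-reduction step using the Agisoft Metashape Python API is shown below: markers with reference.enabled = True act as GCPs, while disabled markers are treated as CPs. Marker labels matching the GCP numbers are an assumption.

```python
# Hypothetical sketch: retain only the four outer GCPs (Fig. 4) and
# demote the remaining markers to check points before re-optimizing.
import Metashape

chunk = Metashape.app.document.chunk
keep_as_gcp = {"1", "4", "18", "21"}  # the four retained GCPs

for marker in chunk.markers:
    marker.reference.enabled = marker.label in keep_as_gcp

chunk.optimizeCameras()  # re-run the adjustment with the reduced GCP set
```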
According to previous studies, as the number of GCPs increases, positional accuracy improves (Shylesh et al., 2023). In particular, the number of GCPs has been reported to have a greater impact on vertical positional accuracy than on planar positional accuracy (Yun and Sung, 2018).
In this study, error analysis was performed by gradually reducing the number of GCPs and converting the removed GCPs into CPs. Positional and pixel errors as a function of the number of GCPs and CPs were analyzed in both All and Part modes, and the results are shown in Fig. 5. In Fig. 5(a), an increasing trend in error was observed in both All mode and Part mode as the number of GCPs decreased. In All mode, the positional error of the GCPs increased gradually, whereas Part mode exhibited slight fluctuations. However, regardless of the number of GCPs, the positional error in All mode remained consistently lower than in Part mode, with an average difference of approximately 0.015 m.
In Fig. 5(b), Part mode exhibits a clear pattern of rapidly increasing pixel error as the number of GCPs decreases. In contrast, in All mode, the pixel error remains relatively constant, with a more gradual increase in error. This indicates that Part mode, which has limited interior orientation parameters, is more sensitive to changes in the number of GCPs. The pixel error difference between All mode and Part mode averaged approximately 2.746 pixels.
In Fig. 5(c), the positional error of CPs shows significant variability in Part mode. While All mode also exhibited an increasing trend in error, the magnitude of the increase was notably smaller compared to Part mode. The CP positional error between All mode and Part mode ranged from 0.13 to 0.30 m, with an average difference of 0.22 m.
In Fig. 5(d), Part mode shows a tendency for CP pixel error to increase sharply as the number of GCPs decreases. In particular, the CP pixel error continued to increase significantly until the number of GCPs decreased to 15. However, when the number of GCPs decreased to 14 or fewer, a decreasing trend in error was observed. In contrast, in All mode, the CP pixel error remained relatively constant, showing a stable pattern that was not highly sensitive to changes in the number of GCPs. The difference in CP pixel error between All mode and Part mode averaged approximately 2.330 pixels.
In all the graphs, as the number of GCPs decreased, the increase in error in Part mode became more pronounced. This indicates that a reduction in the number of GCPs has a negative impact on orthomosaic accuracy. However, in All mode, the error either increased gradually or remained consistent, showing relatively stable accuracy. This suggests that All mode is less sensitive to changes in the number of GCPs compared to Part mode and can maintain stable accuracy even with fewer GCPs.
Previous studies have shown that the quality of the GCP affects the positional accuracy of the orthomosaic (Lee et al., 2020). Furthermore, other studies have reported that RMSE values in the X, Y, and Z axes are lower when using high-accuracy GCPs compared to low-accuracy GCPs (Shylesh et al., 2023). These findings suggest that the accuracy of GCPs plays a critical role in determining the positional accuracy of UAV images.
Based on these research findings, this study analyzed the impact of GCP positional accuracy on UAV-based orthomosaic generation. To this end, the positional errors were compared between the precise coordinates of GCPs and the inaccurate coordinates generated by adding random offsets. The random offsets were set at seven levels: 0.03 m, 0.05 m, 0.07 m, 0.10 m, 0.20 m, 0.50 m, and 1.00 m. These values were randomly generated within a range of –n to +n (where n represents the offset level) using Kutools in Excel.
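For reference, an equivalent offset generation in Python (the study used Kutools in Excel); applying the offset independently to each axis and the fixed seed are assumptions added for reproducibility.

```python
# Sketch of random-offset generation in [-n, +n] metres per offset level.
import numpy as np

rng = np.random.default_rng(seed=0)
offset_levels = [0.03, 0.05, 0.07, 0.10, 0.20, 0.50, 1.00]  # metres

def perturb_gcps(coords: np.ndarray, n: float) -> np.ndarray:
    """Add uniform random offsets in [-n, +n] m to each coordinate component."""
    return coords + rng.uniform(-n, n, size=coords.shape)

# Example: perturb a (num_points, 3) array of GCP coordinates at each level.
gcps = np.array([[208171.243, 280161.693, 110.419],
                 [208186.660, 280131.747, 110.345]])  # sample rows from Table 4
perturbed = {n: perturb_gcps(gcps, n) for n in offset_levels}
```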
The analysis results showed that as the range of random offsets added to the GCP coordinates increased, the positional error of the GCPs increased linearly. This trend is clearly visible in the graph in Fig. 6. In particular, the R² values for All mode and Part mode were 0.9995 and 0.9988, respectively, demonstrating a high degree of linearity, with both values close to 0.999.
The regression equation for All mode was calculated as y = 1.0653x, and for Part mode it was y = 1.072x. The slope difference between the two modes was only 0.0067, indicating almost no difference in GCP errors and similar trends. Additionally, as the positional accuracy of the GCPs increased, the total error decreased, highlighting that precise management of GCP coordinates is essential for improving orthomosaic accuracy.
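As a sketch of how such a zero-intercept fit can be computed, the snippet below estimates the slope and R² of y = ax; the sample values are placeholders, not the study's measurements.

```python
# Least-squares fit of y = a*x (no intercept) with an uncentered R^2.
import numpy as np

def fit_through_origin(x: np.ndarray, y: np.ndarray) -> tuple[float, float]:
    a = np.sum(x * y) / np.sum(x * x)               # closed-form slope
    residual = y - a * x
    r2 = 1.0 - np.sum(residual**2) / np.sum(y**2)   # R^2 for a zero-intercept model
    return a, r2

offsets = np.array([0.03, 0.05, 0.07, 0.10, 0.20, 0.50, 1.00])
errors = 1.0653 * offsets                           # placeholder using the All-mode slope
a, r2 = fit_through_origin(offsets, errors)
print(f"y = {a:.4f}x, R^2 = {r2:.4f}")
```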
The directional differences in GCP positional errors between All mode and Part mode are shown in Table 10. The error differences in both the East and North directions were all below 0.003 m and remained very small. In the North direction, the error was particularly close to 0, indicating that the differences between the two modes were minimal. In the altitude direction, a maximum difference of 0.020 m was observed, but this difference is considered negligible in UAV-based data analysis.
Table 10. Difference of GCP positional errors between All and Part modes.
Offset level (m) | East err (m) | North err (m) | Alt err (m) | Error (m) | Error (pix) |
---|---|---|---|---|---|
Original | 0.003 | 0.001 | 0.020 | 0.016 | 2.616 |
0.03 | 0.001 | - | 0.018 | 0.014 | 2.616 |
0.05 | 0.002 | - | 0.002 | 0.001 | 2.616 |
0.07 | 0.002 | 0.002 | 0.016 | 0.012 | 2.616 |
0.10 | –0.001 | - | –0.003 | –0.003 | 2.616 |
0.20 | 0.001 | - | 0.015 | 0.008 | 2.615 |
0.50 | 0.002 | 0.001 | 0.008 | 0.006 | 2.611 |
1.00 | 0.002 | 0.001 | 0.001 | 0.002 | 2.616 |
The difference in total error was very small, with a maximum value of 0.016 m, while the pixel error difference remained nearly constant at approximately 2.616 pixels in most cases. In all cases, the pixel error remained within 3 pixels, demonstrating stable accuracy.
In particular, the interior orientation parameter settings were found to have minimal impact on GCP positional errors, suggesting that in situations requiring rapid processing, omitting certain interior orientation parameters can still maintain a sufficient level of accuracy. This provides a significant foundation for generating reliable orthophotos even when processing time is reduced. Therefore, ensuring the accuracy of GCP coordinates is a crucial factor in UAV-based spatial data analysis and applications, and it is expected to enable more precise data-driven decision-making and applications.
In this study, modes were distinguished based on the interior orientation parameter settings to analyze lens distortion, and the impact of the number of GCPs and their positional accuracy on the accuracy of UAV orthophotos was evaluated.
The results showed that All mode exhibited low errors, with Lat Lon and XY errors of 0.051 m and 0.035 m, respectively, whereas Part mode showed relatively higher errors of 0.078 m for Lat Lon and 0.051 m for XY. This suggests that the settings of the interior orientation parameters can influence the positional accuracy of orthoimages. The mean and standard deviation of Xradial and Yradial were higher in Part mode than in All mode, indicating greater variability in Part mode. Additionally, the difference in Xtangential and Ytangential between the two modes was minimal, within 0.031. In both modes, radial distortion was significantly greater than tangential distortion. The difference between the All and Part modes in image coordinates was 0.160±1.347 pixels in the u coordinate and 0.076±0.991 pixels in the v coordinate, both showing a small discrepancy of less than two pixels.
To analyze the performance differences between the two modes, experiments were conducted by adjusting the number and positional accuracy of the GCPs. As the number of GCPs decreased, both the positional and pixel errors of the GCPs increased. The pixel error difference of GCPs between the All and Part modes averaged 2.746 pixels, while the pixel error difference of CPs averaged 2.330 pixels. This was similar to the difference in image coordinates previously calculated between the two modes.
As the positional accuracy of the GCP coordinates decreased, the GCP positional errors tended to increase linearly. The R² values for the All and Part modes were 0.9995 and 0.9988, respectively, indicating a high degree of linearity, and the difference in slopes between the two modes was very small at 0.0067, resulting in nearly no difference in GCP errors. When comparing the positional errors of the GCPs by direction, except for the case in which a random offset at the 0.10 m level was added, the positional errors in All mode were smaller than those in Part mode in all other cases. The difference in pixel errors between the two modes averaged 2.615 pixels, which is similar to the previously calculated difference in image coordinates between the two modes.
This study comprehensively analyzed the settings of interior orientation parameters, the number of GCPs, and the positional accuracy of GCPs, systematically evaluating the impact of the interactions between these factors on the quality of UAV-based orthomosaics. When setting interior orientation parameters, it is essential to carefully select between All mode and Part mode based on the number of GCPs, project accuracy requirements, and processing time constraints. The results demonstrated that these appropriate interior orientation parameter settings, along with precise management of GCPs, play a crucial role in improving the positional accuracy of UAV orthomosaics. This approach distinguishes itself from previous studies, which primarily focused on the analysis of individual factors. This research is expected to serve as a valuable reference for future UAV-based surveying and remote sensing applications.
This study was conducted under a specific experimental environment (flight altitude of 70 m, DJI M3E sensor, and a main sports field), which may limit the generalization of the results to various sensors and environments. Future studies could expand experiments by considering factors such as aperture size, scene types (e.g., urban, forest, marine), and Instantaneous Field of View (IFOV). Additionally, since this study was conducted in a relatively flat area, the impact of GCP elevation differences was considered negligible. However, in areas with significant terrain variations, further investigation into the effects of GCP elevation changes on orthomosaic accuracy would be valuable.
This work was supported by a Research Grant of Pukyong National University (2023).
No potential conflict of interest relevant to this article was reported.