Research Article


Korean J. Remote Sens. 2025; 41(1): 87-100

Published online: February 28, 2025

https://doi.org/10.7780/kjrs.2025.41.1.8

© Korean Society of Remote Sensing

Analyzing the Impact of Interior Orientation Parameter Settings, the Number of GCPs, and GCP Positional Accuracy on Orthomosaic Quality

Chansol Kim1, Seungchan Lim1, Donggyu Kim2, Hohyun Jeong3, Chuluong Choi4*

1Master Student, Major of Spatial Information Engineering, Division of Earth and Environmental System Sciences, Pukyong National University, Busan, Republic of Korea
2Undergraduate Student, Major of Geomatics Engineering, Division of Earth and Environmental System Sciences, Pukyong National University, Busan, Republic of Korea
3Senior Researcher, Spatial Information Research Institute, LX, Wanju, Republic of Korea
4Professor, Major of Spatial Information Engineering, Division of Earth and Environmental System Sciences, Pukyong National University, Busan, Republic of Korea

Correspondence to: Chuluong Choi
E-mail: cuchoi@pknu.ac.kr

Received: February 5, 2025; Revised: February 17, 2025; Accepted: February 14, 2025

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Unmanned Aerial Vehicle (UAV) orthomosaics are widely used in various fields, including construction, environmental monitoring, and real estate, and their quality is influenced by the accuracy of interior orientation. In this study, the modes were divided into “All mode” and “Part mode” based on the interior orientation parameter settings, and lens distortion was compared between the two modes. Additionally, the effects of the number of Ground Control Points (GCPs) and their positional accuracy on the positional accuracy of UAV-based orthomosaics were evaluated for each mode. The Part mode, which applies only a subset of interior orientation parameters, exhibited greater Lat Lon and XY errors compared to the All mode, which applies all parameters. Using custom Python scripts, lens distortion was compared between the two modes, and the image coordinate deviations were found to be 0.160±1.347 pixels in u (image x) and 0.076±0.991 pixels in v (image y), both of which were below 2 pixels. As the number of GCPs decreased, both modes exhibited an increasing trend in GCP positional error. In terms of GCP and Check Point (CP) pixel errors, the All mode demonstrated a lower and more consistent error compared to the Part mode, as it was less sensitive to changes in the number of GCPs. The addition of a random offset to the GCP coordinates to vary the GCP positional accuracy showed that as the magnitude of the added offset increased, the GCP positional error exhibited a linear increase. These findings suggest that the setting of interior orientation parameters and GCP management are critical factors in determining the accuracy of UAV orthomosaics. This study is expected to provide valuable foundational data for analyzing error factors in the UAV-based orthomosaic creation process, which can be utilized in both practical and research settings.

Keywords: Unmanned aerial vehicle, Orthomosaic, Ground control point, Lens distortion, Radial distortion, Tangential distortion, Positional error

1. Introduction

Orthomosaics are an integral part of geospatial information acquisition and analysis using Unmanned Aerial Vehicles (UAVs). They are widely used in a variety of fields, including construction, environmental management, and real estate (National Geographic Information Institute, 2025), and have established themselves as indispensable tools, particularly in applications requiring high-precision spatial data.

However, during the production of orthomosaics, various factors such as interior orientation parameters, the number of Ground Control Points (GCPs), and the positional accuracy of the GCPs can introduce X, Y, and pixel errors that impact the positional accuracy of the orthomosaics. These errors not only degrade the quality of the seamlines in the orthomosaics but also negatively impact the reliability of the analysis results based on them (Harwin and Lucieer, 2012).

Previous studies have examined the influence of GCP placement patterns and flight altitude on orthomosaic accuracy (Kim et al., 2018; Kim and Hong, 2020). Additionally, research has investigated the effects of image overlap and the number of GCPs on orthomosaic quality (Yoo et al., 2016). Furthermore, previous findings indicate that the positioning of GCPs and the distance between GCPs and Check Points (CPs) significantly affect accuracy (Yun and Yoon, 2018; Lee, 2021).

One of the key factors affecting the accuracy of orthomosaics is interior orientation errors, which distort the precise correspondence between images and the actual terrain, leading to a decrease in the positional accuracy of the orthomosaic. This issue becomes more pronounced in areas with significant terrain distortion. Interior orientation errors cause misalignment between aerial images during the orthomosaic generation process, resulting in visible seamlines that degrade image quality. These visual discrepancies along the seamlines can confuse users, making the interpretation and utilization of the orthomosaic more challenging.

However, studies that systematically and quantitatively analyze the impact of interior orientation parameters on orthomosaic positional accuracy are still limited. Furthermore, research proposing efficient and optimized methodologies that can be applied in practical settings is scarce.

To address this limitation, this study analyzes error factors from three main aspects. First, to evaluate the impact of interior orientation parameters, the lens distortion in the All mode, which applies all interior orientation parameters, and the Part mode, which applies only a subset, was compared. For this analysis, a custom Python script was used. Second, to assess the effect of the number of GCPs, the number of GCPs was gradually reduced from 22 to 4, and the reduced GCPs were converted into CPs for comparison of positional error and pixel error. Third, to analyze the effect of GCP positional accuracy, random offsets ranging from 0.03 m to 1.00 m were generated in 7 steps and added to each GCP coordinate. The positional errors of the GCPs were then compared and analyzed for each mode.

This study systematically analyzes the error factors that arise during the orthomosaic generation process using UAVs, taking into account the settings of interior orientation parameters. In doing so, it aims to propose a method that improves both the precision and efficiency of orthomosaics. The findings of this study are expected to enhance the applicability of UAV imagery data and provide a foundation for improving the reliability and efficiency of orthomosaics in both practical and research settings.

This paper is structured as follows. Chapter 2 discusses the theoretical background and methodology of the study. Chapter 3 analyzes the errors based on the settings of interior orientation parameters, the number of GCPs, and GCP positional accuracy, and presents the key findings derived from this analysis. Finally, Chapter 4 discusses the conclusions and implications of the study.

2. Materials and Methods

2.1. Data Acquisition and Methodology

The study was conducted on 11 November 2024 at the main sports field of Pukyong National University’s Yongdang Campus, located at 365, Sinseon-ro, Nam-gu, Busan, Republic of Korea. The workflow of this study is presented in Fig. 1. Two flight paths were planned using DJI FlightHub 2, one in the north-south (N-S) direction and the other in the east-west (E-W) direction. The flight altitude was maintained at 70±0.04 m, with both overlap and sidelap set to 90%. The interval between shots was between 5.1 and 5.3 seconds. The total flight distances were 843.0 m for the N-S direction and 895.0 m for the E-W direction.

Fig. 1. Flowchart of this study.

Each flight lasted 557 and 575 seconds, capturing 105 photos in the north-south direction and 111 photos in the east-west direction. The flight speed was maintained between 1.47 and 1.61 m/s, significantly lower than the 15 m/s maximum flight speed of the DJI Mavic 3 Enterprise (M3E) (Lee et al., 2018; DJI Mavic 3 Enterprise, 2024), to ensure precision photography. The study area covered 7,578.8 m², with an input Ground Sampling Distance (GSD) of 2.00 cm/pixel and an output GSD of 2.09 cm/pixel. Detailed flight information is provided in Table 1.

Table 1 Comparison of UAV flight data and camera settings by flight path

Parameter | N-S Direction | E-W Direction
Start time | 15:33:31 | 15:45:17
End time | 15:42:48 | 15:54:52
Flight time | 00:09:17 | 00:09:35
Photo count | 105 | 111
Flight height (m) | 70 | 70
Number of strips | 6 | 12
F-stop | 3.2–4.5 | 2.8–4.0
Overlap (%) | 90 | 90
Sidelap (%) | 90 | 90
ISO | 100–110 | 100–110
Shutter speed | 1/400 | 1/400
Image quality (Mean±SD) | 0.855±0.023 | 0.863±0.019

ISO: International Organization for Standardization, SD: Standard Deviation.



A total of 23 grid-shaped GCPs, each measuring 50 × 50 cm, were evenly distributed within the study area. However, due to damage to GCP No. 3 during the experiment, only 22 GCPs were ultimately used. Image acquisition was carried out using the DJI M3E, as shown in Fig. 2(a), by uploading the pre-designed flight path to its controller.

Fig. 2. UAV and used equipment in this study.

The DJI M3E is a high-performance compact UAV designed for professional applications such as surveying and mapping. Equipped with a 20 MP high-resolution camera and a Real-Time Kinematic (RTK) module, it is capable of collecting spatial data with centimeter-level precision. Notably, it offers a long flight time of up to 45 minutes and a maximum transmission range of 15 km, ensuring high operational efficiency over large areas. The main specifications of the M3E are shown in Table 2.

Table 2 Specification of Mavic 3 Enterprise (M3E)

General Specification | Value | Camera Specification | Value
Max speed (m/s) | 15 | Focal length (mm) | 12.29 (24/35 mm)
Max flight time (min) | 45 | F-stop | f/2.8–f/11
GSD (cm) | 2.86/100 × FH(m) | ISO | 100–6400
Image size | 5280 × 3956 (20 MP) | Shutter speed | 1/2000–8 s
Field of view (°) | 84 | Shutter type | Mechanical
Sensor | 4/3 CMOS | CCD size (mm) | 17.73 × 13.29
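As a quick arithmetic check of the vendor GSD rule quoted in Table 2, the short sketch below (a hypothetical helper of our own, not vendor code) reproduces the input GSD from the flight height:

    # Table 2 rule for the M3E: GSD(cm) = 2.86 / 100 x flight height (m)
    def m3e_gsd_cm(flight_height_m: float) -> float:
        """Approximate per-pixel ground sampling distance in cm."""
        return 2.86 / 100 * flight_height_m

    print(m3e_gsd_cm(70))  # 2.002 -> matches the 2.00 cm/pixel input GSD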


The weather conditions during image acquisition were clear and sunny, with an average temperature of 17.3°C, a maximum of 23.1°C, a minimum of 13.0°C, and an average cloud cover of 1.3, with no precipitation (Korea Meteorological Administration, 2024).

GCP coordinate surveying was performed using the iM-55 total station and GRX2 Global Navigation Satellite System (GNSS) receiver from Sokkia, as shown in Figs. 2(b, c). The main specifications of the GRX2 GNSS receiver are presented in Table 3. A prism was installed at the center of each GCP, and the distance and angles from the Base GCP were measured using a total station. To ensure the reliability of the measured data, total station surveys for GCPs No. 1 to No. 20 were conducted twice, and the average values of the measurements were utilized. In addition, the coordinates of 14 GCPs, including the Base GCP and GCP No. 0, were measured using a GNSS receiver. The Base GCP coordinates were designated as the reference for calculating GCP coordinates, and the remaining 21 GCP coordinates were determined using the distance and angle data obtained from the total station.
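The conversion from the total-station distance and angle observations to grid coordinates can be sketched as follows. This is a simplified illustration of the standard polar-to-rectangular computation under our own assumptions (azimuths referenced to grid north; instrument/target heights and scale corrections ignored), not the exact reduction used in the survey:

    import math

    def polar_to_grid(e_base, n_base, distance_m, azimuth_deg):
        """Convert one total-station observation (horizontal distance and
        azimuth from the Base GCP) into grid coordinates. Simplified:
        ignores instrument/target heights and grid convergence."""
        az = math.radians(azimuth_deg)
        return (e_base + distance_m * math.sin(az),   # Easting
                n_base + distance_m * math.cos(az))   # Northing

    # e.g., converting one (hypothetical) averaged observation from the Base GCP
    e, n = polar_to_grid(208213.370, 280231.334, 50.0, 225.0)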

Table 3 Specification of GRX2

Specification | Value
Tracked signals | GPS, GLONASS, SBAS
Number of channels | 226
Positioning accuracy (L1+L2), Static | Horizontal: 3 mm + 0.5 ppm, Vertical: 5 mm + 0.5 ppm
Positioning accuracy (L1+L2), Fast Static | Horizontal: 3 mm + 0.5 ppm, Vertical: 5 mm + 0.5 ppm
Positioning accuracy (L1+L2), Kinematic | Horizontal: 10 mm + 1 ppm, Vertical: 15 mm + 1 ppm
Positioning accuracy (L1+L2), RTK | Horizontal: 10 mm + 1 ppm, Vertical: 15 mm + 1 ppm
Positioning accuracy, DGPS | < 0.5 m
Update/output rate | 1 Hz, 5 Hz, 10 Hz, 20 Hz (10 Hz RTK standard)
Physical specifications | Size: Dia. 184 mm × H 95 mm, Weight: 1.0 kg (2.20 lb)


The GCP coordinates calculated using the data measured with the total station and GNSS receiver are presented in Table 4. The iM-55 is a high-precision surveying instrument with an angular measurement accuracy of 5 seconds and a distance measurement accuracy of (1.5 + 2 ppm × measurement distance) mm. As shown in Table 5, the survey results indicated that the relative positions of the GCPs had standard deviations of X: 2 mm, Y: 4 mm, and Z: 2 mm. The GRX2 is a 226-channel receiver capable of receiving GPS, GLONASS, and SBAS signals. The GCP coordinates surveyed using the GRX2 showed absolute position standard deviations of X: 33 mm, Y: 66 mm, and Z: 22 mm relative to the baseline.

Table 4 Measurement counts by total station and GNSS receiver, GCP coordinates, and standard deviations (Unit: m)

No. | Total Station | GNSS | Easting (X) | Northing (Y) | Height (Z) | SD (X) | SD (Y) | SD (Z)
Base | 1 | 1 | 208,213.370 | 280,231.334 | 110.427 | – | – | –
0 | 1 | 1 | 208,171.243 | 280,161.693 | 110.419 | – | – | –
1 | 2 | – | 208,186.660 | 280,131.747 | 110.345 | 0.004 | 0.003 | –
2 | 2 | 1 | 208,167.470 | 280,143.805 | 110.382 | 0.023 | 0.019 | 0.008
4 | 2 | – | 208,134.579 | 280,164.153 | 110.383 | 0.004 | 0.002 | –
5 | 2 | – | 208,153.061 | 280,193.443 | 110.360 | 0.001 | – | –
6 | 2 | 1 | 208,172.131 | 280,181.181 | 110.507 | 0.013 | 0.013 | 0.013
7 | 2 | 1 | 208,188.867 | 280,171.596 | 110.454 | 0.006 | 0.005 | 0.009
8 | 2 | – | 208,205.180 | 280,160.774 | 110.332 | 0.003 | 0.012 | 0.003
9 | 2 | – | 208,216.832 | 280,180.520 | 110.347 | 0.004 | 0.002 | –
10 | 2 | 1 | 208,209.429 | 280,182.120 | 110.401 | 0.013 | 0.013 | 0.016
11 | 2 | 1 | 208,200.048 | 280,189.952 | 110.462 | 0.014 | 0.012 | 0.040
12 | 2 | 1 | 208,184.312 | 280,202.069 | 110.527 | 0.012 | 0.006 | 0.013
13 | 2 | – | 208,165.218 | 280,212.792 | 110.363 | 0.004 | – | –
14 | 2 | – | 208,172.761 | 280,225.059 | 110.367 | 0.003 | 0.007 | 0.002
15 | 2 | 1 | 208,193.366 | 280,216.145 | 110.512 | 0.015 | 0.006 | 0.009
16 | 2 | 1 | 208,210.515 | 280,207.497 | 110.464 | 0.004 | 0.013 | 0.035
17 | 2 | 1 | 208,228.174 | 280,197.557 | 110.332 | 0.004 | 0.024 | 0.034
18 | 2 | – | 208,242.606 | 280,221.518 | 110.400 | 0.006 | 0.000 | –
19 | 2 | 1 | 208,224.028 | 280,230.563 | 110.416 | 0.008 | 0.004 | 0.022
20 | 2 | 1 | 208,208.466 | 280,240.887 | 110.418 | 0.022 | 0.004 | 0.028
21 | 2 | 1 | 208,191.048 | 280,253.412 | 110.390 | 0.023 | 0.010 | 0.031


Table 5 Standard deviation of GCP coordinates measured using total station and GNSS receiver

Instrument | Count | SD (X) | SD (Y) | SD (Z)
Total Station (GCP) | 42 | 0.002 | 0.004 | 0.002
GNSS Receiver (Baseline) | 14 | 0.033 | 0.066 | 0.022


The captured UAV images were processed in the photogrammetry software Metashape, undergoing tie point generation and alignment procedures. Image quality was assessed using Metashape’s image quality estimation function. The quality values of the images used in this study, which are calculated based on the sharpness of the images, ranged between 0.81 and 0.92. Generally, images with quality values below 0.5 are recommended for exclusion from processing (Agisoft LLC, 2025). All 216 images used in this study had quality values of 0.8 or higher, indicating high quality, with no exclusions required.
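A minimal sketch of this quality-screening step is shown below. It filters a plain mapping of image names to quality scores exported from Metashape; the function name and data layout are illustrative, not the Metashape API:

    def filter_by_quality(quality_by_image, threshold=0.5):
        """Keep images at or above the recommended 0.5 quality cutoff."""
        return {name: q for name, q in quality_by_image.items() if q >= threshold}

    # All 216 images in this study scored 0.81-0.92, so such a filter
    # would remove nothing; the second (hypothetical) image is dropped here.
    kept = filter_by_quality({"IMG_0001.JPG": 0.86, "IMG_0002.JPG": 0.41})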

During the image alignment process, the interior and exterior orientation parameters were calculated, and the relative positional relationships between images were optimized through bundle adjustment. Errors were then analyzed based on the interior orientation parameter settings, the number of GCPs, and the accuracy of GCP coordinates. This analysis aimed to quantitatively assess the error factors that may arise during the production of UAV-based orthomosaics and to provide foundational data for enhancing the accuracy of the results.

2.2. Theoretical Background

In UAV photogrammetry, lens distortion correction algorithms exhibit similar characteristics across different software packages, but they are not entirely identical. All models in Metashape assume a central projection camera, and nonlinear distortions are modeled based on the Brown–Conrady lens distortion model.

The local camera coordinate system is defined with the camera’s projection center as the origin. In this system, the Z-axis points in the direction of the camera’s line of sight, the X-axis points horizontally to the right, and the Y-axis points vertically downward. Conversely, the image coordinate system uses the pixel center at the top-left corner of the image frame as its origin, with coordinates expressed in pixel units. These coordinate system definitions are essential for accurately modeling the camera’s interior orientation parameters and distortion corrections.

In this study, after generating tie points, the interior orientation parameters were calculated using the Agisoft Metashape software, and the interior orientation simulation was performed using a custom Python script. Interior orientation refers to the process of defining and calibrating the internal parameters of the optical system (camera), which is a critical step for establishing the precise relationship between digital images and real-world space. The interior orientation process involves determining the internal parameters of the optical system and correcting distortions. These parameters include the focal length (f), principal point offset (cx, cy), radial distortion coefficients (k1, k2, k3, k4), tangential distortion coefficients (p1, p2), and skew coefficients (affinity and non-orthogonality: b1, b2).

The Brown–Conrady model allows for the correction of both radial and tangential distortions (Kang et al., 2008). Radial distortion refers to the distortion that occurs radially outward from the lens center, while tangential distortion refers to distortion occurring in the tangential direction, perpendicular to the radial lines from the lens center (Kang et al., 2009). Generally, tangential distortion is significantly smaller compared to radial distortion (Beauchemin and Bajcsy, 2001). Accurate camera calibration is essential for transforming image coordinates into physical 3D spatial coordinates. Interior orientation parameters provide the foundational data required for exterior orientation, facilitating precise alignment and positional calculations between images.

Eq. (1) is a formula used to calculate the radius (r) from the principal point for each pixel. The radius (r) serves as a variable in the radial and tangential distortion correction equations, Eqs. (2) and (3).

$r = \sqrt{x^{2} + y^{2}}$ (1)

$x' = x\left(1 + k_{1}r^{2} + k_{2}r^{4} + k_{3}r^{6} + k_{4}r^{8}\right) + p_{1}\left(r^{2} + 2x^{2}\right) + 2p_{2}xy$ (2)

$y' = y\left(1 + k_{1}r^{2} + k_{2}r^{4} + k_{3}r^{6} + k_{4}r^{8}\right) + p_{2}\left(r^{2} + 2y^{2}\right) + 2p_{1}xy$ (3)

where $r^{2}$, $r^{4}$, $r^{6}$, and $r^{8}$ correspond to the squared, fourth, sixth, and eighth powers of the radius, respectively, and represent the distortion correction terms for each order. Typically, only $k_{1}$ and $k_{2}$ are used for radial distortion correction; however, depending on the degree of distortion, $k_{3}$ and $k_{4}$ may also be applied. The distortion-corrected coordinates ($x'$, $y'$) are transformed from the camera image plane to the final projected point coordinates in the image coordinate system (in pixels: $u$, $v$). The conversion formulas, shown in Eqs. (4) and (5), account for distortions caused by irregular lens rotation ($b_{1}$, $b_{2}$) or the principal point offset ($c_{x}$, $c_{y}$).

$u = 0.5\,w + c_{x} + x'f + x'b_{1} + y'b_{2}$ (4)

$v = 0.5\,h + c_{y} + y'f$ (5)
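The forward projection defined by Eqs. (1)–(5) can be written compactly in Python. The sketch below is our own minimal implementation, not the authors’ simulation script; parameter names follow Tables 6 and 7, and the arithmetic works equally on scalars or NumPy arrays:

    def project_point(x, y, f, cx, cy, k, p, b, w, h):
        """Map a camera-plane point (x = X/Z, y = Y/Z) to pixel
        coordinates (u, v). k = (k1, k2, k3, k4), p = (p1, p2),
        b = (b1, b2); w, h are the image size in pixels."""
        k1, k2, k3, k4 = k
        p1, p2 = p
        b1, b2 = b
        r2 = x * x + y * y                                         # r^2, Eq. (1)
        radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3 + k4 * r2**4
        xp = x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y   # Eq. (2)
        yp = y * radial + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y   # Eq. (3)
        u = w * 0.5 + cx + xp * f + xp * b1 + yp * b2              # Eq. (4)
        v = h * 0.5 + cy + yp * f                                  # Eq. (5)
        return u, v

For example, the All-mode calibration of Table 7 corresponds to f=3702.09875, cx=27.0052, cy=-3.63647, k=(-0.0751382, -0.156228, 0.259075, -0.156688), p=(0.0000219464, -0.0000554343), and b=(0.0640683, -0.178395), with w=5280 and h=3956.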

Descriptions of the variables used in the equations are provided in Table 6. Metashape utilizes the precise coordinates of GCPs to transform camera position and rotation information into the ground coordinate system. The positional error is calculated from the differences between the actual coordinates of the GCPs and CPs ($X_{measured,i}$, $Y_{measured,i}$, $Z_{measured,i}$) and the predicted coordinates ($X_{predicted,i}$, $Y_{predicted,i}$, $Z_{predicted,i}$). Eq. (6) represents the Root Mean Square Error (RMSE) formula used to calculate the positional error for GCPs and CPs, where $n$ denotes the total number of GCPs and CPs.

Table 6 Description of the camera calibration parameters

Parameter | Description
X, Y, Z | Point coordinates in the local camera coordinate system
X0, Y0, Z0 | Camera center coordinates
x, y | X/Z, Y/Z
x′, y′ | Lens-distortion-corrected coordinates on the camera plane (scaled by the focal length in Eqs. 4 and 5)
w, h | Image width and height (in pixels)
rij | Rotation matrix elements (ω, ø, κ)


$\mathrm{RMSE}_{total} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left[(X_{measured,i} - X_{predicted,i})^{2} + (Y_{measured,i} - Y_{predicted,i})^{2} + (Z_{measured,i} - Z_{predicted,i})^{2}\right]}$ (6)
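A direct transcription of Eq. (6), assuming the measured and predicted coordinates are held as (n, 3) arrays; this is a sketch of the metric itself, not Metashape’s internal accuracy report:

    import numpy as np

    def rmse_total(measured, predicted):
        """Eq. (6): combined 3-D RMSE over n GCPs/CPs, both inputs
        given as (n, 3) arrays of X, Y, Z coordinates in metres."""
        d = np.asarray(measured, float) - np.asarray(predicted, float)
        return float(np.sqrt(np.mean(np.sum(d * d, axis=1))))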

Metashape fundamentally calculates the camera’s exterior orientation parameters based on the collinearity equation. The collinearity condition states that a point on the ground, its corresponding point on the image, and the camera’s projection center must all lie in the same straight line. Eqs. (7) and (8) define the relationship between image coordinates and ground coordinates based on the interior and exterior orientation parameters (Kim et al., 2004).

$x' = f\,\dfrac{r_{11}(X - X_{0}) + r_{12}(Y - Y_{0}) + r_{13}(Z - Z_{0})}{r_{31}(X - X_{0}) + r_{32}(Y - Y_{0}) + r_{33}(Z - Z_{0})} + X_{c}$ (7)

$y' = f\,\dfrac{r_{21}(X - X_{0}) + r_{22}(Y - Y_{0}) + r_{23}(Z - Z_{0})}{r_{31}(X - X_{0}) + r_{32}(Y - Y_{0}) + r_{33}(Z - Z_{0})} + Y_{c}$ (8)

Bundle adjustment is an optimization algorithm used in photogrammetry to simultaneously adjust camera parameters (both interior and exterior) and GCP coordinates to achieve optimal results (Moore et al., 2009; Triggs et al., 2000). This algorithm minimizes errors by utilizing multiple images. Optimization is performed using the nonlinear least squares method, specifically the Levenberg–Marquardt algorithm, based on image feature matching data that includes ground control points (Levenberg, 1944). The bundle adjustment cost function is presented in Eq. (9), where $measured_{ij}$ represents the observed image coordinates and $projected_{ij}$ represents the image coordinates estimated by the model.

$\sum_{i}\sum_{j}\left\| measured_{ij} - projected_{ij} \right\|^{2}$ (9)
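The cost in Eq. (9) is simply a sum of squared reprojection residuals; a minimal sketch (reusing NumPy from above) is given below. Metashape’s own solver is internal, but the same cost can be minimized with an off-the-shelf Levenberg–Marquardt routine such as scipy.optimize.least_squares(method="lm"):

    def reprojection_cost(measured_uv, projected_uv):
        """Eq. (9): total squared reprojection error over all camera i /
        point j observations, both given as (m, 2) pixel arrays."""
        d = np.asarray(measured_uv, float) - np.asarray(projected_uv, float)
        return float(np.sum(d * d))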

3. Results and Discussion

3.1. Error Analysis of Interior Orientation Parameter Settings

The amount of lens distortion varies depending on the field of view. In particular, when wide-angle lenses are used for UAV aerial imaging, lens distortion significantly affects the accuracy of the image, making the proper configuration of interior orientation parameters essential (Alemán-Flores et al., 2013). In this study, experiments were conducted in two modes, All mode and Part mode, to analyze the errors due to different interior orientation parameter settings.

The All mode optimizes all interior orientation parameters to achieve the highest possible accuracy. In contrast, the Part mode selectively optimizes only certain parameters, excluding k4, b1, and b2. This mode corresponds to the default setting in UAV photogrammetry software such as Agisoft Metashape and Pix4D Mapper, aiming to enhance data processing efficiency while focusing only on essential parameters for modeling. The interior orientation parameter values for each mode are presented in Table 7.

Table 7 Comparison of camera calibration parameters and Lat Lon/XY errors for All and Part modes

Parameter | All Value | All Error | Part Value | Part Error
Focal length f | 3702.09875 | 0.062302 | 3705.2321 | 0.06365
Principal point offset cx | 27.0052 | 0.01989 | 26.876 | 0.017711
Principal point offset cy | -3.63647 | 0.020407 | -3.71158 | 0.017975
Skew coefficient b1 | 0.0640683 | 0.00229 | – | –
Skew coefficient b2 | -0.178395 | 0.0023 | – | –
Radial distortion k1 | -0.0751382 | 0.0000267 | -0.0932475 | 0.0000165
Radial distortion k2 | -0.156228 | 0.000125 | -0.0522055 | 0.0000538
Radial distortion k3 | 0.259075 | 0.00023 | 0.0368672 | 0.0000559
Radial distortion k4 | -0.156688 | 0.000142 | – | –
Tangential distortion p1 | 0.0000219464 | 0.000000744 | 8.53919e-06 | 0.000000656
Tangential distortion p2 | -0.0000554343 | 0.000000712 | -0.0000562457 | 0.000000622
Lat Lon error (m) | 0.051 | – | 0.078 | –
XY error (m) | 0.035 | – | 0.051 | –


Previous studies have analyzed RMSE values under different interior orientation parameter settings (Nho et al., 2020). Comparing Case 1 (Exterior Orientation Parameter [EOP], f, cx, cy, k1, k2) and Case 2 (EOP, f, cx, cy) showed that Case 2 had a lower RMSE by 0.077 m on the x-axis and 0.063 m on the y-axis. These results suggest that excluding radial distortion coefficients may improve accuracy. However, this effect is primarily observed in cameras with minimal radial distortion or when in-camera distortion correction is applied. Furthermore, the difference was not statistically significant.

Building on this, the present study conducted a more detailed comparison of errors according to interior orientation parameter settings. In All mode, the Lat Lon error was 0.051 m and the XY error was 0.035 m, showing a low level of error. In contrast, the Part mode showed higher errors, with a Lat Lon error of 0.078 m and an XY error of 0.051 m. These results indicate that the range of interior orientation parameter settings affects the accuracy of the data.

To analyze the differences between All mode and Part mode in more detail, a custom Lens Distortion Simulation Python script was used to calculate the radial distortion, tangential distortion, and total lens distortion for each mode. These values were calculated based on the interior orientation parameters. The calculated results are presented in Table 8, while the differences in lens distortion and final coordinates (u, v) between the two modes are shown in Table 9.
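Such a comparison can be scripted by evaluating both calibrations on a regular grid of image points and summarizing the u/v differences, in the spirit of Table 9. The sketch below reuses project_point from Section 2.2 and is our own reconstruction under a simplifying assumption (points normalized by the All-mode focal length), not the authors’ exact script:

    def compare_modes(params_all, params_part, w=5280, h=3956, n=60):
        """Summarize pixel-coordinate differences between two calibrations.
        params_*: dicts with keys f, cx, cy, k, p, b (Table 7 values)."""
        f = params_all["f"]
        xs = np.linspace(-0.5 * w, 0.5 * w, n) / f   # approx. normalized x
        ys = np.linspace(-0.5 * h, 0.5 * h, n) / f   # approx. normalized y
        du, dv = [], []
        for y in ys:
            for x in xs:
                u1, v1 = project_point(x, y, w=w, h=h, **params_all)
                u2, v2 = project_point(x, y, w=w, h=h, **params_part)
                du.append(u1 - u2)
                dv.append(v1 - v2)
        return (np.mean(du), np.std(du)), (np.mean(dv), np.std(dv))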

Table 8 Comparison of lens, radial, and tangential distortion between All mode and Part mode

Mode | Statistic | Xradial | Yradial | Xtangential | Ytangential | u | v | Lens distortion | Radial distortion | Tangential distortion
All | Min | 0.000 | 0.000 | -0.026 | -0.342 | 172.335 | 53.934 | 0.000 | 0.000 | 0.000
All | Max | 239.558 | 179.487 | 0.304 | 0.000 | 5161.923 | 3894.441 | 299.787 | 299.339 | 0.457
All | Average | 50.633 | 33.591 | 0.050 | -0.096 | 2667.055 | 1974.268 | 64.064 | 64.064 | 0.115
All | Standard deviation | 53.289 | 34.878 | 0.066 | 0.066 | 1475.948 | 1113.929 | 60.367 | 60.367 | 0.084
Part | Min | 0.000 | 0.000 | -0.101 | -0.308 | 174.684 | 55.373 | 0.000 | 0.000 | 0.000
Part | Max | 233.251 | 174.761 | 0.216 | 0.000 | 5159.165 | 3892.848 | 291.815 | 291.457 | 0.376
Part | Average | 51.708 | 34.400 | 0.020 | -0.097 | 2666.896 | 1974.191 | 65.519 | 65.518 | 0.108
Part | Standard deviation | 53.734 | 35.127 | 0.056 | 0.064 | 1474.696 | 1113.006 | 60.710 | 60.710 | 0.073


Table 9 Differences in lens, radial, and tangential distortion between All mode and Part mode

Statistic | Xradial | Yradial | Xtangential | Ytangential | u | v | Lens distortion | Radial distortion | Tangential distortion
Min | –2.704 | –2.025 | 0.000 | –0.034 | –6.131 | –4.685 | –2.958 | –2.875 | –0.061
Max | 6.307 | 4.726 | 0.092 | 0.042 | 6.574 | 4.843 | 7.980 | 7.881 | 0.081
Average | –1.075 | –0.808 | 0.031 | 0.001 | 0.160 | 0.076 | –1.455 | –1.455 | 0.008
Standard deviation | 0.769 | 0.574 | 0.024 | 0.013 | 1.347 | 0.991 | 0.783 | 0.782 | 0.026


The equations for calculating radial and tangential distortion in the X and Y directions are presented in Eqs. (2) and (3). The average values of Xradial and Yradial were 50.633 and 33.591, respectively, in All mode, while in Part mode they were 51.708 and 34.400, slightly higher than in All mode.

The standard deviations were ±53.289, ±34.878 in All mode and ±53.734, ±35.127 in Part mode, indicating slightly greater variability in Part mode. This suggests that lens distortion correction in the Part mode may be somewhat less consistent.

These results indicate that setting the radial distortion coefficient affects both the lens distortion correction and its variability. Both modes exhibited an increasing radial distortion trend with increasing distance from the center, suggesting that the configuration of the interior orientation parameters has little effect on the overall radial distortion pattern.

In the All mode, the mean values of Xtangential and Ytangential were 0.050 and –0.096, respectively, while in the Part mode, they were 0.020 and –0.097. In both modes, tangential distortion was significantly smaller than radial distortion.

The skew coefficient is a variable that adjusts the correlation of the slopes between coordinates and is used to calculate the image coordinate u. It was therefore expected that the skew coefficient would have a significant effect on the image coordinates. However, the results of this study indicate that its effect is minimal. The mean values and standard deviations of the image coordinates (u, v) in the All mode were slightly higher than in the Part mode. This difference is neither statistically nor practically significant, and omitting the skew coefficients is judged to have no substantial impact on the results when the camera’s tilt is not large. As shown in Table 9, the difference between the All mode and the Part mode was 0.160±1.347 pixels for the u coordinate and 0.076±0.991 pixels for the v coordinate, both of which were at a very small level of less than 2 pixels. This suggests that the skew coefficient has a very limited effect on the calculation of the image coordinates.

Lens distortion includes both radial and tangential distortion, and the average lens distortion values for the All mode and the Part mode were 64.064 and 65.519, respectively, indicating similar levels. However, in terms of maximum values, the All mode showed 299.787 pixels, while the Part mode showed 291.815 pixels, a difference of 7.972 pixels. Converted using the output GSD of 2.09 cm/pixel, the maximum discrepancy in Part mode corresponds to approximately 16.66 cm on the ground. However, this error only occurs in a few pixels at the four corners of the image, and most pixels have an error of less than 3 pixels.

The distribution of lens distortion and the difference between the two modes in the All mode and the Part mode are shown in Fig. 3. The central part of the difference graph shows values close to 0, indicating that there is almost no difference in lens distortion between the two modes in the central region. In contrast, a significant difference was observed in the blue region at the edges. Assuming that the lens distortion correction in the All mode is accurate, it is likely that the Part mode failed to correct the lens distortion at the edges, resulting in errors of more than 7 pixels.

Fig. 3. Lens distortion analysis: All mode, Part mode, and their difference (a–b).

3.2. Error Analysis Based on the Number of GCPs

In this study, to analyze the effect of the number of GCPs on the accuracy of orthomosaic generation, the experiment was conducted by gradually reducing the number of GCPs from 22 to 4. The final four retained GCPs were placed at the outer corners of the playground. This arrangement was designed based on a previous study, which suggested that placing 3 to 4 GCPs outside the target area and adding one at the center to form a centric polygonal network configuration is an effective strategy for improving accuracy (Kim et al., 2018). Additionally, this arrangement was made with the intention of maintaining the accuracy of the orthomosaic’s outer region while using the minimum number of GCPs. The final four retained GCPs are Nos. 1, 4, 18, and 21, as shown in Fig. 4.

Fig. 4. Location of GCPs.

According to previous studies, as the number of GCPs increases, positional accuracy improves (Shylesh et al., 2023). In particular, the number of GCPs has been reported to have a greater impact on vertical positional accuracy than on planar positional accuracy (Yun and Sung, 2018).

In this study, the error analysis was performed by gradually reducing the number of GCPs and converting the reduced GCPs into CPs. The analysis of positional and pixel errors according to the number of GCPs and CPs was performed in both the All and Part modes, and the results are shown in Fig. 5. In Fig. 5(a), an increasing trend in error was observed in both All mode and Part mode as the number of GCPs decreased. In All mode, the positional error of GCPs increased gradually, whereas Part mode exhibited slight variations. However, regardless of the number of GCPs, the positional error in All mode remained consistently lower than in Part mode, with an average difference of approximately 0.015 m.

Fig. 5. Comparison of GCP and CP positional and pixel errors between All mode and Part mode (x-axis: number of GCPs).

In Fig. 5(b), Part mode exhibits a clear pattern of rapidly increasing pixel error as the number of GCPs decreases. In contrast, in All mode, the pixel error remains relatively constant, with a more gradual increase in error. This indicates that Part mode, which has limited interior orientation parameters, is more sensitive to changes in the number of GCPs. The pixel error difference between All mode and Part mode averaged approximately 2.746 pixels.

In Fig. 5(c), the positional error of CPs shows significant variability in Part mode. While All mode also exhibited an increasing trend in error, the magnitude of the increase was notably smaller compared to Part mode. The CP positional error between All mode and Part mode ranged from 0.13 to 0.30 m, with an average difference of 0.22 m.

In Fig. 5(d), Part mode shows a tendency for CP pixel error to increase sharply as the number of GCPs decreases. In particular, the CP pixel error continued to increase significantly until the number of GCPs decreased to 15. However, when the number of GCPs decreased to 14 or fewer, a decreasing trend in error was observed. In contrast, in All mode, the CP pixel error remained relatively constant, showing a stable pattern that was not highly sensitive to changes in the number of GCPs. The difference in CP pixel error between All mode and Part mode averaged approximately 2.330 pixels.

In all the graphs, as the number of GCPs decreased, the increase in error in Part mode became more pronounced. This indicates that a reduction in the number of GCPs has a negative impact on orthomosaic accuracy. However, in All mode, the error either increased gradually or remained consistent, showing relatively stable accuracy. This suggests that All mode is less sensitive to changes in the number of GCPs compared to Part mode and can maintain stable accuracy even with fewer GCPs.

3.3. Error Analysis Based on GCP Positional Accuracy

Previous studies have shown that the quality of the GCP affects the positional accuracy of the orthomosaic (Lee et al., 2020). Furthermore, other studies have reported that RMSE values in the X, Y, and Z axes are lower when using high-accuracy GCPs compared to low-accuracy GCPs (Shylesh et al., 2023). These findings suggest that the accuracy of GCPs plays a critical role in determining the positional accuracy of UAV images.

Based on these research findings, this study analyzed the impact of GCP positional accuracy on UAV-based orthomosaic generation. To this end, the positional errors were compared between the precise coordinates of GCPs and the inaccurate coordinates generated by adding random offsets. The random offsets were set at seven levels: 0.03 m, 0.05 m, 0.07 m, 0.10 m, 0.20 m, 0.50 m, and 1.00 m. These values were randomly generated within a range of –n to +n (where n represents the offset level) using Kutools in Excel.
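The same perturbation can be reproduced outside Excel; the sketch below draws uniform offsets in [–n, +n] with NumPy as our own stand-in for the Kutools step (the seed is arbitrary, chosen only for reproducibility):

    rng = np.random.default_rng(2024)          # arbitrary fixed seed

    def add_offsets(gcp_xyz, level_m):
        """Add an independent uniform offset in [-level_m, +level_m]
        to every GCP coordinate component."""
        gcp_xyz = np.asarray(gcp_xyz, float)
        return gcp_xyz + rng.uniform(-level_m, level_m, size=gcp_xyz.shape)

    levels = [0.03, 0.05, 0.07, 0.10, 0.20, 0.50, 1.00]    # offset levels (m)
    # noisy = {n: add_offsets(gcp_xyz, n) for n in levels}  # one set per level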

The analysis results showed that as the range of random offsets added to the GCP coordinates increased, the positional error of the GCPs showed a linear increase. This trend is clearly shown in the graph in Fig. 6. In particular, the R2 values for the All mode and the Part mode were 0.9995 and 0.9988, respectively, demonstrating a high degree of linearity close to 0.999.

Fig. 6. Positional errors of GCP coordinates with random noise input.

The regression equation for the All mode was calculated as y = 1.0653x, and for the Part mode, it was y = 1.072x. The slope difference between the two modes was 0.0067, which is very small, indicating almost no difference in GCP errors and showing similar trends. Additionally, it was observed that as the positional accuracy of the GCPs increased, the total error decreased, highlighting that precise management of GCP coordinates is essential for improving orthomosaic accuracy.
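The reported fits are zero-intercept regressions of GCP error against offset level. A minimal sketch is given below; note that the R² convention for through-origin fits differs between tools, so the spreadsheet values may follow a slightly different definition:

    def fit_through_origin(offset_m, error_m):
        """Least-squares slope a of error = a * offset (no intercept),
        plus an ordinary R^2 computed against the observation mean."""
        x = np.asarray(offset_m, float)
        y = np.asarray(error_m, float)
        a = float(x @ y) / float(x @ x)
        ss_res = float(np.sum((y - a * x) ** 2))
        ss_tot = float(np.sum((y - y.mean()) ** 2))
        return a, 1.0 - ss_res / ss_tot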

The directional differences in GCP positional errors between the All mode and the Part mode are shown in Table 10. The error differences in both the East and North directions were all below 0.003 m and remained very small. In the North direction, the error was particularly close to 0, indicating that the differences between the two modes were minimal. In the altitude direction, a maximum difference of 0.020 m was observed, but this difference is considered negligible in UAV-based data analysis.

Table 10 Difference of GCP positional errors for All and Part mode

Offset level (m) | East err | North err | Alt err | Error (m) | Error (pix)
Original | 0.003 | 0.001 | 0.020 | 0.016 | 2.616
0.03 | 0.001 | – | 0.018 | 0.014 | 2.616
0.05 | 0.002 | – | 0.002 | 0.001 | 2.616
0.07 | 0.002 | 0.002 | 0.016 | 0.012 | 2.616
0.10 | –0.001 | – | –0.003 | –0.003 | 2.616
0.20 | 0.001 | – | 0.015 | 0.008 | 2.615
0.50 | 0.002 | 0.001 | 0.008 | 0.006 | 2.611
1.00 | 0.002 | 0.001 | 0.001 | 0.002 | 2.616


The difference in total error was very low, with a maximum value of 0.016 m, while the pixel error difference remained relatively constant at approximately 2.616 pixels in most cases. In all cases, the pixel error remained within 3 pixels, demonstrating stable accuracy.

In particular, the interior orientation parameter settings were found to have minimal impact on GCP positional errors, suggesting that in situations requiring rapid processing, omitting certain interior orientation parameters can still maintain a sufficient level of accuracy. This provides a significant foundation for generating reliable orthophotos even when processing time is reduced. Therefore, ensuring the accuracy of GCP coordinates is a crucial factor in UAV-based spatial data analysis and applications, and it is expected to enable more precise data-driven decision-making and applications.

4. Conclusions

In this study, modes were distinguished based on the interior orientation parameter settings to analyze lens distortion, and the impact of the number of GCPs and their positional accuracy on the accuracy of UAV orthophotos was evaluated.

The results showed that the All mode exhibited a low error, with Lat-Lon and XY errors of 0.051 m and 0.035 m, respectively, whereas the Part mode showed relatively higher errors of 0.078 m for Lat-Lon and 0.051 m for XY. This suggests that the settings of interior orientation parameters can influence the positional accuracy of orthoimages. The mean and standard deviation of Xradial and Yradial were higher in the Part mode than in the All mode, indicating greater variability in the Part mode. Additionally, the difference in Xtangential and Ytangential between the two modes was minimal, within 0.031. In both modes, the radial distortion was significantly greater than the tangential distortion. The difference between the All and Part modes in image coordinates was 0.160±1.347 pixels in the u coordinate and 0.076±0.991 pixels in the v coordinate, both showing a small discrepancy of less than two pixels.

To analyze the performance differences between the two modes, experiments were conducted by adjusting the number and positional accuracy of the GCPs. As the number of GCPs decreased, both the positional and pixel errors of the GCPs increased. The pixel error difference of GCPs between the All and Part modes averaged 2.746 pixels, while the pixel error difference of CPs averaged 2.330 pixels. This was similar to the difference in image coordinates previously calculated between the two modes.

As the positional accuracy of the GCP coordinates decreased, the GCP positional errors showed a tendency to increase linearly. The R2 values for the All and Part modes were 0.9995 and 0.9988, respectively, indicating a high degree of linearity, and the difference in slopes between the two modes was very small at 0.0067, resulting in nearly no difference in GCP errors. When comparing the positional errors of the GCPs by direction, except for the case where a random offset at the 0.10 m level was added, the positional errors in the All mode were smaller than those in the Part mode in all other cases. The difference in pixel errors between the two modes averaged 2.615 pixels, which is similar to the previously calculated difference in image coordinates between the two modes.

This study comprehensively analyzed the settings of interior orientation parameters, the number of GCPs, and the positional accuracy of GCPs, systematically evaluating the impact of the interactions between these factors on the quality of UAV-based orthomosaics. When setting interior orientation parameters, it is essential to carefully select between All mode and Part mode based on the number of GCPs, project accuracy requirements, and processing time constraints. The results demonstrated that these appropriate interior orientation parameter settings, along with precise management of GCPs, play a crucial role in improving the positional accuracy of UAV orthomosaics. This approach distinguishes itself from previous studies, which primarily focused on the analysis of individual factors. This research is expected to serve as a valuable reference for future UAV-based surveying and remote sensing applications.

This study was conducted under a specific experimental environment (flight altitude of 70 m, DJI M3E sensor, and a main sports field), which may limit the generalization of the results to various sensors and environments. Future studies could expand experiments by considering factors such as aperture size, scene types (e.g., urban, forest, marine), and Instantaneous Field of View (IFOV). Additionally, since this study was conducted in a relatively flat area, the impact of GCP elevation differences was considered negligible. However, in areas with significant terrain variations, further investigation into the effects of GCP elevation changes on orthomosaic accuracy would be valuable.

Acknowledgements

This work was supported by a Research Grant of Pukyong National University (2023).

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

References

  1. Agisoft LLC, 2025. Agisoft Metashape user manual (Professional edition, version 2.2). Available online: https://www.agisoft.com/pdf/metashape-pro_2_2_en.pdf (accessed on Jan. 13, 2025)
  2. Alemán-Flores, M., Alvarez, L., Gomez, L., and Santana-Cedrés, D., 2013. Wide-Angle lens distortion correction using division models. In: Ruiz-Shulcloper, J., Sanniti di Baja, G., (eds.), Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, Springer, pp. 415-422. https://doi.org/10.1007/978-3-642-41822-8_52
  3. Beauchemin, S. S., and Bajcsy, R., 2001. Modelling and removing radial and tangential distortions in spherical lenses. In: Klette, R., Gimel'farb, G., Huang, T., (eds.), Multi-Image Analysis, Springer, pp. 1-21. https://doi.org/10.1007/3-540-45134-X_1
  4. DJI Mavic 3 Enterprise, 2024. DJI Enterprise. Available online: https://enterprise.dji.com/mavic-3-enterprise/specs (accessed on Dec. 23, 2024)
  5. Harwin, S., and Lucieer, A., 2012. Assessing the accuracy of georeferenced point clouds produced via multi-view stereopsis from unmanned aerial vehicle (UAV) imagery. Remote Sensing, 4(6), 1573-1599. https://doi.org/10.3390/rs4061573
  6. Kang, J. A., Kim, B. G., and Park, J. M., 2008. The research for the wide-angle lens distortion correction by photogrammetry techniques. Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, 26(2), 103-110.
  7. Kang, J. A., Nam, S. K., Kim, T. H., and Oh, Y. S., 2009. The fisheye lens distortion correction of facilities monitoring CCTV. Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, 27(3), 323-330.
  8. Kim, E. M., Sohn, H. G., and Kim, G. H., 2004. Determination of exterior orientation parameters using the projective transformation in close range photogrammetry. Journal of the Korean Society of Civil Engineers D, 24(3d), 463-467.
  9. Kim, J. S., and Hong, I. Y., 2020. Accuracy analysis of photogrammetry based on the layout of ground control points using UAV. Journal of the Korean Cartographic Association, 20(2), 41-55. https://doi.org/10.16879/jkca.2020.20.2.041
  10. Kim, Y. D., Park, B. W., and Lee, H. S., 2018. Accuracy analysis according to GCP layout type and flying height in orthoimage generation using low-cost UAV. Journal of the Korean Society for Geospatial Information Science, 26(3), 31-39. https://doi.org/10.7319/kogsis.2018.26.3.031
  11. Korea Meteorological Administration, 2024. Past weather observation by day. Available online: https://www.weather.go.kr/w/observation/land/past-obs/obs-by-day.do?stn=159&yy=2024&mm=11&obs=1 (accessed on Dec. 27, 2024)
  12. Lee, J. P., 2021. Quality assessment of digital surface model vertical position accuracies by ground control point location. Journal of Cadastre & Land InformatiX, 51(1), 125-136. https://doi.org/10.22640/LXSIRI.2021.51.1.125
  13. Lee, K., Han, Y., and Lee, W. H., 2018. Comparison of orthophotos and 3D models generated by UAV-based oblique images taken in various angles. Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, 36(3), 117-126. https://doi.org/10.7848/ksgpc.2018.36.3.117
  14. Lee, Y. J., Park, H. J., Kim, H. S., and Kim, T. J., 2020. Analysis of geolocation accuracy of precision image processing system developed for CAS-500. Korean Journal of Remote Sensing, 36(5-2), 893-906. https://doi.org/10.7780/kjrs.2020.36.5.2.4
  15. Levenberg, K., 1944. A method for the solution of certain nonlinear problems in least squares. Quarterly of Applied Mathematics, 2, 164-168. https://doi.org/10.1090/qam/10666
  16. Moore, Z. E., Wright, D., Schinstock, D., and Lewis, C., 2009. Comparison of bundle adjustment formulations. In Proceedings of the 2009 ASPRS Annual Conference, Baltimore, MD, USA, Mar. 9-13, pp. 1-9.
  17. National Geographic Information Institute, 2025. Orthogonal imagery. Available online: https://www.ngii.go.kr/kor/content.do?sq=203 (accessed on Jan. 6, 2025)
  18. Nho, H. J., Shin, D. Y., Sohn, H. G., and Kim, S. S., 2020. Fast geocoding of UAV images for disaster site monitoring. Korean Journal of Remote Sensing, 36(5-4), 1221-1229. https://doi.org/10.7780/kjrs.2020.36.5.4.7
  19. Shylesh, D. S., Dharshan, Manikandan, N., Sivasankar, Surendran, D., Jaganathan, R., and Mohan, G., 2023. Influence of quantity, quality, horizontal and vertical distribution of ground control points on the positional accuracy of UAV survey. Applied Geomatics, 15, 897-917. https://doi.org/10.1007/s12518-023-00531-w
  20. Triggs, B., McLauchlan, P. F., Hartley, R. I., and Fitzgibbon, A. W., 2000. Bundle adjustment - A modern synthesis. In: Triggs, B., Zisserman, A., Szeliski, R., (eds.), Vision Algorithms: Theory and Practice (IWVA 1999), Springer, pp. 298-372. https://doi.org/10.1007/3-540-44480-7_21
  21. Yoo, Y. H., Choi, J. W., Choi, S. K., and Jung, S. H., 2016. Quality evaluation of orthoimage and DSM based on fixed-wing UAV corresponding to overlap and GCPs. Journal of Korean Society for Geospatial Information Science, 24(3), 3-9. https://doi.org/10.7319/kogsis.2016.24.3.003
  22. Yun, B. Y., and Sung, S. M., 2018. Location accuracy of unmanned aerial photogrammetry results according to change of number of ground control points. Journal of the Korean Association of Geographic Information Studies, 21(2), 24-33. https://doi.org/10.11108/KAGIS.2018.21.2.024
  23. Yun, B. Y., and Yoon, W. S., 2018. A study on the improvement of orthophoto accuracy according to the flight photographing technique and GCP location distance in orthophoto generation using UAV. Journal of the Korean Society of Industry Convergence, 21(6), 345-354. https://doi.org/10.21289/KSIC.2018.21.6.345

Research Article

Korean J. Remote Sens. 2025; 41(1): 87-100

Published online February 28, 2025 https://doi.org/10.7780/kjrs.2025.41.1.8

Copyright © Korean Society of Remote Sensing.

Analyzing the Impact of Interior Orientation Parameter Settings, the Number of GCPs, and GCP Positional Accuracy on Orthomosaic Quality

Chansol Kim1 , Seungchan Lim1, Donggyu Kim2, Hohyun Jeong3 , Chuluong Choi4*

1Master Student, Major of Spatial Information Engineering, Division of Earth and Environmental System Sciences, Pukyong National University, Busan, Republic of Korea
2Undergraduate Student, Major of Geomatics Engineering, Division of Earth and Environmental System Sciences, Pukyong National University, Busan, Republic of Korea
3Senior Researcher, Spatial Information Research Institute, LX, Wanju, Republic of Korea
4Professor, Major of Spatial Information Engineering, Division of Earth and Environmental System Sciences, Pukyong National University, Busan, Republic of Korea

Correspondence to:Chuluong Choi
E-mail: cuchoi@pknu.ac.kr

Received: February 5, 2025; Revised: February 17, 2025; Accepted: February 14, 2025

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Unmanned Aerial Vehicle (UAV) orthomosaics are widely used in various fields, including construction, environmental monitoring, and real estate, and their quality is influenced by the accuracy of interior orientation. In this study, the modes were divided into “All mode” and “Part mode” based on the interior orientation parameter settings, and lens distortion was compared between the two modes. Additionally, the effects of the number of Ground Control Points (GCPs) and their positional accuracy on the positional accuracy of UAV-based orthomosaics were evaluated for each mode. The Part mode, which applies only a subset of interior orientation parameters, exhibited greater LatLon and XY errors compared to the All mode, which applies all parameters. Using custom Python scripts, lens distortion was compared between the two modes, and the image coordinate deviations were found to be 0.160±1.347 pixels in u (image x) and 0.076±0.991 pixels in v (image y), both of which were below 2 pixels. As the number of GCPs decreased, both modes exhibited an increasing trend in GCP positional error. In terms of GCP and CP pixel errors, the All mode demonstrated a lower and more consistent error compared to the Part mode, as it was less sensitive to changes in the number of GCPs. The addition of a random offset to the GCP coordinates to vary the GCP positional accuracy showed that as the magnitude of the added offset increased, the GCP positional error exhibited a linear increase. These findings suggest that the setting of interior orientation parameters and GCP management are critical factors in determining the accuracy of UAV orthomosaics. This study is expected to provide valuable foundational data for analyzing error factors in the UAV-based orthomosaic creation process, which can be utilized in both practical and research settings.

Keywords: Unmanned aerial vehicle, Orthomosaic, Ground control point, Lens distortion, Radial distortion, Tangential distortion, Positional error

1. Introduction

Orthomosaics are an integral part of geospatial information acquisition and analysis using Unmanned Aerial Vehicles (UAVs). They are widely used in a variety of fields, including construction, environmental management, and real estate (National Geographic Information Institute, 2025), and have established themselves as indispensable tools, particularly in applications requiring highprecision spatial data.

However, during the production of orthomosaics, various factors such as interior orientation parameters, the number of Ground Control Points (GCPs), and the positional accuracy of the GCPs can introduce X, Y, and pixel errors that impact the positional accuracy of the orthomosaics. These errors not only degrade the quality of the seamlines in the orthomosaics but also negatively impact the reliability of the analysis results based on them (Harwin and Lucieer, 2012).

Previous studies have examined the influence of GCP placement patterns and flight altitude on orthomosaic accuracy (Kim et al., 2018; Kim and Hong, 2020). Additionally, research has investigated the effects of image overlap and the number of GCPs on orthomosaic quality (Yoo et al., 2016). Furthermore, previous findings indicate that the positioning of GCPs and the distance between GCPs and Check Points (CPs) significantly affect accuracy (Yun and Yoon, 2018; Lee, 2021).

One of the key factors affecting the accuracy of orthomosaics is interior orientation errors, which distort the precise correspondence between images and the actual terrain, leading to a decrease in the positional accuracy of the orthomosaic. This issue becomes more pronounced in areas with significant terrain distortion. Interior orientation errors cause misalignment between aerial images during the orthomosaic generation process, resulting in visible seamlines that degrade image quality. These visual discrepancies along the seamlines can confuse users, making the interpretation and utilization of the orthomosaic more challenging.

However, studies that systematically and quantitatively analyze the impact of interior orientation parameters on orthomosaic positional accuracy are still limited. Furthermore, research proposing efficient and optimized methodologies that can be applied in practical settings is scarce.

To address this limitation, this study analyzes error factors from three main aspects. First, to evaluate the impact of interior orientation parameters, the lens distortion in the All mode, which applies all interior orientation parameters, and the Part mode, which applies only a subset, was compared. For this analysis, a custom Python script was used. Second, to assess the effect of the number of GCPs, the number of GCPs was gradually reduced from 22 to 4, and the reduced GCPs were converted into CPs for comparison of positional error and pixel error. Third, to analyze the effect of GCP positional accuracy, random offsets ranging from 0.03 m to 1.00 m were generated in 7 steps and added to each GCP coordinate. The positional errors of the GCPs were then compared and analyzed for each mode.

This study systematically analyzes the error factors that arise during the orthomosaic generation process using UAVs, taking into account the settings of interior orientation parameters. In doing so, it aims to propose a method that improves both the precision and efficiency of orthomosaics. The findings of this study are expected to enhance the applicability of UAV imagery data and provide a foundation for improving the reliability and efficiency of orthomosaics in both practical and research settings.

This paper is structured as follows. Chapter 2 discusses the theoretical background and methodology of the study. Chapter 3 analyzes the errors based on the settings of interior orientation parameters, the number of GCPs, and GCP positional accuracy, and presents the key findings derived from this analysis. Finally, Chapter 4 discusses the conclusions and implications of the study.

2. Materials and Methods

2.1. Data Acquisition and Methodology

The study was conducted on 11 November 2024 at the main sports field of Pukyong National University’s Yongdang Campus, located at 365, Sinseon-ro, Nam-gu, Busan, Republic of Korea. The workflow of this study is presented in Fig. 1. Two flight paths were planned using DJI Flighthub2, one in the north-south (N–S) direction and the other in the east-west (E–W) direction. The flight altitude was maintained at 70±0.04 m, with both overlap and sidelap set to 90%. The interval between shots was between 5.1 and 5.3 seconds. The total flight distances were 843.0 m for the N-S direction and 895.0 m for the E-W direction.

Figure 1. Flowchart of this study.

Each flight lasted 557 and 575 seconds, capturing 105 photos in the north-south direction and 111 photos in the east-west direction. The flight speed was maintained between 1.47 and 1.61 m/s, significantly lower than the maximum flight speed of the DJI Mavic 3 Enterprise (M3E), which is 15 m/s (Lee et al., 2018; DJI Mavic 3 Enterprise, 2024), to ensure precision photography. The study area covered 7,578.8 m2, with an input Ground Sampling Distance (GSD) of 2.00 cm/pixel and an output GSD of 2.09 cm/pixel. Detailed flight information is provided in Table 1.

Table 1 . Comparison of UAV flight data and camera settings by flight path.

ParameterN-S DirectionE-W Direction
Start time15:33:3115:45:17
End time15:42:4815:54:52
Flight time00:09:1700:09:35
Photo count105111
Flight height (m)7070
Number of strips612
F-stop3.2–4.52.8–4.0
Overlap (%)9090
Sidelap (%)9090
ISO100–110100–110
Shutter speed1/4001/400
Image quality (Mean±SD)0.855±0.0230.863±0.019

ISO: International Standard Organization, SD: Standard Deviation..



A total of 23 grid-shaped GCPs, each measuring 50 × 50 cm, were evenly distributed within the study area. However, due to damage to GCP No. 3 during the experiment, only 22 GCPs were ultimately used. Image acquisition was carried out using the DJI M3E, as shown in Fig. 2(a), by uploading the pre-designed flight path to its controller.

Figure 2. UAV and used equipment in this study.

The DJI M3E is a high-performance compact UAV designed for professional applications such as surveying and mapping. Equipped with a 20 MP high-resolution camera and a Real-Time Kinematic (RTK) module, it is capable of collecting spatial data with centimeter-level precision. Notably, it offers a long flight time of up to 45 minutes and a maximum transmission range of 15 km, ensuring high operational efficiency over large areas. The main specifications of the M3E are shown in Table 2.

Table 2 . Specification of Mavic 3 Enterprise (M3E).

General SpecificationValueCamera SpecificationValue
Max speed (m/s)15Focal length (mm)12.29 (24/35 mm)
Max flight Time (min)45F-stopf/2.8–f/11
GSD (cm)2.86/100 × FH(m)ISO100–6400
Image size5280 × 3956 (20MP)Shutter speed1/2000-8
Field of view (°)84Shutter typeMechanical
Sensor4/3CMOSCCD size (mm)17.73 × 13.29


The weather conditions during image acquisition were clear and sunny, with an average temperature of 17.3°C, a maximum of 23.1°C, a minimum of 13.0°C, and an average cloud cover of 1.3, with no precipitation (Korea Meteorological Administration, 2024).

GCP coordinate surveying was performed using the iM-55 total station and GRX2 Global Navigation Satellite System (GNSS) receiver from Sokkia, as shown in Figs. 2(b, c). The main specifications of the GRX2 GNSS receiver are presented in Table 3. A prism was installed at the center of each GCP, and the distance and angles from the Base GCP were measured using a total station. To ensure the reliability of the measured data, total station surveys for GCPs No. 1 to No. 20 were conducted twice, and the average values of the measurements were utilized. In addition, the coordinates of 14 GCPs, including the Base GCP and GCP No. 0, were measured using a GNSS receiver. The Base GCP coordinates were designated as the reference for calculating GCP coordinates, and the remaining 21 GCP coordinates were determined using the distance and angle data obtained from the total station.

Table 3. Specifications of the GRX2.

Specification                              | Value
Tracked signals                            | GPS, GLONASS, SBAS
Number of channels                         | 226
Positioning accuracy (L1+L2), static       | Horizontal: 3 mm + 0.5 ppm; Vertical: 5 mm + 0.5 ppm
Positioning accuracy (L1+L2), fast static  | Horizontal: 3 mm + 0.5 ppm; Vertical: 5 mm + 0.5 ppm
Positioning accuracy (L1+L2), kinematic    | Horizontal: 10 mm + 1 ppm; Vertical: 15 mm + 1 ppm
Positioning accuracy (L1+L2), RTK          | Horizontal: 10 mm + 1 ppm; Vertical: 15 mm + 1 ppm
Positioning accuracy (DGPS)                | < 0.5 m
Update/output rate                         | 1 Hz, 5 Hz, 10 Hz, 20 Hz (10 Hz RTK standard)
Physical specifications                    | Dia. 184 mm × H 95 mm; weight 1.0 kg (2.20 lb)


The GCP coordinates calculated from the total station and GNSS measurements are presented in Table 4. The iM-55 is a high-precision surveying instrument with an angular measurement accuracy of 5″ and a distance measurement accuracy of ±(1.5 mm + 2 ppm × measured distance). As shown in Table 5, the survey results indicated that the relative positions of the GCPs had standard deviations of X: 2 mm, Y: 4 mm, and Z: 2 mm. The GRX2 is a 226-channel receiver capable of receiving GPS, GLONASS, and SBAS signals. The GCP coordinates surveyed using the GRX2 showed absolute position standard deviations of X: 33 mm, Y: 66 mm, and Z: 22 mm relative to the baseline.

Table 4. Measurement counts by total station and GNSS receiver, GCP coordinates, and standard deviations (Unit: m).

No.  | Total station | GNSS | Easting (X) | Northing (Y) | Height (Z) | SD (X) | SD (Y) | SD (Z)
Base | 1 | 1 | 208,213.370 | 280,231.334 | 110.427 | –     | –     | –
0    | 1 | 1 | 208,171.243 | 280,161.693 | 110.419 | –     | –     | –
1    | 2 | – | 208,186.660 | 280,131.747 | 110.345 | 0.004 | 0.003 | –
2    | 2 | 1 | 208,167.470 | 280,143.805 | 110.382 | 0.023 | 0.019 | 0.008
4    | 2 | – | 208,134.579 | 280,164.153 | 110.383 | 0.004 | 0.002 | –
5    | 2 | – | 208,153.061 | 280,193.443 | 110.360 | 0.001 | –     | –
6    | 2 | 1 | 208,172.131 | 280,181.181 | 110.507 | 0.013 | 0.013 | 0.013
7    | 2 | 1 | 208,188.867 | 280,171.596 | 110.454 | 0.006 | 0.005 | 0.009
8    | 2 | – | 208,205.180 | 280,160.774 | 110.332 | 0.003 | 0.012 | 0.003
9    | 2 | – | 208,216.832 | 280,180.520 | 110.347 | 0.004 | 0.002 | –
10   | 2 | 1 | 208,209.429 | 280,182.120 | 110.401 | 0.013 | 0.013 | 0.016
11   | 2 | 1 | 208,200.048 | 280,189.952 | 110.462 | 0.014 | 0.012 | 0.040
12   | 2 | 1 | 208,184.312 | 280,202.069 | 110.527 | 0.012 | 0.006 | 0.013
13   | 2 | – | 208,165.218 | 280,212.792 | 110.363 | 0.004 | –     | –
14   | 2 | – | 208,172.761 | 280,225.059 | 110.367 | 0.003 | 0.007 | 0.002
15   | 2 | 1 | 208,193.366 | 280,216.145 | 110.512 | 0.015 | 0.006 | 0.009
16   | 2 | 1 | 208,210.515 | 280,207.497 | 110.464 | 0.004 | 0.013 | 0.035
17   | 2 | 1 | 208,228.174 | 280,197.557 | 110.332 | 0.004 | 0.024 | 0.034
18   | 2 | – | 208,242.606 | 280,221.518 | 110.400 | 0.006 | 0.000 | –
19   | 2 | 1 | 208,224.028 | 280,230.563 | 110.416 | 0.008 | 0.004 | 0.022
20   | 2 | 1 | 208,208.466 | 280,240.887 | 110.418 | 0.022 | 0.004 | 0.028
21   | 2 | 1 | 208,191.048 | 280,253.412 | 110.390 | 0.023 | 0.010 | 0.031


Table 5. Standard deviations of GCP coordinates measured using the total station and GNSS receiver (Unit: m).

Instrument                | Count | SD (X) | SD (Y) | SD (Z)
Total station (GCP)       | 42    | 0.002  | 0.004  | 0.002
GNSS receiver (Baseline)  | 14    | 0.033  | 0.066  | 0.022


The captured UAV images were processed in the photogrammetry software Metashape, undergoing tie point generation and alignment. Image quality was assessed using Metashape’s image quality estimation function, which scores each image based on its sharpness. The quality values of the images used in this study ranged from 0.81 to 0.92. Generally, images with quality values below 0.5 are recommended for exclusion from processing (Agisoft LLC, 2025). All 216 images used in this study had quality values of 0.8 or higher, indicating high quality, so no images needed to be excluded.
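The screening step itself is simple thresholding. The sketch below shows only the logic; it does not call the Metashape API, and the file names and quality values are illustrative placeholders:

```python
# Minimal sketch of the quality screening logic. In practice, the quality
# values come from Metashape's per-image quality estimation; the values and
# file names below are illustrative only.
QUALITY_THRESHOLD = 0.5   # images below this value are recommended for exclusion

image_quality = {"IMG_0001.JPG": 0.86, "IMG_0002.JPG": 0.41, "IMG_0003.JPG": 0.91}

kept = {name: q for name, q in image_quality.items() if q >= QUALITY_THRESHOLD}
dropped = sorted(set(image_quality) - set(kept))
print(f"kept {len(kept)} images; dropped: {dropped}")
```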

During the image alignment process, the interior and exterior orientation parameters were calculated, and the relative positional relationships between images were optimized through bundle adjustment. Errors were then analyzed based on the interior orientation parameter settings, the number of GCPs, and the accuracy of the GCP coordinates. This analysis aimed to quantitatively assess the error factors that may arise during the production of UAV-based orthomosaics and to provide foundational data for enhancing the accuracy of the results.

2.2. Theoretical Background

In UAV photogrammetry, lens distortion correction algorithms exhibit similar characteristics across different software packages, but they are not entirely identical. The models in Metashape assume a central-projection camera, and nonlinear distortions are modeled based on the Brown-Conrady lens distortion model.

The local camera coordinate system is defined with the camera’s projection center as the origin. In this system, the Z-axis points in the direction of the camera’s line of sight, the X-axis points horizontally to the right, and the Y-axis points vertically downward. Conversely, the image coordinate system uses the pixel center at the top-left corner of the image frame as its origin, with coordinates expressed in pixel units. These coordinate system definitions are essential for accurately modeling the camera’s interior orientation parameters and distortion corrections.

In this study, after generating tie points, the interior orientation parameters were calculated using the Agisoft Metashape software, and the interior orientation simulation was performed using a custom Python script. Interior orientation refers to the process of defining and calibrating the internal parameters of the optical system (camera), a critical step for establishing the precise relationship between digital images and real-world space. The process involves determining the internal parameters of the optical system and correcting distortions. These parameters include the focal length (f), principal point offset (cx, cy), radial distortion coefficients (k1, k2, k3, k4), tangential distortion coefficients (p1, p2), and skew coefficients (affinity and non-orthogonality: b1, b2).

The Brown–Conrady model allows for the correction of both radial and tangential distortions (Kang et al., 2008). Radial distortion refers to the distortion that occurs radially outward from the lens center, while tangential distortion refers to distortion occurring in the tangential direction, perpendicular to the radial lines from the lens center (Kang et al., 2009). Generally, tangential distortion is significantly smaller compared to radial distortion (Beauchemin and Bajcsy, 2001). Accurate camera calibration is essential for transforming image coordinates into physical 3D spatial coordinates. Interior orientation parameters provide the foundational data required for exterior orientation, facilitating precise alignment and positional calculations between images.

Eq. (1) is a formula used to calculate the radius (r) from the principal point for each pixel. The radius (r) serves as a variable in the radial and tangential distortion correction equations, Eqs. (2) and (3).

$r = \sqrt{x^{2} + y^{2}}$  (1)

$x' = x\left(1 + k_{1}r^{2} + k_{2}r^{4} + k_{3}r^{6} + k_{4}r^{8}\right) + p_{1}\left(r^{2} + 2x^{2}\right) + 2p_{2}xy$  (2)

$y' = y\left(1 + k_{1}r^{2} + k_{2}r^{4} + k_{3}r^{6} + k_{4}r^{8}\right) + p_{2}\left(r^{2} + 2y^{2}\right) + 2p_{1}xy$  (3)

where r², r⁴, r⁶, and r⁸ are the squared, fourth, sixth, and eighth powers of the radius and drive the distortion correction terms of each order. Typically, only k1 and k2 are used for radial distortion correction; however, depending on the degree of distortion, k3 and k4 may also be applied. The distortion-corrected coordinates (x′, y′) are transformed from the camera image plane to the final projected point coordinates in the image coordinate system (in pixels: u, v). The conversion formulas, shown in Eqs. (4) and (5), account for the skew (affinity and non-orthogonality) coefficients (b1, b2) and the principal point offset (cx, cy).

$u = 0.5\,w + c_{x} + x'f + x'b_{1} + y'b_{2}$  (4)

$v = 0.5\,h + c_{y} + y'f$  (5)
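Eqs. (1)–(5) chain into a single projection routine. The following is a minimal Python sketch of that pipeline; the calibration values are the All-mode parameters reported later in Table 7, the image size is from Table 2, and the test point is illustrative:

```python
# Minimal sketch of Eqs. (1)-(5): local camera coordinates -> pixel coordinates
# with the Brown-Conrady model. Calibration: All mode (Table 7); image size:
# 5280 x 3956 pixels (Table 2).
f, cx, cy = 3702.09875, 27.0052, -3.63647
k1, k2, k3, k4 = -0.0751382, -0.156228, 0.259075, -0.156688
p1, p2 = 0.0000219464, -0.0000554343
b1, b2 = 0.0640683, -0.178395
w, h = 5280, 3956

def project(X, Y, Z):
    x, y = X / Z, Y / Z                                        # normalized coordinates
    r2 = x * x + y * y                                         # r^2 from Eq. (1)
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3 + k4 * r2**4
    xp = x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y   # Eq. (2)
    yp = y * radial + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y   # Eq. (3)
    u = 0.5 * w + cx + xp * f + xp * b1 + yp * b2              # Eq. (4)
    v = 0.5 * h + cy + yp * f                                  # Eq. (5)
    return u, v

print(project(1.0, 0.5, 10.0))  # a point 10 m ahead of the camera (illustrative)
```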

Descriptions of the variables used in the equations are provided in Table 6. Metashape utilizes the precise coordinates of GCPs to transform camera position and rotation information into the ground coordinate system. The positional error is calculated from the differences between the actual coordinates of the GCPs and CPs (X_measured,i, Y_measured,i, Z_measured,i) and the predicted coordinates (X_predicted,i, Y_predicted,i, Z_predicted,i). Eq. (6) gives the Root Mean Square Error (RMSE) formula used to calculate the positional error for GCPs and CPs, where n denotes the total number of GCPs and CPs.

Table 6. Description of the camera calibration parameters.

Parameter   | Description
X, Y, Z     | Point coordinates in the local camera coordinate system
w, h        | Image width and height (pixels)
x, y        | X/Z, Y/Z (normalized image-plane coordinates)
x′, y′      | Lens-distortion-corrected coordinates before scaling by the focal length
X0, Y0, Z0  | Camera center coordinates
rij         | Rotation matrix elements (ω, φ, κ)


$\mathrm{RMSE}_{\mathrm{total}} = \sqrt{\dfrac{1}{n}\sum_{i=1}^{n}\left[\left(X_{\mathrm{measured},i}-X_{\mathrm{predicted},i}\right)^{2}+\left(Y_{\mathrm{measured},i}-Y_{\mathrm{predicted},i}\right)^{2}+\left(Z_{\mathrm{measured},i}-Z_{\mathrm{predicted},i}\right)^{2}\right]}$  (6)
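A minimal sketch of Eq. (6) follows; the measured and predicted coordinates below are illustrative placeholders, not survey results:

```python
import numpy as np

# Minimal sketch of Eq. (6): total RMSE over n GCPs/CPs from measured vs.
# predicted ground coordinates (both arrays are illustrative placeholders).
measured = np.array([[208186.660, 280131.747, 110.345],
                     [208167.470, 280143.805, 110.382]])
predicted = measured + np.array([[0.021, -0.014, 0.030],
                                 [-0.017, 0.009, -0.026]])

# mean over points of the summed squared per-axis differences, then sqrt
rmse_total = np.sqrt(np.mean(np.sum((measured - predicted) ** 2, axis=1)))
print(f"RMSE_total = {rmse_total:.3f} m")
```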

Metashape fundamentally calculates the camera’s exterior orientation parameters based on the collinearity condition, which states that a point on the ground, its corresponding point on the image, and the camera’s projection center must all lie on a single straight line. Eqs. (7) and (8) define the relationship between image coordinates and ground coordinates through the interior and exterior orientation parameters (Kim et al., 2004).

$x' = f\,\dfrac{r_{11}(X-X_{0})+r_{12}(Y-Y_{0})+r_{13}(Z-Z_{0})}{r_{31}(X-X_{0})+r_{32}(Y-Y_{0})+r_{33}(Z-Z_{0})} + X_{c}$  (7)

$y' = f\,\dfrac{r_{21}(X-X_{0})+r_{22}(Y-Y_{0})+r_{23}(Z-Z_{0})}{r_{31}(X-X_{0})+r_{32}(Y-Y_{0})+r_{33}(Z-Z_{0})} + Y_{c}$  (8)

Bundle adjustment is an optimization algorithm used in photogrammetry to simultaneously adjust camera parameters (both interior and exterior) and GCP coordinates to achieve optimal results (Moore et al., 2009; Triggs et al., 2000). The algorithm minimizes errors across multiple images. Optimization is performed using nonlinear least squares, specifically the Levenberg-Marquardt algorithm (Levenberg, 1944), based on image feature matching data that includes ground control points. The bundle adjustment cost is presented in Eq. (9), where measured_ij represents the observed image coordinates and projected_ij the image coordinates estimated by the model.

$\min \sum_{i}\sum_{j}\left\|\mathrm{measured}_{ij} - \mathrm{projected}_{ij}\right\|^{2}$  (9)
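The structure of the Eq. (9) cost can be illustrated with SciPy’s Levenberg-Marquardt solver. This is only a schematic sketch on synthetic data: the “camera model” is deliberately reduced to a single 2D offset, whereas a real bundle adjustment jointly parameterizes interior/exterior orientation and object points:

```python
import numpy as np
from scipy.optimize import least_squares

# Schematic sketch of the Eq. (9) cost solved with Levenberg-Marquardt.
# The model here is only a 2D offset (du, dv) for brevity; data are synthetic.
rng = np.random.default_rng(0)
true_offset = np.array([3.2, -1.7])                        # pixels
projected0 = rng.uniform(0, 5000, size=(50, 2))            # model predictions
measured = projected0 + true_offset + rng.normal(0, 0.3, size=(50, 2))

def residuals(params):
    # stacked residuals measured_ij - projected_ij, as in Eq. (9)
    return (measured - (projected0 + params)).ravel()

fit = least_squares(residuals, x0=np.zeros(2), method="lm")
print(fit.x)  # approximately [3.2, -1.7]
```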

3. Results and Discussion

3.1. Error Analysis of Interior Orientation Parameter Settings

The amount of lens distortion varies with the field of view. Especially when wide-angle lenses are used for UAV aerial imaging, lens distortion significantly affects image accuracy, making proper configuration of the interior orientation parameters essential (Alemán-Flores et al., 2013). In this study, experiments were conducted in two modes, All mode and Part mode, to analyze the errors arising from different interior orientation parameter settings.

The All mode optimizes all interior orientation parameters to achieve the highest possible accuracy. In contrast, the Part mode selectively optimizes only certain parameters, excluding k4, b1, and b2. This mode corresponds to the default setting in UAV photogrammetry software such as Agisoft Metashape and Pix4D Mapper, aiming to enhance data processing efficiency while focusing only on essential parameters for modeling. The interior orientation parameter values for each mode are presented in Table 7.

Table 7. Comparison of camera calibration parameters and Lat Lon/XY errors for All and Part modes.

Parameter                    | All: Value    | All: Error  | Part: Value   | Part: Error
Focal length (f)             | 3702.09875    | 0.062302    | 3705.2321     | 0.06365
Principal point offset (cx)  | 27.0052       | 0.01989     | 26.876        | 0.017711
Principal point offset (cy)  | -3.63647      | 0.020407    | -3.71158      | 0.017975
Skew coefficient (b1)        | 0.0640683     | 0.00229     | –             | –
Skew coefficient (b2)        | -0.178395     | 0.0023      | –             | –
Radial distortion (k1)       | -0.0751382    | 0.0000267   | -0.0932475    | 0.0000165
Radial distortion (k2)       | -0.156228     | 0.000125    | -0.0522055    | 0.0000538
Radial distortion (k3)       | 0.259075      | 0.00023     | 0.0368672     | 0.0000559
Radial distortion (k4)       | -0.156688     | 0.000142    | –             | –
Tangential distortion (p1)   | 0.0000219464  | 0.000000744 | 8.53919e-06   | 0.000000656
Tangential distortion (p2)   | -0.0000554343 | 0.000000712 | -0.0000562457 | 0.000000622
Lat Lon (m)                  | 0.051         | –           | 0.078         | –
XY (m)                       | 0.035         | –           | 0.051         | –


Previous studies have analyzed RMSE values under different interior orientation parameter settings (Nho et al., 2020). Comparing Case 1 (Exterior Orientation Parameter [EOP], f, cx, cy, k1, k2) and Case 2 (EOP, f, cx, cy) showed that Case 2 had a lower RMSE by 0.077 m on the x-axis and 0.063 m on the y-axis. These results suggest that excluding radial distortion coefficients may improve accuracy. However, this effect is primarily observed in cameras with minimal radial distortion or when in-camera distortion correction is applied. Furthermore, the difference was not statistically significant.

Building on this, the present study conducted a more detailed comparison of errors according to interior orientation parameter settings. In All mode, the Lat Lon error was 0.051 m and the XY error was 0.035 m, showing a low level of error. In contrast, Part mode showed higher errors: a Lat Lon error of 0.078 m and an XY error of 0.051 m. These results indicate that the range of interior orientation parameters optimized affects the accuracy of the data.

To analyze the differences between All mode and Part mode in more detail, a custom Lens Distortion Simulation Python script was used to calculate the radial distortion, tangential distortion, and total lens distortion for each mode. These values were calculated based on the interior orientation parameters. The calculated results are presented in Table 8, while the differences in lens distortion and final coordinates (u, v) between the two modes are shown in Table 9.
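A minimal sketch of that comparison is given below: both parameter sets from Table 7 are evaluated with Eqs. (2)–(5) over a grid of normalized image coordinates, and the resulting pixel coordinates are differenced. The grid density is an assumption, so the script illustrates the method rather than reproducing Table 9 exactly:

```python
import numpy as np

# Minimal sketch: evaluate Eqs. (2)-(5) on a grid with each mode's Table 7
# parameters and difference the resulting pixel coordinates (u, v).
w, h = 5280, 3956  # image size (Table 2)

def project_grid(f, cx, cy, k, p, b1=0.0, b2=0.0, n=200):
    # normalized coordinates spanning roughly the full image format
    half_x, half_y = (w / 2) / f, (h / 2) / f
    x, y = np.meshgrid(np.linspace(-half_x, half_x, n),
                       np.linspace(-half_y, half_y, n))
    r2 = x**2 + y**2
    radial = 1 + k[0]*r2 + k[1]*r2**2 + k[2]*r2**3 + k[3]*r2**4
    xp = x*radial + p[0]*(r2 + 2*x**2) + 2*p[1]*x*y    # Eq. (2)
    yp = y*radial + p[1]*(r2 + 2*y**2) + 2*p[0]*x*y    # Eq. (3)
    u = 0.5*w + cx + xp*f + xp*b1 + yp*b2              # Eq. (4)
    v = 0.5*h + cy + yp*f                              # Eq. (5)
    return u, v

u_all, v_all = project_grid(3702.09875, 27.0052, -3.63647,
                            (-0.0751382, -0.156228, 0.259075, -0.156688),
                            (0.0000219464, -0.0000554343),
                            b1=0.0640683, b2=-0.178395)
u_part, v_part = project_grid(3705.2321, 26.876, -3.71158,
                              (-0.0932475, -0.0522055, 0.0368672, 0.0),
                              (8.53919e-06, -0.0000562457))

du, dv = u_all - u_part, v_all - v_part
print(f"u: {du.mean():.3f} ± {du.std():.3f} px, v: {dv.mean():.3f} ± {dv.std():.3f} px")
```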

Table 8. Comparison of lens, radial, and tangential distortion between All mode and Part mode.

Mode | Statistic | Xradial | Yradial | Xtangential | Ytangential | u        | v        | Lens distortion | Radial distortion | Tangential distortion
All  | Min       | 0.000   | 0.000   | -0.026      | -0.342      | 172.335  | 53.934   | 0.000           | 0.000             | 0.000
All  | Max       | 239.558 | 179.487 | 0.304       | 0.000       | 5161.923 | 3894.441 | 299.787         | 299.339           | 0.457
All  | Average   | 50.633  | 33.591  | 0.050       | -0.096      | 2667.055 | 1974.268 | 64.064          | 64.064            | 0.115
All  | SD        | 53.289  | 34.878  | 0.066       | 0.066       | 1475.948 | 1113.929 | 60.367          | 60.367            | 0.084
Part | Min       | 0.000   | 0.000   | -0.101      | -0.308      | 174.684  | 55.373   | 0.000           | 0.000             | 0.000
Part | Max       | 233.251 | 174.761 | 0.216       | 0.000       | 5159.165 | 3892.848 | 291.815         | 291.457           | 0.376
Part | Average   | 51.708  | 34.400  | 0.020       | -0.097      | 2666.896 | 1974.191 | 65.519          | 65.518            | 0.108
Part | SD        | 53.734  | 35.127  | 0.056       | 0.064       | 1474.696 | 1113.006 | 60.710          | 60.710            | 0.073


Table 9. Differences in lens, radial, and tangential distortion between All mode and Part mode.

Statistic | Xradial | Yradial | Xtangential | Ytangential | u      | v      | Lens distortion | Radial distortion | Tangential distortion
Min       | –2.704  | –2.025  | 0.000       | –0.034      | –6.131 | –4.685 | –2.958          | –2.875            | –0.061
Max       | 6.307   | 4.726   | 0.092       | 0.042       | 6.574  | 4.843  | 7.980           | 7.881             | 0.081
Average   | –1.075  | –0.808  | 0.031       | 0.001       | 0.160  | 0.076  | –1.455          | –1.455            | 0.008
SD        | 0.769   | 0.574   | 0.024       | 0.013       | 1.347  | 0.991  | 0.783           | 0.782             | 0.026


The equations for calculating radial and tangential distortion in the X and Y directions are given in Eqs. (2) and (3). The average values of Xradial and Yradial were 50.633 and 33.591, respectively, in All mode, while in Part mode they were 51.708 and 34.400, slightly higher than in All mode.

The standard deviations were ±53.289, ±34.878 in All mode and ±53.734, ±35.127 in Part mode, indicating slightly greater variability in Part mode. This suggests that lens distortion correction in the Part mode may be somewhat less consistent.

These results indicate that setting the radial distortion coefficient affects both the lens distortion correction and its variability. Both modes exhibited an increasing radial distortion trend with increasing distance from the center, suggesting that the configuration of the interior orientation parameters has little effect on the overall radial distortion pattern.

In the All mode, the mean values of Xtangential and Ytangential were 0.050 and –0.096, respectively, while in the Part mode, they were 0.020 and –0.097. In both modes, tangential distortion was significantly smaller than radial distortion.

The skew coefficient adjusts the correlation of the slopes between coordinates and is used in calculating the image coordinate u. It was therefore expected to have a significant effect on the image coordinates; however, the results of this study indicate that its effect is minimal. The mean values and standard deviations of the image coordinates (u, v) in All mode were slightly higher than in Part mode. This difference is neither statistically nor practically significant, and omitting the skew coefficients should not substantially affect the results when the camera’s tilt is small. As shown in Table 9, the difference between All mode and Part mode was 0.160±1.347 pixels for the u coordinate and 0.076±0.991 pixels for the v coordinate, both well below 2 pixels. This suggests that the skew coefficients have a very limited effect on the calculation of the image coordinates.

Lens distortion includes both radial and tangential distortion, and the average lens distortion values for All mode and Part mode were 64.064 and 65.519 pixels, respectively, indicating similar levels. In terms of maximum values, however, All mode showed 299.787 pixels and Part mode 291.815 pixels, a difference of 7.972 pixels. Converted at the output GSD of 2.09 cm/pixel, the maximum error in Part mode is approximately 16.66 cm. This error occurs only in a few pixels at the four corners of the image; most pixels have an error of less than 3 pixels.

The distribution of lens distortion in All mode and Part mode, along with the difference between the two modes, is shown in Fig. 3. The central part of the difference map shows values close to 0, indicating almost no difference in lens distortion between the two modes in the central region. In contrast, a significant difference was observed in the blue region at the edges. Assuming that the lens distortion correction in All mode is accurate, Part mode likely failed to correct the lens distortion at the edges, resulting in errors of more than 7 pixels.

Figure 3. Lens distortion analysis: All mode, Part mode, and their difference (a–b).

3.2. Error Analysis Based on the Number of GCPs

In this study, to analyze the effect of the number of GCPs on the accuracy of orthomosaic generation, the experiment was conducted by gradually reducing the number of GCPs from 22 to 4. The final four retained GCPs were placed at the outer corners of the sports field. This arrangement was based on a previous study suggesting that placing 3 to 4 GCPs outside the target area and adding one at the center to form a centric polygonal network is an effective strategy for improving accuracy (Kim et al., 2018). It was also intended to maintain the accuracy of the orthomosaic’s outer region while using the minimum number of GCPs. The final four retained GCPs are Nos. 1, 4, 18, and 21, as shown in Fig. 4.

Figure 4. Location of GCPs.

According to previous studies, as the number of GCPs increases, positional accuracy improves (Shylesh et al., 2023). In particular, the number of GCPs has been reported to have a greater impact on vertical positional accuracy than on planar positional accuracy (Yun and Sung, 2018).

In this study, the error analysis was performed by gradually reducing the number of GCPs and converting the reduced GCPs into CPs. The analysis of positional and pixel errors according to the number of GCPs and CPs was performed in both the All and Part modes, and the results are shown in Fig. 5. In Fig. 5(a), an increasing trend in error was observed in both All mode and Part mode as the number of GCPs decreased. In All mode, the positional error of GCPs increased gradually, whereas Part mode exhibited slight variations. However, regardless of the number of GCPs, the positional error in All mode remained consistently lower than in Part mode, with an average difference of approximately 0.015 m.

Figure 5. Comparison of GCP and CP positional and pixel errors between All mode and Part mode (x-axis: number of GCPs).

In Fig. 5(b), Part mode exhibits a clear pattern of rapidly increasing pixel error as the number of GCPs decreases. In contrast, in All mode, the pixel error remains relatively constant, with a more gradual increase in error. This indicates that Part mode, which has limited interior orientation parameters, is more sensitive to changes in the number of GCPs. The pixel error difference between All mode and Part mode averaged approximately 2.746 pixels.

In Fig. 5(c), the positional error of CPs shows significant variability in Part mode. While All mode also exhibited an increasing trend in error, the magnitude of the increase was notably smaller compared to Part mode. The CP positional error between All mode and Part mode ranged from 0.13 to 0.30 m, with an average difference of 0.22 m.

In Fig. 5(d), Part mode shows a tendency for CP pixel error to increase sharply as the number of GCPs decreases. In particular, the CP pixel error continued to increase significantly until the number of GCPs decreased to 15. However, when the number of GCPs decreased to 14 or fewer, a decreasing trend in error was observed. In contrast, in All mode, the CP pixel error remained relatively constant, showing a stable pattern that was not highly sensitive to changes in the number of GCPs. The difference in CP pixel error between All mode and Part mode averaged approximately 2.330 pixels.

In all the graphs, as the number of GCPs decreased, the increase in error in Part mode became more pronounced. This indicates that a reduction in the number of GCPs has a negative impact on orthomosaic accuracy. However, in All mode, the error either increased gradually or remained consistent, showing relatively stable accuracy. This suggests that All mode is less sensitive to changes in the number of GCPs compared to Part mode and can maintain stable accuracy even with fewer GCPs.

3.3. Error Analysis Based on GCP Positional Accuracy

Previous studies have shown that the quality of the GCP affects the positional accuracy of the orthomosaic (Lee et al., 2020). Furthermore, other studies have reported that RMSE values in the X, Y, and Z axes are lower when using high-accuracy GCPs compared to low-accuracy GCPs (Shylesh et al., 2023). These findings suggest that the accuracy of GCPs plays a critical role in determining the positional accuracy of UAV images.

Based on these research findings, this study analyzed the impact of GCP positional accuracy on UAV-based orthomosaic generation. To this end, positional errors were compared between the precise GCP coordinates and inaccurate coordinates generated by adding random offsets. The random offsets were set at seven levels: 0.03, 0.05, 0.07, 0.10, 0.20, 0.50, and 1.00 m. These values were randomly generated within a range of –n to +n (where n is the offset level) using the Kutools add-in for Excel; the same step can be reproduced in Python, as sketched below.
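A minimal sketch of the offset generation, assuming uniform noise in [–n, +n] as in the Kutools step (the GCP coordinates below are illustrative):

```python
import numpy as np

# Minimal sketch: draw uniform random offsets in [-n, +n] for each offset
# level and add them to the surveyed GCP coordinates (illustrative values).
rng = np.random.default_rng(42)
levels = [0.03, 0.05, 0.07, 0.10, 0.20, 0.50, 1.00]   # offset levels (m)
gcps = np.array([[208186.660, 280131.747, 110.345],
                 [208167.470, 280143.805, 110.382]])  # E, N, H (m)

noisy = {n: gcps + rng.uniform(-n, n, size=gcps.shape) for n in levels}
print(noisy[0.10] - gcps)  # per-axis offsets, each within +/- 0.10 m
```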

The analysis results showed that as the range of random offsets added to the GCP coordinates increased, the positional error of the GCPs increased linearly. This trend is clearly visible in Fig. 6. In particular, the R2 values for All mode and Part mode were 0.9995 and 0.9988, respectively, both close to 0.999, demonstrating a high degree of linearity.

Figure 6. Positional errors of GCP coordinates with random noise input.

The regression equation for the All mode was calculated as y = 1.0653x, and for the Part mode, it was y = 1.072x. The slope difference between the two modes was 0.0067, which is very small, indicating almost no difference in GCP errors and showing similar trends. Additionally, it was observed that as the positional accuracy of the GCPs increased, the total error decreased, highlighting that precise management of GCP coordinates is essential for improving orthomosaic accuracy.
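The fit in Fig. 6 is a through-origin regression. The following is a minimal sketch of that calculation; the error values are illustrative stand-ins for the measured All-mode results, not the actual data:

```python
import numpy as np

# Minimal sketch of the Fig. 6 fit: through-origin regression y = a*x of GCP
# positional error vs. offset level, plus R^2 (y values are illustrative).
x = np.array([0.03, 0.05, 0.07, 0.10, 0.20, 0.50, 1.00])        # offset level (m)
y = np.array([0.033, 0.052, 0.076, 0.105, 0.214, 0.533, 1.064]) # GCP error (m)

slope = np.sum(x * y) / np.sum(x * x)   # least-squares slope through the origin
residual = y - slope * x
r2 = 1 - np.sum(residual**2) / np.sum((y - y.mean())**2)
print(f"y = {slope:.4f} x, R^2 = {r2:.4f}")
```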

The directional differences in GCP positional errors between All mode and Part mode are shown in Table 10. The error differences in both the East and North directions were all below 0.003 m and remained very small. In the North direction, the error was particularly close to 0, indicating minimal differences between the two modes. In the altitude direction, a maximum difference of 0.020 m was observed, but this difference is considered negligible in UAV-based data analysis.

Table 10. Differences in GCP positional errors between All and Part modes.

Offset level (m) | East err (m) | North err (m) | Alt err (m) | Error (m) | Error (pix)
Original         | 0.003        | 0.001         | 0.020       | 0.016     | 2.616
0.03             | 0.001        | –             | 0.018       | 0.014     | 2.616
0.05             | 0.002        | –             | 0.002       | 0.001     | 2.616
0.07             | 0.002        | 0.002         | 0.016       | 0.012     | 2.616
0.10             | –0.001       | –             | –0.003      | –0.003    | 2.616
0.20             | 0.001        | –             | 0.015       | 0.008     | 2.615
0.50             | 0.002        | 0.001         | 0.008       | 0.006     | 2.611
1.00             | 0.002        | 0.001         | 0.001       | 0.002     | 2.616


The difference in total error was very small, with a maximum of 0.016 m, while the pixel error difference remained nearly constant at approximately 2.616 pixels in most cases. In all cases, the pixel error remained within 3 pixels, demonstrating stable accuracy.

In particular, the interior orientation parameter settings were found to have minimal impact on GCP positional errors, suggesting that in situations requiring rapid processing, omitting certain interior orientation parameters can still maintain a sufficient level of accuracy. This provides a significant foundation for generating reliable orthophotos even when processing time is reduced. Therefore, ensuring the accuracy of GCP coordinates is a crucial factor in UAV-based spatial data analysis and applications, and it is expected to enable more precise data-driven decision-making and applications.

4. Conclusions

In this study, modes were distinguished based on the interior orientation parameter settings to analyze lens distortion, and the impact of the number of GCPs and their positional accuracy on the accuracy of UAV orthophotos was evaluated.

The results showed that the All mode exhibited a low error, with Lat-Lon and XY errors of 0.051 m and 0.035 m, respectively, whereas the Part mode showed relatively higher errors of 0.078 m for Lat-Lon and 0.051 m for XY. This suggests that the settings of interior orientation parameters can influence the positional accuracy of orthoimages. The mean and standard deviation of Xradial and Yradial were higher in the Part mode than in the All mode, indicating greater variability in the Part mode. Additionally, the difference in Xtangential and Ytangential between the two modes was minimal, within 0.031. In both modes, the radial distortion was significantly greater than the tangential distortion. The difference between the All and Part modes in image coordinates was 0.160±1.347 pixels in the u coordinate and 0.076±0.991 pixels in the v coordinate, both showing a small discrepancy of less than two pixels.

To analyze the performance differences between the two modes, experiments were conducted by adjusting the number and positional accuracy of the GCPs. As the number of GCPs decreased, both the positional and pixel errors of the GCPs increased. The pixel error difference of GCPs between the All and Part modes averaged 2.746 pixels, while the pixel error difference of CPs averaged 2.330 pixels. This was similar to the difference in image coordinates previously calculated between the two modes.

As the positional accuracy of the GCP coordinates decreased, the GCP positional errors tended to increase linearly. The R2 values for the All and Part modes were 0.9995 and 0.9988, respectively, indicating a high degree of linearity, and the difference in slopes between the two modes was very small at 0.0067, resulting in nearly identical GCP errors. When comparing the positional errors of the GCPs by direction, except for the case in which a random offset at the 0.10 m level was added, the positional errors in All mode were smaller than those in Part mode in all cases. The difference in pixel errors between the two modes averaged 2.615 pixels, similar to the previously calculated difference in image coordinates between the two modes.

This study comprehensively analyzed the settings of interior orientation parameters, the number of GCPs, and the positional accuracy of GCPs, systematically evaluating the impact of the interactions between these factors on the quality of UAV-based orthomosaics. When setting interior orientation parameters, it is essential to carefully select between All mode and Part mode based on the number of GCPs, project accuracy requirements, and processing time constraints. The results demonstrated that these appropriate interior orientation parameter settings, along with precise management of GCPs, play a crucial role in improving the positional accuracy of UAV orthomosaics. This approach distinguishes itself from previous studies, which primarily focused on the analysis of individual factors. This research is expected to serve as a valuable reference for future UAV-based surveying and remote sensing applications.

This study was conducted under a specific experimental environment (flight altitude of 70 m, DJI M3E sensor, and a main sports field), which may limit the generalization of the results to various sensors and environments. Future studies could expand experiments by considering factors such as aperture size, scene types (e.g., urban, forest, marine), and Instantaneous Field of View (IFOV). Additionally, since this study was conducted in a relatively flat area, the impact of GCP elevation differences was considered negligible. However, in areas with significant terrain variations, further investigation into the effects of GCP elevation changes on orthomosaic accuracy would be valuable.

Acknowledgments

This work was supported by a Research Grant of Pukyong National University (2023).

Conflict of Interest

No potential conflict of interest relevant to this article was reported.


References

  1. Agisoft LLC, 2025. Agisoft metashape user manual (Professional edition, version 2.2). Available online: https://www.agisoft.com/pdf/metashape-pro_2_2_en.pdf (accessed on Jan. 13, 2025)
  2. Alemán-Flores, M., Alvarez, L., Gomez, L., and Santana-Cedrés, D., 2013. Wide-Angle lens distortion correction using division models. In: Ruiz-Shulcloper, J., Sanniti di Baja, G., (eds.), Progress in Pattern Recognition, Image Analysis, Computer Vision, and Applications, Springer, pp. 415-422. https://doi.org/10.1007/978-3-642-41822-8_52
  3. Beauchemin, S. S., and Bajcsy, R., 2001. Modelling and removing radial and tangential distortions in spherical lenses. In: Klette, R., Gimel'farb, G., Huang, T., (eds.), Multi-Image Analysis, Springer, pp. 1-21. https://doi.org/10.1007/3-540-45134-X_1
  4. DJI Mavic 3 Enterprise, 2024. DJI Enterprise. Available online: https://enterprise.dji.com/mavic-3-enterprise/specs (accessed on Dec. 23, 2024)
  5. Harwin, S., and Lucieer, A., 2012. Assessing the accuracy of georeferenced point clouds produced via multi-view stereopsis from unmanned aerial vehicle (UAV) imagery. Remote Sensing, 4(6), 1573-1599. https://doi.org/10.3390/rs4061573
  6. Kang, J. A., Kim, B. G., and Park, J. M., 2008. The research for the wide-angle lens distortion correction by photogrammetry techniques. Journal of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography, 26(2), 103-110.
  7. Kang, J. A., Nam, S. K., Kim, T. H., and Oh, Y. S., 2009. The fisheye lens distortion correction of facilities monitoring CCTV. Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, 27(3), 323-330.
  8. Kim, E. M., Sohn, H. G., and Kim, G. H., 2004. Determination of exterior orientation parameters using the projective transformation in close range photogrammetry. Journal of the Korean Society of Civil Engineers D, 24(3d), 463-467.
  9. Kim, J. S., and Hong, I. Y., 2020. Accuracy analysis of photogrammetry based on the layout of ground control points using UAV. Journal of the Korean Cartographic Association, 20(2), 41-55. https://doi.org/10.16879/jkca.2020.20.2.041
  10. Kim, Y. D., Park, B. W., and Lee, H. S., 2018. Accuracy analysis according to GCP layout type and flying height in orthoimage generation using low-cost UAV. Journal of the Korean Society for Geospatial Information Science, 26(3), 31-39. https://doi.org/10.7319/kogsis.2018.26.3.031
  11. Korea Meteorological Administration, 2024. Past weather observation by day. Available online: https://www.weather.go.kr/w/observation/land/past-obs/obs-by-day.do?stn=159&yy=2024&mm=11&obs=1 (accessed on Dec. 27, 2024)
  12. Lee, J. P., 2021. Quality assessment of digital surface model vertical position accuracies by ground control point location. Journal of Cadastre & Land InformatiX, 51(1), 125-136. https://doi.org/10.22640/LXSIRI.2021.51.1.125
  13. Lee, K., Han, Y., and Lee, W. H., 2018. Comparison of orthophotos and 3D models generated by UAV-based oblique images taken in various angles. Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, 36(3), 117-126. https://doi.org/10.7848/ksgpc.2018.36.3.117
  14. Lee, Y. J., Park, H. J., Kim, H. S., and Kim, T. J., 2020. Analysis of geolocation accuracy of precision image processing system developed for CAS-500. Korean Journal of Remote Sensing, 36(5-2), 893-906. https://doi.org/10.7780/kjrs.2020.36.5.2.4
  15. Levenberg, K., 1944. A method for the solution of certain nonlinear problems in least squares. Quarterly of Applied Mathematics, 2, 164-168. https://doi.org/10.1090/qam/10666
  16. Moore, Z. E., Wright, D., Schinstock, D., and Lewis, C., 2009. Comparison of bundle adjustment formulations. In Proceedings of the 2009 ASPRS Annual Conference, Baltimore, MD, USA, Mar. 9-13, pp. 1-9.
  17. National Geographic Information Institute, 2025. Orthogonal imagery. Available online: https://www.ngii.go.kr/kor/content.do?sq=203 (accessed on Jan. 6, 2025)
  18. Nho, H. J., Shin, D. Y., Sohn, H. G., and Kim, S. S., 2020. Fast geocoding of UAV images for disaster site monitoring. Korean Journal of Remote Sensing, 36(5-4), 1221-1229. https://doi.org/10.7780/kjrs.2020.36.5.4.7
  19. Shylesh, D. S., Dharshan, Manikandan, N., Sivasankar, Surendran, D., Jaganathan, R., and Mohan, G., 2023. Influence of quantity, quality, horizontal and vertical distribution of ground control points on the positional accuracy of UAV survey. Applied Geomatics, 15, 897-917. https://doi.org/10.1007/s12518-023-00531-w
  20. Triggs, B., McLauchlan, P. F., Hartley, R. I., and Fitzgibbon, A. W., 2000. Bundle adjustment - A modern synthesis. In: Triggs, B., Zisserman, A., Szeliski, R., (eds.), Vision Algorithms: Theory and Practice (IWVA 1999), Springer, pp. 298-372. https://doi.org/10.1007/3-540-44480-7_21
  21. Yoo, Y. H., Choi, J. W., Choi, S. K., and Jung, S. H., 2016. Quality evaluation of orthoimage and DSM based on fixed-wing UAV corresponding to overlap and GCPs. Journal of Korean Society for Geospatial Information Science, 24(3), 3-9. https://doi.org/10.7319/kogsis.2016.24.3.003
  22. Yun, B. Y., and Sung, S. M., 2018. Location accuracy of unmanned aerial photogrammetry results according to change of number of ground control points. Journal of the Korean Association of Geographic Information Studies, 21(2), 24-33. https://doi.org/10.11108/KAGIS.2018.21.2.024
  23. Yun, B. Y., and Yoon, W. S., 2018. A study on the improvement of orthophoto accuracy according to the flight photographing technique and GCP location distance in orthophoto generation using UAV. Journal of the Korean Society of Industry Convergence, 21(6), 345-354. https://doi.org/10.21289/KSIC.2018.21.6.345