Korean J. Remote Sens. 2024; 40(1): 103-114
Published online: February 28, 2024
https://doi.org/10.7780/kjrs.2024.40.1.10
© Korean Society of Remote Sensing
Correspondence to : Taejung Kim
E-mail: tezid@inha.ac.kr
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
The escalating demands for high-resolution satellite imagery necessitate the dissemination of geospatial data with superior accuracy. Achieving precise positioning is imperative for mitigating geometric distortions inherent in high-resolution satellite imagery. However, maintaining sub-pixel-level accuracy poses significant challenges within the current technological landscape. This research introduces an approach wherein upsampling is employed on both the satellite image and ground control point (GCP) chips, facilitating the establishment of a high-resolution satellite image precision sensor orientation. The ensuing analysis entails a comprehensive comparison of matching performance. To evaluate the proposed methodology, the Compact Advanced Satellite 500-1 (CAS500-1), with a resolution of 0.5 m, serves as the high-resolution satellite image. Correspondingly, GCP chips with resolutions of 0.25 m and 0.5 m are utilized for the South Korean and North Korean regions, respectively. Results from the experiment reveal that concurrent upsampling of satellite imagery and GCP chips enhances matching performance by up to 50% in comparison to the original resolution. The position error, however, improved only with 2x upsampling; with 3x upsampling, it tended to increase. This study affirms that careful upsampling of high-resolution satellite imagery and GCP chips can yield sub-pixel-level positioning accuracy, thereby advancing the state of the art in the field.
Keywords High-resolution satellite, Sensor orientation, Geometric correction, GCP matching, CAS500-1
Recent advancements in satellite image utilization technology have led to growing demands for high-resolution satellite imagery. In response, Korea has developed and launched a Korean Multi-Purpose Satellite (KOMPSAT) with a resolution of 0.55 m. Furthermore, Compact Advanced Satellite 500 (CAS500) with a resolution of 0.5 m has been developed (Lee et al., 2017; Kim, 2020). Korea is currently participating in the development of comprehensive land management technology using satellite image information big data to improve land management technology (Kim, 2023).
The correction of geometric distortions caused by errors in satellite global positioning system (GPS) receivers and attitude control sensors is necessary when using satellite imagery. This correction involves aligning image coordinates with ground coordinates using ground control points (GCPs). The establishment of a precise sensor orientation through this process is crucial in improving the accuracy of positioning in high-resolution satellite imagery. In this context, even minor errors can have a significant impact, limiting the achievable positioning accuracy to a range of 1–1.5 pixels (Choi et al., 2023).
Traditionally, establishing a precise sensor orientation has required manually acquiring GCPs, which incurs significant time and economic costs. To address this challenge, previous studies introduced GCP chips: small image patches with precise ground coordinates. They developed technology that automatically matches GCP chips to satellite images to establish an automatic precision sensor orientation (Park et al., 2020; Yoon, 2019).
The use of GCP chips for precise sensor orientation establishment automation provided a cost-effective means of producing and distributing calibrated satellite images. As a result, researchers have actively pursued studies aimed at achieving higher positioning accuracy through the establishment of automatic precise sensor orientation. Oh et al. (2022) investigated the feasibility of automating the establishment of high-resolution satellite image precision sensor orientation using GCP chips.
Meanwhile, Shin et al. (2018) conducted an experiment in GCP chip matching using pan-sharpened images to establish an automatic precision sensor orientation tailored for high-resolution satellite imagery. The findings indicated a significant improvement in matching performance when using pan-sharpened images compared to original multispectral images. Lee and Kim (2021) investigated the effectiveness of using high-resolution GCP chips to establish a precise sensor orientation for medium-resolution satellite imagery. They used satellite image upsampling instead of pan-sharpening to improve the matching performance of medium-resolution satellite images without pan bands.
The study’s results indicated that upsampling to match at the medium resolution was more effective in terms of geometric accuracy than matching at the original resolution. Choi and Kim (2022a) conducted a follow-up study in which they adjusted the number of chips used to establish the precision sensor orientation during satellite image upsampling. This was done to reduce the time required to establish a medium-resolution automatic precision sensor orientation and improve GCP chip matching performance. Previous studies have shown that the performance of GCP chip matching is affected by the spatial resolution of both the GCP chip and satellite imagery.
The objective of this study is to develop an improved method for establishing automatic precision sensor orientation in high-resolution satellite images. To achieve this, we conducted chip matching by upsampling both the satellite and GCP chip images. We then performed a comprehensive comparative analysis of the results using the established precision sensor orientation. The study concluded that upsampling both the high-resolution satellite images and the GCP chip images can improve the performance of the sensor orientation when matching them.
This study utilized the pre-existing spatial information data of the Korean Peninsula, as described in a previous study (Park et al., 2020). The GCP chips for the South Korean region were created using precise aerial orthoimages with a spatial resolution of 0.25 m. These chips carry three types of ground coordinate information (Yoon et al., 2018): the unified control point, the triangulation point, and the image control point. The Korea National Geographic Information Institute generates and manages the unified control points and triangulation points. The image control points are used in the production of the national basic map and aerial orthoimagery.
In the case of North Korea, it is difficult to acquire and continuously manage GCPs because the region is inaccessible. Therefore, the GCP chips for the North Korean region were created using satellite orthoimages with a spatial resolution of 0.5 m; their ground coordinates consist of orthoimage plane coordinates and digital elevation model (DEM) ellipsoid heights. Accordingly, the GCP chips for the South Korean region were created from color images extracted from aerial orthoimagery, whereas those for the North Korean region were created from single-band images extracted from satellite orthoimagery generated from panchromatic images. Table 1 provides detailed specifications of the GCP chips used. Examples of the GCP chips are shown in Fig. 1: Fig. 1(a) shows GCP chips in South Korea, while Fig. 1(b) shows GCP chips in North Korea.
Table 1 Specifications for GCP chip used
Area | South Korea | North Korea |
---|---|---|
Ground coordinates reference | Unified control point (UCP), Image control point (ICP), Triangulation point (TP) | Plane coordinates of the orthoimage, DEM ellipsoidal height |
Source image | Aerial orthoimage | Satellite orthoimage |
Chip size | 1,027 × 1,027 pixels | 513 × 513 pixels |
GSD | 0.25 m | 0.50 m |
Band | Red, Green, Blue | Gray |
GSD: ground sample distance.
The CAS500-1 image, a high-resolution satellite image with a spatial resolution of 0.5 m, was utilized in this study. The specifications of the satellite images are shown in Table 2. The study area includes Sejong, Andong, and Pyongyang. Table 3 presents a list of the satellite images used in this study. Additionally, the study area’s geographic distribution is displayed in Figs. 2 to 4.
Table 2 Specifications for satellite images used
Satellite image | CAS500-1 |
---|---|
Product Level | Level 1R |
Spectral resolution | Panchromatic: 450–900 nm; Multispectral: 450–900 nm (Blue, Green, Red, NIR) |
GSD | Panchromatic: 0.5 m; Multispectral: 2.0 m |
Orbit | Circular sun-synchronous (500 km) |
Swath | ≥ 12 km |
Radiometric resolution | 12 bits |
Table 3 List of used satellite images
Data | Acquisition date | Upper left coordinate (Lat, Lon) | Lower right coordinate (Lat, Lon) | Quantity of GCP chips |
---|---|---|---|---|
Sejong 1 | 2021.12.07 | 36.509042465, 127.266395653 | 36.426378705, 127.427498911 | 56 |
Sejong 2 | 2021.12.07 | 36.612209951, 127.238683259 | 36.529558877, 127.400014661 | 40 |
Sejong 3 | 2022.02.27 | 36.625661023, 127.201747702 | 36.553286018, 127.406668861 | 68 |
Sejong 4 | 2022.03.03 | 36.533186804, 127.227716611 | 36.450030787, 127.390117310 | 55 |
Sejong 5 | 2023.03.16 | 36.506973164, 127.244234044 | 36.427195422, 127.414961162 | 58 |
Sejong 6 | 2023.03.16 | 36.609898079, 127.214833673 | 36.530137740, 127.385835309 | 40 |
Andong 1 | 2021.11.11 | 36.724915722, 128.875059965 | 36.642481614, 129.038039928 | 39 |
Andong 2 | 2022.06.12 | 36.619879282, 128.688020424 | 36.537777343, 128.850648011 | 36 |
Andong 3 | 2022.12.02 | 36.449640721, 128.893645792 | 36.368206234, 129.058636036 | 43 |
Pyongyang 1 | 2021.10.19 | 39.156498918, 125.645494344 | 39.073884439, 125.839485291 | 92 |
Pyongyang 2 | 2023.01.10 | 39.004960711, 125.710489853 | 38.922023330, 125.888358084 | 74 |
Fig. 5 presents a schematic of the procedure for establishing automatic precision sensor orientation using GCP chip matching, as applied in this paper.
Based on the findings of Shin et al. (2018), we conducted image fusion, also known as pan-sharpening, to enhance the matching accuracy between high-resolution satellite images and GCP chips. Pan-sharpening is a technology that combines high-resolution panchromatic band images with low-resolution multi-spectral band images to reconstruct high-resolution color images. This study utilized an algorithm based on the component-substitution (CS) fusion technique, as described in previous works (Park et al., 2020; Vivone et al., 2015).
The proposed procedure first establishes an initial sensor orientation. In this study, the sensor orientation was established using the rational function model (RFM), a mathematical sensor model widely used for satellite imagery. The RFM expresses the relationship between image coordinates and ground coordinates mathematically, without physical sensor information. It can be established by applying the rational polynomial coefficients (RPCs) provided with the original satellite image. Eqs. (1) and (2) delineate the formulas of the RFM corresponding to the columns and rows of the image.
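Eqs. (1) and (2) did not survive the text extraction of this article; the standard third-order RFM form, consistent with the coefficient names and normalization described below, is:

```latex
c_n = \frac{P_1(X_n, Y_n, Z_n)}{P_2(X_n, Y_n, Z_n)}
    = \frac{\sum_{i+j+k \le 3} a_{ijk}\, X_n^{i} Y_n^{j} Z_n^{k}}
           {\sum_{i+j+k \le 3} b_{ijk}\, X_n^{i} Y_n^{j} Z_n^{k}} \tag{1}

r_n = \frac{P_3(X_n, Y_n, Z_n)}{P_4(X_n, Y_n, Z_n)}
    = \frac{\sum_{i+j+k \le 3} c_{ijk}\, X_n^{i} Y_n^{j} Z_n^{k}}
           {\sum_{i+j+k \le 3} d_{ijk}\, X_n^{i} Y_n^{j} Z_n^{k}} \tag{2}
```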
In these formulas, cn and rn represent the normalized image coordinates, and Xn, Yn, and Zn denote the normalized ground coordinates. The coefficients aijk, bijk, cijk, and dijk belong to the rational polynomials P1, P2, P3, and P4, respectively; together they define the relationship between the image’s column, row, and ground coordinates. We set the RPC values for i + j + k ≥ 4 to zero so that the RFM uses only terms of third order or lower in the ground coordinates. Additionally, we set b000 and d000 to 1 to fix the proportionality constant of each rational expression. The remaining 78 coefficients were provided in the RPC files distributed with the satellite images.
Using the established initial sensor orientation, we searched for all GCP chips within the coverage of the satellite image. The image corner coordinates, derived from the initial sensor orientation, were used to calculate the minimum rectangle enclosing the image region. To account for potential errors in the initial sensor orientation, a margin of 250 m was added in both the x and y directions. High-resolution satellites have a narrow field of view, and the projected footprint is affected by the area’s topography; the margin was therefore chosen to be large enough for a stable GCP chip search without excessively enlarging the search area. The search for GCP chips, referencing the GCP database, was confined to this margin-extended area.
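The margin-extended search described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function and variable names are hypothetical, and GCPs are reduced to (x, y) ground coordinates in map units.

```python
def search_area(corners, margin=250.0):
    """Minimum axis-aligned rectangle around the projected image corners,
    expanded by a margin (map units, e.g. metres) on every side."""
    xs = [x for x, y in corners]
    ys = [y for x, y in corners]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

def gcps_in_area(gcps, area):
    """Keep only GCP chips whose ground coordinates fall inside the area."""
    xmin, ymin, xmax, ymax = area
    return [g for g in gcps if xmin <= g[0] <= xmax and ymin <= g[1] <= ymax]
```

In practice the corner coordinates would come from projecting the image corners through the initial RFM, and the GCP list from the chip database.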
Subsequently, we defined the region surrounding each GCP chip and executed upsampling of the satellite image to the desired scale. Previous research has shown that bilinear interpolation is advantageous for upsampling satellite images in terms of both performance and time efficiency (Choi and Kim, 2022).
Therefore, we adopted bilinear interpolation for the upsampling process. In this study, we applied upsampling factors of 1 to 4 to each set of experimental data, resulting in spatial resolutions of 0.5 m, 0.25 m, 0.167 m, and 0.125 m. The corresponding upsampling factors are shown in Fig. 6, which illustrates the original satellite image transformed according to each factor. A larger upsampling factor clearly leads to a smoother appearance of the image. To reduce processing time and memory usage, upsampling was applied only to the search area around each GCP rather than to the entire satellite image.
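Bilinear upsampling by an integer factor can be sketched in a few lines of NumPy. This is an illustrative implementation, not the code used in the study; in practice a library routine (e.g. an image-processing package's bilinear resize) would serve the same purpose.

```python
import numpy as np

def upsample_bilinear(img, factor):
    """Bilinear upsampling of a 2-D image by an integer factor."""
    h, w = img.shape
    new_h, new_w = h * factor, w * factor
    # Fractional sample positions in the source grid.
    ys = np.linspace(0, h - 1, new_h)
    xs = np.linspace(0, w - 1, new_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    img = img.astype(float)
    # Interpolate along x on the two bracketing rows, then along y.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```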
Afterward, we upsampled and remapped the GCP chip image to match the geometry of the satellite image and its newly achieved spatial resolution. To ensure automatic matching using an area-based image-matching algorithm, we deformed the GCP chip to synchronize its resolution and geometry with that of the satellite image. Fig. 7 presents a depiction of the resampling process for the GCP chip, considering the geometry and resolution of the satellite image. This procedure was executed for all identified GCP chips.
Next, we created an image pyramid to improve the efficiency of image matching. Four pyramid levels were constructed for each satellite image patch. Matching began at the top (coarsest) layer, using the zero-mean normalized cross-correlation (ZNCC) algorithm on all layers except the last, and the Census algorithm on the last (full-resolution) layer. A previous study (Yoon, 2019) showed that the ZNCC algorithm is robust to image size variations but somewhat less accurate in extracting matching points, whereas the Census algorithm extracts matching points relatively accurately but is sensitive to variations in image size.
Therefore, we used the Census algorithm for the original image and the ZNCC algorithm for the reduced images. Despite these measures, some mismatched points remained among the automatically matched GCPs. To remove them automatically, we applied the random sample consensus (RANSAC) technique (Fischler and Bolles, 1981). The detailed process of GCP chip matching and mismatch removal is omitted here for brevity, as it has been covered in previous papers (Yoon et al., 2018; Son et al., 2021).
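The two similarity measures used in the pyramid can be sketched as follows. These are textbook formulations, not the study's implementation: ZNCC is invariant to affine intensity changes, while the Census cost compares centre-vs-neighbour bit patterns via Hamming distance.

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalised cross-correlation of two same-size patches
    (1.0 = perfect match, invariant to gain and offset)."""
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def census(patch):
    """Census transform: bit vector of centre-vs-neighbour comparisons."""
    centre = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    return (patch > centre).astype(np.uint8).ravel()

def census_cost(a, b):
    """Hamming distance between census signatures (lower = more similar)."""
    return int(np.count_nonzero(census(a) != census(b)))
```

In the matching loop, the candidate position maximising ZNCC (coarse layers) or minimising the Census cost (final layer) is taken as the match.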
As the final step, we established a precise sensor orientation based on the auto-matching results with removed mismatches. We estimated the coefficients of the following error correction equation to rectify any errors in the initial sensor orientation.
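The error correction equation itself did not survive extraction; an affine correction of the standard form, consistent with the coefficient names used below (the equation numbers are assumed), is:

```latex
c = a_{11} c_0 + a_{12} r_0 + a_{13} \tag{3}

r = a_{21} c_0 + a_{22} r_0 + a_{23} \tag{4}
```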
In the equation above, c0 and r0 represent the image coordinates before correction, and c and r those after correction. The coefficients of the error correction formula are a11 through a23. After estimating these coefficients, the RPCs were recalculated to establish the precise sensor orientation. The precision sensor orientation was established for each upsampling factor and then converted back to the original resolution so that accuracy could be compared across factors (Lee and Kim, 2021).
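Estimating the six affine coefficients from matched GCPs is a linear least-squares problem. The sketch below is illustrative only (the function name and array layout are assumptions, not the authors' code):

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine correction mapping (c0, r0) -> (c, r).

    src, dst: (N, 2) arrays of image coordinates before/after correction.
    Returns the 2x3 coefficient matrix [[a11, a12, a13], [a21, a22, a23]].
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])  # design matrix [c0, r0, 1]
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coeffs.T  # row 0 -> corrected column, row 1 -> corrected row
```

With at least three non-collinear GCPs the system is determined; additional points are averaged in the least-squares sense, which is what makes the RANSAC outlier removal beforehand important.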
In this study, model error and check error were employed as performance evaluation metrics. Model error represents the relative root mean square error (rRMSE) measured for the GCPs used to refine the initial sensor model. Check error, on the other hand, is expressed as the rRMSE measured for checkpoints manually extracted from reference points not included in the model points. Eq. (5) details the calculation of rRMSE, where RMSEcol represents the RMSE in the column direction of the image, and RMSErow represents the RMSE in the row direction of the image. Accuracy analyses were conducted based on the original image with a spatial resolution of 0.5 m.
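Eq. (5) was lost in extraction; a definition consistent with the surrounding description is:

```latex
\mathrm{rRMSE} = \sqrt{\mathrm{RMSE}_{\mathrm{col}}^{2} + \mathrm{RMSE}_{\mathrm{row}}^{2}} \tag{5}
```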
Table 4 shows the number of GCP chips within the acquisition area and the manually acquired number of GCP chips for verification across the 11 CAS500-1 satellite images used in the experiment.
Table 4 Quantity of GCP chips per experimental case
Data | Quantity of GCP chips | Quantity of checkpoint |
---|---|---|
Sejong 1 | 56 | 6 |
Sejong 2 | 40 | 6 |
Sejong 3 | 68 | 6 |
Sejong 4 | 55 | 6 |
Sejong 5 | 58 | 7 |
Sejong 6 | 40 | 6 |
Andong 1 | 39 | 5 |
Andong 2 | 36 | 4 |
Andong 3 | 43 | 5 |
Pyongyang 1 | 92 | 10 |
Pyongyang 2 | 74 | 8 |
The experimental results are presented in Table 5, and the performance evaluation indicator graphs for each study area are shown in Figs. 8 to 18, illustrating the observed trends in the experimental outcomes. In each figure, (a) and (b) represent the model error and check error, respectively, as functions of the number of GCPs and the upsampling scale.
Table 5 Matching results

Data | GSD (m) | Model error (pixel) | Check error (pixel) | Matching rate |
---|---|---|---|---|
Sejong 1 | 0.500 | 1.285 | 1.546 | 45% |
Sejong 1 | 0.250 | 0.626 | 1.036 | 36% |
Sejong 1 | 0.167 | 0.403 | 0.738 | 34% |
Sejong 1 | 0.125 | 0.277 | 1.807 | 36% |
Sejong 2 | 0.500 | 1.198 | 1.125 | 50% |
Sejong 2 | 0.250 | 0.624 | 0.963 | 35% |
Sejong 2 | 0.167 | 0.423 | 1.043 | 38% |
Sejong 2 | 0.125 | 0.346 | 1.081 | 45% |
Sejong 3 | 0.500 | 1.367 | 2.039 | 34% |
Sejong 3 | 0.250 | 0.739 | 1.419 | 25% |
Sejong 3 | 0.167 | 0.502 | 4.363 | 28% |
Sejong 3 | 0.125 | 0.352 | 3.590 | 22% |
Sejong 4 | 0.500 | 1.230 | 0.936 | 45% |
Sejong 4 | 0.250 | 0.605 | 0.632 | 36% |
Sejong 4 | 0.167 | 0.477 | 2.651 | 40% |
Sejong 4 | 0.125 | 0.324 | 0.881 | 33% |
Sejong 5 | 0.500 | 1.433 | 1.573 | 41% |
Sejong 5 | 0.250 | 0.661 | 0.714 | 34% |
Sejong 5 | 0.167 | 0.465 | 2.233 | 33% |
Sejong 5 | 0.125 | 0.294 | 1.809 | 34% |
Sejong 6 | 0.500 | 1.312 | 2.225 | 55% |
Sejong 6 | 0.250 | 0.559 | 1.665 | 35% |
Sejong 6 | 0.167 | 0.386 | 2.387 | 43% |
Sejong 6 | 0.125 | 0.335 | 2.639 | 40% |
Andong 1 | 0.500 | 1.174 | 1.866 | 41% |
Andong 1 | 0.250 | 0.666 | 1.476 | 36% |
Andong 1 | 0.167 | 0.446 | 2.054 | 31% |
Andong 1 | 0.125 | 0.328 | 1.773 | 44% |
Andong 2 | 0.500 | 1.034 | 1.396 | 47% |
Andong 2 | 0.250 | 0.600 | 1.211 | 39% |
Andong 2 | 0.167 | 0.435 | 1.002 | 33% |
Andong 2 | 0.125 | 0.325 | 1.549 | 31% |
Andong 3 | 0.500 | 1.292 | 1.453 | 42% |
Andong 3 | 0.250 | 0.654 | 1.340 | 37% |
Andong 3 | 0.167 | 0.418 | 1.315 | 42% |
Andong 3 | 0.125 | 0.304 | 2.349 | 28% |
Pyongyang 1 | 0.500 | 1.346 | 2.189 | 41% |
Pyongyang 1 | 0.250 | 0.705 | 1.860 | 27% |
Pyongyang 1 | 0.167 | 0.435 | 4.302 | 24% |
Pyongyang 1 | 0.125 | 0.347 | 3.141 | 21% |
Pyongyang 2 | 0.500 | 1.477 | 1.932 | 35% |
Pyongyang 2 | 0.250 | 0.561 | 1.001 | 29% |
Pyongyang 2 | 0.167 | 0.438 | 1.979 | 26% |
Pyongyang 2 | 0.125 | 0.341 | 3.217 | 26% |
Part (a) of each figure shows that the model error consistently decreased as the upsampling factor increased and the ground sample distance (GSD) became smaller. This indicates that coefficient estimation for the error correction equation used in the RPC update was successful. In other words, the higher the upsampling factor, the more closely the selected model points fit the estimated error correction, and consequently the smaller the model estimation error relative to the original resolution.
Part (b) of each figure demonstrates a decrease in check error when upsampling by 2x from the original resolution in all experiments. However, when upsampling by 3x, the check error generally increased again, and the 4x results were inconsistent across experiments. This observation is consistent with previous studies, such as Lee and Kim (2021), which showed that excessive upsampling relative to the original data reduces the model estimation error but does not necessarily improve accuracy.
When establishing the orientation of a precision sensor at the original resolution, the check error exceeded 2 pixels in many cases, with an average check error of 1.7 pixels. When establishing the orientation of a precision sensor using an upsampling factor of 2, the check error was less than 2 pixels in all experiments, with an average check error of 1.2 pixels. We achieved a 50% improvement over the original resolution. In several experiments, where the original resolution resulted in check errors of over 1 pixel, performance considerably improved when the 2x upsampling factor was applied, resulting in less than 1 pixel check error.
To validate the experimental results objectively, we compared the matching results at each resolution. Fig. 19 shows the GCP chip matching results. At the original resolution of 0.5 m, the matching results were only roughly aligned to the resolution grid, whereas at 0.25 m they were aligned more precisely. When upsampled to 0.167 m, however, the GCP chip was over-interpolated, resulting in a smoothed image that made it difficult to find the exact match point.
In this section, we discuss the distinctions between our findings and those of preceding studies using medium-resolution satellite imagery (Lee and Kim, 2021; Choi and Kim, 2022a). Previous research using medium-resolution satellite imagery with a resolution of 5 m and high-resolution GCP chips with a resolution of 0.25 m identified an optimal upsampling ratio of 3, corresponding to a GSD of 1.67 m. In the present study, employing high-resolution satellite imagery at a resolution of 0.5 m along with GCP chips of the same specifications, the optimal upsampling ratio was 2, equating to a GSD of 0.25 m.
In the case of North Korea, both the satellite image and the GCP chip had a resolution of 0.5 m. Nevertheless, the results still improved when upsampling to 0.25 m, which may reflect a limitation of the chip-matching algorithm. The image-matching algorithm used in this study searches for similarity pixel by pixel; when the image is upsampled to 0.25 m, the similarity search step corresponds to 0.5 pixels of the original satellite image. The improvement is believed to result from this finer search step.
During the study, a decrease in accuracy was observed when the upsampling ratio was tripled, which deviates from the findings in low- and medium-resolution images. This phenomenon may be attributed to disparities in the sharpness of the original satellite image or the act of upsampling to a GSD higher than the chip resolution. Future research should investigate whether the clarity of the original image or the chip resolution has a greater impact on matching results. This is crucial for advancing our understanding and achieving higher accuracy in the context of high-resolution satellite images.
In this study, we applied upsampling to both satellite images and pre-existing high-resolution GCP chips to establish a high-resolution satellite image precision sensor orientation. We subsequently conducted a comparative analysis of the matching performance. The experimental outcomes confirmed a notable enhancement in matching performance when upsampling was applied to both satellite images and GCP chips.
When establishing a precision sensor orientation with a 2x upsampling factor, an improvement of up to 50% was observed compared to the original resolution. In certain experimental outcomes, the check error approached or fell below 1 pixel. Due to technical constraints, achieving sub-pixel positioning accuracy when establishing precision sensor orientation on high-resolution satellite imagery at its original resolution was challenging. However, our results demonstrated that sub-pixel-level positioning accuracy can be achieved through careful upsampling of both satellite images and GCP chips.
Moreover, despite employing different specifications for the GCP chips in the South and North Korean regions, the experimental results exhibited comparable trends. This confirms that if GCP chips can be built for inaccessible areas such as North Korea as well as for South Korea, the method is applicable there too and an improved precision sensor orientation can be established.
In this study, we tested a GCP chip with a GSD similar to that of satellite images. However, if a more sophisticated GCP chip is built and utilized, the position error could be further reduced through the method proposed in this study. Therefore, further research using high-resolution and high-accuracy GCP chips is needed for precise sensor modeling of high-resolution satellite images. It is also necessary to consider the use of ultra-high-resolution data such as drones. We hope that this research will help in the processing and utilization of high-resolution satellite images.
This work was carried out with the support of the “Cooperative Research Program for Agriculture Science and Technology Development (Project No. PJ 016233)” of the Rural Development Administration, Republic of Korea. This work was also supported by the Korea Agency for Infrastructure Technology Advancement grant funded by the Ministry of Land, Infrastructure and Transport (Grant No. RS-2022-00155763).
No potential conflict of interest relevant to this article was reported.
Hyeon-Gyeong Choi1, Sung-Joo Yoon2, Sunghyeon Kim3, Taejung Kim4*
1Master Student, Department of Geoinformatic Engineering, Inha University, Incheon, Republic of Korea
2PhD Candidate, Department of Geoinformatic Engineering, Inha University, Incheon, Republic of Korea
3Undergraduate Student, Department of Geoinformatic Engineering, Inha University, Incheon, Republic of Korea
4Professor, Department of Geoinformatic Engineering, Inha University, Incheon, Republic of Korea
Correspondence to:Taejung Kim
E-mail: tezid@inha.ac.kr
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
The escalating demands for high-resolution satellite imagery necessitate the dissemination of geospatial data with superior accuracy. Achieving precise positioning is imperative for mitigating geometric distortions inherent in high-resolution satellite imagery. However, maintaining sub-pixel level accuracy poses significant challenges within the current technological landscape. This research introduces an approach wherein upsampling is employed on both the satellite image and ground control points (GCPs) chip, facilitating the establishment of a high-resolution satellite image precision sensor orientation. The ensuing analysis entails a comprehensive comparison of matching performance. To evaluate the proposed methodology, the Compact Advanced Satellite 500-1 (CAS500-1), boasting a resolution of 0.5 m, serves as the high-resolution satellite image. Correspondingly, GCP chips with resolutions of 0.25 m and 0.5 m are utilized for the South Korean and North Korean regions, respectively. Results from the experiment reveal that concurrent upsampling of satellite imagery and GCP chips enhances matching performance by up to 50% in comparison to the original resolution. Furthermore, the position error only improved with 2x upsampling. However, with 3x upsampling, the position error tended to increase. This study affirms that meticulous upsampling of high-resolution satellite imagery and GCP chips can yield sub-pixel-level positioning accuracy, thereby advancing the state-of-the-art in the field.
Keywords: High-resolution satellite, Sensor orientation, Geometric correction, GCP matching, CAS500-1
Recent advancements in satellite image utilization technology have led to growing demands for high-resolution satellite imagery. In response, Korea has developed and launched a Korean Multi-Purpose Satellite (KOMPSAT) with a resolution of 0.55 m. Furthermore, Compact Advanced Satellite 500 (CAS500) with a resolution of 0.5 m has been developed (Lee et al., 2017; Kim, 2020). Korea is currently participating in the development of comprehensive land management technology using satellite image information big data to improve land management technology (Kim, 2023).
The correction of geometric distortions caused by errors in satellite global positioning system (GPS) receivers and attitude control sensors is necessary when using satellite imagery. This correction involves aligning image coordinates with ground coordinates using ground control points (GCPs). The establishment of a precise sensor orientation through this process is crucial in improving the accuracy of positioning in high-resolution satellite imagery. In this context, even minor errors can have a significant impact, limiting the achievable positioning accuracy to a range of 1–1.5 pixels (Choi et al., 2023).
Traditionally, establishing precise sensor orientation requires manually acquiring GCPs, which incurs significant time and economic costs. To address this challenge, previous studies have introduced GCP chips - small image fragments with precise ground coordinates. They have developed a technology that can automatically match GCP chips and satellite images to establish an automatic precision sensor orientation (Park et al., 2020; Yoon, 2019).
The use of GCP chips for precise sensor orientation establishment automation provided a cost-effective means of producing and distributing calibrated satellite images. As a result, researchers have actively pursued studies aimed at achieving higher positioning accuracy through the establishment of automatic precise sensor orientation. Oh et al. (2022) investigated the feasibility of automating the establishment of high-resolution satellite image precision sensor orientation using GCP chips.
Meanwhile, Shin et al. (2018) conducted an experiment in GCP chip matching using pan-sharpened images to establish an automatic precision sensor orientation tailored for high-resolution satellite imagery. The findings indicated a significant improvement in matching performance when using pan-sharpened images compared to original multispectral images. Lee and Kim (2021) investigated the effectiveness of using high-resolution GCP chips to establish a precise sensor orientation for medium-resolution satellite imagery. They used satellite image upsampling instead of pan-sharpening to improve the matching performance of medium-resolution satellite images without pan bands.
The study’s results indicated that upsampling to match at the medium resolution was more effective in terms of geometric accuracy than matching at the original resolution. Choi and Kim (2022a) conducted a follow-up study in which they adjusted the number of chips used to establish the precision sensor orientation during satellite image upsampling. This was done to reduce the time required to establish a medium-resolution automatic precision sensor orientation and improve GCP chip matching performance. Previous studies have shown that the performance of GCP chip matching is affected by the spatial resolution of both the GCP chip and satellite imagery.
The objective of this study is to develop an improved method for establishing automatic precision sensor orientation in high-resolution satellite images. To achieve this, we conducted chip matching by upsampling both the satellite and GCP chip images. We then performed a comprehensive comparative analysis of the results using the established precision sensor orientation. The study concluded that upsampling both the high-resolution satellite images and the GCP chip images can improve the performance of the sensor orientation when matching them.
This study utilized the pre-existing spatial information data of the Korean Peninsula, as described in a previous study (Park et al., 2020). The GCP chips for the South Korean region were created using precise aerial orthoimages with a spatial resolution of 0.25 m and include three types of ground coordinate information (Yoon et al., 2018): the unified control point, the triangulation point, and the image control point. The Korea National Geographic Information Institute generates and manages the unified control points and triangulation points, while the image control points are used in the production of the national basic map and aerial orthoimages.
In the case of North Korea, it is difficult to acquire and continuously manage GCPs because the region is inaccessible. Therefore, the GCP chips for the North Korean region were created using satellite orthoimages with a spatial resolution of 0.5 m; their ground coordinates consist of orthoimage plane coordinates and digital elevation model (DEM) ellipsoid heights. Thus, the GCP chips for the South Korean region were created from color images extracted from aerial orthoimagery, whereas those for the North Korean region were created from single-band images extracted from satellite orthoimagery generated from panchromatic images. Table 1 provides detailed specifications of the GCP chips used. An example of the GCP chips is shown in Fig. 1: Fig. 1(a) shows GCP chips in South Korea, while Fig. 1(b) shows GCP chips in North Korea.
Table 1. Specifications of the GCP chips used.
Area | South Korea | North Korea |
---|---|---|
Ground coordinates reference | Unified control point (UCP), Image control point (ICP), Triangulation point (TP) | Plane coordinates of the orthoimage, DEM ellipsoidal height |
Source image | Aerial orthoimage | Satellite orthoimage |
Chip size | 1,027 × 1,027 pixels | 513 × 513 pixels |
GSD | 0.25 m | 0.50 m |
Band | Red, Green, Blue | Gray |
GSD: ground sample distance.
The CAS500-1 image, a high-resolution satellite image with a spatial resolution of 0.5 meters, was utilized in this study. The specifications of the satellite images are shown in Table 2. The study area includes Sejong, Andong, and Pyongyang. Table 3 presents a list of the satellite images used in this study. Additionally, the study area’s geographic distribution is displayed in Figs. 2 to 4.
Table 2. Specifications of the satellite images used.
Satellite image | CAS500-1 |
---|---|
Product Level | Level 1R |
Spectral resolution | Panchromatic: 450–900 nm Multispectral: 450–900 nm (Blue, Green, Red, NIR) |
GSD | Panchromatic: 0.5 m Multispectral: 2.0 m |
Orbit | Circular sun-synchronous (500 km) |
Swath | ≥ 12 km |
Radiometric resolution | 12 bits |
Table 3. List of satellite images used.
Data | Acquisition date | Upper left coordinate (Lat, Lon) | Bottom right coordinate (Lat, Lon) | Quantity of GCP chips |
---|---|---|---|---|
Sejong 1 | 2021.12.07 | 36.509042465, 127.266395653 | 36.426378705, 127.427498911 | 56 |
Sejong 2 | 2021.12.07 | 36.612209951, 127.238683259 | 36.529558877, 127.400014661 | 40 |
Sejong 3 | 2022.02.27 | 36.625661023, 127.201747702 | 36.553286018, 127.406668861 | 68 |
Sejong 4 | 2022.03.03 | 36.533186804, 127.227716611 | 36.450030787, 127.390117310 | 55 |
Sejong 5 | 2023.03.16 | 36.506973164, 127.244234044 | 36.427195422, 127.414961162 | 58 |
Sejong 6 | 2023.03.16 | 36.609898079, 127.214833673 | 36.530137740, 127.385835309 | 40 |
Andong 1 | 2021.11.11 | 36.724915722, 128.875059965 | 36.642481614, 129.038039928 | 39 |
Andong 2 | 2022.06.12 | 36.619879282, 128.688020424 | 36.537777343, 128.850648011 | 36 |
Andong 3 | 2022.12.02 | 36.449640721, 128.893645792 | 36.368206234, 129.058636036 | 43 |
Pyongyang 1 | 2021.10.19 | 39.156498918, 125.645494344 | 39.073884439, 125.839485291 | 92 |
Pyongyang 2 | 2023.01.10 | 39.004960711, 125.710489853 | 38.922023330, 125.888358084 | 74 |
Fig. 5 presents a schematic of the procedure for establishing automatic precision sensor orientation using GCP chip matching, as applied in this paper.
Based on the findings of Shin et al. (2018), we conducted image fusion, also known as pan-sharpening, to enhance the matching accuracy between high-resolution satellite images and GCP chips. Pan-sharpening is a technology that combines high-resolution panchromatic band images with low-resolution multispectral band images to reconstruct high-resolution color images. This study utilized an algorithm based on the component-substitution (CS) fusion technique, as described in previous works (Park et al., 2020; Vivone et al., 2015).
We first established the initial sensor orientation. For this study, the sensor orientation was established using the rational function model (RFM), a mathematical sensor model widely used for satellite imagery that expresses the relationship between image coordinates and ground coordinates without physical sensor information. It can be established by applying the rational polynomial coefficients (RPCs) provided with the original satellite image. Eqs. (1) and (2) delineate the formulas of the RFM corresponding to the columns and rows of the image.
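In standard RFM notation, Eqs. (1) and (2) can be written as (a reconstruction consistent with the coefficient description that follows; the paper's exact typesetting may differ):

```latex
c_n = \frac{P_1(X_n, Y_n, Z_n)}{P_2(X_n, Y_n, Z_n)}, \qquad
r_n = \frac{P_3(X_n, Y_n, Z_n)}{P_4(X_n, Y_n, Z_n)},
\quad\text{where}\quad
P_m(X_n, Y_n, Z_n) = \sum_{i+j+k \le 3} e^{(m)}_{ijk}\, X_n^{\,i}\, Y_n^{\,j}\, Z_n^{\,k}
```

Here each $P_m$ is a third-order polynomial in the normalized ground coordinates, with coefficient sets $e^{(1)} = a_{ijk}$, $e^{(2)} = b_{ijk}$, $e^{(3)} = c_{ijk}$, and $e^{(4)} = d_{ijk}$.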
In Eqs. (1) and (2), cn and rn represent the normalized image coordinates, and Xn, Yn, and Zn denote the normalized ground coordinates. The coefficients aijk, bijk, cijk, and dijk belong to the rational polynomials P1, P2, P3, and P4, respectively; together they define the rational function relating the image column, row, and ground coordinates. We set the RPCs for i + j + k ≥ 4 to zero so that the RFM uses only terms of third order or lower in the ground coordinates, and we set b000 and d000 to 1 to fix the proportionality constants of the rational functions. The remaining 78 coefficients for ground coordinate terms of order three or lower were provided in the RPC files supplied with the satellite images.
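The RFM evaluation described above can be sketched in Python. This is a simplified illustration; the dictionary-of-exponents coefficient layout is a hypothetical convenience, not the actual RPC file format.

```python
def poly3(coeffs, x, y, z):
    """Evaluate a third-order polynomial sum_{i+j+k<=3} e_ijk * x^i * y^j * z^k.

    `coeffs` maps (i, j, k) exponent tuples to coefficient values; any
    term not listed is treated as zero (i.e. its RPC is set to zero).
    """
    return sum(e * x**i * y**j * z**k for (i, j, k), e in coeffs.items())

def rfm_project(a, b, c_coef, d, xn, yn, zn):
    """Map normalized ground coordinates to normalized image (column, row)."""
    cn = poly3(a, xn, yn, zn) / poly3(b, xn, yn, zn)
    rn = poly3(c_coef, xn, yn, zn) / poly3(d, xn, yn, zn)
    return cn, rn

# Toy coefficients: identity-like mapping with b000 = d000 fixed to 1.
a = {(1, 0, 0): 1.0}       # column numerator: just Xn
b = {(0, 0, 0): 1.0}       # column denominator
c_coef = {(0, 1, 0): 1.0}  # row numerator: just Yn
d = {(0, 0, 0): 1.0}       # row denominator
print(rfm_project(a, b, c_coef, d, 0.5, -0.25, 0.1))  # -> (0.5, -0.25)
```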
Using the established initial sensor orientation, we searched for all GCP chips within the coverage of the satellite image. The image corner coordinates, derived from the initial sensor orientation, were used to calculate the minimum rectangular area encompassing the image region. To account for potential errors in the initial sensor orientation, a margin of 250 m in both the x and y directions was added to this area. High-resolution satellites have a limited field of view and are affected by the terrain of the imaged area; the margin size was therefore chosen to balance a stable search for GCP chips against overestimating the image area when establishing an accurate sensor orientation. The search for GCP chips, referencing the GCP database, was confined to this margin-extended area.
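The footprint-plus-margin chip search can be sketched as follows. The `chip_db` record layout and coordinate values here are hypothetical; ground coordinates are assumed to be in meters in a projected system.

```python
def chips_in_footprint(corner_xy, chip_db, margin=250.0):
    """Select GCP chips whose ground coordinates fall inside the minimum
    axis-aligned rectangle around the image corners, grown by `margin` (m).

    corner_xy: list of (x, y) image-corner ground coordinates computed
    from the initial sensor orientation.
    chip_db:   list of (chip_id, x, y) records from the GCP database.
    """
    xs = [x for x, _ in corner_xy]
    ys = [y for _, y in corner_xy]
    xmin, xmax = min(xs) - margin, max(xs) + margin
    ymin, ymax = min(ys) - margin, max(ys) + margin
    return [cid for cid, x, y in chip_db
            if xmin <= x <= xmax and ymin <= y <= ymax]

corners = [(1000, 2000), (13000, 2100), (990, 14000), (13050, 14100)]
db = [("A", 500, 5000), ("B", 5000, 5000), ("C", 13200, 14200)]
print(chips_in_footprint(corners, db))  # chip "A" lies outside the margin
```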
Subsequently, we defined the region surrounding each GCP chip and executed upsampling of the satellite image to the desired scale. Previous research has shown that bilinear interpolation is advantageous for upsampling satellite images in terms of both performance and time efficiency (Choi and Kim, 2022).
Therefore, we adopted bilinear interpolation for the upsampling process. We applied upsampling factors of 1 to 4 for each set of experimental data, yielding spatial resolutions of 0.5 m, 0.25 m, 0.167 m, and 0.125 m. Fig. 6 illustrates the original satellite image transformed according to the upsampling factor; a larger factor leads to a smoother appearance of the image. To reduce processing time and memory usage, upsampling was applied only to the search area around each GCP rather than to the entire original satellite image.
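Bilinear upsampling of an image patch can be sketched in pure Python as below. This is a minimal illustration of the interpolation, not the production code; the pixel-alignment convention chosen here is one of several possibilities.

```python
def bilinear_upsample(img, factor):
    """Upsample a 2-D grayscale image (list of rows) by an integer
    `factor` using bilinear interpolation of the four neighbors."""
    h, w = len(img), len(img[0])
    oh, ow = h * factor, w * factor
    out = [[0.0] * ow for _ in range(oh)]
    for r in range(oh):
        for c in range(ow):
            # Position of the output pixel in input coordinates.
            y = min(r / factor, h - 1)
            x = min(c / factor, w - 1)
            y0, x0 = int(y), int(x)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            dy, dx = y - y0, x - x0
            out[r][c] = (img[y0][x0] * (1 - dy) * (1 - dx)
                         + img[y0][x1] * (1 - dy) * dx
                         + img[y1][x0] * dy * (1 - dx)
                         + img[y1][x1] * dy * dx)
    return out

patch = [[0, 10], [20, 30]]
up = bilinear_upsample(patch, 2)
print(up[0])  # -> [0.0, 5.0, 10.0, 10.0]
```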
Afterward, we upsampled and remapped the GCP chip image to match the geometry of the satellite image and its newly achieved spatial resolution. To ensure automatic matching using an area-based image-matching algorithm, we deformed the GCP chip to synchronize its resolution and geometry with that of the satellite image. Fig. 7 presents a depiction of the resampling process for the GCP chip, considering the geometry and resolution of the satellite image. This procedure was executed for all identified GCP chips.
Next, we created an image pyramid to improve the efficiency of image matching. Four pyramid levels were constructed for each satellite image patch. Matching began at the top (coarsest) level, using the zero-mean normalized cross-correlation (ZNCC) algorithm at every level except the last, where the Census algorithm was used. A previous study (Yoon, 2019) has shown that the ZNCC algorithm is robust to image size variations but somewhat less accurate in extracting matching points, whereas the Census algorithm extracts matching points relatively accurately but is sensitive to variations in image size.
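The ZNCC similarity used at the coarser pyramid levels can be sketched as follows; windows are flattened to 1-D lists for brevity.

```python
import math

def zncc(a, b):
    """Zero-mean normalized cross-correlation between two equal-size
    windows (flattened to 1-D lists); returns a value in [-1, 1].
    Subtracting the means makes the score invariant to brightness offsets."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

w1 = [10, 20, 30, 40]
w2 = [110, 120, 130, 140]  # same pattern, uniformly brighter
print(zncc(w1, w2))        # ~1.0: offset-invariant match
```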
Therefore, we used the Census algorithm for the original image and the ZNCC algorithm for the reduced images. Despite these measures, some mismatched points remained among the automatically matched GCPs; we removed them automatically using the random sample consensus (RANSAC) technique (Fischler and Bolles, 1981). The detailed process of GCP chip matching and mismatch removal is omitted here for brevity, as it has been covered in previous research papers (Yoon et al., 2018; Son et al., 2021).
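The consensus idea behind RANSAC mismatch removal can be illustrated with a deliberately simplified translation-only model; the transformation model used in the cited works is richer, and the data below are invented for illustration.

```python
import random

def ransac_shift(pairs, threshold=2.0, iters=200, seed=0):
    """Minimal RANSAC sketch: hypothesize a 2-D translation from one
    randomly chosen match, count pairs consistent with it, and keep the
    largest consensus set as inliers.

    pairs: list of ((cx, cy), (sx, sy)) chip/satellite coordinate pairs.
    """
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (cx, cy), (sx, sy) = rng.choice(pairs)  # 1-point minimal sample
        dx, dy = sx - cx, sy - cy
        inliers = [p for p in pairs
                   if abs(p[1][0] - p[0][0] - dx) <= threshold
                   and abs(p[1][1] - p[0][1] - dy) <= threshold]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

matches = [((0, 0), (5, 5)), ((10, 0), (15, 5)), ((0, 10), (5, 15)),
           ((10, 10), (40, -3))]           # last pair is a gross mismatch
print(len(ransac_shift(matches)))          # 3 consistent matches survive
```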
As the final step, we established a precise sensor orientation based on the auto-matching results with removed mismatches. We estimated the coefficients of the following error correction equation to rectify any errors in the initial sensor orientation.
In the equation above, c0 and r0 are the image coordinates before correction, c and r those after correction, and a11 to a23 are the coefficients of the error correction formula. After estimating these coefficients, the RPCs were recalculated using the equation to establish the precise sensor orientation. The precision sensor orientation was established for each upsampling factor and then converted back to the original resolution so that its accuracy could be compared across factors (Lee and Kim, 2021).
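With the coefficients named above, the error correction takes a common affine form (a plausible reconstruction; the paper's exact arrangement of the coefficients may differ):

```latex
c = a_{11} + a_{12}\, c_0 + a_{13}\, r_0, \qquad
r = a_{21} + a_{22}\, c_0 + a_{23}\, r_0
```

The six coefficients absorb translation, scale, and shear errors of the initial orientation in the image plane, and are estimated by least squares from the matched GCPs.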
In this study, model error and check error were employed as performance evaluation metrics. Model error represents the relative root mean square error (rRMSE) measured for the GCPs used to refine the initial sensor model. Check error, on the other hand, is expressed as the rRMSE measured for checkpoints manually extracted from reference points not included in the model points. Eq. (5) details the calculation of rRMSE, where RMSEcol represents the RMSE in the column direction of the image, and RMSErow represents the RMSE in the row direction of the image. Accuracy analyses were conducted based on the original image with a spatial resolution of 0.5 m.
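A combined planar form consistent with the description of Eq. (5) is (reconstructed from the surrounding definitions; shown here as an assumption):

```latex
\mathrm{rRMSE} = \sqrt{\mathrm{RMSE}_{\mathrm{col}}^{2} + \mathrm{RMSE}_{\mathrm{row}}^{2}}
```

That is, the column- and row-direction errors are combined into a single planar error expressed in pixels of the original 0.5 m image.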
Table 4 shows the number of GCP chips within the acquisition area and the manually acquired number of GCP chips for verification across the 11 CAS500-1 satellite images used in the experiment.
Table 4. Quantity of GCP chips per experimental case.
Data | Quantity of GCP chips | Quantity of checkpoints |
---|---|---|
Sejong 1 | 56 | 6 |
Sejong 2 | 40 | 6 |
Sejong 3 | 68 | 6 |
Sejong 4 | 55 | 6 |
Sejong 5 | 58 | 7 |
Sejong 6 | 40 | 6 |
Andong 1 | 39 | 5 |
Andong 2 | 36 | 4 |
Andong 3 | 43 | 5 |
Pyongyang 1 | 92 | 10 |
Pyongyang 2 | 74 | 8 |
The experimental results are presented in Table 5, and the performance evaluation indicator graphs for each study area are shown in Figs. 8 to 18, illustrating the observed trends in the experimental outcomes. In each figure, (a) and (b) represent the model error and check error, respectively, as functions of the number of GCPs and the upsampling scale.
Table 5. Matching results.
Sejong 1 | Sejong 2 | |||||||
---|---|---|---|---|---|---|---|---|
GSD (m) | Error (pixel) | Matching rate | GSD (m) | Error (pixel) | Matching rate | |||
Model | Check | Model | Check | |||||
0.500 | 1.285 | 1.546 | 45% | 0.500 | 1.198 | 1.125 | 50% | |
0.250 | 0.626 | 1.036 | 36% | 0.250 | 0.624 | 0.963 | 35% | |
0.167 | 0.403 | 0.738 | 34% | 0.167 | 0.423 | 1.043 | 38% | |
0.125 | 0.277 | 1.807 | 36% | 0.125 | 0.346 | 1.081 | 45% | |
Sejong 3 | Sejong 4 | |||||||
GSD (m) | Error (pixel) | Matching rate | GSD (m) | Error (pixel) | Matching rate | |||
Model | Check | Model | Check | |||||
0.500 | 1.367 | 2.039 | 34% | 0.500 | 1.230 | 0.936 | 45% | |
0.250 | 0.739 | 1.419 | 25% | 0.250 | 0.605 | 0.632 | 36% | |
0.167 | 0.502 | 4.363 | 28% | 0.167 | 0.477 | 2.651 | 40% | |
0.125 | 0.352 | 3.590 | 22% | 0.125 | 0.324 | 0.881 | 33% | |
Sejong 5 | Sejong 6 | |||||||
GSD (m) | Error (pixel) | Matching rate | GSD (m) | Error (pixel) | Matching rate | |||
Model | Check | Model | Check | |||||
0.500 | 1.433 | 1.573 | 41% | 0.500 | 1.312 | 2.225 | 55% | |
0.250 | 0.661 | 0.714 | 34% | 0.250 | 0.559 | 1.665 | 35% | |
0.167 | 0.465 | 2.233 | 33% | 0.167 | 0.386 | 2.387 | 43% | |
0.125 | 0.294 | 1.809 | 34% | 0.125 | 0.335 | 2.639 | 40% | |
Andong 1 | Andong 2 | |||||||
GSD (m) | Error (pixel) | Matching rate | GSD (m) | Error (pixel) | Matching rate | |||
Model | Check | Model | Check | |||||
0.500 | 1.174 | 1.866 | 41% | 0.500 | 1.034 | 1.396 | 47% | |
0.250 | 0.666 | 1.476 | 36% | 0.250 | 0.600 | 1.211 | 39% | |
0.167 | 0.446 | 2.054 | 31% | 0.167 | 0.435 | 1.002 | 33% | |
0.125 | 0.328 | 1.773 | 44% | 0.125 | 0.325 | 1.549 | 31% | |
Andong 3 | Pyongyang 1 | |||||||
GSD (m) | Error (pixel) | Matching rate | GSD (m) | Error (pixel) | Matching rate | |||
Model | Check | Model | Check | |||||
0.500 | 1.292 | 1.453 | 42% | 0.500 | 1.346 | 2.189 | 41% | |
0.250 | 0.654 | 1.340 | 37% | 0.250 | 0.705 | 1.860 | 27% | |
0.167 | 0.418 | 1.315 | 42% | 0.167 | 0.435 | 4.302 | 24% | |
0.125 | 0.304 | 2.349 | 28% | 0.125 | 0.347 | 3.141 | 21% | |
Pyongyang 2 | ||||||||
GSD (m) | Error (pixel) | Matching rate | ||||||
Model | Check | |||||||
0.500 | 1.477 | 1.932 | 35% | |||||
0.250 | 0.561 | 1.001 | 29% | |||||
0.167 | 0.438 | 1.979 | 26% | |||||
0.125 | 0.341 | 3.217 | 26% |
Part (a) of each experiment shows that the model error consistently decreased as the upsampling factor increased and the ground sample distance (GSD) became smaller. This indicates that coefficient estimation in the error correction equation for the RPC update was successful. It can be interpreted that, at higher upsampling factors, the GCPs selected as model points fit the estimated error correction more closely, so the model estimation error, measured at the smaller GSD, was lower than at the original resolution.
Part (b) of each experiment demonstrates a decrease in check error when upsampling by 2x from the original resolution in all experiments. However, when upsampling by 3x, the check error generally increased again, and the 4x results were inconsistent across experiments. This observation is consistent with previous studies, such as Lee and Kim (2021), which showed that excessive upsampling relative to the original data reduces the model estimation error but does not necessarily improve accuracy.
When establishing the precision sensor orientation at the original resolution, the check error exceeded 2 pixels in many cases, with an average of 1.7 pixels. With an upsampling factor of 2, the check error was below 2 pixels in all experiments, with an average of 1.2 pixels, a roughly 50% improvement over the original resolution. In several experiments where the original resolution produced check errors of over 1 pixel, applying the 2x upsampling factor reduced the check error to below 1 pixel.
To validate the experimental results objectively, we compared the matching results at each resolution. Fig. 19 shows the GCP chip matching results. At the original resolution of 0.5 m, the matched positions were only roughly aligned to the resolution grid, whereas at the 0.25 m upsampled resolution they were aligned more precisely. When upsampled to 0.167 m, however, the GCP chip was over-interpolated, producing a smoothed image in which the exact match point was difficult to find.
In this section, we discuss the distinctions between our findings and those of preceding studies using medium-resolution satellite imagery (Lee and Kim, 2021; Choi and Kim, 2022a). Previous research using 5 m medium-resolution satellite imagery and 0.25 m high-resolution GCP chips identified an optimal upsampling ratio of 3x, corresponding to a GSD of 1.67 m. In the present study, using 0.5 m high-resolution satellite imagery with GCP chips of the same specifications, the optimal upsampling ratio was 2x, equating to a GSD of 0.25 m.
In the case of North Korea, both the satellite image and the GCP chip had a resolution of 0.5 m, yet the results still improved when upsampling to 0.25 m. This may reflect a limitation of the chip-matching algorithm, which searches for similarity pixel by pixel. In other words, when upsampled to 0.25 m, the similarity search step corresponds to 0.5 pixels of the original satellite image, and this finer search interval is believed to explain the improvement.
During the study, a decrease in accuracy was observed when the upsampling ratio was tripled, which deviates from the findings in low- and medium-resolution images. This phenomenon may be attributed to disparities in the sharpness of the original satellite image or the act of upsampling to a GSD higher than the chip resolution. Future research should investigate whether the clarity of the original image or the chip resolution has a greater impact on matching results. This is crucial for advancing our understanding and achieving higher accuracy in the context of high-resolution satellite images.
In this study, we applied upsampling to both satellite images and pre-existing high-resolution GCP chips to establish a high-resolution satellite image precision sensor orientation, and we conducted a comparative analysis of the matching performance. The experimental outcomes confirmed a notable enhancement in matching performance when upsampling was applied to both satellite images and GCP chips.
When establishing a precision sensor orientation with a 2x upsampling factor, an improvement of up to 50% was observed compared to the original resolution, and in certain experiments the check error approached or fell below 1 pixel. Due to technical constraints, achieving sub-pixel positioning accuracy at the original resolution of high-resolution satellite imagery has been challenging. Our results demonstrate, however, that sub-pixel-level positioning accuracy can be achieved through careful upsampling of both satellite images and GCP chips.
Moreover, despite the different GCP chip specifications used for the South and North Korean regions, the experimental results exhibited comparable trends. This confirms that, provided GCP chips can be built for inaccessible areas such as North Korea as well as for South Korea, the proposed method is applicable and an improved precision sensor orientation can be established.
In this study, we tested GCP chips with a GSD similar to that of the satellite images. If more precise GCP chips were built and utilized, the position error could be reduced further through the proposed method. Therefore, further research using higher-resolution, higher-accuracy GCP chips is needed for precise sensor modeling of high-resolution satellite images, and the use of ultra-high-resolution data such as drone imagery should also be considered. We hope that this research will aid the processing and utilization of high-resolution satellite images.
This work was carried out with the support of the “Cooperative Research Program for Agriculture Science and Technology Development (Project No. PJ 016233)” of the Rural Development Administration, Republic of Korea, and by the Korea Agency for Infrastructure Technology Advancement grant funded by the Ministry of Land, Infrastructure and Transport (Grant No. RS-2022-00155763).
No potential conflict of interest relevant to this article was reported.