Related Articles

  • December 31, 2016

    Comparison of Match Candidate Pair Constitution Methods for UAV Images Without Orientation Parameters

    Jongwon Jung*, Taejung Kim*†, Jaein Kim* and Sooahm Rhee**

    Korean Journal of Remote Sensing 2016; 32(6): 647-656

    https://doi.org/10.7780/kjrs.2016.32.6.9

    Abstract
    The growth of UAV technology has led to an expansion of UAV image applications. Many UAV image-based applications use a method called incremental bundle adjustment. However, incremental bundle adjustment incurs a large computational overhead because it attempts feature matching on all image pairs. For an efficient feature matching process, matching should be confined to overlapping pairs determined from exterior orientation parameters. When exterior orientation parameters are not available, overlapping pairs cannot be determined, and another method is needed to constitute feature matching candidates. In this paper we compare match candidate constitution methods that do not require exterior orientation parameters, including partial feature matching, Bag-of-keypoints, and an image intensity method. The overlapping pair determination method based on exterior orientation parameters is used as a reference. Experimental results showed that the partial feature matching method was the most efficient.
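
    The partial feature matching idea can be sketched as follows: only a small subset of keypoints is detected per image, every pair is matched on that subset, and pairs that accumulate enough tentative matches are kept as candidates for full matching. This is a minimal sketch assuming OpenCV's ORB detector and brute-force matcher; the subset size and match threshold are illustrative values, not those used in the paper.

        # Minimal sketch of partial feature matching for candidate pair constitution.
        # Assumes OpenCV (cv2); subset_size and min_matches are illustrative, not from the paper.
        import cv2
        import itertools

        def candidate_pairs(images, subset_size=200, min_matches=30):
            orb = cv2.ORB_create(nfeatures=subset_size)                # detect only a small keypoint subset
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            descs = [orb.detectAndCompute(img, None)[1] for img in images]
            pairs = []
            for i, j in itertools.combinations(range(len(images)), 2):
                if descs[i] is None or descs[j] is None:
                    continue
                matches = matcher.match(descs[i], descs[j])
                if len(matches) >= min_matches:                        # enough tentative matches -> likely overlap
                    pairs.append((i, j))
            return pairs
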
  • December 31, 2017

    Abstract
    Tree volume is an important input to various environmental analysis models. However, it is difficult to measure the fragmented small green spaces of a city economically with existing equipment. In addition, trees are sensitive to the seasons, so easier equipment and quantification methods than lidar are needed for high-frequency monitoring. In particular, the size of trees in a city affects management costs, ecosystem services, and safety, and therefore needs to be managed and reported on an individual-tree basis. In this study, we acquire image data with a UAV (Unmanned Aerial Vehicle), which can be operated at low cost and high frequency, and quickly and easily quantify a single tree using SfM-MVS (Structure from Motion-Multi View Stereo). We evaluate the impact of reducing the number of images on the point density of the point clouds generated from SfM-MVS and on the quantification of single trees. We also used a watertight model to estimate the volume of a single tree, to shape it into a 3D structure, and to compare it with the quantification results of three different types of 3D models. The results show that a UAV, SfM-MVS, and a solid model can easily quantify and shape a single tree at low cost and with high temporal resolution. This study covers only a single tree; therefore, in order to apply the approach at a larger scale, follow-up research is necessary, such as convergence with various spatial information data, improvement of the quantification technique, and flight planning for larger green spaces.
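
    As a rough illustration of quantifying a single tree from an SfM-MVS point cloud, the sketch below derives tree height and a simple volume proxy from the crown points. It uses a convex hull rather than the watertight solid model described in the abstract (which would require a surface reconstruction step); the array layout and units are assumptions.

        # Rough sketch: tree height and a convex-hull volume proxy from a crown point cloud.
        # Assumption: points is an (N, 3) array in metres with ground points already removed.
        import numpy as np
        from scipy.spatial import ConvexHull

        def tree_metrics(points):
            height = points[:, 2].max() - points[:, 2].min()   # vertical extent of the tree
            hull = ConvexHull(points)                          # simple solid enclosing the crown
            return {"height_m": height, "hull_volume_m3": hull.volume}
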
  • April 30, 2018

    Development of Image-map Generation and Visualization System Based on UAV for Real-time Disaster Monitoring

    Jangwoo Cheon*, Kyoungah Choi* and Impyeong Lee*†

    Korean Journal of Remote Sensing 2018; 34(2): 407-418

    https://doi.org/10.7780/kjrs.2018.34.2.2.8

    Abstract
    The frequency and risk of disasters are increasing due to environmental and social factors. In order to respond effectively to disasters that occur unexpectedly, it is very important to quickly obtain up-to-date information about the target area. An image-map generated at high speed makes it possible to judge the situation of the area intuitively, so a disaster can be handled quickly and effectively. In this study, we propose an image-map generation and visualization system based on UAV images for real-time disaster monitoring. The proposed system consists of an aerial segment and a ground segment. In the aerial segment, the UAV system acquires sensory data from a digital camera and a GPS/IMU sensor, and a communication module transmits the data to the ground server in real time. In the ground segment, the transmitted sensor data are processed to generate image-maps, and the image-maps are visualized on a geo-portal. We conducted an experiment to check the accuracy of the image-maps produced with the system, using check points obtained through a ground survey in the data acquisition area. When calculating the difference between adjacent image-maps, the relative accuracy was 1.58 m. We also confirmed the absolute accuracy of positions measured from individual image-maps: the image-maps matched the existing map with an absolute accuracy of 0.75 m. We confirmed the processing time of each step up to the visualization of the image-map: when the image-map was generated with a GSD of 10 cm, it took 1.67 seconds to visualize. It is expected that the proposed system can be applied to real-time monitoring for disaster response.
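
    One building block of fast image-map generation is direct georeferencing of each frame with the GPS/IMU data instead of a full bundle adjustment. The sketch below computes the ground sampling distance (GSD) and the approximate ground footprint of a near-nadir image from flying height and camera geometry; the camera parameters in the example call are illustrative assumptions, not the system's actual configuration.

        # Sketch: GSD and ground footprint of a near-nadir image from the GPS/IMU flying height.
        # All numeric parameters below are illustrative assumptions.
        def gsd_and_footprint(flying_height_m, focal_mm, pixel_um, img_w_px, img_h_px):
            gsd_m = flying_height_m * (pixel_um * 1e-6) / (focal_mm * 1e-3)   # metres per pixel
            return gsd_m, (gsd_m * img_w_px, gsd_m * img_h_px)                # footprint (width, height) in metres

        gsd, (w, h) = gsd_and_footprint(150, focal_mm=8.8, pixel_um=2.4, img_w_px=5472, img_h_px=3648)
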
  • October 31, 2019

    A Study on the Improvement of UAV based 3D Point Cloud Spatial Object Location Accuracy using Road Information

    Jaehee Lee1) · Jihun Kang1)† · Sewon Lee2)

    Korean Journal of Remote Sensing 2019; 35(5): 705-714

    https://doi.org/10.7780/kjrs.2019.35.5.1.7

    Abstract
    Precise positioning is necessary for various uses of high-resolution UAV images. GCPs are normally used for this purpose, but in emergency situations or when it is difficult to select GCPs, the data must be obtained without them. This study proposes a method of improving the positional accuracy of the x, y coordinates of UAV-based 3-dimensional point cloud data generated without GCPs. A road vector file from public data (Open Data Portal) was used as reference data for improving the location accuracy. A geometric correction of the 2-dimensional ortho-mosaic image was first performed, and the transform matrix produced in this process was then applied to the 3-dimensional point cloud data. The straight-line distance difference of 34.54 m before the correction was reduced to 1.21 m after the correction. By confirming that it is possible to improve the location accuracy of UAV images acquired without GCPs, the scope of use of 3-dimensional spatial objects generated from point clouds is expected to expand by enabling connection and compatibility with other spatial information data.
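
    The central step, applying the transform estimated during the 2-dimensional ortho-mosaic correction to the x, y coordinates of the 3-dimensional point cloud, can be sketched as follows. An affine model fitted by least squares is assumed here for illustration; the abstract does not state the exact transform model used.

        # Sketch: fit a 2D affine transform from road-based control pairs and apply it
        # to the x, y coordinates of a 3D point cloud while keeping z untouched.
        import numpy as np

        def fit_affine(src_xy, dst_xy):
            A = np.hstack([src_xy, np.ones((len(src_xy), 1))])     # rows of [x, y, 1]
            params, *_ = np.linalg.lstsq(A, dst_xy, rcond=None)    # 3x2 matrix mapping src -> dst
            return params

        def apply_affine(points_xyz, params):
            xy1 = np.hstack([points_xyz[:, :2], np.ones((len(points_xyz), 1))])
            corrected_xy = xy1 @ params                            # correct only the horizontal coordinates
            return np.hstack([corrected_xy, points_xyz[:, 2:3]])   # keep the original z
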
  • December 31, 2019

    A Study on Data Acquisition in the Invisible Zone of UAV through LTE Remote Control

    HoHyun Jeong1) · Jaehee Lee1)† · Seongjin Park2)

    Korean Journal of Remote Sensing 2019; 35(6): 987-997

    https://doi.org/10.7780/kjrs.2019.35.6.1.9

    Abstract
    Recently, the demand for drones has been increasing rapidly with the development of the Unmanned Aerial Vehicle (UAV) and growing interest in it. Compared to traditional satellite and aerial imagery, UAVs can be used for various research fields (environment, geographic information, ocean observation, and remote sensing) because they can be operated at low cost and provide effective data acquisition. However, there is a disadvantage in that only a small area is covered compared to satellites and aircraft, the traditional remote sensing platforms, owing to the battery capacity of the UAV and the distance limit between the Ground Control System (GCS) and the UAV. If remote control at long range is possible, the potential of UAVs in the field of remote sensing increases. Therefore, a communication network system capable of control regardless of the distance between the UAV and the GCS is needed. With simple radio devices (RF 2.4 GHz, 915 MHz, 433 MHz), the distance over which the UAV and the GCS can transmit and receive is limited to around 2 km. If the operating environment of the UAV is improved with a Long-Term Evolution (LTE) communication network so that UAVs can be managed simultaneously, greater effects can be achieved by converging with existing industries. In this study, we flew a maximum straight-line distance of 6.1 km from the GCS, covered a test area of 2.2 km², and logged a total flight distance of 41.75 km through LTE communication. In addition, we analyzed the possibility of communication disconnection through the base stations of the LTE network.
  • December 31, 2019

    Derivation and Evaluation of Surface Reflectance from UAV Multispectral Image for Monitoring Forest Vegetation

    Hwa-Seon Lee1)† · Won-Woo Seo2) · Choongshik Woo3) · Kyu-Sung Lee4)

    Korean Journal of Remote Sensing 2019; 35(6): 1149-1160

    https://doi.org/10.7780/kjrs.2019.35.6.2.10

    Abstract
    In this study, two radiometric correction methods for deriving reflectance from UAV multispectral images for monitoring forest vegetation were applied and evaluated. Multispectral images were obtained from a small multispectral camera with 5 spectral bands. Reflectance was derived by applying two methods: (1) the direct method using downwelling irradiance measurements, and (2) the empirical line correction method, which links a set of field reflectance values measured simultaneously with the image capture. Field reflectance was obtained using a spectroradiometer during the flight and used both to build the linear equation for the empirical method and to validate the derived image reflectance. Although both methods provided high correlations between field reflectance and image-derived reflectance, their distributions were somewhat different. While the direct method provided a rather stable and consistent distribution of reflectance over the entire image area, the empirical method showed a very unstable and inconsistent reflectance distribution. The direct method would be more appropriate for a relatively wide area that requires more time to acquire images and may experience variation in downwelling irradiance and atmospheric conditions.
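
    The two correction methods compared in the abstract can be sketched roughly as follows: the direct method ratios at-sensor radiance against the measured downwelling irradiance (assuming a Lambertian target), while the empirical line method fits a per-band linear relation between image values over reference targets and their field-measured reflectance. Variable names are illustrative assumptions, not the study's code.

        # Rough sketch of the two radiometric correction approaches; names are illustrative.
        import numpy as np

        def direct_reflectance(radiance_band, downwelling_irradiance):
            # (1) direct method: at-sensor radiance ratioed to the measured downwelling
            # irradiance, assuming a Lambertian target
            return np.pi * radiance_band / downwelling_irradiance

        def empirical_line_reflectance(band_values, target_values, target_field_reflectance):
            # (2) empirical line method: per-band linear fit over reference targets,
            # then applied to the whole band
            gain, offset = np.polyfit(target_values, target_field_reflectance, 1)
            return gain * band_values + offset
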
  • October 31, 2020

    Diurnal Change of Reflectance and Vegetation Index from UAV Image in Clear Day Condition

    Kyung-do Lee1) · Sang-il Na1) · Chan-won Park2) · Suk-young Hong2) · Kyu-ho So2) · Ho-yong Ahn1)†

    Korean Journal of Remote Sensing 2020; 36(5): 735-747

    https://doi.org/10.7780/kjrs.2020.36.5.1.7

    Abstract
    Recent advances in UAV (Unmanned Aerial Vehicle) technology offer new opportunities for estimating crop condition using high-resolution imagery. We analyzed the diurnal change of reflectance and NDVI (Normalized Difference Vegetation Index) in UAV imagery for crop monitoring under clear-day conditions. Multispectral images were obtained from a 5-band multispectral camera mounted on a rotary-wing UAV. Reflectance was derived by the direct method using downwelling irradiance measurements. Reflectance from UAV imagery over the calibration tarp, concrete, and crop experimental sites did not show values that were stable over time or reproducible from day to day. However, the CV (Coefficient of Variation) of diurnal NDVI at the crop experimental sites was less than 5%. Comparing NDVI at similar times on the two days, the daily mean error ratio differed by 0.62 to 3.97%. Therefore, NDVI from UAV imagery is considered usable for time-series crop monitoring.
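
    For reference, the two quantities tracked in the abstract can be computed as below: NDVI from the red and NIR reflectance bands, and the coefficient of variation (CV) of a diurnal series as the stability measure. This is a generic sketch, not code from the study.

        # Sketch: NDVI from red/NIR reflectance and the coefficient of variation (CV)
        # of a diurnal series, the stability measure reported in the abstract.
        import numpy as np

        def ndvi(nir, red):
            return (nir - red) / (nir + red)

        def coefficient_of_variation(series):
            series = np.asarray(series, dtype=float)
            return 100.0 * series.std() / series.mean()   # CV in percent
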
  • December 31, 2020

    Abstract
    In this study, a cloud-based processing method using Agisoft Metashape, a commercial software package, and Amazon Web Services, a cloud computing service, is introduced and evaluated to quickly generate high-precision 3D realistic data from the large volumes of UAV images acquired at disaster sites. Compared with an on-premises method using a local computer and with the cloud services provided by Agisoft and Pix4D, the processes of aerial triangulation, 3D point cloud and DSM generation, mesh and texture generation, and ortho-mosaic image production recorded similar durations. The cloud method required uploading and downloading time for the large data volumes, but it showed a clear advantage in that in situ processing was practically possible. In both the on-premises and cloud methods, processing time differs depending on the performance of the CPU and GPU, but not as much as in a performance benchmark. However, it was found that a laptop computer equipped with a low-performance GPU takes too much time to be applicable to in situ processing.
  • December 31, 2020

    Automatic Change Detection Using Unsupervised Saliency Guided Method with UAV and Aerial Images

    Mohammad Gholami Farkoushi1) · Yoonjo Choi1) · Seunghwan Hong2) · Junsu Bae1) · Hong-Gyoo Sohn3)†

    Korean Journal of Remote Sensing 2020; 36(5): 1067-1076

    https://doi.org/10.7780/kjrs.2020.36.5.3.6

    Abstract
    In this paper, an unsupervised saliency-guided change detection method using UAV and aerial imagery is proposed. Regions that differ more from other areas are salient, which makes them more distinct. The existence of a substantial difference between two images makes saliency suitable for guiding the change detection process. Change Vector Analysis (CVA), which can extract the overall magnitude and direction of change from multi-spectral and multi-temporal remote sensing data, is used to generate an initial difference image. Principal Component Analysis (PCA), combined with the unsupervised CVA and the saliency map, is proposed as the guide for change detection in UAV and aerial images. By generating saliency on the difference map extracted via CVA, potentially changed areas are obtained, and by thresholding the saliency map, most of the areas of interest are correctly extracted. Finally, the PCA method is implemented to extract features, and K-means clustering is applied to label changed and unchanged areas within the extracted regions. The proposed method was applied to image sets over flooded and typhoon-damaged areas and produced results 95 percent better than the PCA approach when compared with manually extracted ground truth for all data sets. Finally, we compared our approach with the PCA K-means method to show the effectiveness of the method.
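
    A minimal sketch of the CVA step followed by K-means labelling is given below; the saliency-guided masking and the PCA feature extraction of the full method are omitted. It assumes co-registered multi-band images stored as NumPy arrays and scikit-learn's KMeans.

        # Sketch: Change Vector Analysis magnitude plus K-means labelling of changed/unchanged
        # pixels. Saliency guidance and PCA feature extraction of the full method are omitted.
        import numpy as np
        from sklearn.cluster import KMeans

        def cva_kmeans_change_map(img_t1, img_t2):
            # img_t1, img_t2: co-registered arrays of shape (H, W, bands)
            diff = img_t2.astype(float) - img_t1.astype(float)
            magnitude = np.linalg.norm(diff, axis=2)               # CVA change magnitude per pixel
            labels = KMeans(n_clusters=2, n_init=10).fit_predict(magnitude.reshape(-1, 1))
            labels = labels.reshape(magnitude.shape)
            # the cluster with the larger mean magnitude is taken as "changed"
            changed = 1 if magnitude[labels == 1].mean() > magnitude[labels == 0].mean() else 0
            return labels == changed                               # boolean change map
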
  • December 31, 2020

    Analysis of UAV-based Multispectral Reflectance Variability for Agriculture Monitoring

    Ho-yong Ahn1) · Sang-il Na1) · Chan-won Park2) · Suk-young Hong2) · Kyu-ho So2) · Kyung-do Lee1)†

    Korean Journal of Remote Sensing 2020; 36(6): 1379-1391

    https://doi.org/10.7780/kjrs.2020.36.6.1.8

    Abstract
    UAVs in agricultural applications are capable of collecting ultra-high-resolution images, making it possible to obtain timely imagery for the phenological phases of a crop. However, UAVs use a variety of sensors, and multi-temporal images are acquired under varying environments. Therefore, it is essential to use normalized image data when applying image time series to crop monitoring. This study analyzed the variability of UAV reflectance and a vegetation index according to the aerial imaging environment, in order to utilize UAV multispectral image time series for agricultural monitoring. The variability of reflectance with environmental factors such as altitude, direction, time, and cloud was very large, ranging from 8% to 11%, whereas the variability of the vegetation index was stable, ranging from 1% to 5%. This behaviour is believed to have various causes, such as the characteristics of the UAV multispectral sensor and the normalization applied by the post-processing software. For time-series use of UAV imagery, it is recommended to use a ratio-based quantity such as a vegetation index, and to minimize the variability of the time-series images by acquiring them at the same time, altitude, and direction whenever possible.
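
    The recommendation to prefer a ratio-based quantity follows because a common multiplicative change in band reflectance (for example from illumination or altitude effects) cancels in a ratio index such as NDVI. A small check with made-up numbers:

        # Illustration (made-up numbers): a common multiplicative change in band reflectance
        # alters the reflectance values but leaves a ratio index such as NDVI unchanged.
        red, nir = 0.05, 0.40                      # assumed crop reflectance values
        for k in (0.9, 1.0, 1.1):                  # +/-10 % illumination scale factor
            ndvi = (k * nir - k * red) / (k * nir + k * red)
            print(k, round(ndvi, 4))               # NDVI stays 0.7778 for every k
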