Most Downloaded

  • October 31, 2023

    Abstract
    In this study, we explored a method for assessing the extent of damage caused by chemical substances at an accident site using a vegetation index. Data were collected by deploying two different drone types, and the damaged area was delineated from the 3D point cloud data using photogrammetry. To create a vegetation index image, we used spectral band data from a multi-spectral sensor to generate an orthoimage. We then conducted statistical analyses of the accident site with respect to the damaged area using a predefined threshold value. The Kappa values for the vegetation indices based on the near-infrared band and the green band were 0.79 and 0.76, respectively. These results suggest that the vegetation index-based approach for analyzing damage areas can be effectively applied in investigations of chemical accidents.
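The Kappa agreement statistic reported in this abstract follows Cohen's standard definition; a minimal Python sketch for binary damaged/undamaged maps (the label arrays here are made-up placeholders, not the study's data):

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Cohen's kappa for two binary label arrays (0 = undamaged, 1 = damaged)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    po = np.mean(y_true == y_pred)  # observed agreement
    # chance agreement: product of marginal class frequencies, summed over classes
    pe = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in (0, 1))
    return (po - pe) / (1.0 - pe)
```

For identical maps the kappa is 1.0; values near 0.8, as reported, indicate substantial agreement beyond chance.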
  • October 31, 2023

    Abstract
    In this study, we investigated the effects of the spectral fitting window and the absorption cross-section on the retrieval of the formaldehyde (HCHO) slant column density (SCD) from direct-sun measurements of the Pandora spectrometer system using differential optical absorption spectroscopy (DOAS). Pandora Level 1 data observed at Yonsei University in Seoul from October 12 to 31, 2022 were used. The HCHO column density was retrieved over eight spectral ranges, including the fitting window used in the Second Cabauw Intercomparison campaign for Nitrogen Dioxide measuring Instruments (CINDI-2), and seven absorption cross-section compositions. The spectral fitting window of 336.5 to 359.0 nm was selected because it yielded the minimum residual and HCHO SCD error. Among the seven compositions, when the nitrogen dioxide (NO2) absorption cross-section at 220 K was added to the composition used in the CINDI-2 campaign, the residual and HCHO SCD error were smallest and the HCHO column density was retrieved most stably. The average HCHO SCD with the highest retrieval accuracy differed from the values retrieved under the other conditions by a minimum of 4% to a maximum of 40%.
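The core DOAS idea — fitting the measured differential optical depth as a linear combination of trace-gas absorption cross-sections to obtain slant column densities — can be sketched with a synthetic least-squares example (all cross-section and column values below are made-up placeholders, not Pandora data or the study's fit settings):

```python
import numpy as np

rng = np.random.default_rng(0)
n_wl = 200  # points on a wavelength grid inside the fitting window

# hypothetical absorption cross-sections (cm^2/molecule) for three absorbers,
# e.g. HCHO, NO2, O3 — real cross-sections are smooth laboratory spectra
sigma = rng.random((n_wl, 3)) * 1e-20

# hypothetical true slant column densities (molecules/cm^2)
true_scd = np.array([8e15, 3e15, 5e18])

# Beer-Lambert: differential optical depth is a linear mix of cross-sections
tau = sigma @ true_scd

# least-squares retrieval of the SCDs from the "measured" optical depth
scd, *_ = np.linalg.lstsq(sigma, tau, rcond=None)
```

In a real retrieval the fit also includes a low-order polynomial for broadband extinction, and the residual of this fit is the quantity minimized when choosing the fitting window and cross-section composition.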
  • October 31, 2023

    Abstract
    In high-density urban areas, the urban heat island effect increases urban temperatures, leading to negative impacts such as worsened air pollution, increased cooling energy consumption, and increased greenhouse gas emissions. In urban environments where it is difficult to secure additional green spaces, rooftop greening is an efficient greenhouse gas reduction strategy. In this study, we not only analyzed the current status of the urban heat island effect but also utilized high-resolution satellite data and spatial information to estimate the available rooftop greening area within the study area. We evaluated the mitigation effect on the urban heat island phenomenon and the carbon sequestration capacity through temperature predictions resulting from rooftop greening. To achieve this, we utilized WorldView-2 satellite data to classify land cover in the urban heat island areas of Busan city. We developed a prediction model for temperature changes before and after rooftop greening using machine learning techniques. To assess the degree of urban heat island mitigation due to changes in rooftop greening areas, we constructed a temperature change prediction model with temperature as the dependent variable using the random forest technique. In this process, we built a multiple regression model to derive high-resolution land surface temperatures for training data using Google Earth Engine, combining Landsat-8 and Sentinel-2 satellite data. Additionally, we evaluated carbon sequestration based on rooftop greening areas using carbon absorption capacities per plant. The results of this study suggest that the developed satellite-based urban heat island assessment and temperature change prediction technology using random forest models can be applied to urban heat island-vulnerable areas, with potential for expansion.
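The multiple-regression step for deriving high-resolution land surface temperature (LST) from predictor bands can be sketched as follows: fit coarse-scale LST against vegetation/built-up indices, then apply the fitted coefficients at fine resolution. Everything below is synthetic (the coefficients, noise level, and the NDVI/NDBI predictor choice are illustrative assumptions, not the study's Landsat-8/Sentinel-2 model):

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical coarse-scale predictors resampled to the LST grid
ndvi = rng.uniform(-0.2, 0.9, 500)   # vegetation index
ndbi = rng.uniform(-0.5, 0.5, 500)   # built-up index
# synthetic coarse LST (deg C): cooler where vegetated, warmer where built-up
lst = 30.0 - 8.0 * ndvi + 5.0 * ndbi + rng.normal(0, 0.3, 500)

# ordinary least squares: [intercept, b_ndvi, b_ndbi]
X = np.column_stack([np.ones_like(ndvi), ndvi, ndbi])
coef, *_ = np.linalg.lstsq(X, lst, rcond=None)

def predict_lst(ndvi_hr, ndbi_hr):
    """Apply the coarse-scale fit to fine-resolution predictors (sharpened LST)."""
    return coef[0] + coef[1] * ndvi_hr + coef[2] * ndbi_hr
```

The sharpened LST then serves as training data for the downstream temperature change model.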
  • Article · August 31, 2023

    Abstract
    Monitoring water surfaces has become one of the most prominent areas of research in addressing environmental challenges. Accurate and automated detection of water surfaces in remote sensing images is crucial for disaster prevention, urban planning, and water resource management, particularly for a country where water plays a vital role in human life. However, achieving precise detection poses challenges. Previous studies have explored different approaches, such as analyzing water indices like the normalized difference water index (NDWI) derived from the visible or infrared bands of satellite imagery, and using k-means clustering to identify land cover patterns and segment regions with similar attributes. Nonetheless, challenges persist, notably distinguishing water spectral signatures from cloud shadow or terrain shadow. In this study, our objective is to enhance the precision of water surface detection by constructing a comprehensive water database (DB) from existing digital and land cover maps. This database serves as an initial assumption for automated water index analysis. We utilized the 1:5,000 and 1:25,000 digital maps of Korea to extract water surfaces, specifically rivers, lakes, and reservoirs; the 1:50,000 and 1:5,000 land cover maps of Korea also aided the extraction process. Our research demonstrates the effectiveness of the water DB product as our first approach for efficient water surface extraction from satellite images, complemented by our second and third approaches involving NDWI analysis and k-means analysis. Image segmentation and binary mask methods were employed for image analysis during the water extraction process. To evaluate the accuracy of our approach, we conducted two assessments using reference and ground truth data that we produced during this research. Visual interpretation involved comparing our results with the 60 m resolution global surface water (GSW) mask, revealing significant improvements in quality and resolution. Additionally, accuracy assessment measures, including an overall accuracy of 90% and kappa values exceeding 0.8, further support the efficacy of our methodology. In conclusion, this study's results demonstrate enhanced extraction quality and resolution. Through comprehensive assessment, our approach proves effective in achieving high accuracy in delineating water surfaces from satellite images.
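The NDWI analysis used as the second approach can be sketched in a few lines; a minimal Python version assuming McFeeters' green/NIR formulation (the threshold of 0.0 is a common illustrative default that is tuned per scene, not necessarily the study's value):

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI = (Green - NIR) / (Green + NIR), for reflectance arrays."""
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir + 1e-12)  # epsilon avoids divide-by-zero

def water_mask(green, nir, threshold=0.0):
    """Binary water mask: water reflects green strongly and absorbs NIR."""
    return ndwi(green, nir) > threshold
```

The resulting binary mask is the kind of product that the water DB and k-means steps then refine, e.g. to reject cloud and terrain shadows that also score high on NDWI.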
  • Article · June 30, 2023


    Evaluation of Robustness of Deep Learning-Based Object Detection Models for Invertebrate Grazers Detection and Monitoring

    Suho Bak 1) · Heung-Min Kim 2) · Tak-Young Kim 3) · Jae-Young Lim 4) · Seon Woong Jang 5)*

    Korean Journal of Remote Sensing 2023; 39(3): 297-309

    https://doi.org/10.7780/kjrs.2023.39.3.4

    Abstract
    The degradation of coastal ecosystems and fishery environments is accelerating due to the recent phenomenon of invertebrate grazers. To effectively monitor this phenomenon and implement preventive measures, the adoption of remote sensing-based monitoring technology for extensive maritime areas is imperative. In this study, we compared and analyzed the robustness of deep learning-based object detection models for detecting and monitoring invertebrate grazers in underwater videos. We constructed an image dataset targeting seven representative species of invertebrate grazers in the coastal waters of South Korea and trained deep learning-based object detection models, You Only Look Once (YOLO)v7 and YOLOv8, on this dataset. We evaluated the detection performance and speed of a total of six YOLO models (YOLOv7, YOLOv7x, YOLOv8s, YOLOv8m, YOLOv8l, YOLOv8x) and conducted robustness evaluations considering various image distortions that may occur during underwater filming. The evaluation results showed that the YOLOv8 models achieved higher detection speeds (approximately 71 to 141 frames per second [FPS]) relative to their number of parameters. In terms of detection performance, the YOLOv8 models (mean average precision [mAP] 0.848 to 0.882) exhibited better performance than the YOLOv7 models (mAP 0.847 to 0.850). Regarding model robustness, the YOLOv7 models were more robust to shape distortions, while the YOLOv8 models were relatively more robust to color distortions. Therefore, considering that shape distortions occur less frequently in underwater video recordings while color distortions are more frequent in coastal areas, utilizing YOLOv8 models is a valid choice for invertebrate grazer detection and monitoring in coastal waters.
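A robustness evaluation of this kind can be summarized as the fraction of clean performance a model retains under distortion. The sketch below shows two toy color distortions and such a retention score (both the distortion functions and the score definition are illustrative assumptions, not the paper's exact protocol):

```python
import numpy as np

def brightness_shift(img, delta):
    """Color distortion: global brightness offset on a [0, 1] float image."""
    return np.clip(img + delta, 0.0, 1.0)

def channel_swap(img):
    """Color distortion: reverse the channel order (e.g. RGB -> BGR)."""
    return img[..., ::-1]

def robustness_score(map_clean, map_distorted):
    """Mean mAP over distorted test sets divided by clean mAP (1.0 = fully robust)."""
    return float(np.mean(map_distorted) / map_clean)
```

In practice each distortion is applied to the whole test set, the detector's mAP is recomputed, and the retained fraction is compared across model families, which is how the shape-vs-color robustness contrast between YOLOv7 and YOLOv8 is established.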
  • Article · April 30, 2023


    Semantic Segmentation of Clouds Using Multi-Branch Neural Architecture Search

    Chi Yoon Jeong 1) · Kyeong Deok Moon 1) · Mooseop Kim 1)*

    Korean Journal of Remote Sensing 2023; 39(2): 143-156

    https://doi.org/10.7780/kjrs.2023.39.2.2

    Abstract
    To precisely and reliably analyze the contents of satellite imagery, it is essential to recognize clouds, which obstruct the gathering of useful information. In recent years, deep learning has yielded satisfactory results in various tasks, and many studies using deep neural networks have been conducted to improve cloud detection performance. However, existing methods for cloud detection are limited in performance because they adopt network models designed for semantic image segmentation without modification. To tackle this problem, we introduce multi-branch neural architecture search to find an optimal network structure for cloud detection. Additionally, the proposed method adopts the soft intersection over union (IoU) as the loss function to mitigate the disagreement between the loss function and the evaluation metric, and it uses various data augmentation methods. The experiments were conducted using a cloud detection dataset acquired from Arirang-3/3A satellite imagery. The experimental results showed that the proposed network, whose architecture was searched using the cloud dataset, achieves an IoU 4% higher than an existing model whose structure was searched using urban street scenes. The results also showed that the soft IoU exhibits the best cloud detection performance among the various loss functions. When compared with state-of-the-art (SOTA) models in the field of semantic segmentation, the proposed method showed better performance in terms of mean IoU and overall accuracy.
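The soft IoU loss mentioned above can be written directly from the IoU definition, with predicted probabilities in place of hard labels so that it stays differentiable; a minimal NumPy sketch (the epsilon smoothing term is an implementation assumption):

```python
import numpy as np

def soft_iou_loss(pred, target, eps=1e-7):
    """Soft IoU loss for probabilities in [0, 1]; 0 when pred equals target exactly.

    Products replace set intersection, and inclusion-exclusion gives the union,
    so the loss directly tracks the IoU metric used at evaluation time.
    """
    inter = np.sum(pred * target)
    union = np.sum(pred + target - pred * target)
    return 1.0 - (inter + eps) / (union + eps)
```

Because the loss is one minus (soft) IoU, minimizing it optimizes the same quantity the segmentation benchmark reports, which is the mismatch the abstract says cross-entropy-style losses suffer from.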
  • Article · October 31, 2022


    Detection of Wildfire Smoke Plumes Using GEMS Images and Machine Learning

    Yemin Jeong 1) · Seoyeon Kim 1) · Seung-Yeon Kim 2) · Jeong-Ah Yu 3) · Dong-Won Lee 4) · Yangwon Lee 5)†

    Korean Journal of Remote Sensing 2022; 38(5): 967-977

    https://doi.org/10.7780/kjrs.2022.38.5.3.13

    Abstract
    The occurrence and intensity of wildfires are increasing with climate change. Emissions from forest fire smoke are recognized as one of the major factors affecting air quality and the greenhouse effect. The use of satellite products and machine learning is essential for the detection of forest fire smoke. Until now, research on forest fire smoke detection has been hampered by difficulties in cloud identification and vague boundary standards. The purpose of this study is to detect forest fire smoke using machine learning and the Level 1 and Level 2 data of the Geostationary Environment Monitoring Spectrometer (GEMS), a Korean environmental satellite sensor. The forest fire that occurred in Gangwon-do in March 2022 was selected as a case study. Smoke pixel classification modeling was performed by producing wildfire smoke label images and inputting GEMS Level 1 and Level 2 data into a random forest model. In the trained model, the input variables ranked by importance were Aerosol Optical Depth (AOD), the 380 nm and 340 nm radiance difference, Ultra-Violet Aerosol Index (UVAI), Visible Aerosol Index (VisAI), Single Scattering Albedo (SSA), formaldehyde (HCHO), nitrogen dioxide (NO2), 380 nm radiance, and 340 nm radiance, in that order. In addition, in the estimation of the forest fire smoke probability (0 ≤ p ≤ 1) for 2,704 pixels, the Mean Bias Error (MBE) was –0.002, the Mean Absolute Error (MAE) was 0.026, the Root Mean Square Error (RMSE) was 0.087, and the Correlation Coefficient (CC) was 0.981.
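The reported error metrics (MBE, MAE, RMSE, CC) follow their standard definitions and can be sketched together as:

```python
import numpy as np

def evaluation_metrics(y_true, y_pred):
    """Standard agreement metrics between estimated and reference values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    return {
        "MBE": float(np.mean(err)),               # mean bias error (signed)
        "MAE": float(np.mean(np.abs(err))),       # mean absolute error
        "RMSE": float(np.sqrt(np.mean(err**2))),  # root mean square error
        "CC": float(np.corrcoef(y_true, y_pred)[0, 1]),  # Pearson correlation
    }
```

A near-zero MBE with small MAE/RMSE and CC close to 1, as in the abstract, indicates estimates that are both unbiased and tightly correlated with the reference labels.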
  • Article · October 31, 2022


    An Artificial Intelligence Approach to Waterbody Detection of the Agricultural Reservoirs in South Korea Using Sentinel-1 SAR Images

    Soyeon Choi 1) · Youjeong Youn 2) · Jonggu Kang 1) · Ganghyun Park 1) · Geunah Kim 1) · Seulchan Lee 3) · Minha Choi 4) · Hagyu Jeong 5) · Yangwon Lee 6)†

    Korean Journal of Remote Sensing 2022; 38(5): 925-938

    https://doi.org/10.7780/kjrs.2022.38.5.3.10

    Abstract
    Agricultural reservoirs are an important water resource nationwide and are vulnerable to abnormal climate effects such as drought caused by climate change. Therefore, enhanced management is required for their appropriate operation. Although water-level tracking through continuous monitoring is necessary, on-site measurement and observation are challenging due to practical problems. This study presents an objective comparison of multiple AI models for water-body extraction using radar images, which have the advantages of wide coverage and frequent revisit times. The proposed methods use Sentinel-1 Synthetic Aperture Radar (SAR) images and, unlike common water extraction methods based on optical images, are suitable for long-term monitoring because they are less affected by weather conditions. We built four AI models, Support Vector Machine (SVM), Random Forest (RF), Artificial Neural Network (ANN), and Automated Machine Learning (AutoML), using drone images, Sentinel-1 SAR, and DSM data. A total of 22 reservoirs of less than 1 million tons were studied, including small and medium-sized reservoirs with an effective storage capacity of less than 300,000 tons. Forty-five images from the 22 reservoirs were used for model training and verification, and the results show that the AutoML model was 0.01 to 0.03 better in water Intersection over Union (IoU) than the other three models, with an accuracy of 0.92 and mIoU of 0.81 in the test. In conclusion, AutoML performed as well as the classical machine learning methods, and the water-body extraction technique based on AutoML is expected to be applicable to automatic reservoir monitoring.
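The water IoU and mIoU used for the model comparison can be sketched for binary masks as follows (averaging the water and background classes for mIoU is an illustrative convention; the study's exact formulation may differ):

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection over Union of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0  # both empty: treat as perfect match

def mean_iou(pred, truth):
    """Mean of water-class and background-class IoU for binary masks."""
    return 0.5 * (iou(pred, truth) + iou(~pred, ~truth))
```

Differences of 0.01 to 0.03 in water IoU, as reported, therefore correspond to small but consistent gains in pixel-level overlap between predicted and reference water bodies.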
  • Article · October 31, 2022


    A Study on Transferring Cloud Dataset for Smoke Extraction Based on Deep Learning

    Jiyong Kim 1)·Taehong Kwak 2)·Yongil Kim 3)†

    Korean Journal of Remote Sensing 2022; 38(5): 695-706

    https://doi.org/10.7780/kjrs.2022.38.5.2.4

    Abstract
    Medium and high-resolution optical satellites have proven their effectiveness in detecting wildfire areas. However, smoke plumes generated by wildfires scatter the visible light incident on the surface, thereby interrupting accurate monitoring of the area where a wildfire occurs. Therefore, a technology to extract smoke in advance is required. Deep learning is expected to improve the accuracy of smoke extraction, but the lack of training datasets limits its application. For clouds, however, which have the similar property of scattering visible light, a large amount of training data has been accumulated. The purpose of this study is to develop a smoke extraction technique using deep learning, overcoming the limits imposed by the lack of datasets through transfer learning with a cloud dataset. To check the effectiveness of transfer learning, a small-scale smoke extraction training set was made, and smoke extraction performance was compared before and after applying transfer learning with a public cloud dataset. As a result, performance was enhanced not only in the visible wavelength band but also in the near-infrared (NIR) and short-wave infrared (SWIR) bands. These results suggest that the lack of datasets, which is a critical limit on using deep learning for smoke extraction, can be solved, and that the resulting advancement of smoke extraction technology will present an advantage in monitoring wildfires.
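The transfer learning pattern described — reusing representations learned on a data-rich task (clouds) and fitting only a new output layer on a small smoke dataset — can be sketched with a frozen "pretrained" feature layer and a least-squares head. Everything below is synthetic (the study used deep segmentation networks, not this toy model):

```python
import numpy as np

rng = np.random.default_rng(2)

# stand-in for pretrained weights (in practice: learned on the cloud dataset,
# then frozen when moving to the smoke task)
W_pre = rng.normal(size=(6, 16))

def features(x):
    """Frozen ReLU feature layer: the transferred representation."""
    return np.maximum(x @ W_pre, 0.0)

# small "smoke" dataset: only the new linear head is fit on these samples
X = rng.normal(size=(100, 6))
y = rng.integers(0, 2, 100).astype(float)

F = features(X)                                   # transferred features
head, *_ = np.linalg.lstsq(F, y, rcond=None)      # fit new head only
pred = F @ head                                   # smoke scores for the samples
```

Freezing the feature layer is what lets the scarce smoke labels go entirely toward the task-specific head instead of relearning generic scattering features from scratch.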
  • Article · October 31, 2022


    A Study on the Cloud Detection Technique of Heterogeneous Sensors Using Modified DeepLabV3+

    Mi-Jeong Kim 1)† · Yun-Ho Ko 2)

    Korean Journal of Remote Sensing 2022; 38(5): 511-521

    https://doi.org/10.7780/kjrs.2022.38.5.1.6

    Abstract
    Cloud detection and removal from satellite images is an essential process for topographic observation and analysis. Threshold-based cloud detection techniques show stable performance because they detect clouds using their physical characteristics, but they have the disadvantage of requiring images from all channels and long computation times. Cloud detection techniques using deep learning, which have been studied recently, show short computation times and excellent performance even when using only four or fewer channels (RGB, NIR). In this paper, we examine how the performance of a deep learning network depends on heterogeneous training datasets with different resolutions. The DeepLabV3+ network was improved so that channel features relevant to cloud detection were extracted, and it was trained with two published heterogeneous datasets and with mixed data, respectively. In the experiments, the clouds' Jaccard index was low for a network trained on images of a different kind from the test images, but high for a network trained on mixed data that included some of the same kind as the test data. Because clouds have no fixed shape, reflecting channel features in learning is more effective for cloud detection than spatial features, and it is necessary to learn the channel features of each satellite sensor. Therefore, cloud detection across heterogeneous sensors with different resolutions is highly dependent on the training dataset.
KSRS
August 2024 Vol. 40, No. 4, pp. 319-418

