Research Article


Korean J. Remote Sens. 2025; 41(1): 209-223

Published online: February 28, 2025

https://doi.org/10.7780/kjrs.2025.41.1.17

© Korean Society of Remote Sensing

High-Accuracy Tree Type Classification in Urban Forests Using Drone-Based RGB Imagery and Optimized SVM

Won-Ki Jo1, Jong-Hwa Park2*

1Master Student, Department of Agricultural and Rural Engineering, Chungbuk National University, Cheongju, Republic of Korea
2Professor, Department of Agricultural and Rural Engineering, Chungbuk National University, Cheongju, Republic of Korea

Correspondence to : Jong-Hwa Park
E-mail: jhpak7@cbnu.ac.kr

Received: January 14, 2025; Revised: February 8, 2025; Accepted: February 12, 2025

This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

Accurate and efficient tree type classification in urban forests is crucial for effective management, informed policy decisions, and enhancing urban resilience, particularly with increasing urbanization and climate change. This study developed and evaluated a practical methodology for classifying coniferous and broadleaf trees in the Chungbuk National University Arboretum, South Korea. The study utilized drone-acquired, high-resolution RGB imagery and a Support Vector Machine (SVM) classifier. The workflow encompassed drone image acquisition, concurrent ground truth data collection, image preprocessing, feature extraction (including RGB color bands and Gray-Level Co-occurrence Matrix [GLCM] texture features [TFs]), and SVM model training, optimization, and evaluation. Different SVM kernels (Linear, RBF, Polynomial, Sigmoid) and feature combinations were investigated to optimize model performance, with a specific focus on processing time for practical application. Results indicated that RGB color bands were the primary drivers of accurate classification, while most GLCM TFs provided minimal additional benefit in this specific context. The RBF kernel, with optimized hyperparameters (C=10, γ=0.01), achieved the highest overall accuracy (99%) and F1-score (0.99), while the Linear kernel provided similar accuracy but with a longer processing time. Notably, the drone-based classification significantly outperformed the outdated Korea Forest Service forest map in representing the current forest composition, highlighting the limitations of traditional mapping methods for dynamic urban environments. This research contributes a cost-effective and accurate method for urban forest assessment, demonstrating the value of drone technology and readily available RGB imagery. The entire process, from image acquisition to classification, was completed in approximately 12 hours, showcasing its efficiency. 
Although this study focused on only two tree types in a single season, the developed methodology shows potential for broader application in classifying a wider range of species and informing management practices across different seasons by considering the phenological stages of trees. The proposed approach provides urban planners and forest managers with a valuable tool for enhancing ecosystem services and improving the quality of life in urban areas. This study underscores the potential of drone technology to revolutionize urban forest monitoring and management practices, paving the way for more sustainable and informed decision-making, particularly in rapidly urbanizing regions.

Keywords Urban forestry, Drone, Remote sensing, Support vector machine, Tree type classification

1. Introduction

Urban forests are vital components of sustainable urban ecosystems, delivering essential services like climate regulation, air purification, and enhanced human well-being (Dwyer et al., 1992; Nowak et al., 2006). These benefits are increasingly crucial as urbanization intensifies and the impacts of climate change become more pronounced (Ma et al., 2021). Effective management of urban forests depends on accurate and timely information regarding tree species composition and distribution. Such data are crucial for informed policy formulation, ecological modeling, urban planning, and assessing carbon sequestration potential in line with carbon neutrality goals (Nowak and Dwyer, 2007; Park Green Space Act, 2020). However, acquiring detailed and up-to-date information on urban tree species remains a challenge due to the dynamic nature of urban environments and limitations in traditional assessment methods (Velasquez-Camacho et al., 2021).

In South Korea, urban forests are formally recognized for their ecological and social importance under the Act on the Creation and Management of Urban Forests, Etc., including diverse green spaces like parks, roadside plantings, and trees within residential areas (Korea Forest Service, 2013). These urban forests play a significant role in enhancing the quality of life for urban residents, contributing to environmental sustainability and human well-being (Park Green Space Act, 2020). While national and local initiatives promote urban green space expansion, a research gap exists in developing objective, systematic, and efficient techniques for their management and monitoring, particularly for classifying tree types at a scale relevant for practical management decisions (Gu et al., 2020). This gap highlights the need for advanced methods that can accurately assess and monitor urban forest resources to support effective decision-making and sustainable urban development.

Traditional methods for obtaining tree species information, such as field surveys and aerial photograph interpretation, often employed in national forest inventories, are labor-intensive, time-consuming, and may lack the spatial and temporal resolution needed for detailed assessments of dynamic urban environments (Velasquez-Camacho et al., 2021). Moreover, the complex mosaic of land use and diverse vegetation types in urban areas further complicates accurate species identification using these conventional approaches (Wang et al., 2021). For instance, traditional forest maps may not fully capture the heterogeneity of tree species within small urban parks or along roadsides, and they often lack information on individual tree health and condition. This limitation hinders effective urban forest management, which requires fine-grained information to address specific needs and challenges within heterogeneous urban landscapes.

The emergence of Unmanned Aerial Vehicles (UAVs), or drones, as a high-resolution remote sensing platform, combined with advancements in Artificial Intelligence (AI), offers a transformative opportunity for efficient and cost-effective tree species classification in urban environments (Chehreh et al., 2023; Guo et al., 2022; Lee et al., 2022). UAVs provide the flexibility to acquire high-resolution data even in complex urban settings, overcoming the spatial and temporal limitations of traditional methods (Jeong et al., 2024; Shivaprakash et al., 2022). Drone-based RGB imagery, in particular, presents a highly promising avenue due to its affordability, accessibility, and ability to capture fine-scale details of tree crowns. These details can be used to extract textural and morphological features vital for species differentiation (Fassnacht et al., 2016; Gu et al., 2020).

This study leverages the potential of drone-based RGB imagery and machine learning to address the research gap in urban tree species classification. We propose a framework utilizing an SVM classifier and drone-acquired RGB imagery to classify two common urban tree types in South Korea - coniferous and broadleaf trees - within a confined urban area in Cheongju-si. Cheongju-si is a rapidly urbanizing region with approximately 10,637 ha of urban forest, making it a relevant case study for exploring innovative urban forest monitoring techniques (Park Green Space Act, 2020). The study investigates the influence of texture features, derived from Gray-Level Co-occurrence Matrix (GLCM) analysis, on classification accuracy. It also evaluates the performance of different SVM kernel types (linear, polynomial, radial basis function [RBF], and sigmoid) to identify the most effective configuration for this specific application.

Our research addresses several key questions: A) How effective is drone-based RGB imagery, combined with SVM classification, for accurately classifying coniferous and broadleaf trees in a complex urban environment? B) Do GLCM-derived texture features significantly enhance classification accuracy in this context, or are RGB color bands alone sufficient for reliable classification? C) What is the optimal SVM kernel type for this classification task, considering both accuracy and computational efficiency? D) How do the results of our drone-based classification compare with existing forest maps, and what advantages does this approach offer for urban forest management?

By addressing these questions, this study aims to contribute to the development of practical and accessible tools for urban forest monitoring and management in South Korea and other rapidly urbanizing regions.

2. Materials and Methods

2.1. Study Area

This study was conducted at the Chungbuk National University Arboretum, a 2.5-hectare site located in Cheongju-si, Chungcheongbuk-do, Republic of Korea (36°37′44″N, 127°27′13″E) (Fig. 1). The arboretum serves as a research and educational site for forest science and is representative of the expanding urban forests within the region. The arboretum’s forest is mixed, with dominant broad-leaved species including Quercus acutissima and Cornus controversa, and coniferous species such as Pinus densiflora and Pinus strobus. A 1,400 m wooden deck walking trail is integrated into the arboretum, designed to minimize tree removal and thus maintain continuous canopy cover. This feature enhances the site’s suitability for studying urban forest structure using drone imagery. The diverse tree species composition, coupled with the arboretum’s location within a rapidly urbanizing region, makes it an ideal location for investigating the effectiveness of drone-based remote sensing for urban forest management.

Fig. 1. Location and characteristics of the study area, the Chungbuk National University Arboretum in Cheongju-si, South Korea.

The arboretum was selected as the study site to address the limited research on tree species composition within these urban forests. It provides a controlled yet representative setting for applying novel remote sensing techniques. Of particular note is the arboretum’s status as the only known Korean habitat for the native Cladrastis platycarpa (Maxim.) Makino, highlighting its value for conservation.

2.2. Research Process and Methods

This study aimed to develop an accurate and efficient method for classifying tree types (coniferous and broadleaf) in urban forests using drone-acquired RGB imagery. The research process involved a systematic workflow that integrated drone image acquisition, in-situ data collection, image processing, feature extraction, and model development (Fig. 2).

Fig. 2. Workflow for tree type classification using drone RGB imagery and SVM classifier.

Data Collection: Drone image acquisition and field surveys were conducted concurrently to obtain high-resolution RGB imagery and identify corresponding coniferous and broadleaf tree crowns within the study area.

Image Processing and Segmentation: Image processing involved generating an orthomosaic from the drone imagery to create a geometrically accurate representation of the study area. Individual tree crowns were then segmented from the orthomosaic, and these segmented crowns were labeled as either coniferous or broadleaf based on the field survey data.

Feature Extraction: Feature extraction focused on calculating GLCM-TFs from the segmented RGB imagery. Eight statistical texture features—Mean (MN), Variance (VE), Homogeneity (HY), Contrast (CT), Angular Second Moment (ASM), Dissimilarity (DY), Entropy (EY), and Correlation (CN)—were extracted from each crown to capture subtle differences in morphology and color patterns (Haralick et al., 1973).

SVM Classification and Model Development: An SVM classifier was trained using the RGB values and GLCM-TFs as input data. Model development included hyperparameter tuning via a grid search to optimize model parameters, 5-fold cross-validation to ensure robustness and minimize overfitting, and performance analysis using a confusion matrix to evaluate classification accuracy. The aim was to identify the most effective combination of features and SVM parameters for accurate tree-type classification.

Key Features of Methodology: This study contributes a practical methodology for urban forest assessment using readily available drone-acquired RGB imagery. The focus on readily available RGB data and a streamlined workflow using an SVM classifier, instead of a computationally intensive Convolutional Neural Network (CNN), is a key feature of this research.

2.3. Drone Image Acquisition

High-resolution RGB imagery was acquired for this study using a DJI Matrice 300 RTK rotary-wing drone equipped with a Zenmuse H20T RGB sensor (DJI, Shenzhen, China). This imagery served as the primary data source for tree type classification, leveraging the ability of drone-based sensors to capture detailed information about tree crown morphology and color.

Data acquisition was conducted on May 10, 2024, at 10:00 AM under favorable weather conditions, with clear skies, minimal wind (less than 2 m/s), a temperature of 18.5°C, and 47% humidity. The flight followed a double-grid pattern to ensure comprehensive coverage of the 2.5 ha study area at Chungbuk National University Arboretum, located in Cheongju-si, Chungcheongbuk-do, Republic of Korea (36°37′44″N, 127°27′13″E) (Fig. 1).

Image overlap was set to 70% along the flight path and 80% between flight lines to facilitate accurate orthomosaic generation. The drone was flown at an altitude of 80 meters above ground level, maintaining a speed of 3.9 m/s. These flight parameters resulted in a Ground Sampling Distance (GSD) of 2.7 cm/px, providing high-resolution imagery suitable for detailed analysis of individual tree crowns. The acquisition of high-resolution imagery that allows for the analysis of individual tree crowns is a key feature of this study’s approach to tree type classification.
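The relationship between flight altitude, camera geometry, and GSD can be sketched with the standard pinhole-camera formula. The sensor parameters below are illustrative placeholders, not the Zenmuse H20T's actual specifications:

```python
def ground_sampling_distance(altitude_m, sensor_width_mm, focal_length_mm, image_width_px):
    """Ground sampling distance in metres per pixel.

    GSD = (altitude * sensor_width) / (focal_length * image_width);
    the millimetre units in numerator and denominator cancel.
    """
    return altitude_m * sensor_width_mm / (focal_length_mm * image_width_px)

# Hypothetical sensor parameters (not from the paper): at 80 m altitude, a
# camera with a 6.4 mm-wide sensor, 4.5 mm focal length, and 4056 px image
# width gives roughly 2.8 cm/px, close to the 2.7 cm/px reported here.
gsd = ground_sampling_distance(80, 6.4, 4.5, 4056)
```

Note that this simple formula ignores lens distortion and terrain relief, both of which Pix4D's photogrammetric solution accounts for.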

2.4. Image Preprocessing and Orthomosaic Generation

The drone-acquired RGB images were processed using Pix4DMapper V4.8.4 software (Pix4D SA, Lausanne, Switzerland) to generate a geometrically corrected orthomosaic. This orthomosaic provided a seamless and accurate spatial representation of the study area, essential for subsequent tree crown segmentation and feature extraction. Four Ground Control Points (GCPs) were established within the arboretum, and their coordinates were precisely measured using a GPS-RTK V30 (HI-Target, Guangzhou, China), as shown in Fig. 1. These GCPs were evenly distributed across the study area and marked with high-contrast targets visible in the drone imagery. They were used to geometrically correct the orthomosaic, ensuring accurate spatial alignment and minimizing distortions. Image processing was performed on a workstation equipped with an Intel Core i7-8700 processor, an NVIDIA Quadro M4000 graphics card, and 64 GB of RAM. This configuration provided the computational power necessary for efficient processing of the high-resolution drone imagery. The Root Mean Square Error (RMSE) of the geometric correction was less than 5 cm, indicating high geometric accuracy of the orthomosaic.
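The reported sub-5 cm figure can be read as a horizontal RMSE over GCP check-point residuals. A minimal sketch of that computation (not Pix4D's internal procedure; the coordinates are hypothetical):

```python
import math

def gcp_rmse(measured, reference):
    """Horizontal RMSE of GCP check points (same units as the coordinates)."""
    squared = [(mx - rx) ** 2 + (my - ry) ** 2
               for (mx, my), (rx, ry) in zip(measured, reference)]
    return math.sqrt(sum(squared) / len(squared))

# Hypothetical residuals in metres: one point perfect, one 5 cm off in x.
measured  = [(0.00, 0.00), (10.05, 20.00)]
reference = [(0.00, 0.00), (10.00, 20.00)]
rmse = gcp_rmse(measured, reference)  # sqrt((0 + 0.0025) / 2) ~= 0.035 m
```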

2.5. Crown Segmentation

Individual tree crowns were manually segmented from the orthomosaic using QGIS V3.36.1 software (Open Source Geospatial Foundation). This segmentation step isolated tree crowns from the background and other features in the imagery, enabling accurate extraction of tree-specific characteristics for subsequent analysis. The high spatial resolution of the orthomosaic facilitated the precise delineation of crown boundaries.

Visual interpretation guided the segmentation process, leveraging cues such as crown shape, texture, color, and shadows cast by the trees. Each polygon was carefully drawn to encompass the entire crown while minimizing the inclusion of non-crown pixels. Non-tree features, such as shadows, shrubs, and grasslands, were excluded during the segmentation process.

To ensure consistency, a single expert operator performed all crown delineations. A subset of the segmented crowns (10%) was independently reviewed by another experienced operator to assess the accuracy and consistency of the segmentation. The agreement between the two operators was high, with an average Intersection over Union (IoU) of 0.85. The resulting segmented crown polygons, saved in Shapefile format, served as the basis for extracting textural and spectral information for each tree.
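The IoU score used to quantify agreement between the two operators can be illustrated on rasterized crown masks. The study compared vector polygons; this boolean-mask version is a simplified sketch of the same idea:

```python
import numpy as np

def mask_iou(mask_a, mask_b):
    """Intersection over Union of two boolean crown masks of equal shape."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return intersection / union if union else 0.0

# Two overlapping square "crowns" rasterized on a 10 x 10 grid.
a = np.zeros((10, 10), dtype=bool); a[0:6, 0:6] = True  # 36 px
b = np.zeros((10, 10), dtype=bool); b[2:8, 2:8] = True  # 36 px
iou = mask_iou(a, b)  # overlap 16 px, union 56 px -> 16/56 ~= 0.286
```

An IoU of 0.85, as reported here, indicates substantially tighter agreement than this toy example.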

2.6. Field Survey and Data Labeling

A field survey was conducted on May 11, 2024, to collect ground truth data for tree type classification. This survey aimed to provide accurate labels for the segmented tree crowns, essential for training and validating the SVM classification model.

The electronic tree map provided by Chungbuk National University Academic Forest and the Forest Spatial Information Service operated by the Korea Forest Service were used as references to identify potential areas with target tree types initially. The target area was divided into four sections (as shown in Fig. 1), and each was systematically surveyed. On-site analysis of tree crowns was performed to classify each as either coniferous (labeled as 0) or broadleaf (labeled as 1). The identification was based on visual inspection of leaf morphology, branching patterns, and overall crown shape. For each tree, the species, Diameter at Breast Height (DBH), and tree height were also recorded. A total of 424 trees were identified and labeled, with 158 coniferous and 266 broadleaf trees.

These labels were then integrated into the crown segmentation shapefile using QGIS, creating a database linking each segmented crown to its corresponding tree type and other field-measured attributes. This database served as the ground truth for subsequent model training and evaluation.

2.7. Texture Analysis Using GLCM

To capture potentially discriminatory textural information within individual tree crowns, we employed the GLCM method. GLCM is a well-established technique for quantifying spatial relationships between pixel intensities (Haralick et al., 1973). While spectral data (RGB) often dominate vegetation classification, texture analysis can provide complementary information, particularly regarding crown morphology and internal structure (Dian et al., 2015).

For each segmented tree crown, normalized GLCMs were generated for each RGB band using the scikit-image library in Python (van der Walt et al., 2014). A 5 × 5 pixel window, a distance of 1 pixel, and four orientations (0°, 45°, 90°, 135°) were used to capture local textural variations. These parameters were chosen based on preliminary testing, balancing computational efficiency against the need to capture sufficient textural detail at the image resolution of 2.7 cm/px. From each GLCM, eight statistical texture features were calculated: MN, VE, HY, CT, ASM, DY, EY, CN (Table 1). These features represent different aspects of texture, such as overall brightness (MN), local variation (VE, CT, DY), smoothness (HY, ASM), randomness (EY), and linear dependencies (CN). The average value of each texture feature across the four orientations was then calculated, resulting in eight texture features per RGB band, for a total of 24 texture features per tree crown. This approach provides a comprehensive yet computationally manageable representation of crown texture. These GLCM-derived features, along with the mean RGB values, were subsequently used as input for the SVM classifier.

Table 1 Selected GLCM TFs and equations

Variable (Abbreviation) | Equation
Mean (MN) | MN = Σ_{i,j=0}^{N−1} i · P(i,j)
Variance (VE) | VE = Σ_{i,j=0}^{N−1} (i − μ)² · P(i,j)
Homogeneity (HY) | HY = Σ_{i,j=0}^{N−1} P(i,j) / (1 + (i − j)²)
Contrast (CT) | CT = Σ_{i,j=0}^{N−1} (i − j)² · P(i,j)
Angular Second Moment (ASM) | ASM = Σ_{i,j=0}^{N−1} P(i,j)²
Dissimilarity (DY) | DY = Σ_{i,j=0}^{N−1} P(i,j) · |i − j|
Entropy (EY) | EY = −Σ_{i,j=0}^{N−1} P(i,j) · ln P(i,j)
Correlation (CN) | CN = Σ_{i,j=0}^{N−1} (i − μ)(j − μ) · P(i,j) / σ²

P(i,j) is the normalized GLCM entry, N is the number of gray levels, and μ and σ² are the GLCM mean and variance.
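The Table 1 statistics can be reproduced from a normalized, symmetric GLCM. The study used scikit-image; the from-scratch NumPy sketch below makes the formulas explicit, and the final step averages each feature over the four orientations as described in Section 2.7:

```python
import numpy as np

def glcm(img, dx, dy, levels):
    """Normalized, symmetric GLCM of an integer image for one pixel offset."""
    P = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                P[img[y, x], img[y2, x2]] += 1.0
                P[img[y2, x2], img[y, x]] += 1.0  # symmetric counting
    return P / P.sum()

def haralick_features(P):
    """The eight Table 1 statistics from a normalized GLCM P."""
    n = P.shape[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    mean = (i * P).sum()
    var = ((i - mean) ** 2 * P).sum()
    nz = P[P > 0]  # avoid log(0) in the entropy term
    return {
        "MN": mean,
        "VE": var,
        "HY": (P / (1.0 + (i - j) ** 2)).sum(),
        "CT": ((i - j) ** 2 * P).sum(),
        "ASM": (P ** 2).sum(),
        "DY": (np.abs(i - j) * P).sum(),
        "EY": -(nz * np.log(nz)).sum(),
        "CN": ((i - mean) * (j - mean) * P).sum() / var if var > 0 else 1.0,
    }

# Average each feature over the four orientations (0, 45, 90, 135 degrees);
# dy is negative because image rows grow downward.
img = np.array([[0, 0, 1], [0, 0, 1], [0, 1, 1]])  # toy 2-level crown patch
offsets = [(1, 0), (1, -1), (0, -1), (-1, -1)]
per_offset = [haralick_features(glcm(img, dx, dy, levels=2)) for dx, dy in offsets]
averaged = {k: sum(f[k] for f in per_offset) / 4 for k in per_offset[0]}
```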


2.8. Tree Classification Using SVM

We employed the SVM algorithm for tree type classification (coniferous vs. broadleaf). SVM is a supervised learning method known for its effectiveness in high-dimensional spaces and with limited training data (Zhou et al., 2016; Foody and Mathur, 2004). SVMs identify an optimal hyperplane that maximizes the margin between classes, making them robust to complex data distributions, including those encountered in remote sensing image analysis (Patel and Chouhan, 2013). Our implementation utilized the scikit-learn library in Python (Pedregosa et al., 2011).

To explore the performance of different decision boundaries, we evaluated four SVM kernel functions: Linear, Polynomial, Radial Basis Function (RBF), and Sigmoid. Each kernel’s performance is influenced by hyperparameters: the cost parameter (C), controlling misclassification penalty, and, for non-linear kernels, the gamma parameter (γ), influencing the decision boundary’s curvature. To optimize each kernel, a grid search with 5-fold cross-validation was performed using GridSearchCV (scikit-learn). C values ranged from 0.1 to 10, and γ values from 0.001 to 1. The optimal hyperparameters were selected based on the highest mean cross-validated accuracy.
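The kernel and hyperparameter search can be sketched with scikit-learn's GridSearchCV. The synthetic two-class data below merely stand in for the 11-feature crown dataset; the grid values are illustrative:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Stand-in for the 11 crown features (mean R, G, B + 8 GLCM statistics):
# two classes drawn from shifted Gaussians so the problem is learnable.
X = np.vstack([rng.normal(0.0, 1.0, (150, 11)),
               rng.normal(1.5, 1.0, (150, 11))])
y = np.array([0] * 150 + [1] * 150)  # 0 = coniferous, 1 = broadleaf
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# Grid search over kernel, C, and gamma with 5-fold cross-validation.
pipe = make_pipeline(StandardScaler(), SVC())
param_grid = {
    "svc__kernel": ["linear", "rbf", "poly", "sigmoid"],
    "svc__C": [0.1, 1, 10],
    "svc__gamma": [0.001, 0.01, 0.1, 1],
}
search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy")
search.fit(X_tr, y_tr)
test_accuracy = search.score(X_te, y_te)  # held-out 30% test set
```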

The input feature vector for each tree crown consisted of 11 variables: the mean RGB values and the average of eight GLCM-derived texture features (calculated across four orientations, as described in Section 2.7). The labeled dataset (424 trees) was split into training (70%, 297 trees) and testing (30%, 127 trees) sets. Following model training, Permutation Importance was calculated to quantify the relative contribution of each feature to classification accuracy (Breiman, 2001). This involved randomly shuffling each feature’s values 10 times and measuring the average decrease in model accuracy. This analysis informed our understanding of the key drivers of classification performance. The final model selection was based on a combination of classification accuracy (on the test set) and computational efficiency (training time).
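Permutation importance can be illustrated with scikit-learn's `permutation_importance`. The toy dataset below, in which only feature 0 carries the class signal, is illustrative and not the study's crown data:

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))   # 5 toy features per "crown"
y = (X[:, 0] > 0).astype(int)   # only feature 0 carries the class signal

# RBF SVM with the hyperparameters reported as optimal in this study.
clf = SVC(kernel="rbf", C=10, gamma=0.01).fit(X, y)

# Shuffle each feature 10 times and record the mean drop in accuracy.
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
most_important = int(np.argmax(result.importances_mean))
```

Shuffling the informative feature collapses accuracy toward chance, while shuffling noise features barely moves it, which is exactly the contrast reported in Section 3.3 between the RGB bands and the GLCM features.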

2.9. Acquisition and Use of Existing Forest Map Data

The forest map utilized in this study was obtained from the Forest Geospatial Information System (FGIS, https://map.forest.go.kr/forest), operated by the Korea Forest Service. This map served as a reference dataset for the study area, aiding in field survey planning and providing a general overview of the forest types present within the arboretum. The 1:5,000 scale map was derived from aerial photograph interpretation and field investigations conducted during the 5th National Forest Inventory (2006–2010) (Korea Forest Service, 2013). It classifies forests into five categories: coniferous (1), deciduous (2), mixed (3), bamboo (4), and uncultivated/non-forest (0). The specific area corresponding to the research site was extracted from the larger forest map. This map, acquired in Shapefile format, served as a spatial reference for ground truth data collection and provided a baseline for understanding the forest composition within the study area. It is important to note that this map was primarily used for general reference and not for direct validation of our classification results due to differences in spatial resolution, data acquisition time, and classification methods.

2.10. Hyperparameter Selection

SVM classification involves selecting appropriate hyperparameters to optimize model performance. These hyperparameters include the cost parameter (C) and the gamma parameter (γ), which control the penalty for misclassification and the decision boundary’s curvature, respectively. We employed a 5-fold cross-validation technique using GridSearchCV from the scikit-learn library to identify the optimal hyperparameters for each SVM kernel. The selection process aimed to maximize the model’s performance by exploring various combinations of C and γ values, ultimately choosing the combination that yielded the highest cross-validated accuracy.

The grid search revealed that different kernels achieved their best performance with different C and γ values. The linear kernel achieved its highest accuracy with C=1, while the polynomial kernel performed optimally with C=1 and γ=0.1. The RBF kernel achieved its best performance with C=10 and γ=0.01, and the sigmoid kernel with C=0.1 and γ=0.001. These optimal hyperparameter combinations were then used for the subsequent classification and evaluation steps.

2.11. Accuracy Evaluation

The performance of the SVM models was rigorously evaluated using a held-out test set (30% of the data) to provide an unbiased assessment of their generalization capability. We used several metrics to assess the model performance, including overall accuracy, precision, recall, and the F1-score (Table 2). Overall accuracy represents the proportion of correctly classified trees across both coniferous and broadleaf classes. Precision quantifies the accuracy of positive predictions, while recall measures the completeness of positive predictions. The F1-score, the harmonic mean of precision and recall, provides a balanced measure, particularly relevant given the slight class imbalance in our dataset.

Table 2 Performance metrics and calculation formulas

Index | Formula
Accuracy | (TP + TN) / (TP + FP + FN + TN)
Precision | TP / (TP + FP)
Recall | TP / (TP + FN)
F1-score | 2 × (Precision × Recall) / (Precision + Recall)


A confusion matrix was generated to analyze classification errors further, visualizing the distribution of true positives, true negatives, false positives, and false negatives. This allowed for a detailed examination of misclassification patterns between coniferous and broadleaf trees. In addition to these test-set metrics, 5-fold cross-validation was performed during model training to ensure robustness and mitigate overfitting to the training data. The model exhibiting the highest F1-score on the test set, after hyperparameter optimization using cross-validation, was selected as the final classification model. This selection prioritized a balance between precision and recall, which is crucial for reliable application in urban forest management.
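The Table 2 metrics and the confusion-matrix counts can be computed directly from paired label lists; a minimal sketch:

```python
def binary_metrics(y_true, y_pred):
    """Confusion counts and the Table 2 metrics for a 0/1 classification."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"TP": tp, "TN": tn, "FP": fp, "FN": fn, "accuracy": accuracy,
            "precision": precision, "recall": recall, "f1": f1}

# Illustrative labels only (0 = coniferous, 1 = broadleaf), not study data.
m = binary_metrics([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1])
```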

3. Results and Discussion

3.1. Performance of SVM Kernels and Feature Combinations

We evaluated four SVM kernels (Linear, RBF, Polynomial, and Sigmoid) with various feature combinations to identify the optimal model for classifying coniferous and broadleaf trees, focusing on both accuracy and computational efficiency (Table 3). The RBF kernel, after hyperparameter optimization (C=10, γ=0.01), achieved the highest overall accuracy (99%) and F1-score (0.99), indicating excellent performance in separating the two tree types. The Linear kernel also performed exceptionally well (99% accuracy, 0.99 F1-score), albeit with a considerably longer training time (96.8s vs. 5.0s for RBF). The Polynomial and Sigmoid kernels exhibited slightly lower performance.

Table 3 Performance comparison of SVM kernels for tree type classification using RGB and GLCM

Kernel | Cost (C) | Gamma (γ) | Cross-validated Accuracy (SD) | Training Time (s) | Accuracy | Precision | Recall | F1-score
RBF | 10 | 0.01 | 0.97 (0.02) | 5.0 | 0.99 | 0.98 | 1.00 | 0.99
Linear | 1 | 0.01 | 0.98 (0.01) | 96.8 | 0.99 | 0.98 | 1.00 | 0.99
Polynomial | 0.01 | 10 | 0.95 (0.03) | 3.4 | 0.98 | 1.00 | 0.96 | 0.98
Sigmoid | 100 | 0.01 | 0.95 (0.02) | 2.4 | 0.95 | 0.96 | 0.96 | 0.96


Further analysis investigated the contribution of individual RGB bands and GLCM texture features (Variance and Homogeneity) (Tables 4 and 5). Consistently, combining all three RGB bands (Red, Green, Blue) yielded the highest accuracies across both Linear and RBF kernels. Adding GLCM features did not improve, and in some cases slightly decreased, classification accuracy, while increasing training time. Normalized difference indices performed poorly. This demonstrates that, for this specific classification task and dataset, spectral information from the RGB bands was paramount, and the added complexity of texture features was not beneficial. This finding contrasts with some studies where texture features improved vegetation classification but aligns with others emphasizing the dominance of spectral information (Fassnacht et al., 2016). The limited contribution of GLCM features here might be due to the relatively homogeneous forest structure or the spatial resolution of the RGB imagery, which may not have been optimal for capturing the relevant textural variations at the tree crown level.

Table 4 Performance of the linear SVM kernel for tree type classification using different combinations of RGB color bands and GLCM TFs

Feature Combination | Cost (C) | Gamma (γ) | Accuracy | Training Time (s)
B | 1 | 0.01 | 0.942 | 21.2
B + R | 0.1 | 0.01 | 0.899 | 684.1
B + G | 100 | 0.01 | 0.963 | 374.7
B + R + G | 100 | 0.01 | 0.983 | 452.7
RGB + V | 0.01 | 0.01 | 0.979 | 438.0
RGB + H | 100 | 0.01 | 0.979 | 310.3
RGB + V + H | 0.01 | 0.01 | 0.976 | 192.2
(B − R) / (B + R) | 0.1 | 0.01 | 0.942 | 1819.9
(B − G) / (B + G) | 100 | 0.01 | 0.959 | 29.4
(R − G) / (R + G) | 0.01 | 0.01 | 0.942 | 1887.2


Table 5 Performance of the RBF SVM kernel for tree type classification using different combinations of RGB color bands and GLCM TFs

Feature Combination | Cost (C) | Gamma (γ) | Accuracy | Training Time (s)
B | 0.1 | 1 | 0.899 | 87.4
B + R | 1.0 | 10 | 0.905 | 14.4
B + G | 10 | 1 | 0.973 | 7.4
B + R + G | 10 | 0.01 | 0.986 | 7.0
RGB + V | 10 | 0.001 | 0.983 | 5.0
RGB + H | 100 | 0.001 | 0.986 | 6.7
RGB + V + H | 100 | 0.001 | 0.986 | 7.3
(B − R) / (B + R) | 10 | 0.01 | 0.949 | 26.1
(B − G) / (B + G) | 0.1 | 0.01 | 0.959 | 29.4
(R − G) / (R + G) | 1 | 0.01 | 0.949 | 30.2


3.2. Confusion Matrix Analysis

Fig. 3 presents the confusion matrix for the best-performing model (RBF kernel, C=10, γ=0.01). The matrix reveals that the model achieved high classification accuracy for both coniferous (Precision=0.98, Recall=0.95) and broadleaf (Precision=0.96, Recall=0.99) trees. However, some misclassifications did occur. Specifically, 3 broadleaf trees were misclassified as coniferous (false negative), and 1 coniferous tree was misclassified as broadleaf (false positive). These misclassifications may be attributed to several factors, including overlapping tree crowns, which can create mixed pixels containing spectral information from both tree types. Additionally, variations in illumination and shadow within the canopy can affect the spectral signatures of individual trees. Furthermore, the limited spectral resolution of RGB imagery may hinder the differentiation of species with subtle spectral differences, especially if those differences manifest in wavelengths outside the visible spectrum. Finally, edge effects at the boundaries of tree crowns could also contribute to misclassification, as these areas may contain a mix of tree and background pixels.

Fig. 3. Confusion matrix for RBF SVM classification accuracy evaluation (Predicted Class / True Class).

3.3. Feature Importance Analysis

We assessed the relative importance of each feature using the Permutation Importance method (Breiman, 2001) to understand the drivers of classification accuracy. Fig. 4 demonstrates the dominance of spectral information (RGB bands) over textural features (GLCM) in distinguishing between coniferous and broadleaf trees in our study area. The blue band exhibited the highest importance (0.38), followed by red (0.23) and green (0.20), indicating that color differences were the primary discriminatory factors. This prominence of the blue band may reflect species-specific variations in pigment concentrations or leaf structural properties that influence reflectance in this portion of the spectrum. However, further physiological investigation would be needed to confirm this.

Fig. 4. Relative feature importance in SVM model using permutation importance.

Among the GLCM features, Variance (0.03) showed the highest, albeit limited, importance. All other GLCM features (Homogeneity, Dissimilarity, Correlation, Mean, ASM, Contrast, and Entropy) exhibited negligible importance scores (≤ 0.02). This reinforces the finding that, for this specific classification task and dataset, textural information provided minimal additional value beyond the RGB data. The limited contribution of GLCM features could be attributed to several factors, including the relatively homogeneous structure of the two tree types at the scale captured by the 2.7 cm/px resolution imagery, the use of RGB imagery, which lacks the spectral detail of multispectral or hyperspectral data, or the chosen GLCM parameters (5 × 5 window, 1-pixel distance), which might not have been optimal for capturing relevant textural variations. This finding, while contrasting with some studies that found texture beneficial (Dian et al., 2015), highlights the context-dependency of feature importance in remote sensing classification.

3.4. Comparison with Existing Forest Maps

We compared our drone-derived classification results with the existing forest map from the Korea Forest Service’s Forest Geospatial Information System (FGIS) (Fig. 5) to assess their alignment and highlight the potential advantages of our drone-based approach. The FGIS map, produced at a 1:5,000 scale using aerial photograph interpretation and field data from the 5th National Forest Inventory (2006-2010), provides a broad categorization of forest types. This represents a fundamentally different level of detail and temporal currency compared to our drone-based approach.

Fig. 5. Comparison of drone and FGIS-based forest type maps.

The most striking difference is the level of detail. The FGIS map predominantly classifies the arboretum as a mixed forest, reflecting its coarse resolution and inability to resolve individual tree types. In contrast, our drone-derived map, with its 2.7 cm/px resolution, reveals a much more heterogeneous and nuanced distribution of coniferous and broadleaf trees. Areas designated as “mixed” by the FGIS were often found to be dominated by either coniferous or broadleaf trees, highlighting the limitations of the FGIS map for precise urban forest management. This discrepancy is not simply a matter of classification accuracy, but a fundamental difference in the scale of observation. The FGIS provides a regional overview, while our method provides a tree-level inventory. Furthermore, the FGIS map’s age (data from 2006–2010) means it cannot reflect changes due to management practices, natural disturbances, growth, or mortality that have occurred in the intervening 15+ years. This temporal mismatch underscores the need for regularly updated, high-resolution mapping for effective urban forest management.
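The kind of map disagreement described above can be quantified with a simple per-pixel cross-tabulation of the two co-registered class maps. The sketch below uses synthetic maps and hypothetical class codes (0 = coniferous, 1 = broadleaf, 2 = mixed); in practice the rasters would be read with a GIS library such as rasterio:

```python
# Illustrative cross-tabulation of two co-registered class maps
# (drone-derived vs. FGIS). Class codes are hypothetical.
import numpy as np

def cross_tab(map_a, map_b, n_classes):
    """Count pixels falling in each (map_a, map_b) class pair."""
    table = np.zeros((n_classes, n_classes), dtype=int)
    for a, b in zip(map_a.ravel(), map_b.ravel()):
        table[a, b] += 1
    return table

rng = np.random.default_rng(1)
drone = rng.integers(0, 2, size=(100, 100))  # fine-scale: coniferous/broadleaf
fgis = np.full((100, 100), 2)                # coarse map labels most area "mixed"

table = cross_tab(drone, fgis, n_classes=3)
print(table)
# Rows: drone classes; columns: FGIS classes. A dominant "mixed" column
# shows how much tree-level detail the coarse map collapses into one class.
```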

3.5. Processing Time and Efficiency

A key advantage of the proposed methodology is its computational efficiency. The entire workflow, from drone image acquisition to final classification using the optimized RBF kernel SVM model (C=10, γ=0.01), required approximately 12 hours (Table 6). This included 30 minutes for image acquisition, 6 hours for orthomosaic generation, 4 hours for manual crown segmentation, and 2 hours for feature extraction and model training/validation. While processing time is dependent on factors such as study area size, hardware, and software settings, this 12-hour timeframe represents a significant improvement over traditional field-based methods, which could take weeks or months to cover a comparable area with the same level of detail. Furthermore, although a direct comparison with other machine learning methods was outside the scope of this study, the computational demands of our SVM approach are considerably lower than those typically associated with deep learning models applied to high-resolution imagery (LeCun et al., 2015; Ma et al., 2024). This makes our method a practical and accessible solution, particularly for resource-constrained urban forest management agencies. The most time-consuming steps were orthomosaic generation and crown segmentation. Future work could explore automated or semi-automated methods for these tasks to further improve efficiency.

Table 6. Processing time for key steps in the workflow

Step                                      Time (Hours)
Image Acquisition                         0.5
Orthomosaic Generation                    6
Crown Segmentation                        4
Feature Extraction and Model Training     2
Total                                     12
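As a rough sketch, the model training/validation step timed above could be reproduced with a scikit-learn grid search over C and γ (the reported optimum being C=10, γ=0.01). The data below are synthetic placeholders, not the study's feature table:

```python
# Minimal sketch of RBF-SVM hyperparameter optimization via grid search.
# Synthetic data; 11 columns stand in for e.g. 3 RGB + 8 GLCM features.
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 11))
y = (X[:, 0] + X[:, 2] > 0).astype(int)   # two tree-type classes

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    scoring="f1",  # F1-score, as reported in the study's evaluation
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```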


3.6. Usability of Drone Images in Urban Forest Management

This study demonstrated the significant potential of drone-acquired RGB imagery for urban forest management. The drone imagery provided high-resolution data with a GSD of approximately 2.7 cm/px, efficiently acquired in terms of both time and cost (Fig. 6a). This level of detail facilitated the accurate classification of 424 individual trees as either broadleaf (n=266) or coniferous (n=158), as shown in Fig. 6b. The entire process, including image acquisition, processing, and classification, took only 12 hours, highlighting its efficiency compared to traditional methods.
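A ground sample distance (GSD) such as the ~2.7 cm/px quoted above follows from the standard photogrammetric relation between flight altitude, sensor geometry, and image width. The flight and sensor parameters in this sketch are assumed for illustration, not the study's actual values:

```python
# GSD (cm/px) = altitude x sensor width / (focal length x image width).
# All parameter values below are hypothetical.
def gsd_cm_per_px(altitude_m, sensor_width_mm, focal_length_mm, image_width_px):
    """Ground sample distance in cm/px for a nadir-pointing camera."""
    return (altitude_m * 100 * sensor_width_mm) / (focal_length_mm * image_width_px)

# Example: a small quadcopter with a 13.2 mm wide sensor, 8.8 mm lens,
# 5472-px-wide images, flown at ~100 m (all assumed).
print(round(gsd_cm_per_px(100, 13.2, 8.8, 5472), 2))  # prints 2.74
```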

Fig. 6. Comparison of tree type classification: drone imagery vs. Forest Service map. (a) Drone RGB orthomosaic of the study area at Chungbuk National University Arboretum. (b) Tree type classification map derived from drone imagery and field surveys (this study). (c) Tree type classification map from the Korea Forest Service Forest Spatial Information Service (2006–2010 data). (d) Proportional distribution of broadleaf and coniferous trees within mixed forest stands (FGIS-based).

Comparison with the Korea Forest Service’s existing forest map (Fig. 6c) highlighted the limitations of traditional methods, particularly their inability to provide detailed and up-to-date species-level classification in mixed forest stands. While the forest map showed general agreement in the location of coniferous and broadleaf stands, it was based on data from the 5th National Forest Inventory (2006-2010) and, therefore, did not accurately reflect the current forest composition, being over 15 years out of date. This discrepancy underscores a key finding of this study: drone imagery offers a significant advantage by providing current, accurate, and detailed data for urban forest monitoring (Fig. 6b). This capability to capture the dynamic nature of urban forests, which can change rapidly due to management practices, natural disturbances, or development, demonstrates the potential of drone technology to overcome the limitations of outdated forest maps. These results support the growing body of evidence that advocates for the use of drones as a valuable tool in urban forestry, providing objective and precise data essential for effective management and planning.

3.7. Discussion of GLCM Feature Performance

While our initial hypothesis suggested that GLCM-derived TFs would enhance classification accuracy, the results indicate that their contribution was limited in this specific context. The permutation importance analysis (Fig. 4) and the model performance comparison (Tables 4 and 5) both suggest that the RGB color bands provided sufficient information for accurate classification, and the addition of GLCM features did not yield significant improvements. This finding is consistent with some previous studies that have found spectral information to be more important than texture for classifying certain tree species or forest types (Fassnacht et al., 2016). However, it contrasts with other studies where TFs have been shown to improve classification accuracy (Dian et al., 2015).

Several factors may explain the limited impact of GLCM features in our study. First, the relatively homogeneous structure of the urban forest within the arboretum, dominated by only two main types, may have resulted in limited textural variations detectable by GLCM at the spatial resolution of the imagery (2.7 cm/px). Second, the use of RGB imagery, which has a coarser spectral resolution compared to multispectral or hyperspectral data, might have restricted the ability of GLCM to capture subtle textural differences relevant to species differentiation. Third, the specific parameters used for GLCM calculation, such as the window size (5 × 5 pixels) and distance (1 pixel), might not have been optimal for capturing the relevant textural variations in this particular dataset. While a 5 × 5 window is often used as a starting point in texture analysis, different window sizes can capture different scales of texture, and the optimal window size depends on the spatial resolution of the imagery and the size of the objects being analyzed. Similarly, the choice of distance influences the scale of texture being captured, and a distance of 1 pixel might be too small to capture meaningful textural variations in tree crowns, especially at a relatively fine spatial resolution. It is also worth noting that averaging the GLCM features across four orientations might have obscured some direction-dependent textural differences that could be relevant for classification. Exploring different parameter combinations, including different window sizes, distances, and orientations, could potentially enhance the contribution of GLCM features in future studies.

3.8. Implications for Urban Forest Management

The high accuracy and efficiency of the drone-based RGB imagery and SVM approach demonstrated in this study have significant implications for urban forest management. The ability to rapidly and accurately map tree types, even with readily available RGB data, provides a valuable tool for:

A) Inventory and Monitoring: Creating detailed and up-to-date inventories of urban forest resources, tracking changes in forest composition over time, and monitoring the effectiveness of management interventions.

B) Species-Specific Management: Developing and implementing species-specific management plans, such as selecting appropriate species for planting based on site conditions and management objectives, and managing for desired species diversity.

C) Pest and Disease Management: Potentially identifying early signs of stress or disease based on changes in spectral characteristics, although this would require further research using multispectral or hyperspectral data.

D) Ecosystem Service Assessment: Providing accurate data for quantifying and mapping ecosystem services, such as air purification, carbon sequestration, and temperature regulation, which can inform urban planning and policy decisions.

E) Risk Assessment: Identifying trees that may pose risks to infrastructure or public safety, particularly in combination with LiDAR data for structural assessments (Ma et al., 2024).

Our findings highlight the limitations of relying solely on existing forest maps, such as those provided by the Korea Forest Service, which may be outdated or lack the necessary spatial detail for effective urban forest management. The 1:5,000 scale FGIS map, while useful for regional planning, does not capture the fine-scale heterogeneity of urban forests and may not reflect recent changes in forest composition. The drone-based approach presented here offers a more accurate and timelier alternative for mapping and monitoring urban forests, enabling managers to make more informed decisions and optimize the benefits provided by these valuable green spaces.

3.9. Limitations and Future Research

Despite the promising results, this study has several limitations. First, the study was conducted in a relatively small and homogeneous urban forest with only two dominant tree types (coniferous and broadleaf). Further research is needed to evaluate the generalizability of the method to other urban forests with greater species diversity and structural complexity. Second, the study relied on RGB imagery, which has limited spectral resolution compared to multispectral or hyperspectral data. While RGB data proved sufficient for this specific classification task, future studies should explore the potential of multispectral or hyperspectral sensors for more detailed species identification and health assessment (Jeong et al., 2024). Third, the study was conducted during a single season (May). Seasonal variations in tree phenology can affect spectral and textural characteristics, and further research is needed to assess the performance of the method across different seasons (Lee et al., 2022). Fourth, although we achieved high classification accuracy, the manual segmentation of tree crowns remains a time-consuming step. Future research should investigate automated crown delineation methods to further improve the efficiency of the workflow.

Future research should also focus on addressing the limitations of GLCM texture features identified in this study. Investigating different GLCM parameters, such as window size, distance, and orientation, as well as exploring other texture descriptors, could potentially enhance their contribution to classification accuracy. Additionally, integrating data from multiple sensors, such as LiDAR and hyperspectral imagery (Zhang and Qiu, 2012), could provide a more comprehensive characterization of urban forests and improve the accuracy of species identification and health assessment. Finally, while our study demonstrated the efficiency of the SVM approach compared to traditional methods and potentially some deep learning applications, further research is needed to directly compare the performance and computational costs of SVM with other machine learning and deep learning algorithms for urban forest mapping using drone-based RGB imagery (Feng et al., 2015; Im et al., 2020).

3.10. Comparison with Other Machine Learning Methods

While a direct comparison with other machine learning methods was not the primary focus of this study, it is important to consider the strengths and weaknesses of SVMs in relation to other popular techniques, such as Random Forest and CNNs, based on existing literature (Weinstein et al., 2020). Random Forests are known for their robustness, ability to handle high-dimensional data, and relatively low computational cost (Belgiu and Drăguţ, 2016). However, they may not always achieve the same level of accuracy as SVMs, particularly when dealing with complex datasets and subtle differences between classes (Foody and Mathur, 2004). CNNs, a type of deep learning algorithm, have demonstrated impressive results in image classification tasks, including tree species identification (Grabska et al., 2020). They can automatically learn hierarchical features from raw data, potentially eliminating the need for manual feature engineering. However, CNNs typically require large amounts of training data and significant computational resources, which can be a limitation for some applications (LeCun et al., 2015).

In contrast, SVMs, especially with linear kernels, can be more computationally efficient and perform well even with limited training data, as demonstrated in our study. The choice of the most suitable method depends on various factors, including the specific research question, the characteristics of the dataset, the available computational resources, and the desired level of accuracy. Our study suggests that for classifying coniferous and broadleaf trees in urban forests using drone-based RGB imagery, SVMs offer a practical and effective solution, achieving high accuracy with relatively low computational demands.
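The accuracy/runtime trade-off discussed above can be illustrated by fitting an SVM and a Random Forest on the same data. The sketch below uses synthetic features; results on real crown features would of course differ:

```python
# Hedged illustration: compare fit time and test accuracy of an RBF SVM
# (with the study's reported hyperparameters) against a Random Forest.
# Data are synthetic placeholders.
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 11))
y = (X[:, 0] + X[:, 2] > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("SVM (RBF)", SVC(kernel="rbf", C=10, gamma=0.01)),
                  ("Random Forest", RandomForestClassifier(n_estimators=100,
                                                           random_state=0))]:
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    elapsed = time.perf_counter() - t0
    acc = clf.score(X_te, y_te)
    print(f"{name:14s} acc={acc:.3f} fit={elapsed:.3f}s")
```

Which classifier wins on either axis depends on dataset size and dimensionality, which is precisely why the paragraph above frames the choice as context-dependent.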

4. Conclusions

This study successfully developed and evaluated an efficient and accurate method for classifying coniferous and broadleaf trees in an urban forest using drone-acquired RGB imagery and an SVM classifier. The RBF kernel, optimized with a cost parameter of 10 and a gamma of 0.01, achieved the highest overall accuracy (99%) and F1-score (0.99), demonstrating the effectiveness of this approach. Notably, the entire process, from image acquisition to classification, took only about 12 hours, highlighting its practicality for real-world applications.

Analysis of feature importance revealed that RGB color bands, particularly the blue band, were the most significant discriminators between the two tree types. While initially hypothesized to enhance classification, GLCM texture features provided limited additional information in this specific context. This may be due to the relatively homogeneous forest structure, the coarser spectral resolution of RGB imagery compared to other sensors, and the chosen GLCM parameters.

Comparison with the Korea Forest Service’s 1:5,000 scale forest map, derived from older data (2006-2010), underscored the limitations of traditional methods for capturing the fine-scale heterogeneity and dynamic nature of urban forests. Our drone-based approach provides a more detailed and current assessment, essential for effective management.

This research offers a practical and cost-effective approach to urban forest mapping, with broader implications for South Korea and other rapidly urbanizing regions. By providing timely and accurate data, this method can contribute to more informed decision-making, improved management practices, and ultimately, the creation of healthier and more resilient urban green spaces. Future research should explore the potential of multispectral or hyperspectral sensors to improve species-level identification and assess tree health. Deep learning models and the incorporation of phenological stages could also be investigated.

No potential conflict of interest relevant to this article was reported.

  1. Belgiu, M., and Drăguţ, L., 2016. Random forest in remote sensing: A review of applications and future directions. ISPRS Journal of Photogrammetry and Remote Sensing, 114, 24-31. https://doi.org/10.1016/j.isprsjprs.2016.01.011
  2. Breiman, L., 2001. Random forests. Machine Learning, 45(1), 5-32. https://doi.org/10.1023/A:1010933404324
  3. Chehreh, B., Moutinho, A., and Viegas, C., 2023. Latest trends on tree classification and segmentation using UAV data-A review of agroforestry applications. Remote Sensing, 15(9), 2263. https://doi.org/10.3390/rs15092263
  4. Dian, Y., Li, Z., and Pang, Y., 2015. Spectral and texture features combined for forest tree species classification with airborne hyperspectral imagery. Journal of the Indian Society of Remote Sensing, 43, 101-107. https://doi.org/10.1007/s12524-014-0392-6
  5. Dwyer, J. F., McPherson, E. G., Schroeder, H. W., and Rowntree, R. A., 1992. Assessing the benefits and costs of the urban forest. Arboriculture & Urban Forestry, 18(5), 227-234. https://doi.org/10.48044/jauf.1992.045
  6. Fassnacht, F. E., Latifi, H., Stereńczak, K., Modzelewska, A., Lefsky, M., Waser, L. T., Straub, C., and Ghosh, A., 2016. Review of studies on tree species classification from remotely sensed data. Remote Sensing of Environment, 186, 64-87. https://doi.org/10.1016/j.rse.2016.08.013
  7. Feng, Q., Liu, J., and Gong, J., 2015. UAV remote sensing for urban vegetation mapping using random forest and texture analysis. Remote Sensing, 7(1), 1074-1094. https://doi.org/10.3390/rs70101074
  8. Foody, G. M., and Mathur, A., 2004. A relative evaluation of multiclass image classification by support vector machines. IEEE Transactions on Geoscience and Remote Sensing, 42(6), 1335-1343. https://doi.org/10.1109/TGRS.2004.827257
  9. Grabska, E., Paluba, D., Fraczyk, P., and Twardowski, M., 2020. A review on deep learning methods for urban trees and road detection in remote sensing images. Remote Sensing, 12(9), 1484. https://doi.org/10.3390/rs12091484
  10. Gu, J., Grybas, H., and Congalton, R. G., 2020. Individual tree crown delineation from UAS imagery based on region growing and growth space considerations. Remote Sensing, 12(15), 2363. https://doi.org/10.3390/rs12152363
  11. Guo, Q., Zhang, J., Guo, S., Ye, Z., Deng, H., Hou, X., and Zhang, H., 2022. Urban tree classification based on object-oriented approach and random forest algorithm using unmanned aerial vehicle (UAV) multispectral imagery. Remote Sensing, 14(16), 3885. https://doi.org/10.3390/rs14163885
  12. Haralick, R. M., Shanmugam, K., and Dinstein, I. H., 1973. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, SMC-3(6), 610-621. https://doi.org/10.1109/TSMC.1973.4309314
  13. Im, J., Rhee, J., and Jensen, J. R., 2020. Mapping urban tree cover using deep learning and multi-source remote sensing data. Remote Sensing, 12(15), 2413. https://doi.org/10.3390/rs12152413
  14. Jeong, K. S., Go, S. H., Lee, K. K., and Park, J. H., 2024. Analyzing soybean growth patterns in open-field smart agriculture under different irrigation and cultivation methods using drone-based vegetation indices. Korean Journal of Remote Sensing, 40(1), 93-101. https://doi.org/10.7780/kjrs.2024.40.1.5
  15. Korea Forest Service, 2013. Forest spatial information service. Available online: https://map.forest.go.kr/forest (accessed on May 8, 2024)
  16. LeCun, Y., Bengio, Y., and Hinton, G., 2015. Deep learning. Nature, 521(7553), 436-444. https://doi.org/10.1038/nature14539
  17. Lee, H. J., Go, S. H., and Park, J. H., 2022. Assessment of lodged damage rate of soybean using support vector classifier model combined with drone based RGB vegetation indices. Korean Journal of Remote Sensing, 38(6-1), 1489-1503. https://doi.org/10.7780/kjrs.2022.38.6.1.37
  18. Ma, B., Hauer, R. J., Östberg, J., Koeser, A. K., Wei, H., and Xu, C., 2021. A global basis of urban tree inventories: What comes first the inventory or the program. Urban Forestry & Urban Greening, 60, 127087. https://doi.org/10.1016/j.ufug.2021.127087
  19. Ma, Y., Zhao, Y., Im, J., Zhao, Y., and Zhen, Z., 2024. A deep-learningbased tree species classification for natural secondary forests using unmanned aerial vehicle hyperspectral images and LiDAR. Ecological Indicators, 159, 111608. https://doi.org/10.1016/j.ecolind.2024.111608
  20. Nowak, D. J., Crane, D. E., and Stevens, J. C., 2006. Air pollution removal by urban trees and shrubs in the United States. Urban Forestry & Urban Greening, 4(3-4), 115-123. https://doi.org/10.1016/j.ufug.2006.01.007
  21. Nowak, D. J., and Dwyer, J. F., 2007. Understanding the benefits and costs of urban forest ecosystems. In: Kuser, J. E., (ed.), Urban and community forestry in the northeast, Springer, pp. 25-46. https://doi.org/10.1007/978-1-4020-4289-8_2
  22. Park Green Space Act, 2020. Act on Urban Parks and Green Areas. Available online: https://elaw.klri.re.kr/eng_service/lawViewContent.do?hseq=6905 (accessed on Jan. 3, 2025)
  23. Patle, A., and Chouhan, D. S., 2013. SVM kernel functions for classification. In Proceedings of the 2013 International Conference on Advances in Technology and Engineering (ICATE), Mumbai, India, Jan. 23-25, pp. 1-9. https://doi.org/10.1109/ICAdTE.2013.6524743
  24. Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., and Duchesnay, É., 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12, 2825-2830. http://jmlr.org/papers/v12/pedregosa11a.html
  25. Shivaprakash, K. N., Swami, N., Mysorekar, S., Arora, R., Gangadharan, A., Vohra, K., Jadeyegowda, M., and Kiesecker, J. M., 2022. Potential for artificial intelligence (AI) and machine learning (ML) applications in biodiversity conservation, managing forests, and related services in India. Sustainability, 14(12), 7154. https://doi.org/10.3390/su14127154
  26. van der Walt, S., Schönberger, J. L., Nunez-Iglesias, J., Boulogne, F., Warner, J. D., and Yarkony, J.; the scikit-image contributors, 2014. scikit-image: image processing in Python. PeerJ, 2, e453. https://doi.org/10.7717/peerj.453
  27. Velasquez-Camacho, L., Cardil, A., Mohan, M., Etxegarai, M., Anzaldi, G., and de-Miguel, S., 2021. Remotely sensed tree characterization in urban areas: A review. Remote Sensing, 13(23), 4889. https://doi.org/10.3390/rs13234889
  28. Wang, X., Wang, Y., Zhou, C., Yin, L., and Feng, X., 2021. Urban forest monitoring based on multiple features at the single tree scale by UAV. Urban Forestry & Urban Greening, 58, 126958. https://doi.org/10.1016/j.ufug.2020.126958
  29. Weinstein, B. G., Marconi, S., Bohlman, S., Zare, A., and White, E., 2020. Individual tree-crown detection in RGB imagery using semi-supervised deep learning neural networks. Remote Sensing, 12(8), 1309. https://doi.org/10.3390/rs12081309
  30. Zhang, C., and Qiu, F., 2012. Mapping individual tree species in an urban forest using airborne Lidar data and hyperspectral imagery. Photogrammetric Engineering & Remote Sensing, 78(10), 1079-1087. https://doi.org/10.14358/PERS.78.10.1079
  31. Zhou, J., Qin, J., Gao, K., and Leng, H., 2016. SVM-based soft classification of urban tree species using very high-spatial resolution remote-sensing imagery. International Journal of Remote Sensing, 37(11), 2541-2559. https://doi.org/10.1080/01431161.2016.1178867

Abstract

Accurate and efficient tree type classification in urban forests is crucial for effective management, informed policy decisions, and enhancing urban resilience, particularly with increasing urbanization and climate change. This study developed and evaluated a practical methodology for classifying coniferous and broadleaf trees in the Chungbuk National University Arboretum, South Korea. The study utilized drone-acquired, high-resolution RGB imagery and a Support Vector Machine (SVM) classifier. The workflow encompassed drone image acquisition, concurrent ground truth data collection, image preprocessing, feature extraction (including RGB color bands and Gray-Level Co-occurrence Matrix [GLCM] texture features [TFs]), and SVM model training, optimization, and evaluation. Different SVM kernels (Linear, RBF, Polynomial, Sigmoid) and feature combinations were investigated to optimize model performance, with a specific focus on processing time for practical application. Results indicated that RGB color bands were the primary drivers of accurate classification, while most GLCM TFs provided minimal additional benefit in this specific context. The RBF kernel, with optimized hyperparameters (C=10, γ=0.01), achieved the highest overall accuracy (99%) and F1-score (0.99), while the Linear kernel provided similar accuracy but with a longer processing time. Notably, the drone-based classification significantly outperformed the outdated Korea Forest Service forest map in representing the current forest composition, highlighting the limitations of traditional mapping methods for dynamic urban environments. This research contributes a cost-effective and accurate method for urban forest assessment, demonstrating the value of drone technology and readily available RGB imagery. The entire process, from image acquisition to classification, was completed in approximately 12 hours, showcasing its efficiency. 
Although this study focused on only two tree types in a single season, the developed methodology shows potential for broader application in classifying a wider range of species and informing management practices across different seasons by considering the phenological stages of trees. The proposed approach provides urban planners and forest managers with a valuable tool for enhancing ecosystem services and improving the quality of life in urban areas. This study underscores the potential of drone technology to revolutionize urban forest monitoring and management practices, paving the way for more sustainable and informed decision-making, particularly in rapidly urbanizing regions.

Keywords: Urban forestry, Drone, Remote sensing, Support vector machine, Tree type classification

1. Introduction

Urban forests are vital components of sustainable urban ecosystems, delivering essential services like climate regulation, air purification, and enhanced human well-being (Dwyer et al., 1992; Nowak et al., 2006). These benefits are increasingly crucial as urbanization intensifies and the impacts of climate change become more pronounced (Ma et al., 2021). Effective management of urban forests depends on accurate and timely information regarding tree species composition and distribution. Such data are crucial for informed policy formulation, ecological modeling, urban planning, and assessing carbon sequestration potential in line with carbon neutrality goals (Nowak and Dwyer, 2007; Park Green Space Act, 2020). However, acquiring detailed and up-to-date information on urban tree species remains a challenge due to the dynamic nature of urban environments and limitations in traditional assessment methods (Velasquez-Camacho et al., 2021).

In South Korea, urban forests are formally recognized for their ecological and social importance under the Act on the Creation and Management of Urban Forests, Etc., including diverse green spaces like parks, roadside plantings, and trees within residential areas (Korea Forest Service, 2013). These urban forests play a significant role in enhancing the quality of life for urban residents, contributing to environmental sustainability and human well-being (Park Green Space Act, 2020). While national and local initiatives promote urban green space expansion, a research gap exists in developing objective, systematic, and efficient techniques for their management and monitoring, particularly for classifying tree types at a scale relevant for practical management decisions (Gu et al., 2020). This gap highlights the need for advanced methods that can accurately assess and monitor urban forest resources to support effective decision-making and sustainable urban development.

Traditional methods for obtaining tree species information, such as field surveys and aerial photograph interpretation, often employed in national forest inventories, are labor-intensive, time-consuming, and may lack the spatial and temporal resolution needed for detailed assessments of dynamic urban environments (Velasquez-Camacho et al., 2021). Moreover, the complex mosaic of land use and diverse vegetation types in urban areas further complicates accurate species identification using these conventional approaches (Wang et al., 2021). For instance, traditional forest maps may not fully capture the heterogeneity of tree species within small urban parks or along roadsides, and they often lack information on individual tree health and condition. This limitation hinders effective urban forest management, which requires fine-grained information to address specific needs and challenges within heterogeneous urban landscapes.

The emergence of Unmanned Aerial Vehicles (UAVs), or drones, as a high-resolution remote sensing platform, combined with advancements in Artificial Intelligence (AI), offers a transformative opportunity for efficient and cost-effective tree species classification in urban environments (Chehreh et al., 2023; Guo et al., 2022; Lee et al., 2022). UAVs provide the flexibility to acquire high-resolution data even in complex urban settings, overcoming the spatial and temporal limitations of traditional methods (Jeong et al., 2024; Shivaprakash et al., 2022). Drone-based RGB imagery, in particular, presents a highly promising avenue due to its affordability, accessibility, and ability to capture fine-scale details of tree crowns. These details can be used to extract textural and morphological features vital for species differentiation (Fassnacht et al., 2016; Gu et al., 2020).

This study leverages the potential of drone-based RGB imagery and machine learning to address the research gap in urban tree species classification. We propose a framework utilizing an SVM classifier and drone-acquired RGB imagery to classify two common urban tree types in South Korea, coniferous and broadleaf trees, within a confined urban area in Cheongju-si. Cheongju-si is a rapidly urbanizing region with approximately 10,637 ha of urban forest, making it a relevant case study for exploring innovative urban forest monitoring techniques (Park Green Space Act, 2020). The study investigates the influence of texture features, derived from Gray-Level Co-occurrence Matrix (GLCM) analysis, on classification accuracy. It also evaluates the performance of different SVM kernel types (linear, polynomial, radial basis function [RBF], and sigmoid) to identify the most effective configuration for this specific application.

Our research addresses several key questions: A) How effective is drone-based RGB imagery, combined with SVM classification, for accurately classifying coniferous and broadleaf trees in a complex urban environment? B) Do GLCM-derived texture features significantly enhance classification accuracy in this context, or are RGB color bands alone sufficient for reliable classification? C) What is the optimal SVM kernel type for this classification task, considering both accuracy and computational efficiency? D) How do the results of our drone-based classification compare with existing forest maps, and what advantages does this approach offer for urban forest management?

By addressing these questions, this study aims to contribute to the development of practical and accessible tools for urban forest monitoring and management in South Korea and other rapidly urbanizing regions.

2. Materials and Methods

2.1. Study Area

This study was conducted at the Chungbuk National University Arboretum, a 2.5-hectare site located in Cheongju-si, Chungcheongbuk-do, Republic of Korea (36°37′44″N, 127°27′13″E) (Fig. 1). The arboretum serves as a research and educational site for forest science and is representative of the expanding urban forests within the region. The arboretum’s forest is mixed, with dominant broad-leaved species including Quercus acutissima and Cornus controversa, and coniferous species such as Pinus densiflora and Pinus strobus. A 1,400 m wooden deck walking trail is integrated into the arboretum, designed to minimize tree removal and thus maintain continuous canopy cover. This feature enhances the site’s suitability for studying urban forest structure using drone imagery. The diverse tree species composition, coupled with the arboretum’s location within a rapidly urbanizing region, makes it an ideal location for investigating the effectiveness of drone-based remote sensing for urban forest management.

Figure 1. Location and characteristics of the study area, the Chungbuk National University Arboretum in Cheongju-si, South Korea.

The arboretum was selected as the study site to address the limited research on tree species composition within these urban forests. It provides a controlled yet representative setting for applying novel remote sensing techniques. Of particular note is the arboretum’s status as the only known Korean habitat for the native Cladrastis platycarpa (Maxim.) Makino, highlighting its value for conservation.

2.2. Research Process and Methods

This study aimed to develop an accurate and efficient method for classifying tree types (coniferous and broadleaf) in urban forests using drone-acquired RGB imagery. The research process involved a systematic workflow that integrated drone image acquisition, in-situ data collection, image processing, feature extraction, and model development (Fig. 2).

Figure 2. Workflow for tree type classification using drone RGB imagery and SVM classifier.

Data Collection: Drone image acquisition and field surveys were conducted concurrently to obtain high-resolution RGB imagery and identify corresponding coniferous and broadleaf tree crowns within the study area.

Image Processing and Segmentation: Image processing involved generating an orthomosaic from the drone imagery to create a geometrically accurate representation of the study area. Individual tree crowns were then segmented from the orthomosaic, and these segmented crowns were labeled as either coniferous or broadleaf based on the field survey data.

Feature Extraction: Feature extraction focused on calculating GLCM-TFs from the segmented RGB imagery. Eight statistical texture features—Mean (MN), Variance (VE), Homogeneity (HY), Contrast (CT), Angular Second Moment (ASM), Dissimilarity (DY), Entropy (EY), and Correlation (CN)—were extracted from each crown to capture subtle differences in morphology and color patterns (Haralick et al., 1973).

SVM Classification and Model Development: An SVM classifier was trained using the RGB values and GLCM-TFs as input data. Model development included hyperparameter tuning via a grid search to optimize model parameters, 5-fold cross-validation to ensure robustness and minimize overfitting, and performance analysis using a confusion matrix to evaluate classification accuracy. The aim was to identify the most effective combination of features and SVM parameters for accurate tree type classification.

Key Features of Methodology: This study contributes a practical methodology for urban forest assessment using readily available drone-acquired RGB imagery. A key feature of this research is its streamlined workflow, which pairs an SVM classifier with RGB data rather than a computationally intensive Convolutional Neural Network (CNN).

2.3. Drone Image Acquisition

High-resolution RGB imagery was acquired for this study using a DJI Matrice 300 RTK rotary-wing drone equipped with a Zenmuse H20T RGB sensor (DJI, Shenzhen, China). This imagery served as the primary data source for tree type classification, leveraging the ability of drone-based sensors to capture detailed information about tree crown morphology and color.

Data acquisition was conducted on May 10, 2024, at 10:00 AM under favorable weather conditions, with clear skies, minimal wind (less than 2 m/s), a temperature of 18.5°C, and 47% humidity. The flight followed a double-grid pattern to ensure comprehensive coverage of the 2.5 ha study area at Chungbuk National University Arboretum, located in Cheongju-si, Chungcheongbuk-do, Republic of Korea (36°37′44″N, 127°27′13″E) (Fig. 1).

Image overlap was set to 70% along the flight path and 80% between flight lines to facilitate accurate orthomosaic generation. The drone was flown at an altitude of 80 meters above ground level, maintaining a speed of 3.9 m/s. These flight parameters resulted in a Ground Sampling Distance (GSD) of 2.7 cm/px, providing high-resolution imagery suitable for detailed analysis of individual tree crowns. The acquisition of high-resolution imagery that allows for the analysis of individual tree crowns is a key feature of this study’s approach to tree type classification.
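As a sanity check on the flight design, the GSD can be recomputed from the camera geometry. The sketch below uses illustrative sensor parameters (the pixel pitch and focal length are hypothetical, not taken from the Zenmuse H20T datasheet):

```python
def gsd_cm_per_px(pixel_pitch_mm, focal_length_mm, altitude_m):
    """Ground sampling distance (cm/px) for a nadir-pointing camera.

    pixel_pitch_mm: physical size of one sensor pixel (mm)
    focal_length_mm: lens focal length (mm)
    altitude_m: flight altitude above ground (m)
    """
    # similar triangles: ground distance per pixel = pixel pitch * altitude / focal length
    return pixel_pitch_mm * altitude_m / focal_length_mm * 100.0

# illustrative values: 6.17 mm sensor width spread over 4000 px,
# 4.5 mm focal length, 80 m flight altitude (all hypothetical)
print(round(gsd_cm_per_px(6.17 / 4000, 4.5, 80), 2))  # ≈ 2.74 cm/px
```

With these assumed parameters the result lands close to the 2.7 cm/px reported above; the exact value depends on the true sensor specification.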

2.4. Image Preprocessing and Orthomosaic Generation

The drone-acquired RGB images were processed using Pix4DMapper V4.8.4 software (Pix4D SA, Lausanne, Switzerland) to generate a geometrically corrected orthomosaic. This orthomosaic provided a seamless and accurate spatial representation of the study area, essential for subsequent tree crown segmentation and feature extraction. Four Ground Control Points (GCPs) were established within the arboretum, and their coordinates were precisely measured using a GPS-RTK V30 (HI-Target, Guangzhou, China), as shown in Fig. 1. These GCPs were evenly distributed across the study area and marked with high-contrast targets visible in the drone imagery. They were used to geometrically correct the orthomosaic, ensuring accurate spatial alignment and minimizing distortions. Image processing was performed on a workstation equipped with an Intel Core i7-8700 processor, an NVIDIA Quadro M4000 graphics card, and 64 GB of RAM. This configuration provided the computational power necessary for efficient processing of the high-resolution drone imagery. The Root Mean Square Error (RMSE) of the geometric correction was less than 5 cm, indicating high geometric accuracy of the orthomosaic.
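The reported geometric accuracy can be reproduced from the GCP residuals. A minimal sketch, using hypothetical residual values since the per-point residuals are not given in the text:

```python
import numpy as np

# hypothetical GCP residuals (m) between surveyed and orthomosaic positions;
# the actual per-point residuals from Pix4DMapper are not reported in the text
residuals = np.array([[0.02, -0.01],
                      [-0.03, 0.02],
                      [0.01, 0.03],
                      [-0.02, -0.02]])

# RMSE over the four GCPs, using the 2-D positional error of each point
rmse = float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1))))
print(rmse < 0.05)  # consistent with the <5 cm accuracy reported
```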

2.5. Crown Segmentation

Individual tree crowns were manually segmented from the orthomosaic using QGIS V3.36.1 software (Open Source Geospatial Foundation). This segmentation step isolated tree crowns from the background and other features in the imagery, enabling accurate extraction of tree-specific characteristics for subsequent analysis. The high spatial resolution of the orthomosaic facilitated the precise delineation of crown boundaries.

Visual interpretation guided the segmentation process, leveraging cues such as crown shape, texture, color, and shadows cast by the trees. Each polygon was carefully drawn to encompass the entire crown while minimizing the inclusion of non-crown pixels. Non-tree features, such as shadows, shrubs, and grasslands, were excluded during the segmentation process.

To ensure consistency, a single expert operator performed all crown delineations. A subset of the segmented crowns (10%) was independently reviewed by another experienced operator to assess the accuracy and consistency of the segmentation. The agreement between the two operators was high, with an average Intersection over Union (IoU) of 0.85. The resulting segmented crown polygons, saved in Shapefile format, served as the basis for extracting textural and spectral information for each tree.
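The IoU agreement measure used above can be computed directly from rasterized crown masks; a minimal numpy sketch:

```python
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection over Union of two boolean crown masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 0.0

# two overlapping square "crowns", offset by one pixel in each direction
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[3:9, 3:9] = True
print(round(iou(a, b), 3))  # 25 shared pixels / 47 total ≈ 0.532
```

In practice the delineations were vector polygons (Shapefiles), so the same ratio would be computed on polygon areas; the raster form shown here is the simplest self-contained illustration.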

2.6. Field Survey and Data Labeling

A field survey was conducted on May 11, 2024, to collect ground truth data for tree type classification. This survey aimed to provide accurate labels for the segmented tree crowns, essential for training and validating the SVM classification model.

The electronic tree map provided by the Chungbuk National University Academic Forest and the Forest Spatial Information Service operated by the Korea Forest Service were used as initial references to identify potential areas containing the target tree types. The target area was divided into four sections (as shown in Fig. 1), and each was systematically surveyed. On-site analysis of tree crowns was performed to classify each as either coniferous (labeled as 0) or broadleaf (labeled as 1). The identification was based on visual inspection of leaf morphology, branching patterns, and overall crown shape. For each tree, the species, Diameter at Breast Height (DBH), and tree height were also recorded. A total of 424 trees were identified and labeled, with 158 coniferous and 266 broadleaf trees.

These labels were then integrated into the crown segmentation shapefile using QGIS, creating a database linking each segmented crown to its corresponding tree type and other field-measured attributes. This database served as the ground truth for subsequent model training and evaluation.

2.7. Texture Analysis Using GLCM

To capture potentially discriminatory textural information within individual tree crowns, we employed the GLCM method. GLCM is a well-established technique for quantifying spatial relationships between pixel intensities (Haralick et al., 1973). While spectral data (RGB) often dominate vegetation classification, texture analysis can provide complementary information, particularly regarding crown morphology and internal structure (Dian et al., 2015).

For each segmented tree crown, normalized GLCMs were generated for each RGB band using the scikit-image library in Python (van der Walt et al., 2014). A 5 × 5 pixel window, a distance of 1 pixel, and four orientations (0°, 45°, 90°, 135°) were used to capture local textural variations. These parameters were chosen based on preliminary testing and considering computational efficiency and capturing sufficient textural detail, given the image resolution (2.7 cm/px). From each GLCM, eight statistical texture features were calculated: MN, VE, HY, CT, ASM, DY, EY, CN (Table 1). These features represent different aspects of texture, such as overall brightness (MN), local variation (VE, CT, DY), smoothness (HY, ASM), randomness (EY), and linear dependencies (CN). The average value of each texture feature across the four orientations was then calculated, resulting in eight texture features per RGB band, for a total of 24 texture features per tree crown. This approach provides a comprehensive yet computationally manageable representation of crown texture. These GLCM-derived features, along with the mean RGB values, were subsequently used as input for the SVM classifier.

Table 1. Selected GLCM TFs and equations.

Variable (Abbreviation) | Equation
Mean (MN) | $MN = \sum_{i,j=0}^{N-1} i \cdot P(i,j)$
Variance (VE) | $VE = \sum_{i,j=0}^{N-1} (i - \mu)^2 \, P(i,j)$
Homogeneity (HY) | $HY = \sum_{i,j=0}^{N-1} \frac{P(i,j)}{1 + (i - j)^2}$
Contrast (CT) | $CT = \sum_{i,j=0}^{N-1} (i - j)^2 \, P(i,j)$
Angular Second Moment (ASM) | $ASM = \sum_{i,j=0}^{N-1} P(i,j)^2$
Dissimilarity (DY) | $DY = \sum_{i,j=0}^{N-1} |i - j| \, P(i,j)$
Entropy (EY) | $EY = -\sum_{i,j=0}^{N-1} P(i,j) \ln P(i,j)$
Correlation (CN) | $CN = \sum_{i,j=0}^{N-1} \frac{(i - \mu)(j - \mu) \, P(i,j)}{\sigma^2}$
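The Table 1 features can be computed from a normalized symmetric GLCM. The sketch below is a numpy-only illustration of that calculation (the study itself used scikit-image); for simplicity it operates on a whole quantized crown array rather than a sliding 5 × 5 window, builds one GLCM per orientation at a 1-pixel distance, and averages the eight features over the four orientations:

```python
import numpy as np

def glcm(q, levels, dr, dc):
    """Normalized symmetric GLCM of a quantized band for pixel offset (dr, dc)."""
    r, c = q.shape
    src = q[max(0, -dr):r - max(0, dr), max(0, -dc):c - max(0, dc)]
    dst = q[max(0, dr):r - max(0, -dr), max(0, dc):c - max(0, -dc)]
    P = np.zeros((levels, levels))
    np.add.at(P, (src.ravel(), dst.ravel()), 1.0)  # count co-occurring pairs
    P += P.T                                       # make the matrix symmetric
    return P / P.sum()

def glcm_features(band, levels=32):
    """Eight Table-1 features, averaged over the four 1-px orientations."""
    q = (band.astype(float) / 256 * levels).astype(int)  # quantize 0..255 -> 0..levels-1
    i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    feats = []
    for dr, dc in [(0, 1), (-1, 1), (-1, 0), (-1, -1)]:  # 0, 45, 90, 135 degrees
        P = glcm(q, levels, dr, dc)
        mu = np.sum(i * P)               # MN; row mean = column mean for symmetric P
        ve = np.sum((i - mu) ** 2 * P)   # VE
        feats.append([
            mu,
            ve,
            np.sum(P / (1.0 + (i - j) ** 2)),                  # HY
            np.sum((i - j) ** 2 * P),                          # CT
            np.sum(P ** 2),                                    # ASM
            np.sum(np.abs(i - j) * P),                         # DY
            -np.sum(P[P > 0] * np.log(P[P > 0])),              # EY
            np.sum((i - mu) * (j - mu) * P) / max(ve, 1e-12),  # CN
        ])
    return np.mean(feats, axis=0)
```

Applied per RGB band, this yields the 8 features × 3 bands = 24 texture values per crown described above.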


2.8. Tree Classification Using SVM

We employed the SVM algorithm for tree type classification (coniferous vs. broadleaf). SVM is a supervised learning method known for its effectiveness in high-dimensional spaces and with limited training data (Zhou et al., 2016; Foody and Mathur, 2004). SVMs identify an optimal hyperplane that maximizes the margin between classes, making them robust to complex data distributions, including those encountered in remote sensing image analysis (Patel and Chouhan, 2013). Our implementation utilized the scikit-learn library in Python (Pedregosa et al., 2011).

To explore the performance of different decision boundaries, we evaluated four SVM kernel functions: Linear, Polynomial, Radial Basis Function (RBF), and Sigmoid. Each kernel’s performance is influenced by hyperparameters: the cost parameter (C), controlling misclassification penalty, and, for non-linear kernels, the gamma parameter (γ), influencing the decision boundary’s curvature. To optimize each kernel, a grid search with 5-fold cross-validation was performed using GridSearchCV (scikit-learn). C values ranged from 0.1 to 10, and γ values from 0.001 to 1. The optimal hyperparameters were selected based on the highest mean cross-validated accuracy.
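The grid search just described can be sketched with GridSearchCV. The data below are a synthetic stand-in for the 11-feature crown dataset, so the selected parameters and scores will not match the paper's results:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# synthetic stand-in for the 424-crown, 11-feature dataset used in the paper
X = rng.normal(size=(424, 11))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 0 = coniferous, 1 = broadleaf

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# C and gamma ranges as described in Section 2.8
param_grid = {"C": [0.1, 1, 10], "gamma": [0.001, 0.01, 0.1, 1]}
grid = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="accuracy")
grid.fit(X_tr, y_tr)
print(grid.best_params_, round(grid.score(X_te, y_te), 2))
```

Repeating the search with `kernel="linear"`, `"poly"`, and `"sigmoid"` reproduces the four-kernel comparison.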

The input feature vector for each tree crown consisted of 11 variables: the mean RGB values and the average of eight GLCM-derived texture features (calculated across four orientations, as described in Section 2.7). The labeled dataset (424 trees) was split into training (70%, 297 trees) and testing (30%, 127 trees) sets. Following model training, Permutation Importance was calculated to quantify the relative contribution of each feature to classification accuracy (Breiman, 2001). This involved randomly shuffling each feature’s values 10 times and measuring the average decrease in model accuracy. This analysis informed our understanding of the key drivers of classification performance. The final model selection was based on a combination of classification accuracy (on the test set) and computational efficiency (training time).
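The permutation-importance step can be illustrated with scikit-learn's implementation on synthetic data (feature 0 informative, feature 1 pure noise; the feature values here are not the study's):

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.svm import SVC

rng = np.random.default_rng(7)
# synthetic example: feature 0 drives the label, feature 1 is noise
X = rng.normal(size=(300, 2))
y = (X[:, 0] > 0).astype(int)

model = SVC(kernel="rbf", C=10, gamma=0.01).fit(X, y)

# shuffle each feature 10 times and record the mean accuracy drop,
# mirroring the procedure described above
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)  # feature 0 should dominate
```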

2.9. Acquisition and Use of Existing Forest Map Data

The forest map utilized in this study was obtained from the Forest Geospatial Information System (FGIS, https://map.forest.go.kr/forest), operated by the Korea Forest Service. This map served as a reference dataset for the study area, aiding in field survey planning and providing a general overview of the forest types present within the arboretum. The 1:5,000 scale map was derived from aerial photograph interpretation and field investigations conducted during the 5th National Forest Inventory (2006–2010) (Korea Forest Service, 2013). It classifies forests into five categories: coniferous (1), deciduous (2), mixed (3), bamboo (4), and uncultivated/non-forest (0). The specific area corresponding to the research site was extracted from the larger forest map. This map, acquired in Shapefile format, served as a spatial reference for ground truth data collection and provided a baseline for understanding the forest composition within the study area. It is important to note that this map was primarily used for general reference and not for direct validation of our classification results due to differences in spatial resolution, data acquisition time, and classification methods.

2.10. Hyperparameter Selection

SVM classification involves selecting appropriate hyperparameters to optimize model performance. These hyperparameters include the cost parameter (C) and the gamma parameter (γ), which control the penalty for misclassification and the decision boundary’s curvature, respectively. We employed a 5-fold cross-validation technique using GridSearchCV from the scikit-learn library to identify the optimal hyperparameters for each SVM kernel. The selection process aimed to maximize the model’s performance by exploring various combinations of C and γ values, ultimately choosing the combination that yielded the highest cross-validated accuracy.

The grid search revealed that different kernels achieved their best performance with different C and γ values. The linear kernel achieved its highest accuracy with C=1, while the polynomial kernel performed optimally with C=1 and γ=0.1. The RBF kernel achieved its best performance with C=10 and γ=0.01, and the sigmoid kernel with C=0.1 and γ=0.001. These optimal hyperparameter combinations were then used for the subsequent classification and evaluation steps.

2.11. Accuracy Evaluation

The performance of the SVM models was rigorously evaluated using a held-out test set (30% of the data) to provide an unbiased assessment of their generalization capability. We used several metrics to assess the model performance, including overall accuracy, precision, recall, and the F1-score (Table 2). Overall accuracy represents the proportion of correctly classified trees across both coniferous and broadleaf classes. Precision quantifies the accuracy of positive predictions, while recall measures the completeness of positive predictions. The F1-score, the harmonic mean of precision and recall, provides a balanced measure, particularly relevant given the slight class imbalance in our dataset.

Table 2. Performance metrics and calculation formulas.

Index | Formula
Accuracy | $\frac{TP + TN}{TP + FP + FN + TN}$
Precision | $\frac{TP}{TP + FP}$
Recall | $\frac{TP}{TP + FN}$
F1-score | $\frac{2 \times Precision \times Recall}{Precision + Recall}$


A confusion matrix was generated to analyze classification errors further, visualizing the distribution of true positives, true negatives, false positives, and false negatives. This allowed for a detailed examination of misclassification patterns between coniferous and broadleaf trees. In addition to these test-set metrics, 5-fold cross-validation was performed during model training to ensure robustness and mitigate overfitting to the training data. The model exhibiting the highest F1-score on the test set, after hyperparameter optimization using cross-validation, was selected as the final classification model. This selection prioritized a balance between precision and recall, which is crucial for reliable application in urban forest management.
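The Table 2 metrics follow directly from the confusion-matrix counts. A minimal sketch with illustrative counts (broadleaf treated as the positive class; these specific numbers are assumptions, not the paper's reported counts, which appear in Fig. 3):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from confusion-matrix counts (Table 2)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# illustrative counts for a 127-tree test set (hypothetical split)
acc, prec, rec, f1 = classification_metrics(tp=77, fp=1, fn=3, tn=46)
print(round(acc, 2), round(prec, 2), round(rec, 2), round(f1, 2))
```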

3. Results and Discussion

3.1. Performance of SVM Kernels and Feature Combinations

We evaluated four SVM kernels (Linear, RBF, Polynomial, and Sigmoid) with various feature combinations to identify the optimal model for classifying coniferous and broadleaf trees, focusing on both accuracy and computational efficiency (Table 3). The RBF kernel, after hyperparameter optimization (C=10, γ=0.01), achieved the highest overall accuracy (99%) and F1-score (0.99), indicating excellent performance in separating the two tree types. The Linear kernel also performed exceptionally well (99% accuracy, 0.99 F1-score), albeit with a considerably longer training time (96.8 s vs. 5.0 s for RBF). The Polynomial and Sigmoid kernels exhibited slightly lower performance.

Table 3. Performance comparison of SVM kernels for tree type classification using RGB and GLCM.

Kernel | Cost | Gamma | Cross-validated Accuracy (SD) | Training Time (s) | Accuracy | Precision | Recall | F1-score
RBF | 10 | 0.01 | 0.97 (0.02) | 5.0 | 0.99 | 0.98 | 1.00 | 0.99
Linear | 1 | 0.01 | 0.98 (0.01) | 96.8 | 0.99 | 0.98 | 1.00 | 0.99
Polynomial | 0.01 | 10 | 0.95 (0.03) | 3.4 | 0.98 | 1.00 | 0.96 | 0.98
Sigmoid | 100 | 0.01 | 0.95 (0.02) | 2.4 | 0.95 | 0.96 | 0.96 | 0.96


Further analysis investigated the contribution of individual RGB bands and GLCM texture features (Variance and Homogeneity) (Tables 4 and 5). Consistently, combining all three RGB bands (Red, Green, Blue) yielded the highest accuracies across both Linear and RBF kernels. Adding GLCM features did not improve, and in some cases slightly decreased, classification accuracy, while increasing training time. Normalized difference indices performed poorly. This demonstrates that, for this specific classification task and dataset, spectral information from the RGB bands was paramount, and the added complexity of texture features was not beneficial. This finding contrasts with some studies where texture features improved vegetation classification but aligns with others emphasizing the dominance of spectral information (Fassnacht et al., 2016). The limited contribution of GLCM features here might be due to the relatively homogeneous forest structure or the spatial resolution of the RGB imagery, which may not have been optimal for capturing the relevant textural variations at the tree crown level.

Table 4. Performance of the linear SVM kernel for tree type classification using different combinations of RGB color bands and GLCM TFs.

Feature Combination | Cost | Gamma | Accuracy | Training Time (s)
B | 1 | 0.01 | 0.942 | 21.2
B + R | 0.1 | 0.01 | 0.899 | 684.1
B + G | 100 | 0.01 | 0.963 | 374.7
B + R + G | 100 | 0.01 | 0.983 | 452.7
RGB + V | 0.01 | 0.01 | 0.979 | 438.0
RGB + H | 100 | 0.01 | 0.979 | 310.3
RGB + V + H | 0.01 | 0.01 | 0.976 | 192.2
(B – R) / (B + R) | 0.1 | 0.01 | 0.942 | 1819.9
(B – G) / (B + G) | 100 | 0.01 | 0.959 | 29.4
(R – G) / (R + G) | 0.01 | 0.01 | 0.942 | 1887.2


Table 5. Performance of the RBF SVM kernel for tree type classification using different combinations of RGB color bands and GLCM TFs.

Feature Combination | Cost | Gamma | Accuracy | Training Time (s)
B | 0.1 | 1 | 0.899 | 87.4
B + R | 1.0 | 10 | 0.905 | 14.4
B + G | 10 | 1 | 0.973 | 7.4
B + R + G | 10 | 0.01 | 0.986 | 7.0
RGB + V | 10 | 0.001 | 0.983 | 5.0
RGB + H | 100 | 0.001 | 0.986 | 6.7
RGB + V + H | 100 | 0.001 | 0.986 | 7.3
(B – R) / (B + R) | 10 | 0.01 | 0.949 | 26.1
(B – G) / (B + G) | 0.1 | 0.01 | 0.959 | 29.4
(R – G) / (R + G) | 1 | 0.01 | 0.949 | 30.2


3.2. Confusion Matrix Analysis

Fig. 3 presents the confusion matrix for the best-performing model (RBF kernel, C=10, γ=0.01). The matrix reveals that the model achieved high classification accuracy for both coniferous (Precision=0.98, Recall=0.95) and broadleaf (Precision=0.96, Recall=0.99) trees. However, some misclassifications did occur. Specifically, 3 broadleaf trees were misclassified as coniferous (false negative), and 1 coniferous tree was misclassified as broadleaf (false positive). These misclassifications may be attributed to several factors, including overlapping tree crowns, which can create mixed pixels containing spectral information from both tree types. Additionally, variations in illumination and shadow within the canopy can affect the spectral signatures of individual trees. Furthermore, the limited spectral resolution of RGB imagery may hinder the differentiation of species with subtle spectral differences, especially if those differences manifest in wavelengths outside the visible spectrum. Finally, edge effects at the boundaries of tree crowns could also contribute to misclassification, as these areas may contain a mix of tree and background pixels.

Figure 3. Confusion matrix for RBF SVM classification accuracy evaluation (Predicted Class / True Class).

3.3. Feature Importance Analysis

We assessed the relative importance of each feature using the Permutation Importance method (Breiman, 2001) to understand the drivers of classification accuracy. Fig. 4 demonstrates the dominance of spectral information (RGB bands) over textural features (GLCM) in distinguishing between coniferous and broadleaf trees in our study area. The blue band exhibited the highest importance (0.38), followed by red (0.23) and green (0.20), indicating that color differences were the primary discriminatory factors. This prominence of the blue band may reflect species-specific variations in pigment concentrations or leaf structural properties that influence reflectance in this portion of the spectrum. However, further physiological investigation would be needed to confirm this.

Figure 4. Relative feature importance in SVM model using permutation importance.

Among the GLCM features, Variance (0.03) showed the highest, albeit limited, importance. All other GLCM features (Homogeneity, Dissimilarity, Correlation, Mean, ASM, Contrast, and Entropy) exhibited negligible importance scores (≤ 0.02). This reinforces the finding that, for this specific classification task and dataset, textural information provided minimal additional value beyond the RGB data. The limited contribution of GLCM features could be attributed to several factors, including the relatively homogeneous structure of the two tree types at the scale captured by the 2.7 cm/px resolution imagery, the use of RGB imagery, which lacks the spectral detail of multispectral or hyperspectral data, or the chosen GLCM parameters (5 × 5 window, 1-pixel distance), which might not have been optimal for capturing relevant textural variations. This finding, while contrasting with some studies that found texture beneficial (Dian et al., 2015), highlights the context-dependency of feature importance in remote sensing classification.

3.4. Comparison with Existing Forest Maps

We compared our drone-derived classification results with the existing forest map from the Korea Forest Service’s Forest Geospatial Information System (FGIS) (Fig. 5) to assess their alignment and highlight the potential advantages of our drone-based approach. The FGIS map, produced at a 1:5,000 scale using aerial photograph interpretation and field data from the 5th National Forest Inventory (2006–2010), provides a broad categorization of forest types. This represents a fundamentally different level of detail and temporal currency compared to our drone-based approach.

Figure 5. Comparison of drone and FGIS-based forest type maps.

The most striking difference is the level of detail. The FGIS map predominantly classifies the arboretum as a mixed forest, reflecting its coarse resolution and inability to resolve individual tree types. In contrast, our drone-derived map, with its 2.7 cm/px resolution, reveals a much more heterogeneous and nuanced distribution of coniferous and broadleaf trees. Areas designated as “mixed” by the FGIS were often found to be dominated by either coniferous or broadleaf trees, highlighting the limitations of the FGIS map for precise urban forest management. This discrepancy is not simply a matter of classification accuracy, but a fundamental difference in the scale of observation. The FGIS provides a regional overview, while our method provides a tree-level inventory. Furthermore, the FGIS map’s age (data from 2006–2010) means it cannot reflect changes due to management practices, natural disturbances, growth, or mortality that have occurred in the intervening 15+ years. This temporal mismatch underscores the need for regularly updated, high-resolution mapping for effective urban forest management.

3.5. Processing Time and Efficiency

A key advantage of the proposed methodology is its computational efficiency. The entire workflow, from drone image acquisition to final classification using the optimized RBF kernel SVM model (C=10, γ=0.01), required approximately 12 hours (Table 6). This included 30 minutes for image acquisition, 6 hours for orthomosaic generation, 4 hours for manual crown segmentation, and 2 hours for feature extraction and model training/validation. While processing time is dependent on factors such as study area size, hardware, and software settings, this 12-hour timeframe represents a significant improvement over traditional field-based methods, which could take weeks or months to cover a comparable area with the same level of detail. Furthermore, although a direct comparison with other machine learning methods was outside the scope of this study, the computational demands of our SVM approach are considerably lower than those typically associated with deep learning models applied to high-resolution imagery (LeCun et al., 2015; Ma et al., 2024). This makes our method a practical and accessible solution, particularly for resource-constrained urban forest management agencies. The most time-consuming steps were orthomosaic generation and crown segmentation. Future work could explore automated or semi-automated methods for these tasks to further improve efficiency.

Table 6. Processing time for key steps in the workflow.

Step | Time (Hours)
Image Acquisition | 0.5
Orthomosaic Generation | 6
Crown Segmentation | 4
Feature Extraction and Model Training | 2
Total | 12


3.6. Usability of Drone Images in Urban Forest Management

This study demonstrated the significant potential of drone-acquired RGB imagery for urban forest management. The drone imagery provided high-resolution data with a GSD of approximately 2.7 cm/px, efficiently acquired in terms of both time and cost (Fig. 6a). This level of detail facilitated the accurate classification of 424 individual trees as either broadleaf (n=266) or coniferous (n=158), as shown in Fig. 6b. The entire process, including image acquisition, processing, and classification, took only 12 hours, highlighting its efficiency compared to traditional methods.

Figure 6. Comparison of Tree Type Classification: Drone Imagery vs. Forest Service Map. (a) Drone RGB orthomosaic of the study area at Chungbuk National University Arboretum. (b) Tree type classification map derived from drone imagery and field surveys (this study). (c) Tree type classification map from the Korea Forest Service Forest Spatial Information Service (2006–2010 data). (d) Proportional Distribution of Broadleaf and Coniferous Trees within Mixed Forest Stands (FGIS-based).

Comparison with the Korea Forest Service’s existing forest map (Fig. 6c) highlighted the limitations of traditional methods, particularly their inability to provide detailed and up-to-date species-level classification in mixed forest stands. While the forest map showed general agreement in the location of coniferous and broadleaf stands, it was based on data from the 5th National Forest Inventory (2006-2010) and, therefore, did not accurately reflect the current forest composition, being over 15 years out of date. This discrepancy underscores a key finding of this study: drone imagery offers a significant advantage by providing current, accurate, and detailed data for urban forest monitoring (Fig. 6b). This capability to capture the dynamic nature of urban forests, which can change rapidly due to management practices, natural disturbances, or development, demonstrates the potential of drone technology to overcome the limitations of outdated forest maps. These results support the growing body of evidence that advocates for the use of drones as a valuable tool in urban forestry, providing objective and precise data essential for effective management and planning.

3.7. Discussion of GLCM Feature Performance

While our initial hypothesis suggested that GLCM-derived TFs would enhance classification accuracy, the results indicate that their contribution was limited in this specific context. The permutation importance analysis (Fig. 4) and the model performance comparison (Tables 4 and 5) both suggest that the RGB color bands provided sufficient information for accurate classification, and the addition of GLCM features did not yield significant improvements. This finding is consistent with some previous studies that have found spectral information to be more important than texture for classifying certain tree species or forest types (Fassnacht et al., 2016). However, it contrasts with other studies where TFs have been shown to improve classification accuracy (Dian et al., 2015).

Several factors may explain the limited impact of GLCM features in our study. First, the relatively homogeneous structure of the urban forest within the arboretum, dominated by only two main types, may have resulted in limited textural variations detectable by GLCM at the spatial resolution of the imagery (2.7 cm/px). Second, the use of RGB imagery, which has a coarser spectral resolution compared to multispectral or hyperspectral data, might have restricted the ability of GLCM to capture subtle textural differences relevant to species differentiation. Third, the specific parameters used for GLCM calculation, such as the window size (5 × 5 pixels) and distance (1 pixel), might not have been optimal for capturing the relevant textural variations in this particular dataset. While a 5 × 5 window is often used as a starting point in texture analysis, different window sizes can capture different scales of texture, and the optimal window size depends on the spatial resolution of the imagery and the size of the objects being analyzed. Similarly, the choice of distance influences the scale of texture being captured, and a distance of 1 pixel might be too small to capture meaningful textural variations in tree crowns, especially at a relatively fine spatial resolution. It is also worth noting that averaging the GLCM features across four orientations might have obscured some direction-dependent textural differences that could be relevant for classification. Exploring different parameter combinations, including different window sizes, distances, and orientations, could potentially enhance the contribution of GLCM features in future studies.

3.8. Implications for Urban Forest Management

The high accuracy and efficiency of the drone-based RGB imagery and SVM approach demonstrated in this study have significant implications for urban forest management. The ability to rapidly and accurately map tree types, even with readily available RGB data, provides a valuable tool for:

A) Inventory and Monitoring: Creating detailed and up-to-date inventories of urban forest resources, tracking changes in forest composition over time, and monitoring the effectiveness of management interventions.

B) Species-Specific Management: Developing and implementing species-specific management plans, such as selecting appropriate species for planting based on site conditions and management objectives, and managing for desired species diversity.

C) Pest and Disease Management: Potentially identifying early signs of stress or disease based on changes in spectral characteristics, although this would require further research using multispectral or hyperspectral data.

D) Ecosystem Service Assessment: Providing accurate data for quantifying and mapping ecosystem services, such as air purification, carbon sequestration, and temperature regulation, which can inform urban planning and policy decisions.

E) Risk Assessment: Identifying trees that may pose risks to infrastructure or public safety, particularly in combination with LiDAR data for structural assessments (Ma et al., 2024).

Our findings highlight the limitations of relying solely on existing forest maps, such as those provided by the Korea Forest Service, which may be outdated or lack the necessary spatial detail for effective urban forest management. The 1:5000 scale FGIS map, while useful for regional planning, does not capture the fine-scale heterogeneity of urban forests and may not reflect recent changes in forest composition. The drone-based approach presented here offers a more accurate and timely alternative for mapping and monitoring urban forests, enabling managers to make more informed decisions and optimize the benefits provided by these valuable green spaces.

3.9. Limitations and Future Research

Despite the promising results, this study has several limitations. First, the study was conducted in a relatively small and homogeneous urban forest with only two dominant tree types (coniferous and broadleaf). Further research is needed to evaluate the generalizability of the method to other urban forests with greater species diversity and structural complexity. Second, the study relied on RGB imagery, which has limited spectral resolution compared to multispectral or hyperspectral data. While RGB data proved sufficient for this specific classification task, future studies should explore the potential of multispectral or hyperspectral sensors for more detailed species identification and health assessment (Jeong et al., 2024). Third, the study was conducted during a single season (May). Seasonal variations in tree phenology can affect spectral and textural characteristics, and further research is needed to assess the performance of the method across different seasons (Lee et al., 2022). Fourth, although we achieved high classification accuracy, the manual segmentation of tree crowns remains a time-consuming step. Future research should investigate automated crown delineation methods to further improve the efficiency of the workflow.

Future research should also focus on addressing the limitations of GLCM texture features identified in this study. Investigating different GLCM parameters, such as window size, distance, and orientation, as well as exploring other texture descriptors, could potentially enhance their contribution to classification accuracy. Additionally, integrating data from multiple sensors, such as LiDAR and hyperspectral imagery (Zhang and Qiu, 2012), could provide a more comprehensive characterization of urban forests and improve the accuracy of species identification and health assessment. Finally, while our study demonstrated the efficiency of the SVM approach compared to traditional methods and potentially some deep learning applications, further research is needed to directly compare the performance and computational costs of SVM with other machine learning and deep learning algorithms for urban forest mapping using drone-based RGB imagery (Feng et al., 2015; Im et al., 2020).

3.10. Comparison with Other Machine Learning Methods

While a direct comparison with other machine learning methods was not the primary focus of this study, it is important to consider the strengths and weaknesses of SVMs in relation to other popular techniques, such as Random Forest and CNNs, based on existing literature (Weinstein et al., 2020). Random Forests are known for their robustness, ability to handle high-dimensional data, and relatively low computational cost (Belgiu and Drăguţ, 2016). However, they may not always achieve the same level of accuracy as SVMs, particularly when dealing with complex datasets and subtle differences between classes (Foody and Mathur, 2004). CNNs, a type of deep learning algorithm, have demonstrated impressive results in image classification tasks, including tree species identification (Grabska et al., 2020). They can automatically learn hierarchical features from raw data, potentially eliminating the need for manual feature engineering. However, CNNs typically require large amounts of training data and significant computational resources, which can be a limitation for some applications (LeCun et al., 2015).

In contrast, SVMs, especially with linear kernels, can be more computationally efficient and perform well even with limited training data, as demonstrated in our study. The choice of the most suitable method depends on various factors, including the specific research question, the characteristics of the dataset, the available computational resources, and the desired level of accuracy. Our study suggests that for classifying coniferous and broadleaf trees in urban forests using drone-based RGB imagery, SVMs offer a practical and effective solution, achieving high accuracy with relatively low computational demands.
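To make this trade-off concrete, a small, hedged benchmark sketch (synthetic data; hyperparameters loosely borrowed from the SVM settings reported above) might compare fit time and accuracy as follows:

```python
# Illustrative comparison (not the study's benchmark): SVMs vs. a Random
# Forest on a synthetic two-class dataset, reporting accuracy and fit time.
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=11, n_informative=6,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "SVM (RBF, C=10, gamma=0.01)": SVC(kernel="rbf", C=10, gamma=0.01),
    "SVM (linear, C=1)": SVC(kernel="linear", C=1),
    "Random Forest (100 trees)": RandomForestClassifier(n_estimators=100,
                                                        random_state=42),
}
for name, model in models.items():
    t0 = time.perf_counter()
    model.fit(X_tr, y_tr)
    fit_s = time.perf_counter() - t0
    print(f"{name}: accuracy={model.score(X_te, y_te):.3f}, fit={fit_s:.2f}s")
```

On a real crown-level dataset, both the accuracy ranking and the timing will depend on sample size, feature scaling, and hyperparameter tuning, which is why a direct comparison is flagged above as future work.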

4. Conclusions

This study successfully developed and evaluated an efficient and accurate method for classifying coniferous and broadleaf trees in an urban forest using drone-acquired RGB imagery and an SVM classifier. The RBF kernel, optimized with a cost parameter of 10 and a gamma of 0.01, achieved the highest overall accuracy (99%) and F1-score (0.99), demonstrating the effectiveness of this approach. Notably, the entire process, from image acquisition to classification, took only about 12 hours, highlighting its practicality for real-world applications.

Analysis of feature importance revealed that RGB color bands, particularly the blue band, were the most significant discriminators between the two tree types. While initially hypothesized to enhance classification, GLCM texture features provided limited additional information in this specific context. This may be due to the relatively homogeneous forest structure, the coarser spectral resolution of RGB imagery compared to other sensors, and the chosen GLCM parameters.

Comparison with the Korea Forest Service’s 1:5000 scale forest map, derived from older data (2006-2010), underscored the limitations of traditional methods for capturing the fine-scale heterogeneity and dynamic nature of urban forests. Our drone-based approach provides a more detailed and current assessment, essential for effective management.

This research offers a practical and cost-effective approach to urban forest mapping, with broader implications for South Korea and other rapidly urbanizing regions. By providing timely and accurate data, this method can contribute to more informed decision-making, improved management practices, and ultimately, the creation of healthier and more resilient urban green spaces. Future research should explore the potential of multispectral or hyperspectral sensors to improve species-level identification and assess tree health. Deep learning models and the incorporation of phenological stages could also be investigated.

Acknowledgments

None

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

Figure 1. Location and characteristics of the study area, the Chungbuk National University Arboretum in Cheongju-si, South Korea.
Korean Journal of Remote Sensing 2025; 41: 209-223. https://doi.org/10.7780/kjrs.2025.41.1.17

Figure 2. Workflow for tree type classification using drone RGB imagery and SVM classifier.

Figure 3. Confusion matrix for RBF SVM classification accuracy evaluation (Predicted Class / True Class).

Figure 4. Relative feature importance in SVM model using permutation importance.

Figure 5. Comparison of drone and FGIS-based forest type maps.

Figure 6. Comparison of tree type classification: drone imagery vs. Forest Service map. (a) Drone RGB orthomosaic of the study area at Chungbuk National University Arboretum. (b) Tree type classification map derived from drone imagery and field surveys (this study). (c) Tree type classification map from the Korea Forest Service Forest Spatial Information Service (2006–2010 data). (d) Proportional distribution of broadleaf and coniferous trees within mixed forest stands (FGIS-based).

Table 1. Selected GLCM TFs and equations.

Variable (Abbreviation) | Equation
Mean (MN) | MN = Σ_{i,j=0}^{N-1} i · P(i,j)
Variance (VE) | VE = Σ_{i,j=0}^{N-1} (i - μ)² · P(i,j)
Homogeneity (HY) | HY = Σ_{i,j=0}^{N-1} P(i,j) / [1 + (i - j)²]
Contrast (CT) | CT = Σ_{i,j=0}^{N-1} (i - j)² · P(i,j)
Angular Second Moment (ASM) | ASM = Σ_{i,j=0}^{N-1} P(i,j)²
Dissimilarity (DY) | DY = Σ_{i,j=0}^{N-1} P(i,j) · |i - j|
Entropy (EY) | EY = -Σ_{i,j=0}^{N-1} P(i,j) · ln P(i,j)
Correlation (CN) | CN = Σ_{i,j=0}^{N-1} [(i - μ)(j - μ) · P(i,j)] / σ²

Table 2. Performance metrics and calculation formulas.

Index | Formula
Accuracy | (TP + TN) / (TP + FN + FP + TN)
Precision | TP / (TP + FP)
Recall | TP / (TP + FN)
F1-score | 2 × (Precision × Recall) / (Precision + Recall)
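The formulas in Table 2 can be checked with a few lines of Python. The confusion-matrix counts below are illustrative (chosen to reproduce scores of the same magnitude as the reported RBF results), not the study's actual counts:

```python
# Worked example of the Table 2 metrics from hypothetical confusion-matrix
# counts (tp, tn, fp, fn are illustrative, not the study's Fig. 3 values).
def metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, p, r, f1 = metrics(tp=98, tn=100, fp=2, fn=0)
print(round(acc, 2), round(p, 2), round(r, 2), round(f1, 2))  # 0.99 0.98 1.0 0.99
```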

Table 3. Performance comparison of SVM kernels for tree type classification using RGB and GLCM.

Kernel | Cost | Gamma | Cross-validated Accuracy (SD) | Training Time (s) | Accuracy | Precision | Recall | F1-score
RBF | 10 | 0.01 | 0.97 (0.02) | 5.0 | 0.99 | 0.98 | 1.00 | 0.99
Linear | 1 | 0.01 | 0.98 (0.01) | 96.8 | 0.99 | 0.98 | 1.00 | 0.99
Polynomial | 0.01 | 10 | 0.95 (0.03) | 3.4 | 0.98 | 1.00 | 0.96 | 0.98
Sigmoid | 100 | 0.01 | 0.95 (0.02) | 2.4 | 0.95 | 0.96 | 0.96 | 0.96
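A kernel and hyperparameter comparison of the kind summarized in Table 3 corresponds to a grid search of roughly this form in scikit-learn; the data are synthetic and the grid is illustrative, not the study's exact search space:

```python
# Hedged sketch of a kernel/hyperparameter search with GridSearchCV.
# Synthetic data stands in for the crown-level feature table.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=11, random_state=1)
param_grid = {
    "kernel": ["linear", "rbf", "poly", "sigmoid"],
    "C": [0.01, 0.1, 1, 10, 100],
    "gamma": [0.001, 0.01, 1, 10],
}
search = GridSearchCV(SVC(), param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 2))
```

`cv=5` mirrors the kind of cross-validated accuracy reported in Table 3; `best_params_` then identifies the kernel/C/gamma combination to refit on the full training set.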

Table 4. Performance of the linear SVM kernel for tree type classification using different combinations of RGB color bands and GLCM TFs.

Feature Combination | Cost | Gamma | Accuracy | Training Time (s)
B | 1 | 0.01 | 0.942 | 21.2
B + R | 0.1 | 0.01 | 0.899 | 684.1
B + G | 100 | 0.01 | 0.963 | 374.7
B + R + G | 100 | 0.01 | 0.983 | 452.7
RGB + V | 0.01 | 0.01 | 0.979 | 438.0
RGB + H | 100 | 0.01 | 0.979 | 310.3
RGB + V + H | 0.01 | 0.01 | 0.976 | 192.2
(B – R) / (B + R) | 0.1 | 0.01 | 0.942 | 1819.9
(B – G) / (B + G) | 100 | 0.01 | 0.959 | 29.4
(R – G) / (R + G) | 0.01 | 0.01 | 0.942 | 1887.2

Table 5. Performance of the RBF SVM kernel for tree type classification using different combinations of RGB color bands and GLCM TFs.

Feature Combination | Cost | Gamma | Accuracy | Training Time (s)
B | 0.1 | 1 | 0.899 | 87.4
B + R | 1.0 | 10 | 0.905 | 14.4
B + G | 10 | 1 | 0.973 | 7.4
B + R + G | 10 | 0.01 | 0.986 | 7.0
RGB + V | 10 | 0.001 | 0.983 | 5.0
RGB + H | 100 | 0.001 | 0.986 | 6.7
RGB + V + H | 100 | 0.001 | 0.986 | 7.3
(B – R) / (B + R) | 10 | 0.01 | 0.949 | 26.1
(B – G) / (B + G) | 0.1 | 0.01 | 0.959 | 29.4
(R – G) / (R + G) | 1 | 0.01 | 0.949 | 30.2

Table 6. Processing time for key steps in the workflow.

Step | Time (Hours)
Image Acquisition | 0.5
Orthomosaic Generation | 6
Crown Segmentation | 4
Feature Extraction and Model Training | 2
Total | 12

References

  1. Belgiu, M., and Drăguţ, L., 2016. Random forest in remote sensing: A review of applications and future directions. ISPRS Journal of Photogrammetry and Remote Sensing, 114, 24-31. https://doi.org/10.1016/j.isprsjprs.2016.01.011
  2. Breiman, L., 2001. Random forests. Machine Learning, 45(1), 5-32.
  3. Chehreh, B., Moutinho, A., and Viegas, C., 2023. Latest trends on tree classification and segmentation using UAV data-A review of agroforestry applications. Remote Sensing, 15(9), 2263. https://doi.org/10.3390/rs15092263
  4. Dian, Y., Li, Z., and Pang, Y., 2015. Spectral and texture features combined for forest tree species classification with airborne hyperspectral imagery. Journal of the Indian Society of Remote Sensing, 43, 101-107. https://doi.org/10.1007/s12524-014-0392-6
  5. Dwyer, J. F., McPherson, E. G., Schroeder, H. W., and Rowntree, R. A., 1992. Assessing the benefits and costs of the urban forest. Arboriculture & Urban Forestry, 18(5), 227-234. https://doi.org/10.48044/jauf.1992.045
  6. Fassnacht, F. E., Latifi, H., Stereńczak, K., Modzelewska, A., Lefsky, M., Waser, L. T., Straub, C., and Ghosh, A., 2016. Review of studies on tree species classification from remotely sensed data. Remote Sensing of Environment, 186, 64-87. https://doi.org/10.1016/j.rse.2016.08.013
  7. Feng, Q., Liu, J., and Gong, J., 2015. UAV remote sensing for urban vegetation mapping using random forest and texture analysis. Remote Sensing, 7(1), 1074-1094. https://doi.org/10.3390/rs70101074
  8. Foody, G. M., and Mathur, A., 2004. A relative evaluation of multiclass image classification by support vector machines. IEEE Transactions on Geoscience and Remote Sensing, 42(6), 1335-1343. https://doi.org/10.1109/TGRS.2004.827257
  9. Grabska, E., Paluba, D., Fraczyk, P., and Twardowski, M., 2020. A review on deep learning methods for urban trees and road detection in remote sensing images. Remote Sensing, 12(9), 1484. https://doi.org/10.3390/rs12091484
  10. Gu, J., Grybas, H., and Congalton, R. G., 2020. Individual tree crown delineation from UAS imagery based on region growing and growth space considerations. Remote Sensing, 12(15), 2363. https://doi.org/10.3390/rs12152363
  11. Guo, Q., Zhang, J., Guo, S., Ye, Z., Deng, H., Hou, X., and Zhang, H., 2022. Urban tree classification based on object-oriented approach and random forest algorithm using unmanned aerial vehicle (UAV) multispectral imagery. Remote Sensing, 14(16), 3885. https://doi.org/10.3390/rs14163885
  12. Haralick, R. M., Shanmugam, K., and Dinstein, I. H., 1973. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, SMC-3(6), 610-621. https://doi.org/10.1109/TSMC.1973.4309314
  13. Im, J., Rhee, J., and Jensen, J. R., 2020. Mapping urban tree cover using deep learning and multi-source remote sensing data. Remote Sensing, 12(15), 2413. https://doi.org/10.3390/rs12152413
  14. Jeong, K. S., Go, S. H., Lee, K. K., and Park, J. H., 2024. Analyzing soybean growth patterns in open-field smart agriculture under different irrigation and cultivation methods using drone-based vegetation indices. Korean Journal of Remote Sensing, 40(1), 93-101. https://doi.org/10.7780/kjrs.2024.40.1.5
  15. Korea Forest Service, 2013. Forest spatial information service. Available online: https://map.forest.go.kr/forest (accessed on May 8, 2024)
  16. LeCun, Y., Bengio, Y., and Hinton, G., 2015. Deep learning. Nature, 521(7553), 436-444. https://doi.org/10.1038/nature14539
  17. Lee, H. J., Go, S. H., and Park, J. H., 2022. Assessment of lodged damage rate of soybean using support vector classifier model combined with drone based RGB vegetation indices. Korean Journal of Remote Sensing, 38(6-1), 1489-1503. https://doi.org/10.7780/kjrs.2022.38.6.1.37
  18. Ma, B., Hauer, R. J., Östberg, J., Koeser, A. K., Wei, H., and Xu, C., 2021. A global basis of urban tree inventories: What comes first the inventory or the program. Urban Forestry & Urban Greening, 60, 127087. https://doi.org/10.1016/j.ufug.2021.127087
  19. Ma, Y., Zhao, Y., Im, J., Zhao, Y., and Zhen, Z., 2024. A deep-learning-based tree species classification for natural secondary forests using unmanned aerial vehicle hyperspectral images and LiDAR. Ecological Indicators, 159, 111608. https://doi.org/10.1016/j.ecolind.2024.111608
  20. Nowak, D. J., Crane, D. E., and Stevens, J. C., 2006. Air pollution removal by urban trees and shrubs in the United States. Urban Forestry & Urban Greening, 4(3-4), 115-123. https://doi.org/10.1016/j.ufug.2006.01.007
  21. Nowak, D. J., and Dwyer, J. F., 2007. Understanding the benefits and costs of urban forest ecosystems. In: Kuser, J. E., (ed.), Urban and community forestry in the northeast, Springer, pp. 25-46. https://doi.org/10.1007/978-1-4020-4289-8_2
  22. Park Green Space Act, 2020. Act on Urban Parks and Green Areas. Available online: https://elaw.klri.re.kr/eng_service/lawViewContent.do?hseq=6905 (accessed on Jan. 3, 2025)
  23. Patle, A., and Chouhan, D. S., 2013. SVM kernel functions for classification. In Proceedings of the 2013 International Conference on Advances in Technology and Engineering (ICATE), Mumbai, India, Jan. 23-25, pp. 1-9. https://doi.org/10.1109/ICAdTE.2013.6524743
  24. Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., and Duchesnay, É., 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12, 2825-2830. http://jmlr.org/papers/v12/pedregosa11a.html
  25. Shivaprakash, K. N., Swami, N., Mysorekar, S., Arora, R., Gangadharan, A., Vohra, K., Jadeyegowda, M., and Kiesecker, J. M., 2022. Potential for artificial intelligence (AI) and machine learning (ML) applications in biodiversity conservation, managing forests, and related services in India. Sustainability, 14(12), 7154. https://doi.org/10.3390/su14127154
  26. van der Walt, S., Schönberger, J. L., Nunez-Iglesias, J., Boulogne, F., Warner, J. D., and Yarkony, J.; the scikit-image contributors, 2014. scikit-image: image processing in Python. PeerJ, 2, e453. https://doi.org/10.7717/peerj.453
  27. Velasquez-Camacho, L., Cardil, A., Mohan, M., Etxegarai, M., Anzaldi, G., and de-Miguel, S., 2021. Remotely sensed tree characterization in urban areas: A review. Remote Sensing, 13(23), 4889. https://doi.org/10.3390/rs13234889
  28. Wang, X., Wang, Y., Zhou, C., Yin, L., and Feng, X., 2021. Urban forest monitoring based on multiple features at the single tree scale by UAV. Urban Forestry & Urban Greening, 58, 126958. https://doi.org/10.1016/j.ufug.2020.126958
  29. Weinstein, B. G., Marconi, S., Bohlman, S., Zare, A., and White, E., 2020. Individual tree-crown detection in RGB imagery using semi-supervised deep learning neural networks. Remote Sensing, 12(8), 1309. https://doi.org/10.3390/rs12081309
  30. Zhang, C., and Qiu, F., 2012. Mapping individual tree species in an urban forest using airborne Lidar data and hyperspectral imagery. Photogrammetric Engineering & Remote Sensing, 78(10), 1079-1087. https://doi.org/10.14358/PERS.78.10.1079
  31. Zhou, J., Qin, J., Gao, K., and Leng, H., 2016. SVM-based soft classification of urban tree species using very high-spatial resolution remote-sensing imagery. International Journal of Remote Sensing, 37(11), 2541-2559. https://doi.org/10.1080/01431161.2016.1178867