1.   Introduction

Accurate tree dimensional parameters, such as crown width, tree height, and diameter at breast height (DBH), are crucial for efficient forest planning, monitoring, and management (Saleh et al., 2023). According to Irundu et al. (2023), these measurements are essential for assessing important forest metrics such as carbon stock, biomass, and timber volume. Manual field surveys remain the primary method of data collection in Indonesia. Although generally reliable, these surveys often require substantial time, cost, and effort (Mohan et al., 2021). As a result, there is an urgent need for more economical and practical alternatives that can expedite the acquisition of tree dimension data, especially in expansive or remote forest regions.

Recent years have witnessed a sharp increase in the use of remote sensing technology, especially Unmanned Aerial Vehicles (UAVs), notably in the forestry sector. Because they are high-resolution, economical, and flexible, UAVs offer a promising alternative for gathering forest data (Guimarães et al., 2020). These platforms have been used extensively for forest inventory tasks, including the assessment of forest volume, biomass, and carbon stock. Most current applications, however, still rely on area-based techniques, which frequently fall short of individual tree-based methods in terms of precision (Dainelli et al., 2021; Guimarães et al., 2020; Schiefer et al., 2020). In response to this limitation, recent research has increasingly focused on developing tree inventory methods based on individual tree detection and segmentation from UAV-derived remote sensing data (Almeida et al., 2021; dos Santos et al., 2019; Mohan et al., 2021).

A variety of sensors and algorithms have been developed to support individual tree detection and segmentation using UAV-based remote sensing technologies (Dainelli et al., 2021; Diez et al., 2021; Hanapi et al., 2019). Commonly utilized sensors include RGB cameras, multispectral sensors, and Light Detection and Ranging (LiDAR) systems, as well as sensor combinations to enhance data quality and analysis outcomes (Dalponte & Coomes, 2016; González-Jaramillo et al., 2019). In parallel, the advancement of detection and segmentation algorithms has significantly improved the accuracy of tree inventory processes. These algorithms range from conventional techniques such as Object-Based Image Analysis (OBIA) to more sophisticated machine learning and deep learning approaches. Convolutional Neural Networks (CNNs) (Schiefer et al., 2020), Random Forests (Yang et al., 2022), and fully convolutional networks such as U-Net are notable examples that have demonstrated good performance in delineating individual tree crowns (Ball et al., 2023; Y. Li et al., 2022; Zhang et al., 2022).

Although numerous sensors and algorithms have been developed for individual tree detection and segmentation, several challenges remain. First, the use of advanced sensors such as multispectral and LiDAR systems involves high costs, limiting their accessibility for large-scale or resource-constrained applications (Sun et al., 2023). Second, selecting an appropriate algorithm that consistently achieves high accuracy across diverse forest conditions remains a complex issue (Irlan et al., 2020). Third, dense canopies and heterogeneous vegetation make it more difficult to apply individual tree segmentation techniques in structurally complex forests, such as Indonesia's natural forests (Nugroho et al., 2022). Given the advantages and disadvantages of current methodologies, the development of more precise and economical techniques for individual tree segmentation is still urgently needed, especially in natural forest ecosystems (Zhang et al., 2024).

In this work, point clouds generated from RGB images of tropical natural forests were used to segment individual trees. RGB-derived point clouds provide a more affordable option for data collection compared to other remote sensing technologies such as LiDAR (Dell et al., 2019; Goldbergs et al., 2018). In addition, this study utilized a point cloud-based segmentation algorithm, which has demonstrated superior performance over raster-based or hybrid methods in previous research (Irlan et al., 2020). The primary objective of this study is to evaluate the accuracy of individual tree segmentation using RGB-based point clouds in structurally complex tropical forest environments. The findings are expected to support forestry stakeholders in selecting more efficient and accurate methods for acquiring tree dimension data, thereby facilitating improved forest management and monitoring practices.

 

2.    Material and Methods

1)          Research Site

Field measurements were carried out in natural forest areas located within the Malunda Forest Management Unit (FMU) in West Sulawesi Province, Indonesia (see Figure 1). The Malunda FMU encompasses approximately 52,422.81 hectares, comprising a mix of protected forest, production forest, and limited production forest zones. All data processing and analysis were conducted at the Geospatial Information System Laboratory, Integrated Laboratory, University of West Sulawesi.


 

Figure 1. Map of the Research Location


2)          Data Collection

The data used in this study consisted of tree reference data (tree count, coordinates, height, and crown diameter) and UAV-derived RGB imagery (see Figure 2). Measurement plots were established within the UAV flight area to obtain tree dimension data. Thirty square plots, each 20 × 20 meters, were used. Only trees with a stem diameter at breast height (DBH) greater than 10 cm were measured. Tree coordinates were recorded with a Trimble R780-2 GNSS receiver, with horizontal and vertical accuracy thresholds of 0.5 and 1.0 meters, respectively. Tree heights were measured with a Haga altimeter. In total, data were collected for 614 individual trees (see Table 1).


 

Figure 2. The illustration of tree height and crown diameter measurement

Table 1. Field-measured tree dimension data

Plot ID | Number of Trees | Tree Height Min/Mean/Max (m) | Crown Radius Min/Mean/Max (m)
1 | 14 | 12.00 / 21.32 / 37.00 | 1.85 / 3.42 / 7.80
2 | 22 | 11.30 / 24.34 / 40.00 | 1.10 / 3.40 / 5.36
3 | 29 | 11.50 / 19.16 / 44.00 | 0.80 / 3.35 / 5.14
4 | 27 | 13.50 / 21.13 / 31.00 | 1.34 / 3.35 / 6.14
5 | 20 | 11.70 / 19.80 / 38.00 | 1.20 / 2.04 / 3.75
6 | 6 | 3.00 / 12.50 / 22.50 | 2.16 / 3.06 / 4.51
7 | 24 | 2.50 / 13.85 / 24.00 | 1.05 / 2.11 / 4.11
8 | 8 | 6.00 / 13.19 / 25.00 | 0.56 / 2.27 / 5.11
9 | 21 | 5.00 / 14.25 / 36.00 | 0.90 / 2.05 / 4.15
10 | 13 | 7.00 / 20.38 / 35.00 | 1.53 / 4.02 / 6.53
11 | 18 | 13.00 / 20.33 / 31.00 | 1.25 / 2.60 / 5.25
12 | 20 | 8.70 / 18.27 / 29.00 | 1.33 / 2.98 / 5.23
13 | 25 | 10.00 / 19.99 / 33.00 | 1.33 / 3.36 / 8.09
14 | 12 | 7.00 / 19.66 / 33.50 | 2.13 / 3.71 / 4.81
15 | 29 | 7.00 / 17.45 / 28.30 | 1.53 / 2.05 / 3.55
16 | 32 | 8.00 / 14.61 / 22.00 | 0.96 / 3.12 / 5.37
17 | 30 | 6.00 / 17.53 / 28.00 | 1.00 / 3.02 / 5.75
18 | 26 | 8.00 / 17.63 / 28.00 | 1.12 / 2.95 / 5.10
19 | 11 | 5.00 / 12.27 / 27.00 | 1.53 / 2.68 / 3.50
20 | 15 | 16.00 / 21.65 / 35.00 | 2.44 / 2.55 / 6.06
21 | 24 | 9.00 / 18.27 / 31.00 | 2.15 / 2.92 / 5.00
22 | 24 | 9.00 / 18.67 / 36.00 | 1.10 / 2.90 / 10.18
23 | 19 | 10.00 / 17.96 / 28.00 | 1.98 / 4.04 / 6.98
24 | 29 | 4.00 / 18.01 / 31.00 | 1.64 / 3.29 / 8.01
25 | 20 | 5.50 / 20.75 / 38.00 | 1.50 / 3.69 / 8.75
26 | 18 | 10.00 / 21.78 / 32.00 | 1.37 / 4.73 / 6.86
27 | 31 | 7.00 / 19.13 / 32.00 | 0.99 / 3.46 / 6.99
28 | 21 | 10.00 / 15.72 / 27.00 | 1.23 / 2.27 / 3.35
29 | 11 | 7.17 / 19.73 / 32.36 | 1.53 / 2.35 / 5.17
30 | 15 | 15.58 / 21.15 / 37.71 | 2.03 / 2.25 / 7.11

 


UAV-based RGB imagery was acquired using a DJI Mavic 3 Enterprise drone, equipped with a D-RTK 2 Mobile Station for enhanced positioning accuracy. A total of 190 high-resolution RGB images were successfully captured during the flight missions. The UAV image acquisition parameters are summarized in Table 2.

Agisoft Metashape Professional version 2.1.0 was used to generate point cloud data from the UAV-acquired RGB images. Because of variations in vegetation structure and image overlap, the resulting point cloud densities differed among the field measurement plots. Table 3 displays the point cloud density values for each plot.


 

Table 2. Parameters of UAV-RGB images

Parameter | Specification
Drone type | DJI Mavic 3 Enterprise + D-RTK 2 Mobile Station
Flight speed | 8–10 m/s
Flight altitude | 200 m above sea level
Flight sidelap | 70%
Flight overlap | 80%
GSD | 9.23 cm/pixel

 

Table 3. Point cloud density and maximum height after normalization

Plot ID | Tree Density (trees/ha) | Point Cloud Density (pts/m²) | Max Height (m)
1 | 350 | 727.11 | 35.41
2 | 550 | 657.77 | 36.09
3 | 725 | 693.60 | 25.91
4 | 675 | 587.65 | 17.71
5 | 500 | 533.02 | 35.52
6 | 150 | 535.96 | 50.77
7 | 600 | 572.08 | 24.62
8 | 200 | 653.49 | 26.35
9 | 525 | 689.33 | 40.72
10 | 325 | 727.14 | 36.93
11 | 450 | 509.85 | 26.45
12 | 500 | 503.27 | 25.08
13 | 625 | 646.27 | 25.71
14 | 300 | 652.35 | 28.17
15 | 725 | 703.87 | 26.06
16 | 800 | 569.64 | 57.43
17 | 750 | 685.38 | 27.19
18 | 650 | 575.67 | 31.59
19 | 275 | 577.39 | 50.65
20 | 375 | 753.06 | 36.05
21 | 600 | 393.82 | 25.34
22 | 600 | 333.62 | 24.04
23 | 475 | 592.27 | 31.01
24 | 725 | 568.24 | 21.04
25 | 500 | 742.35 | 39.53
26 | 450 | 571.48 | 30.59
27 | 775 | 612.09 | 23.55
28 | 525 | 595.91 | 38.01
29 | 275 | 618.19 | 34.65
30 | 375 | 643.73 | 26.08

 

Figure 3. Point cloud after normalization: a) distribution of plots; b) vertical view of plot

 


3)          Individual Tree Segmentation

The individual tree segmentation in this study was conducted using the relative tree distance algorithm developed by Li et al. (2012). This algorithm was selected due to its demonstrated effectiveness in delineating overlapping tree crowns, making it well suited for application in natural forests characterized by complex and heterogeneous crown structures. This point cloud–based algorithm segments tree crown areas by analyzing the relative distance and height of points within a normalized point cloud (Li et al., 2012; Liu et al., 2023). Point cloud normalization is performed by subtracting the interpolated ground-point elevations, derived using the K-Nearest Neighbors Inverse Distance Weighting (KNNIDW) method, from the original point cloud heights, thereby producing a dataset in which the ground surface is standardized to an elevation of zero. The segmentation algorithm first identifies local maxima, assumed to represent treetops, and then iteratively delineates crown boundaries according to the vertical and horizontal proximity of points (Marcello et al., 2024). All processing was performed in R using the lidR package developed by Roussel et al. (2020), which provides dedicated functions for managing point cloud data and segmenting individual trees (Hardenbol et al., 2021). Figure 4 shows the individual tree segmentation workflow.
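To make this step concrete, the minimal R sketch below illustrates how a normalized point cloud can be produced with the lidR package. The file name, the use of the cloth simulation filter for ground classification, and the parameter values (k = 10, p = 2) are illustrative assumptions rather than the exact settings of this study.

    # Minimal sketch: normalizing an RGB-derived point cloud with lidR.
    # File name and parameter values are illustrative, not the study's settings.
    library(lidR)

    las <- readLAS("plot_01.las")          # photogrammetric point cloud of one plot

    # Classify ground points first (photogrammetric clouds are usually unclassified);
    # the cloth simulation filter (csf) is one option available through lidR.
    las <- classify_ground(las, algorithm = csf())

    # Subtract the ground surface interpolated with k-nearest-neighbour IDW,
    # so that ground points end up at an elevation of (approximately) zero.
    nlas <- normalize_height(las, algorithm = knnidw(k = 10, p = 2))

    # Quick check of the normalization: ground points should cluster around 0 m.
    summary(filter_ground(nlas)$Z)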

The Li et al. (2012) function takes several arguments that control individual tree segmentation: the spacing thresholds dt1 and dt2, the dominant-tree height Zu (dt2 is applied when a point's elevation exceeds Zu), the search radius R, the minimum tree height hmin, and the maximum crown radius speed_up (Roussel et al., 2020). When executing this function, we adjusted the parameters plot by plot, setting dt1 to the minimum tree spacing, dt2 to the mean tree spacing, R to the mean crown radius, Zu to the mean tree height, hmin to the minimum tree height, and speed_up to the maximum crown radius.
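The fragment below sketches how these arguments might be passed to the lidR implementation of the Li et al. (2012) algorithm for a single plot; the numeric values stand in for the plot-level field statistics described above and are purely illustrative.

    # Sketch: per-plot segmentation with li2012() in lidR.
    # The values below represent field-derived plot statistics and are illustrative.
    min_spacing  <- 2.6    # dt1: minimum tree spacing in the plot (m)
    mean_spacing <- 3.4    # dt2: mean tree spacing in the plot (m)
    mean_radius  <- 2.9    # R: mean crown radius (m)
    mean_height  <- 19.2   # Zu: mean tree height (m)
    min_height   <- 10.0   # hmin: minimum tree height (m)
    max_radius   <- 5.4    # speed_up: maximum crown radius (m)

    seg <- segment_trees(
      nlas,                                    # normalized point cloud from the previous step
      li2012(dt1 = min_spacing, dt2 = mean_spacing, R = mean_radius,
             Zu = mean_height, hmin = min_height, speed_up = max_radius)
    )

    # Each point now carries a treeID; summarise the detected trees
    # (position of the highest point, tree height, convex-hull crown area).
    trees <- crown_metrics(seg, func = .stdtreemetrics, geom = "convex")
    nrow(trees)                                # number of segmented trees in the plot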


 

Figure 4. Flow Chart of Data Processing

 


3. Data Analysis

The accuracy of individual tree segmentation was evaluated by comparing UAV-RGB-derived tree data with field-measured reference trees. The assessment followed criteria outlined by Wallace et al. (2014), which define a match between segmented and reference trees based on three conditions: (i) the distance between a UAV-RGB-derived tree and a field tree must be less than 60% of the average tree spacing within the plot; (ii) the canopy of the UAV-RGB tree must overlap with the canopy of the field tree by at least 20%, calculated from the area of the intersection divided by the area of the field canopy; and (iii) if multiple canopies overlap with a single field tree, the closest match is determined using the 2D Euclidean distance.
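A possible implementation of these three matching rules is sketched below in R with the sf package; the object names (reference and segmented tree positions as point geometries, crowns as polygon geometries) and the structure of the function are assumptions made for illustration, not the exact code used in this study.

    # Sketch of the Wallace et al. (2014) matching rules.
    # ref_pts / seg_pts: sfc POINT geometries of field and segmented tree positions;
    # ref_crowns / seg_crowns: sfc POLYGON geometries of the corresponding crowns;
    # mean_spacing: mean tree spacing of the plot (m). All names are illustrative.
    library(sf)

    match_trees <- function(ref_pts, ref_crowns, seg_pts, seg_crowns, mean_spacing) {
      matches <- rep(NA_integer_, length(ref_pts))
      for (i in seq_along(ref_pts)) {
        d <- as.numeric(st_distance(ref_pts[i], seg_pts))        # rule (i): 2D distances
        overlap <- sapply(seq_along(seg_crowns), function(j) {    # rule (ii): crown overlap
          inter <- suppressWarnings(st_intersection(ref_crowns[i], seg_crowns[j]))
          if (length(inter) == 0) return(0)
          as.numeric(st_area(inter)) / as.numeric(st_area(ref_crowns[i]))
        })
        candidates <- which(d < 0.6 * mean_spacing & overlap >= 0.2)
        if (length(candidates) > 0)                               # rule (iii): keep closest
          matches[i] <- candidates[which.min(d[candidates])]
      }
      matches    # index of the matched segmented tree per field tree (NA = unmatched)
    }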

Based on these matching criteria, segmentation accuracy was classified into three categories, as adapted from Irlan et al. (2020): (i) True Positive (TP), a tree correctly segmented in accordance with all matching rules; (ii) False Negative (FN), a field tree not correctly segmented or merged with neighboring trees; and (iii) False Positive (FP), a segmented tree that does not correspond to any actual field tree. A visual representation of this evaluation process is provided in Figure 5.

The recall (r), precision (p), and F-score values are computed to assess the accuracy of individual tree segmentation (Li et al., 2012; Mohan et al., 2021).  Precision evaluates the accuracy of tree segmentation across all detected trees, recall shows the percentage of true trees that are correctly segmented, and F-score calculates the overall accuracy by taking into account the r and p values of the segmented trees using the following formula (Ahmadi et al., 2022; Gan et al., 2023; Marcello et al., 2024; Yu et al., 2022):

Precision = TP / (TP + FP)                                    (1)

Recall = TP / (TP + FN)                                    (2)

F1-score = 2 × (Precision × Recall) / (Precision + Recall)                          (3)
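Applied to the overall counts reported in Table 4 (384 TP, 178 FN, 121 FP), equations (1)–(3) can be evaluated directly, as in the short R example below.

    # Example: evaluating equations (1)-(3) with the study's overall counts.
    tp <- 384; fn <- 178; fp <- 121

    precision <- tp / (tp + fp)                                   # Eq. (1)
    recall    <- tp / (tp + fn)                                   # Eq. (2)
    f_score   <- 2 * precision * recall / (precision + recall)    # Eq. (3)

    round(c(recall = recall, precision = precision, f_score = f_score), 2)
    # recall 0.68, precision 0.76, F-score 0.72 (cf. Table 4)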

 


Figure 5. Three types of segmentation: (i) TP; (ii) FN; (iii) FP


The accuracy of tree coordinates and crown radius was assessed using the Root Mean Square Error (RMSE) (Irlan et al., 2020). This accuracy test was carried out only on correctly detected trees (TP). The equation used to calculate RMSE is as follows (Ahmadi et al., 2022; Gan et al., 2023; Yu et al., 2022; Zhou et al., 2023):

RMSE = √(Σ(ŷi - yi)² / n)     (4)

where ŷi is the predicted tree coordinate or crown radius, yi is the corresponding reference value, and n is the number of trees.
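As a small illustration of equation (4), the R function below computes the RMSE from vectors of predicted and reference values; the example vectors are invented for demonstration only.

    # Example: RMSE per equation (4), applied only to matched (TP) trees.
    rmse <- function(predicted, reference) {
      sqrt(mean((predicted - reference)^2))
    }

    # Invented example values (crown radii in metres) for demonstration.
    predicted <- c(2.6, 3.1, 1.9, 2.4)
    reference <- c(3.4, 2.9, 2.1, 3.0)
    rmse(predicted, reference)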

 

4.   Results

1)  Individual Tree Segmentation Result

Figure 6 shows the outcomes of individual tree segmentation using point clouds generated from RGB images. The point cloud-based algorithm segmented 683 trees in total. The segmentation results show a tendency to overestimate the number of trees. This overestimation was especially noticeable in plots with relatively low tree densities, such as Plots 1, 6, and 8, where small canopy gaps or understory elements may have been mistakenly classified as individual trees.


 

Figure 6. Number of reference trees and segmented trees in each plot


The relationship between the number of reference (field-measured) trees and the number of segmented trees per plot is illustrated in Figure 7. A relatively strong positive correlation was observed between the two, with a correlation coefficient of r = 0.94 and a coefficient of determination of R² = 0.89. While the total number of field-measured trees across all plots was 614, the point cloud–based segmentation algorithm identified 683 trees. This disparity implies that the approach tends toward over-segmentation, in which individual trees are inadvertently split into several segments. Interestingly, this result contrasts with the findings of Li et al. (2012), which showed that the same algorithm tended to produce under-segmentation, especially in dense forest stands. The over-segmentation observed in this study may be attributable to the structural features of tropical natural forests or to variations in UAV data resolution and point cloud density. Identifying and delineating tree crowns with precision in dense tropical forests is a difficult task (Aubry-Kientz et al., 2021).


Figure 7. Relationship between the number of reference trees and the number of segmented trees per plot


2)  Individual Tree Segmentation Accuracy

The accuracy assessment of the individual tree segmentation method using UAV-RGB-derived point clouds demonstrated moderately strong performance (see Table 4). Of the field trees, 384 were correctly segmented (True Positives, TP). In contrast, 178 field trees were missed or merged with neighboring trees (False Negatives, FN), and 121 segmented trees did not correspond to any actual field tree (False Positives, FP). The overall accuracy metrics yielded a recall (r) of 0.68, a precision (p) of 0.76, and an F-score (F) of 0.72, indicating a balanced trade-off between omission and commission errors.

Segmentation accuracy varied significantly at the plot level.  F-scores ranged from 0.18 to 0.82, recall values from 0.20 to 0.86, and precision values from 0.17 to 1.00.  This fluctuation most likely results from variations in point cloud quality, tree density, and local forest structure between plots.


Table 4. The result of accuracy assessment for individual tree segmentation

Plot ID | Tree Reference | FN | FP | TP | Tree Segmented (total) | r | p | F-score
1 | 14 | 3 | 13 | 7 | 23 | 0.70 | 0.35 | 0.47
2 | 22 | 6 | 1 | 15 | 22 | 0.71 | 0.94 | 0.81
3 | 29 | 10 | 3 | 17 | 30 | 0.63 | 0.85 | 0.72
4 | 27 | 11 | 2 | 15 | 28 | 0.58 | 0.88 | 0.70
5 | 20 | 6 | 9 | 12 | 27 | 0.67 | 0.57 | 0.62
6 | 6 | 4 | 5 | 1 | 10 | 0.20 | 0.17 | 0.18
7 | 24 | 5 | 7 | 11 | 23 | 0.69 | 0.61 | 0.65
8 | 8 | 3 | 2 | 6 | 11 | 0.67 | 0.75 | 0.71
9 | 21 | 6 | 6 | 9 | 21 | 0.60 | 0.60 | 0.60
10 | 13 | 7 | 4 | 8 | 19 | 0.53 | 0.67 | 0.59
11 | 18 | 6 | 4 | 10 | 20 | 0.63 | 0.71 | 0.67
12 | 20 | 4 | 5 | 17 | 26 | 0.81 | 0.77 | 0.79
13 | 25 | 6 | 3 | 18 | 27 | 0.75 | 0.86 | 0.80
14 | 12 | 4 | 4 | 10 | 18 | 0.71 | 0.71 | 0.71
15 | 29 | 5 | 7 | 18 | 30 | 0.78 | 0.72 | 0.75
16 | 32 | 11 | 3 | 18 | 32 | 0.62 | 0.86 | 0.72
17 | 30 | 9 | 5 | 17 | 31 | 0.65 | 0.77 | 0.71
18 | 26 | 5 | 4 | 18 | 27 | 0.78 | 0.82 | 0.80
19 | 11 | 2 | 5 | 6 | 13 | 0.75 | 0.55 | 0.63
20 | 15 | 3 | 2 | 11 | 16 | 0.79 | 0.85 | 0.81
21 | 24 | 6 | 5 | 15 | 26 | 0.71 | 0.75 | 0.73
22 | 24 | 7 | 1 | 16 | 24 | 0.70 | 0.94 | 0.80
23 | 19 | 9 | 1 | 12 | 22 | 0.57 | 0.92 | 0.71
24 | 29 | 8 | 1 | 20 | 29 | 0.71 | 0.95 | 0.82
25 | 20 | 9 | 0 | 14 | 23 | 0.61 | 1.00 | 0.76
26 | 18 | 6 | 0 | 13 | 19 | 0.68 | 1.00 | 0.81
27 | 31 | 9 | 1 | 21 | 31 | 0.70 | 0.95 | 0.81
28 | 21 | 2 | 9 | 12 | 23 | 0.86 | 0.57 | 0.69
29 | 11 | 4 | 3 | 8 | 15 | 0.67 | 0.73 | 0.70
30 | 15 | 2 | 6 | 9 | 17 | 0.82 | 0.60 | 0.69
Overall | 614 | 178 | 121 | 384 | 683 | 0.68 | 0.76 | 0.72

 


3)  Evaluation of Tree Position

Tree position accuracy was evaluated using the 384 correctly segmented trees (True Positives). The results of this assessment are presented in Table 5. The overall Root Mean Square Error (RMSE) for tree position was 1.95 meters, with an average distance between the predicted and reference tree locations of 3.37 meters. At the plot level, tree position accuracy varied, with RMSE values ranging from 0.77 to 2.84 meters.

It is important to note that several factors could have contributed to positional errors. First, a baseline degree of uncertainty is introduced by the horizontal precision of the reference tree coordinates, which were recorded using a GNSS receiver with an accuracy threshold of 0.5 meters (Pang et al., 2021). Second, while the segmentation method determines tree positions from the highest point in the point cloud, which usually corresponds to the top of the crown, reference tree coordinates were taken at the base of the tree trunk (Aubry-Kientz et al., 2021; Deng et al., 2024). This methodological difference in defining tree position (crown apex versus trunk base) likely introduced systematic positional inconsistencies (Li et al., 2012).


Table 5. Mean tree spacing and RMSE of tree position

Plot ID | Mean Spacing of Reference Trees (m) | RMSE of Tree Position (m)
1 | 2.58 | 2.18
2 | 3.43 | 1.68
3 | 3.16 | 1.57
4 | 2.86 | 1.80
5 | 3.49 | 1.88
6 | 1.98 | 0.77
7 | 2.34 | 1.35
8 | 3.94 | 2.16
9 | 2.99 | 1.70
10 | 4.37 | 1.74
11 | 3.25 | 1.97
12 | 3.65 | 2.03
13 | 3.19 | 1.86
14 | 4.89 | 1.97
15 | 3.43 | 1.62
16 | 2.69 | 1.67
17 | 2.91 | 1.80
18 | 2.78 | 2.05
19 | 4.33 | 2.44
20 | 3.94 | 2.60
21 | 3.06 | 1.70
22 | 3.43 | 2.76
23 | 2.75 | 1.85
24 | 2.91 | 1.86
25 | 3.45 | 2.26
26 | 3.94 | 2.65
27 | 3.10 | 1.73
28 | 3.31 | 1.62
29 | 5.30 | 2.84
30 | 3.73 | 2.36
Overall | 3.37 | 1.95

 


4)  Evaluation of Tree Crown Radius

Table 6 presents the accuracy assessment of the tree crown radius. The estimated crown radius has an overall Root Mean Square Error (RMSE) of 1.59 meters, with plot-level values ranging from 0.85 to 2.77 meters. This variation could be explained by the segmentation algorithm's tendency to draw crown boundaries closer to the lower parts of the canopy, which may not fully capture the spread of the crown. In point cloud-based segmentation, this kind of underestimation is typical when point density is low or when overlapping canopy layers mask the actual horizontal extent of individual crowns (Ma et al., 2021).


Table 6. RMSE of the tree crown radius in each plot

Plot ID | Reference Crown Radius (m) | Segmented Crown Radius (m) | RMSE (m)
1 | 3.42 | 2.58 | 1.25
2 | 3.40 | 2.23 | 1.19
3 | 3.35 | 1.38 | 1.65
4 | 3.35 | 1.38 | 2.29
5 | 2.04 | 1.93 | 1.24
6 | 3.06 | 4.00 | 0.94
7 | 2.11 | 1.80 | 1.03
8 | 2.27 | 2.29 | 1.36
9 | 2.05 | 2.66 | 1.11
10 | 4.02 | 2.73 | 2.30
11 | 2.60 | 3.09 | 1.10
12 | 2.98 | 2.35 | 1.34
13 | 3.36 | 2.30 | 2.14
14 | 3.71 | 2.69 | 1.36
15 | 2.05 | 2.17 | 0.85
16 | 3.12 | 2.16 | 1.39
17 | 3.02 | 2.36 | 1.69
18 | 2.95 | 2.13 | 1.75
19 | 2.68 | 2.79 | 1.11
20 | 2.55 | 2.96 | 1.77
21 | 2.92 | 2.47 | 1.41
22 | 2.90 | 1.70 | 1.72
23 | 4.04 | 2.55 | 2.14
24 | 3.29 | 2.46 | 1.98
25 | 3.69 | 2.48 | 2.62
26 | 4.73 | 2.66 | 2.77
27 | 3.46 | 2.04 | 2.32
28 | 2.27 | 2.45 | 0.85
29 | 2.35 | 2.62 | 1.40
30 | 2.25 | 3.01 | 1.72
Overall | 3.00 | 2.41 | 1.59

 


5.    Discussion

1)  The Limitation of Point Cloud Data

Each point in the generated point cloud corresponds to a pixel derived from the RGB images. These point clouds were automatically classified into several surface categories, including soil, low vegetation, high vegetation, and road surface (Pacheco-Prado et al., 2025). However, a key limitation of RGB sensors is their inability to penetrate dense canopies, resulting in insufficient point data beneath the upper canopy layer (González-Jaramillo et al., 2019; Hanapi et al., 2019; You et al., 2023). This limitation is particularly critical for point cloud normalization, which relies on accurately identified ground points to calculate tree height and perform segmentation. According to Dell et al. (2019), the absence of reliable ground points can significantly affect the accuracy of individual tree segmentation. Ground points can be classified efficiently in relatively open areas; in plots with dense or closed canopies, however, ground point classification becomes more challenging or even impossible. This is evident in Figures 8a–8c, where ground points are visible only in sparsely vegetated areas, while Figure 8d shows no ground classification at all due to complete canopy closure. One way to overcome the lack of ground points in high-density forests is to combine the above-canopy point clouds with terrestrial point clouds (Xia et al., 2023).


 

Figure 8. Classification of point cloud in: a) low density; b) moderate density; c) high density; and d) very high density


2)  Point Cloud Normalization Issue

One of the critical components in point cloud–based tree segmentation is the process of point cloud normalization. Normalized point clouds serve as the primary input for individual tree segmentation algorithms, as they provide accurate height information relative to the ground surface (Marcello et al., 2024). The purpose of normalization is to remove terrain elevation variability, allowing for the extraction of tree heights above ground level rather than above sea level or uneven topography (Dell et al., 2019; Wu et al., 2024).

This procedure is crucial in environments with highly variable or rugged terrain, such as the study area, as noted by Ma et al. (2023). Without correct normalization, segmentation algorithms may misinterpret terrain elevation changes as tree height, leading to errors in tree detection and crown delineation. Consequently, successful point cloud normalization and, ultimately, high-accuracy individual tree segmentation depend on precise and reliable ground point classification. Point cloud normalization was conducted individually for each of the 30 observation plots. To create normalized point clouds, this study used K-Nearest Neighbors (KNN) interpolation in conjunction with Inverse Distance Weighting (IDW). A three-tier classification scheme was used to evaluate the normalization output visually and qualitatively: "Yes" denotes successful normalization, "Maybe" denotes an ambiguous outcome, and "No" denotes unsuccessful normalization. According to the assessment results, eighteen plots were successfully normalized, four plots produced unclear results, and eight plots failed to normalize correctly.

As discussed in the previous section, the availability of reliable ground points is a key determinant of successful point cloud normalization (Dell et al., 2019). In many of the failed or uncertain plots, incomplete or missing ground returns caused by dense canopy cover were the primary limiting factor. Moreover, the absence of high-resolution Digital Terrain Model (DTM) data for the study area posed an additional constraint, preventing the use of alternative or more robust normalization techniques.

Given these limitations, we recommend further research focused on evaluating the performance of RGB-derived point cloud normalization methods, particularly in structurally complex tropical forest environments. Such studies are essential to understand better the constraints and potential improvements in normalization accuracy under varying forest conditions.

 

3)          The Limitation of the Algorithm and Parameter Sensitivity

The results of the individual tree segmentation indicate that the applied algorithm tends toward over-segmentation, leading to an overestimation of tree counts. This issue is particularly evident for trees with large, irregularly shaped crowns, which are often misinterpreted by the algorithm as multiple smaller trees (Figure 9a). As a result, a single tree may be incorrectly segmented into multiple crown regions.

Additionally, point cloud height is not used as a constraint during the crown delineation procedure in the current segmentation approach. As a result, low-lying shrubs or understory plants may be mistakenly assigned to the upper canopy, further increasing segmentation errors (Figure 9b). The main cause is the algorithm's lack of a minimum crown height criterion, which would have helped differentiate actual tree crowns from lower vegetation layers (Roussel et al., 2020). Future enhancements could strengthen segmentation results and reduce the misclassification of undergrowth as tree crowns by applying vertical filtering criteria, such as minimum crown height or canopy height thresholds.
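One way such a vertical filter could be implemented with lidR is sketched below: points below a minimum crown height are removed (ground points are retained) from the normalized cloud (nlas) of the earlier sketch before the segmentation is re-run. The 3 m threshold matches the understory height mentioned in Figure 9b but is otherwise an illustrative assumption, not a tested setting.

    # Sketch: excluding low vegetation before segmentation (illustrative threshold).
    min_crown_height <- 3     # metres above ground, in the normalized point cloud

    # Keep ground points (classification code 2) so the terrain reference is preserved,
    # and drop non-ground points below the minimum crown height.
    nlas_filtered <- filter_poi(nlas, Z >= min_crown_height | Classification == 2L)

    seg_filtered <- segment_trees(
      nlas_filtered,
      li2012(hmin = min_crown_height)   # remaining parameters left at package defaults
    )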


 

Figure 9. a) Horizontal structure, illustrating the over-segmentation of trees (red boxes); b) vertical structure, showing understory plants (<3 m) segmented as tree crowns

 


Tree height thresholds, relative tree spacing, and crown width estimates are among the parameters that strongly influence point cloud-based individual tree segmentation. The optimal settings can differ considerably depending on forest type, tree density, and point cloud quality (Marcello et al., 2024). Improper or non-contextual parameter settings may lead to systematic overestimation or underestimation of tree counts.

In complex and structurally heterogeneous forests, segmentation algorithms often struggle to distinguish trees from dense undergrowth or may misclassify non-tree objects as individual trees, thereby reducing segmentation accuracy, as mentioned by Ma et al. (2022) and Marcello et al. (2024). In this study, the algorithm frequently segmented trees with small crown sizes, contributing to over-segmentation in some plots (see Figure 10). It is important to note that the segmentation process in this study was based on a maximum crown width parameter, without applying a minimum crown width threshold (Roussel et al., 2020). As a result, smaller vegetation or understory elements that met the maximum width criterion may have been incorrectly classified as tree crowns. Future algorithm improvements should include a minimum crown width parameter to enhance segmentation accuracy, particularly in tropical natural forests with intricate vertical structures. This would improve crown delineation and assist in eliminating non-tree vegetation, particularly in regions with highly stratified canopy layers.
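Pending such an algorithmic change, a simple post-processing step can approximate a minimum crown width rule, as sketched below: continuing from the segmentation object (seg) in the earlier sketch, segments whose convex-hull crown area falls below the area of a circle with the minimum radius are discarded. The 1 m minimum radius is an assumed value for illustration only.

    # Sketch: discarding segments smaller than an assumed minimum crown size.
    library(sf)

    crowns <- crown_metrics(seg, func = .stdtreemetrics, geom = "convex")

    min_crown_radius <- 1                              # metres (illustrative threshold)
    min_crown_area   <- pi * min_crown_radius^2        # equivalent circular crown area

    crowns_kept <- crowns[crowns$convhull_area >= min_crown_area, ]
    nrow(crowns) - nrow(crowns_kept)                   # number of small segments removed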


 

Figure 10. The segmentation results for individual trees with a small crown (black boxes)


4)  Future Research

Individual tree segmentation using point clouds derived from RGB images is significantly hampered by high crown density and overlapping canopy layers. The variety of tree species found in tropical natural forests, each with distinct branching patterns and crown morphologies, adds to the complexity. Furthermore, successful segmentation in dense, multi-layered forest environments is made more challenging by the intrinsic limitations of RGB point clouds in capturing complex three-dimensional structures beneath the canopy.

Despite these challenges, the findings of this study demonstrate that RGB point cloud–based segmentation remains a feasible and cost-effective approach for individual tree detection in tropical forest conditions. To further enhance the performance and applicability of this method, future work will focus on the following directions: 1) Improving the quality of UAV-RGB point clouds by integrating them with terrestrial point cloud data to capture vertical canopy structures better. 2) Developing segmentation algorithms tailored to tropical forest environments by incorporating additional parameters, such as minimum crown width, to reduce misclassification of understory vegetation. 3) Estimating aboveground biomass at the individual tree level using UAV-RGB point clouds. 4) Integrating UAV point cloud data with satellite imagery to support large-scale biomass estimation across diverse tropical forest landscapes. These efforts aim to improve the precision and scalability of remote sensing techniques for forest inventory, carbon accounting, and sustainable forest management in complex tropical ecosystems.

 

6.   Conclusion

This study evaluated the accuracy of point cloud–based individual tree segmentation derived from RGB images in tropical natural forests of Indonesia. A total of 683 trees were segmented, comprising 384 True Positives (TP), 178 False Negatives (FN), and 121 False Positives (FP). The segmentation results revealed a tendency toward over-segmentation, particularly in areas with lower tree density or complex crown structures. The primary limiting factor was incomplete or missing ground points due to dense canopy cover.

With a recall (r) of 0.68, precision (p) of 0.76, and F-score (F) of 0.72, the overall accuracy of individual tree segmentation demonstrated moderately strong performance. Spatial accuracy was also evaluated for the correctly segmented trees: tree position had an RMSE of 1.95 meters, and crown radius had an RMSE of 1.59 meters. Given its affordability compared to LiDAR-based techniques, these results indicate that UAV-RGB point cloud–based segmentation is a practical and effective method for identifying individual trees in tropical natural forests. However, the problems of understory misclassification and over-segmentation still need to be addressed. Future research will concentrate on the following areas: 1) improving RGB point cloud quality, possibly by integrating terrestrial point cloud data; 2) creating better segmentation algorithms with parameters such as minimum crown width; and 3) expanding applications to include biomass estimation and large-scale forest monitoring by fusing satellite remote sensing data with UAV point clouds.

 

7.   Author Contributions

The first author (I) collected data, processed data, and wrote and edited the draft of the manuscript. UKA, as the second author, assisted in processing the data and in writing and editing the manuscript. SHO assisted in collecting the data and advised on the research methodology. MLI assisted in collecting and processing data. ASJ assisted in collecting and processing data. TBA advised on the research methodology, and the last author, CA, advised on the research methodology and data analysis.

 

8.   Conflict of Interest

The authors declare no conflict of interest.

 

9.   Acknowledgments

This research was carried out through the 2025 Bima Program. The authors would like to thank the Ministry of Higher Education, Science and Technology of the Republic of Indonesia for funding and supporting this research.

 

10. References

Ahmadi, S. A., Ghorbanian, A., Golparvar, F., Mohammadzadeh, A., & Jamali, S. (2022). Individual tree detection from unmanned aerial vehicle (UAV) derived point cloud data in a mixed broadleaf forest using hierarchical graph approach. European Journal of Remote Sensing, 55(1), 520–539. https://doi.org/10.1080/22797254.2022.2129095

Almeida, A., Gonçalves, F., Silva, G., Mendonça, A., Gonzaga, M., Silva, J., Souza, R., Leite, I., Neves, K., Boeno, M., & Sousa, B. (2021). Individual tree detection and qualitative inventory of a eucalyptus sp. Stand using uav photogrammetry data. Remote Sensing, 13(18). https://doi.org/10.3390/rs13183655

Aubry-Kientz, M., Vincent, G., Weinstein, B., Laybros, A., Jackson, T., Ball, J., & Coomes, D. (2021). Multisensor Data Fusion for Improved Segmentation of Individual Tree Crowns in Dense Tropical Forests. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 14, 3927–3936. https://doi.org/10.1109/jstars.2021.3069159

Ball, J. G. C., Hickman, S. H. M., Jackson, T. D., Koay, X. J., Hirst, J., Jay, W., Archer, M., Aubry-Kientz, M., Vincent, G., & Coomes, D. A. (2023). Accurate delineation of individual tree crowns in tropical forests from aerial RGB imagery using Mask R-CNN. Remote Sensing in Ecology and Conservation, 9(5), 641–655. https://doi.org/10.1002/rse2.332

Dainelli, R., Toscano, P., Di Gennaro, S. F., & Matese, A. (2021). Recent advances in unmanned aerial vehicles forest remote sensing—a systematic review. Part ii: Research applications. In Forests (Vol. 12, Issue 4). MDPI AG. https://doi.org/10.3390/f12040397

Dalponte, M., & Coomes, D. A. (2016). Tree-centric mapping of forest carbon density from airborne laser scanning and hyperspectral data. Methods in Ecology and Evolution, 7(10), 1236–1245. https://doi.org/10.1111/2041-210X.12575

Dell, M., Stone, C., Osborn, J., Glen, M., McCoull, C., Rimbawanto, A., Tjahyono, B., & Mohammed, C. (2019). Detection of necrotic foliage in a young Eucalyptus pellita plantation using unmanned aerial vehicle RGB photography–a demonstration of concept. Australian Forestry, 82(2), 79–88. https://doi.org/10.1080/00049158.2019.1621588

Deng, S., Jing, S., & Zhao, H. (2024). A Hybrid Method for Individual Tree Detection in Broadleaf Forests Based on UAV-LiDAR Data and Multistage 3D Structure Analysis. Forests, 15(6), 1043. https://doi.org/10.3390/f15061043

Diez, Y., Kentsch, S., Fukuda, M., Caceres, M. L. L., Moritake, K., & Cabezas, M. (2021). Deep learning in forestry using uav-acquired rgb data: A practical review. In Remote Sensing, 13(14). https://doi.org/10.3390/rs13142837

Dos Santos, A. A., Marcato Junior, J., Araújo, M. S., Di Martini, D. R., Tetila, E. C., Siqueira, H. L., Aoki, C., Eltner, A., Matsubara, E. T., Pistori, H., Feitosa, R. Q., Liesenberg, V., & Gonçalves, W. N. (2019). Assessment of CNN-based methods for individual tree detection on images captured by RGB cameras attached to UAVS. Sensors (Switzerland), 19(16). https://doi.org/10.3390/s19163595

Gan, Y., Wang, Q., & Iio, A. (2023). Tree Crown Detection and Delineation in a Temperate Deciduous Forest from UAV RGB Imagery Using Deep Learning Approaches: Effects of Spatial Resolution and Species Characteristics. Remote Sensing, 15(3). https://doi.org/10.3390/rs15030778

Goldbergs, G., Maier, S. W., Levick, S. R., & Edwards, A. (2018). Efficiency of individual tree detection approaches based on light-weight and low-cost UAS imagery in Australian Savannas. Remote Sensing, 10(2), 161. https://doi.org/10.3390/rs10020161

González-Jaramillo, V., Fries, A., & Bendix, J. (2019). AGB estimation in a tropical mountain forest (TMF) by means of RGB and multispectral images using an unmanned aerial vehicle (UAV). Remote Sensing, 11(12). https://doi.org/10.3390/rs11121413

Guimarães, N., Pádua, L., Marques, P., Silva, N., Peres, E., & Sousa, J. J. (2020). Forestry remote sensing from unmanned aerial vehicles: A review focusing on the data, processing and potentialities. In Remote Sensing, 12(6). https://doi.org/10.3390/rs12061046

Hanapi, S. N. H. S., Shukor, S. A. A., & Johari, J. (2019). A review on remote sensing-based method for tree detection and delineation. IOP Conference Series: Materials Science and Engineering, 705(1), 012024. https://doi.org/10.1088/1757-899X/705/1/012024

Hardenbol, A. A., Kuzmin, A., Korhonen, L., Korpelainen, P., Kumpula, T., Maltamo, M., & Kouki, J. (2021). Detection of aspen in conifer-dominated boreal forests with seasonal multispectral drone image point clouds. Silva Fennica, 55(4). https://doi.org/10.14214/sf.10515

Irlan, Saleh, M. B., Prasetyo, L. B., & Setiawan, Y. (2020). Evaluation of tree detection and segmentation algorithms in peat swamp forest based on LiDAR point clouds data. Jurnal Manajemen Hutan Tropika, 26(2), 123–132. https://doi.org/10.7226/JTFM.26.2.123

Irundu, D., Idris, A. I., Sudiatmoko, P., & Irlan. (2023). Biomassa Dan Karbon Tersimpan Diatas Tanah Pada Hutan Rakyat Agroforestri Di Kecamatan Bulo Kabupaten Polman. Jurnal Hutan Dan Masyarakat, 32–41. https://doi.org/10.24259/jhm.v15i1.26365

Li, W., Guo, Q., Jakubowski, M. K., & Kelly, M. (2012). A new method for segmenting individual trees from the lidar point cloud. Photogrammetric Engineering & Remote Sensing, 78(1), 75–84. https://doi.org/10.14358/PERS.78.1.75

Li, Y., Chai, G., Wang, Y., Lei, L., & Zhang, X. (2022). ACE R-CNN: An Attention Complementary and Edge Detection-Based Instance Segmentation Algorithm for Individual Tree Species Identification Using UAV RGB Images and LiDAR Data. Remote Sensing, 14(13). https://doi.org/10.3390/rs14133035

Liu, Y., You, H., Tang, X., You, Q., Huang, Y., & Chen, J. (2023). Study on Individual Tree Segmentation of Different Tree Species Using Different Segmentation Algorithms Based on 3D UAV Data. Forests, 14(7). https://doi.org/10.3390/f14071327

Ma, K., Jiang, F., Sun, H., Chen, S., & Xiong, Y. (2021). A Novel Vegetation Point Cloud Density Tree-Segmentation Model for Overlapping Crowns Using UAV LiDAR. Remote Sensing, 13(8), 1442. https://doi.org/10.3390/rs13081442

Ma, K., Chen, Z., Fu, L., Tian, W., Jiang, F., Yi, J., Du, Z., & Sun, H. (2022). Performance and sensitivity of individual tree segmentation methods for UAV-LiDAR in multiple forest types. Remote Sensing, 14(2), 298. https://doi.org/10.3390/rs14020298

Ma, K., Li, C., Jiang, F., Xu, L., Yi, J., Huang, H., & Sun, H. (2023). Improvement of treetop displacement detection by UAV-LiDAR point cloud normalization: a novel method and a case study. Drones, 7(4), 262. https://doi.org/10.3390/drones7040262

Marcello, J., Spínola, M., Albors, L., Marqués, F., Rodríguez-Esparragón, D., & Eugenio, F. (2024). Performance of individual tree segmentation algorithms in forest ecosystems using UAV LiDAR data. Drones, 8(12), 772. https://doi.org/10.3390/drones8120772

Mohan, M., Leite, R. V., Broadbent, E. N., Wan Mohd Jaafar, W. S., Srinivasan, S., Bajaj, S., Dalla Corte, A. P., Do Amaral, C. H., Gopan, G., Saad, S. N. M., Muhmad Kamarulzaman, A. M., Prata, G. A., Llewelyn, E., Johnson, D. J., Doaemo, W., Bohlman, S., Almeyda Zambrano, A. M., & Cardil, A. (2021). Individual tree detection using UAV-lidar and UAV-SfM data: A tutorial for beginners. Open Geosciences, 13(1), 1028–1039. https://doi.org/10.1515/geo-2020-0290

Nugroho, H. Y. S. H., Siarudin, M., Yuwati, T. W., Indrajaya, Y., Subarudi, S., Isnan, W., Putri, I. A. S. L. P., Setiawan, O., Nurfatriani, F., Baral, H., Gunawan, H., Sallata, M. K., Ansari, F., Muin, N., Prayudyaningsih, R., Ekawati, S., Allo, M. K., & Salminah, M. (2022). Mainstreaming Ecosystem Services from Indonesia's Remaining Forests. Sustainability, 14(19), 12124. https://doi.org/10.3390/su141912124

Pacheco-Prado, D., Bravo-López, E., Martínez, E., & Ruiz, L. Á. (2025). Urban Tree Species Identification Based on Crown RGB Point Clouds Using Random Forest and PointNet. Remote Sensing, 17(11), 1863. https://doi.org/10.3390/rs17111863

Pang, Y., Wang, W., Du, L., Zhang, Z., Liang, X., Li, Y., & Wang, Z. (2021). Nyström-based spectral clustering using airborne LiDAR point cloud data for individual tree segmentation. International Journal of Digital Earth, 14(10), 1452–1476. https://doi.org/10.1080/17538947.2021.1943018

Roussel, J. R., Auty, D., Coops, N. C., Tompalski, P., Goodbody, T. R. H., Meador, A. S., Bourdon, J. F., de Boissieu, F., & Achim, A. (2020). lidR: An R package for analysis of Airborne Laser Scanning (ALS) data. In Remote Sensing of Environment (Vol. 251). Elsevier Inc. https://doi.org/10.1016/j.rse.2020.112061

Saleh, M. B., Puspaningsih, N., Santi, N. A., Jaya, I. N. S., Tiryana, T., Irlan, & Rahaju, S. (2023). Perencanaan Hutan Untuk Pengelolaan Sumberdaya Hutan dan Lingkungan.

Schiefer, F., Kattenborn, T., Frick, A., Frey, J., Schall, P., Koch, B., & Schmidtlein, S. (2020). Mapping forest tree species in high resolution UAV-based RGB-imagery by means of convolutional neural networks. ISPRS Journal of Photogrammetry and Remote Sensing, 170, 205–215. https://doi.org/10.1016/j.isprsjprs.2020.10.015

Sun, Y., Hao, Z., Guo, Z., Liu, Z., & Huang, J. (2023). Detection and Mapping of Chestnut Using Deep Learning from High-Resolution UAV-Based RGB Imagery. Remote Sensing, 15(20). https://doi.org/10.3390/rs15204923

Wallace, L., Lucieer, A., & Watson, C. S. (2014). Evaluating tree detection and segmentation routines on very high resolution UAV LiDAR. IEEE Transactions on Geoscience and Remote Sensing, 52(12), 7619–7628. https://doi.org/10.1109/TGRS.2014.2315649

Wu, J., Man, Q., Yang, X., Dong, P., Ma, X., Liu, C., & Han, C. (2024). Fine Classification of Urban Tree Species Based on UAV-Based RGB Imagery and LiDAR Data. Forests, 15(2), 390. https://doi.org/10.3390/f15020390

Xia, K., Li, C., Yang, Y., Deng, S., & Feng, H. (2023). Study on Single-Tree Extraction Method for Complex RGB Point Cloud Scenes. Remote Sensing, 15(10). https://doi.org/10.3390/rs15102644

Yang, K., Zhang, H., Wang, F., & Lai, R. (2022). Extraction of Broad-Leaved Tree Crown Based on UAV Visible Images and OBIA-RF Model: A Case Study for Chinese Olive Trees. Remote Sensing, 14(10). https://doi.org/10.3390/rs14102469

You, H., Tang, X., You, Q., Liu, Y., Chen, J., & Wang, F. (2023). Study on the differences between the extraction results of the structural parameters of individual trees for different tree species based on UAV LiDAR and high-resolution RGB images. Drones, 7(5), 317. https://doi.org/10.3390/drones7050317

Yu, K., Hao, Z., Post, C. J., Mikhailova, E. A., Lin, L., Zhao, G., Tian, S., & Liu, J. (2022). Comparison of Classical Methods and Mask R-CNN for Automatic Tree Detection and Mapping Using UAV Imagery. Remote Sensing, 14(2). https://doi.org/10.3390/rs14020295

Zhang, C., Zhou, J., Wang, H., Tan, T., Cui, M., Huang, Z., Wang, P., & Zhang, L. (2022). Multi-Species Individual Tree Segmentation and Identification Based on Improved Mask R-CNN and UAV Imagery in Mixed Forests. Remote Sensing, 14(4). https://doi.org/10.3390/rs14040874

Zhang, C., Song, C., Zaforemska, A., Zhang, J., Gaulton, R., Dai, W., & Xiao, W. (2024). Individual tree segmentation from UAS Lidar data based on hierarchical filtering and clustering. International Journal of Digital Earth, 17(1). https://doi.org/10.1080/17538947.2024.2356124

Zhou, X., Wang, H., Chen, C., Nagy, G., Jancso, T., & Huang, H. (2023). Detection of Growth Change of Young Forest Based on UAV RGB Images at Single-Tree Level. Forests, 14(1). https://doi.org/10.3390/f14010141