3D Mesh Model Exterior Salient Part Segmentation Using Prominent Feature Points and Marching Plane

  • Hong, Yiyu (Department of Copyright Protection, Sangmyung University) ;
  • Kim, Jongweon (Department of Electronics Engineering, Sangmyung University)
  • Received : 2017.09.04
  • Accepted : 2018.09.08
  • Published : 2019.03.31


In computer graphics, 3D mesh segmentation is a challenging research field. This paper presents a 3D mesh segmentation algorithm that focuses on removing exterior salient parts from an original 3D mesh model based on prominent feature points and a marching plane. To begin with, the proposed approach uses multi-dimensional scaling to extract prominent feature points that reside on the tips of the exterior salient parts of a given mesh. Subsequently, a set of planes intersects the 3D mesh; each of these is a marching plane that starts marching from a prominent feature point. Through the marching process, local cross sections between the marching plane and the 3D mesh are extracted, and their areas are calculated to represent local volumes of the 3D mesh model. As the boundary region of an exterior salient part generally lies at the location where the local volume suddenly changes greatly, we can simply cut this location with the marching plane to separate the part from the mesh. We evaluated our algorithm on the Princeton Segmentation Benchmark, and the evaluation results show that it works well for several categories.


1. Introduction

Decomposing a 3D mesh model into visually meaningful components is an efficient way to understand and analyze the model. Various 3D mesh processing applications can benefit from effective 3D mesh segmentation approaches.

With the rapid improvement of 3D technologies such as 3D printing, 3D scanning and 3D modeling [1-2], 3D models are applicable in many fields and easily accessible on the internet. This wide range of applications has naturally led to copyright infringement [3-5] of 3D models. 3D model segmentation is also helpful as a preprocessing stage for 3D model identification. For example, someone could illegally distribute a Superman 3D model after exchanging its head with Iron Man's head. In this case, it is difficult to automatically identify Iron Man's head from the merged model. However, if the illegally merged model is first decomposed into meaningful parts and an identification method exists for each part, it becomes much easier to identify the merged part, which helps identify illegally distributed 3D models on the internet.

In the recent past, some papers [7-9] proposed mesh segmentation algorithms that segment a 3D mesh by extracting a core part of the mesh so that the remaining protrusion parts are naturally separated. In this paper, instead of extracting a core part, we propose a 3D mesh segmentation algorithm that removes exterior salient parts by using a set of planes that march from prominent feature points, which lie on the extreme protrusions of a mesh. As illustrated in Fig. 1, we call a plane that cuts the mesh along a specific path a marching plane. Around a boundary region, there is a large increase in the area of the cross section cut by the marching plane. We present the details of the proposed algorithm in section 3 and experimental results in section 4.


Fig. 1. The red point represents one of the prominent feature points and the pink contour represents the contour of cross section that is cut by the marching plane

2. Related Work

In the past few decades, many algorithms [6-12] have been proposed to automatically segment 3D mesh models. Referring to the surveys in [13, 14], there are two main types of 3D mesh segmentation algorithms: surface-based and part-based. Surface-based algorithms generally segment a mesh by using geometric properties of the surface, such as dihedral angles, geodesic distance and curvature. Part-based approaches usually decompose a 3D mesh model into visually meaningful parts based on works in cognitive science [15, 16]. The authors of [15] state that minimum negative curvature is the main property that leads human vision to define segmentation boundary regions. In addition, according to [16], degree of protrusion, boundary strength and relative volume are three further factors that influence human perception in determining the salience of parts. We mainly use the relative-volume factor, derived from these cognitive science works, to segment a mesh, so it is worth briefly reviewing part-based 3D mesh segmentation methods.

Lee et al. [11] segment a mesh by first identifying closed boundaries using minimum negative curvature. Their major contribution is closing the open boundary around a deep concavity using an efficient shortest-path algorithm. However, this cannot reflect the global shape of the object properly, because surface curvature is an inherently local property.

Shapira et al. [12] proposed the shape diameter function (SDF), a volumetric function. For each face of the mesh, the function sends a cone of rays from the face centroid toward the opposite side of the mesh and calculates their average length. Based on the calculated SDF values of the mesh, k Gaussian functions are fitted to produce a hierarchical mesh segmentation.

Katz et al. [7] convert the original mesh vertices into a pose-invariant representation by applying multidimensional scaling, after which the core part of the mesh is extracted with spherical mirroring. In subsequent research, others [8, 9] approximated the core part by characterizing the salient parts around the extreme points of the mesh. Valette et al. [10] proposed a protrusion function that separates protrusion parts from the main body of the mesh by measuring the closeness between a vertex and each protrusion part. Our segmentation concept is somewhat similar to that of [10]: our approach also focuses on partitioning exterior individual parts from the main body. Nevertheless, they did not fully utilize the cognition theory, so their segmentations often do not match human perception, as can be seen visually in the experimental results section (Fig. 9).

3. Our segmentation approach

The proposed segmentation approach partitions exterior individual parts from the main body through the following procedure:

1. For a given 3D mesh, coarse feature points that reside on the tips of the salient parts are first extracted and then filtered so that only one prominent feature point remains on each salient part.

2. From each prominent feature point, a proper geodesic path on the mesh surface is found for the marching plane to follow. We call this geodesic path the marching path.

3. The 3D mesh is intersected by the marching plane as it follows the marching path, and the areas of the resulting cross sections are calculated to represent the local volume.

4. From the areas computed as the marching plane moves along the path, we determine the partitioning boundary of the salient part as the location where the area changes greatly.

5. Steps 2 to 4 are repeated until all prominent feature points are processed. The fast marching method [17] is used to compute geodesic distances and paths in our approach.
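As a minimal illustration of this control flow, the pipeline can be sketched as a driver that repeats steps 2 to 4 for every prominent feature point; the per-step routines here are hypothetical callables standing in for the procedures detailed in sections 3.1 to 3.5:

```python
# Hypothetical driver for the five-step pipeline. The per-step functions
# are passed in as callables (our naming, for illustration only), so the
# loop structure of step 5 is explicit.
def segment_salient_parts(feature_points, find_marching_path,
                          march_and_measure, detect_boundary):
    """Return one detected boundary index per prominent feature point."""
    boundaries = []
    for fp in feature_points:            # step 5: repeat for all feature points
        path = find_marching_path(fp)    # step 2: select a marching path
        areas = march_and_measure(path)  # step 3: cross-section areas
        boundaries.append(detect_boundary(areas))  # step 4: large area change
    return boundaries
```

The callables would be implemented with fast marching, plane construction, and the area-based boundary test described in the following subsections.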

3.1 Feature point extraction and filtering

Inspired by [7], coarse feature points are extracted using multidimensional scaling (MDS) and a convex hull on a simplified version of the original 3D mesh model. As shown in Fig. 2 (b), the salient parts (such as the limbs and tail) of the 3D mesh model are intuitively "straightened" after the MDS transform is applied to the original. The coarse feature points are then extracted as the vertices that lie on the convex hull of the MDS-transformed mesh.
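A common way to realize such a pose-invariant transform from a matrix of geodesic distances is classical MDS; the sketch below is a generic implementation under that assumption and is not necessarily the exact variant used in [7]:

```python
import numpy as np

def classical_mds(dist, dim=3):
    """Classical MDS: embed points so that Euclidean distances in the
    embedding approximate the given (e.g. geodesic) distances.
    Generic sketch; not necessarily the exact transform of [7]."""
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (dist ** 2) @ J        # double-centered Gram matrix
    w, v = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]       # keep the top `dim` components
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

Because geodesic distances between points on different limbs are large, the embedding "straightens" the salient parts, so their tips land on the convex hull of the embedded vertices.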

Then, these coarse feature points are filtered until only one prominent feature point remains on each salient part (Fig. 2 (c)). This is done by checking whether the sum of geodesic distances from the coarse feature point to all other vertices of the mesh is a local maximum within a radius of its geodesic neighborhood [8]. We set the radius to \(\sqrt{0.005 \times \operatorname{area}(mesh)} \times 4\) [18]. Finally, the filtered feature points in the MDS-transformed space are mapped back to the original 3D mesh (Fig. 2 (d)).
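Assuming an all-pairs geodesic distance matrix is available (in practice the distances would come from the fast marching method rather than a dense matrix), the filtering criterion can be sketched as:

```python
import numpy as np

def filter_feature_points(geo, candidates, radius):
    """Keep a candidate only if its sum of geodesic distances to all
    vertices is the maximum over its geodesic neighborhood of the given
    radius. `geo` is an all-pairs geodesic distance matrix (assumption)."""
    protrusion = geo.sum(axis=1)                 # geodesic-sum per vertex
    kept = []
    for c in candidates:
        neighborhood = np.where(geo[c] <= radius)[0]
        if protrusion[c] >= protrusion[neighborhood].max():
            kept.append(c)                       # local maximum: tip of a part
    return kept
```

Candidates clustered on the same salient part share a neighborhood, so only the most protruding one survives the test.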


Fig. 2. (a) Original model, (b) Apply MDS and its convex-hull, (c) Filtered feature points, and (d) Mapped back to the original model

3.2 Marching path selection

To find a proper marching plane, we first use the fast marching method to find the shortest paths from a prominent feature point to the other prominent feature points, as demonstrated in Fig. 3 (a) and Fig. 3 (b). The two pictures show the five shortest paths from a prominent feature point (red dot) to the others (blue dots).

Next, the most stable and straightest geodesic path is selected as the marching path. We first define the straight line as the line connecting the two end points of a geodesic path. A geodesic path's straightness is then measured as the average perpendicular distance of each point on the path to this straight line. We denote the straightness of a geodesic path by \(D\); it is calculated with the following equation:


Fig. 3. (a), (b) Marching path candidates, (c) Selection of marching path demonstrated in 2D

\(D=\frac{1}{N_{p}} \sum_{i=1}^{N_{p}} \frac{|\overline{m_{i} l} \times \bar{s}|}{|\bar{s}|}\)       (1)

where \(m_i\) denotes the \(i\)-th point on the path, \(l\) denotes a point on the line, which can be either of the two end points of the path, \(\times\) denotes the cross product, \(\bar{s}\) is the direction vector of the line, and \(N_p\) denotes the number of points on the marching path. A smaller \(D\) indicates a straighter path.
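Equation (1) amounts to a few vector operations; a minimal sketch:

```python
import numpy as np

def straightness(path):
    """Average perpendicular distance of path points to the straight
    line through the path's two end points (Eq. (1)).
    Smaller values mean a straighter path."""
    path = np.asarray(path, dtype=float)
    s = path[-1] - path[0]              # direction vector of the line
    rel = path - path[0]                # vectors from end point l to each m_i
    perp = np.linalg.norm(np.cross(rel, s), axis=1) / np.linalg.norm(s)
    return perp.mean()
```

Among the candidate geodesic paths, the one with the smallest straightness value is chosen as the marching path.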

3.3 Marching plane construction

A marching plane is constructed at each point on the marching path. Since a unique plane is determined by a point and a normal vector, we define the marching plane as MP(\(p_k, n_k\)), where \(p_k\) denotes the \(k\)-th point along the marching direction on the marching path and \(n_k\) denotes the corresponding normal vector of the plane. Because the area of the cross section needs to represent local volume, the marching plane MP(\(p_k, n_k\)) should intersect the mesh perpendicularly in the local region where \(p_k\) resides. \(n_k\) is calculated by the following formula, which uses the neighbors of \(p_k\) on the marching path:

\(n_{k}=\left\{\begin{array}{ll} \overline{\sum_{l=1}^{k-1} p_{k-l} \sum_{j=1}^{t} p_{k+j}}, & 1<k \leq t \\ \overline{\sum_{l=1}^{t} p_{k-l} \sum_{j=1}^{t} p_{k+j}}, & t<k<N_{p}-t \\ \overline{\sum_{l=1}^{t} p_{k-l} \sum_{j=k+1}^{N_{p}} p_{j}}, & N_{p}-t \leq k<N_{p} \end{array}\right.\)       (9)

where \(N_p\) denotes the number of points on the marching path and \(t\) denotes the number of neighbor points used to construct the normal vector; the overline denotes the vector pointing from the first summed position to the second. The parameter \(t\) can be adjusted according to \(N_p\). In Fig. 4, the yellow planes represent constructed marching planes, and the blue dots represent points on the marching path.
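Reading the formula above as the vector between the centroids of up to \(t\) preceding and \(t\) following path points (our interpretation of the overline), the normal construction can be sketched as:

```python
import numpy as np

def marching_normal(path, k, t=3):
    """Normal of the marching plane at path point k (0-indexed here):
    the vector from the centroid of up to t preceding points to the
    centroid of up to t following points. Assumes 0 < k < len(path)-1
    so that both neighbor sets are non-empty."""
    path = np.asarray(path, dtype=float)
    prev = path[max(k - t, 0):k]        # up to t neighbors behind p_k
    nxt = path[k + 1:k + 1 + t]         # up to t neighbors ahead of p_k
    n = nxt.mean(axis=0) - prev.mean(axis=0)
    return n / np.linalg.norm(n)
```

Averaging over several neighbors smooths out local wiggles of the geodesic path, so the plane stays roughly perpendicular to the limb being marched along.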


Fig. 4. Constructed marching plane


Fig. 5. (a) Crossing points and (b) Cross section

3.4 Cross section extraction

We extract the cross section by connecting the crossing points created where the marching plane intersects the 3D mesh. The problem of finding crossing points can be reduced to identifying a set of intersecting edges on the 3D mesh. The intersecting edges can be found efficiently by using two properties: first, when an edge of a triangle face intersects the plane, another edge of the same face must also intersect it; second, in a manifold mesh, two adjacent faces share an edge. After finding one intersecting edge, we can therefore trace through adjacent faces to find the other intersecting edges until we arrive back at the first intersecting edge, forming a closed contour. As shown in Fig. 5 (a), the red dot is a point on the marching path, which is represented by a thick black line. Since this point lies on the marching plane, the edge it lies on also intersects the plane, so we start tracing from this edge. The blue dots are the crossing points produced by the marching plane (yellow plane).
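The crossing point on a single intersecting edge follows from the signed distances of the edge's end points to the plane; a minimal sketch (general position assumed, edges lying in the plane are skipped):

```python
import numpy as np

def edge_plane_crossing(a, b, p, n):
    """Intersection of edge (a, b) with the plane through point p with
    normal n; returns None when the edge does not cross the plane."""
    a, b, p, n = (np.asarray(v, dtype=float) for v in (a, b, p, n))
    da, db = np.dot(a - p, n), np.dot(b - p, n)  # signed distances to plane
    if da * db > 0 or da == db:     # same side, or edge coplanar with plane
        return None
    t = da / (da - db)              # parametric position of the crossing
    return a + t * (b - a)
```

During tracing, this test is applied to the edges of each adjacent face until the contour closes.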

After extracting the crossing points, we project their 3D coordinates into 2D and connect them one-by-one in traced order, as shown in Fig. 5 (b). Then, we use the Shoelace formula [19] to compute the area of the resulting polygon.
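The Shoelace formula itself is straightforward; for a polygon given as 2D vertices in traced order:

```python
def shoelace_area(poly):
    """Shoelace formula: area of a simple 2D polygon whose vertices are
    given in traced order (either orientation; absolute value returned)."""
    n = len(poly)
    s = 0.0
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]   # wrap around to close the contour
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```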

3.5 Detection of boundary

In Fig. 6 (a), the blue dots are points on the marching path. At these points, we construct planes to intersect the mesh and extract cross sections as described above. From the picture, we can intuitively see the boundary region: the location with a significant change in cross-section area.

To determine a boundary, we calculate a factor \(R_k\) for each cross section as follows:

\(R_{k}=\left\{\begin{array}{ll} \frac{a_{k+1}-a_{k}}{\left(\sum_{i=1}^{k} a_{i}\right) / k}, & 1<k \leq t \\ \frac{a_{k+1}-a_{k}}{\left(\sum_{i=0}^{t} a_{k-i}\right) /(t+1)}, & t<k<N_{p}-1 \end{array}\right.\)       (10)

where \(a_k\) denotes the area of the \(k\)-th cross section, and \(R_k\) relates the difference in area between the \((k+1)\)-th and \(k\)-th cross sections to the average area of the former \(t\) cross sections. We determine the boundary as the first location along the marching direction where the ratio \(R_k\) exceeds a threshold \(T_c\).
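A sketch of this detection rule, under our reading of \(R_k\) as the area jump divided by a running average of the preceding areas (which rises above \(T_c\) exactly at a sudden area increase):

```python
def detect_boundary(areas, t=3, tc=1.0):
    """Return the index of the first cross section where the jump in
    area, relative to the average of the current and up to t preceding
    areas, exceeds the threshold tc; None if no boundary is found.
    One reading of the R_k criterion, not the authors' exact code."""
    for k in range(1, len(areas) - 1):
        window = areas[max(k - t, 0):k + 1]   # current + up to t previous
        avg = sum(window) / len(window)
        if (areas[k + 1] - areas[k]) / avg > tc:
            return k                           # boundary at the k-th section
    return None
```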

Fig. 6 (b) shows the area of the cross sections along the marching direction, Fig. 6 (c) shows the difference of the areas, and Fig. 6 (d) shows the calculated \(R_k\). From the graphs, we can see that the 8th contour is the boundary of the salient part.


Fig. 6. (a) Detection of the boundary, (b) Area of the cross section along the marching direction, and (c) Difference of the area (d) \(R_k \)

3.6 Segmentation of the ring-like exterior part

We propose a method to partition ring-like exterior salient parts, such as those of cup models, for which core-extraction algorithms [7-9] cannot produce good segmentations because such models lack a salient core.

To extract a ring-like part, we extend step 4 of the procedure described at the beginning of section 3 with the following sub-steps.

Sub-step 1: After finding the boundary (Fig. 7 (a)), we delete the boundary faces from the mesh (Fig. 7 (b)).

Sub-step 2: Then, we repeat the fast marching algorithm to find whether there is another path to the prominent feature point that was selected in step 2.

Sub-step 3: If there is another path, we jump to step 3 (Fig. 7 (c)). Otherwise, we jump to step 5.

Fig. 7 (c) shows that there is another path after deleting the boundary faces. Along this path, we perform steps 3 to 4 to find the second boundary and delete the corresponding faces. Finally, after rerunning the fast marching algorithm, there is no other path, as shown in Fig. 7 (d); as a result, we jump to step 5.


Fig. 7. Ring-like exterior parts partitioning process

4. Experimental Results

In this section, we first analyze our segmentation results obtained on the Princeton Segmentation Benchmark [20]. The benchmark includes 380 3D meshes in 19 categories, with 20 meshes per category. It also provides evaluation metrics and corresponding tools for the comparison of segmentation algorithms.

Our evaluation results on the benchmark, shown below, are obtained using fixed parameters. We obtain the number of points (\(N_p\)) on a marching path by sampling points at an interval of \(\sqrt{0.005 \times \operatorname{area}(mesh)} \times 0.25\) and set t = 3. The boundary detection threshold was \(T_c = 1\). Through the experiments, we found that if the first cross section has a large area, the prominent feature point usually lies at a place that lacks a salient part, such as the blue prominent feature point shown in Fig. 7 (a). As a result, we do not process prominent feature points for which the ratio of the first cross-section area to the area of the mesh exceeds 0.03.
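For reference, the fixed parameter values reported here (and the filtering radius from section 3.1) can all be derived from the mesh surface area; the helper name below is ours, for illustration only:

```python
import math

def marching_parameters(mesh_area):
    """Fixed experimental parameters derived from the total surface
    area of the mesh, with the values reported in section 4."""
    base = math.sqrt(0.005 * mesh_area)
    return {
        "sampling_interval": base * 0.25,  # spacing of marching-path points
        "filter_radius": base * 4,         # feature-point radius (section 3.1)
        "t": 3,                            # neighbors for the plane normal
        "T_c": 1.0,                        # boundary detection threshold
        "max_first_area_ratio": 0.03,      # skip feature points whose first
                                           # cross-section area exceeds this
                                           # fraction of the mesh area
    }
```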

The segmentation algorithms measured in the benchmark, except [7, 12], need the number of segments as input, while ours does not. The benchmark provides four ways to set the number of segments: BySegmentation, ByModel, ByCategory and ByDataset. For more information about these four ways, please refer to [20].

In the benchmark, there are 19 object categories (e.g., human, cup, glasses, etc.). In Table 1, we compare the performance of our method with seven other algorithms for each object category. The entries of this table contain the rank of each algorithm according to the Rand Index evaluation metric averaged over the models of each object category; 1 is the best and 8 is the worst. Additionally, we used the ByCategory setting for algorithms that require the number of segments as input. From the table, we can see that our algorithm works well on the categories shown in red, such as plier, human and cup.

Table 1. Rank of segmentation algorithms for each object category according to the Rand Index evaluation metric (1 is the best and 8 is the worst)


In Table 2, we also show the Rand Index rank of our method under each of the four ways of selecting the input number of segments. Our algorithm performs consistently well on the categories shown in red, such as cup, glasses, airplane, plier and bird. We achieved rank 1 on the plier category.

Fig. 8 shows the Rand Index error averaged over all 380 mesh models for the four ways; our method is marked as MarchingPlane. The bar color is meaningless for ShapeDiam, CoreExtra and our method because these algorithms do not require the number of segments as input. Our method is competitive with the others under the ByDataset and ByCategory settings.

In Fig. 9, our segmentation results are compared visually with [8-10], where the pictures are taken from [8]. Generally, the segmentation quality is similar between our method and those reported in [8, 9]. Fig. 10 and Fig. 11 show that our segmentation is invariant to pose and robust to noise, respectively, which are desirable properties for 3D mesh segmentation. Fig. 12 shows the segmentation of the cup models mentioned in section 3.6.

Table 2. Rank of the Rand Index error of our algorithm for four different methods of setting the target number of segments for algorithms that take it as input



Fig. 8. Rand Index error for 4 different methods of setting the target number of segments


Fig. 9. Segmentation result


Fig. 10. Invariant to pose


Fig. 11. Robust to noise


Fig. 12. Segmentation of cup models

5. Conclusion

In this paper, a 3D model segmentation algorithm that uses a marching plane to remove exterior salient parts from a 3D model is proposed. The proposed algorithm consists of the following steps: (1) extracting prominent feature points, (2) identifying an appropriate marching path, (3) extracting cross sections by intersecting the marching plane with the mesh along the marching path, (4) detecting boundaries by monitoring where the cross-section area varies greatly, and (5) handling ring-like component extraction.

From the Princeton Segmentation Benchmark evaluation results, our algorithm performs quite well for some models, such as pliers and glasses. Additionally, extended experiments show that our method is invariant to pose, robust to noise and competitive with previously published approaches. In future work, we will focus on constructing better marching paths and on extending the algorithm to support hierarchical segmentation.


This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korean government (MSIP) (No. 2015-0-00233, Managerial Technology Development and Digital Contents Security of 3D Printing based on Micro Licensing Technology)





  1. U. Zabeeh, M. Imran, and S. K. Muhammad, "Analysis of 3D Face Modeling," International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 8, pp. 7-14, 2015.
  2. D. Hongwang, X. Wei, W. Haitao, Y. Bin and W. Zuwen, "Configuration Modeling and Experimental Verification with 3D Laser Scanning Technology for a Constrained Elastica Cable," International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 7, pp. 363-370, 2014.
  3. A. Harris, "The Effects of In-home 3D Printing on Product Liability Law," Journal of Science Policy and Governance, vol. 6, issue. 1, 2015.
  4. F.R. Ishengoma, A.B. Mtaho, "3D Printing Developing Countries Perspectives," International Journal of Computer Applications, vol. 104, pp. 30-34, 2014.
  5. D. Gupta, M. Tarlock, "3D Printing, Copyright Challenges, and the DMCA," New Matter, vol. 38, 2013.
  6. Y. Hong and J. Kim, "3D Mesh Model Segmentation Using Marching Plane," in Proc. of the 4th International Conference on Digital Contents and Applications, pp. 759-764, 2015.
  7. S. Katz, G. Leifman, and A. Tal, "Mesh segmentation using feature point and core extraction", The Visual Computer, vol. 21, pp. 649-658, 2005.
  8. A. Agathos, I. Pratikakis, S. Perantonis, and N. Sapidis, "Protrusion-oriented 3d mesh segmentation", The Visual Computer, vol. 26, pp. 63-81, 2010.
  9. H.-Y. S. Lin, H.-Y. Liao, and J.-C. Lin, "Visual salience-guided mesh decomposition", Multimedia, IEEE Transactions on, vol. 9, pp. 46-57, 2007.
  10. S. Valette, I. Kompatsiaris and M. G. Strintzis, "A polygonal mesh partitioning algorithm based on protrusion conquest for perceptual 3D shape description", Workshop towards Semantic Virtual Environments SVE 2005, pp. 68-76, 2005.
  11. Y. Lee, S. Lee, A. Shamir, D. Cohen-or, and H. P. Seidel, "Mesh scissoring with minima rule and part salience," Computer Aided Geometric Design, vol. 22, pp. 444-465, 2005.
  12. L. Shapira, A. Shamir, and D. Cohen-Or, "Consistent mesh partitioning and skeletonisation using the shape diameter function," The Visual Computer, vol. 24, pp. 249-259, 2008.
  13. A. Shamir, "A survey on mesh segmentation techniques," Computer Graphics Forum, vol. 27, pp. 1539-1556, 2008.
  14. M. Attene, S. Katz, M. Mortara, G. Patane, M. Spagnuolo, A. Tal, "Mesh Segmentation - a Comparative Study," SMI '06 Proceedings of the IEEE International Conference on Shape Modeling and Applications, pp. 7, 2006.
  15. D. D. Hoffman and W. Richards, "Parts of recognition," Cognition, vol. 18, pp. 65-96, 1984.
  16. D. D. Hoffman and M. Singh, "Salience of visual parts," Cognition, vol. 63, pp. 29-78, 1997.
  17. J. Sethian, R. Kimmel, "Computing geodesic paths on manifolds," Proc. of Natl. Acad. Sci. vol. 95, pp. 8431-8435, 1998.
  18. M. Hilaga, Y. Shinagawa, T. Kohmura, and T. L. Kunii, "Topology matching for fully automatic similarity estimation of 3d shapes", in Proceedings of the 28th annual conference on Computer graphics and interactive techniques, pp. 203-212, 2001.
  19. Shoelace formula, in Wikipedia.
  20. X. Chen, A. Golovinskiy, and T. Funkhouser, "A benchmark for 3D mesh segmentation," ACM Transactions on Graphics, vol. 28, pp. 1-12, 2009.