• Title/Summary/Keyword: 3-D Segmentation

Deep Learning-based Depth Map Estimation: A Review

  • Abdullah, Jan;Safran, Khan;Suyoung, Seo
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.1
    • /
    • pp.1-21
    • /
    • 2023
  • In this technically advanced era, we are surrounded by smartphones, computers, and cameras, which help us store visual information in 2D image planes. However, such images lack 3D spatial information about the scene, which is very useful to scientists, surveyors, engineers, and even robots. To tackle this problem, depth maps are generated for the respective image planes. A depth map, or depth image, is a single-image metric that carries information along three axes, i.e., xyz coordinates, where z is the object's distance from the camera axis. Depth estimation is a fundamental task for many applications, including augmented reality, object tracking, segmentation, scene reconstruction, distance measurement, autonomous navigation, and autonomous driving, and much work has been done on computing depth maps. We review the status of depth map estimation across the techniques, study areas, and models applied over the last 20 years, surveying depth-mapping approaches based both on traditional methods and on newly developed deep-learning methods. The primary purpose of this study is to present a detailed review of state-of-the-art traditional depth-mapping techniques and recent deep-learning methodologies. The study covers the critical points of each method from different perspectives, such as datasets, procedures, types of algorithms, loss functions, and well-known evaluation metrics. It also discusses the subdomains of each method, namely supervised, unsupervised, and semi-supervised approaches, and elaborates on their respective challenges. We conclude with ideas for future research in depth map estimation.
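
The geometric relation underlying most of the traditional techniques surveyed in this review is stereo triangulation: depth is inversely proportional to disparity. A minimal sketch (function name and camera parameters are illustrative, not taken from the paper):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic rectified-stereo relation: z = f * B / d.

    disparity_px : horizontal pixel shift of a point between left/right images
    focal_px     : focal length expressed in pixels
    baseline_m   : distance between the two camera centres in metres
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A point with a 40 px disparity seen by a rig with f = 800 px, B = 0.1 m
# lies 2.0 m from the cameras.
print(depth_from_disparity(40, 800, 0.1))  # → 2.0
```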

A Study on the automatic vehicle monitoring system based on computer vision technology (컴퓨터 비전 기술을 기반으로 한 자동 차량 감시 시스템 연구)

  • Cheong, Ha-Young;Choi, Chong-Hwan;Choi, Young-Gyu;Kim, Hyon-Yul;Kim, Tae-Woo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.10 no.2
    • /
    • pp.133-140
    • /
    • 2017
  • In this paper, we propose an automatic vehicle monitoring system based on computer vision technology. The real-time display system performs automatic monitoring and control while meeting the essential requirements of ITS. A further strength is its handling of the main obstacle to robust vehicle tracking: the shadows cast by moving objects. To obtain all kinds of information from the tracked vehicle image, the vehicle must be clearly displayed on the surveillance screen. Over time, it becomes necessary to track the vehicle precisely, so a three-dimensional model-based approach is also required. In general, each type of vehicle is represented by the skeleton of the object or a wire-frame model, and the trajectory of the vehicle can then be measured with high precision in a 3D-based manner even if the system is not running in real time. In this paper, we apply a segmentation method that separates the vehicle, the background, and the shadow. The validity of the low-level vehicle tracker was also verified through speed tracking of a speeding car. In conclusion, we aim to refine the tracking method in the tracking control system and to develop a highway monitoring and control system.
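
The vehicle/background/shadow separation described above typically starts from some form of background subtraction. A toy sketch of that first step (threshold and data are illustrative; the paper's actual method additionally models shadows):

```python
def segment_foreground(frame, background, threshold=30):
    """Label pixels as foreground where they differ from a background model.

    frame, background: 2-D lists of grayscale intensities (0-255).
    Returns a binary mask (1 = moving object, 0 = background).
    """
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

bg    = [[10, 10, 10], [10, 10, 10]]
frame = [[10, 200, 10], [10, 190, 12]]
print(segment_foreground(frame, bg))  # → [[0, 1, 0], [0, 1, 0]]
```

A shadow handler would then re-examine the `1` pixels, since cast shadows also differ from the background but should not be tracked as vehicle body.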

Segmentation of Target Objects Based on Feature Clustering in Stereoscopic Images (입체영상에서 특징의 군집화를 통한 대상객체 분할)

  • Jang, Seok-Woo;Choi, Hyun-Jun;Huh, Moon-Haeng
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.13 no.10
    • /
    • pp.4807-4813
    • /
    • 2012
  • Since existing methods of segmenting target objects from various images mainly use 2-dimensional features, they suffer from several constraints due to the lack of 3-dimensional information. In this paper, we therefore propose a new method of accurately segmenting target objects from stereoscopic images using 2D and 3D feature clustering. The suggested method first estimates depth features from the left and right images with a stereo matching technique; these represent the distance between the camera and an object. It then eliminates background areas and detects foreground areas, namely target objects, by effectively clustering depth and color features. To verify the performance of the proposed method, we applied it to various stereoscopic images and found that it detects target objects more accurately than existing 2-dimensional methods.
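
The foreground/background split described above can be illustrated with a tiny 1-D k-means (k=2) on depth values, separating near objects from the far background. This is a stand-in sketch, not the paper's exact clustering, which also uses color features:

```python
def two_means_1d(values, iters=10):
    """Minimal 1-D k-means with k=2: split depths into a near cluster
    (candidate foreground) and a far cluster (background)."""
    lo, hi = min(values), max(values)          # initial centroids
    for _ in range(iters):
        near = [v for v in values if abs(v - lo) <= abs(v - hi)]
        far  = [v for v in values if abs(v - lo) > abs(v - hi)]
        lo = sum(near) / len(near)             # update centroids
        hi = sum(far) / len(far)
    return lo, hi

depths = [1.1, 1.2, 0.9, 5.0, 5.2, 4.8, 1.0]  # metres, hypothetical pixels
near_c, far_c = two_means_1d(depths)
print(round(near_c, 2), round(far_c, 2))  # → 1.05 5.0
```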

Automatic Building Extraction Using LIDAR and Aerial Image (LIDAR 데이터와 수치항공사진을 이용한 건물 자동추출)

  • Jeong, Jae-Wook;Jang, Hwi-Jeong;Kim, Yu-Seok;Cho, Woo-Sug
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.13 no.3 s.33
    • /
    • pp.59-67
    • /
    • 2005
  • Building information is a primary source in many applications such as mapping, telecommunication, car navigation, and virtual city modeling. While aerial CCD images captured by a passive sensor (digital camera) provide horizontal positioning with high accuracy, they are far more difficult to process automatically due to inherent properties such as perspective projection and occlusion. On the other hand, a LIDAR system offers 3D information about each surface rapidly and accurately in the form of irregularly distributed point clouds; contrary to optical images, however, it is much more difficult to obtain semantic information, such as building boundaries and object segmentation, from them. Photogrammetry and LIDAR thus each have major advantages and drawbacks for reconstructing earth surfaces. The purpose of this investigation is to obtain spatial information on 3D buildings automatically by fusing LIDAR data with aerial CCD images. The experimental results show that most complex buildings are efficiently extracted by the proposed method and indicate that fusing LIDAR data with aerial CCD imagery improves the feasibility of automatic building detection and extraction.

Development of Computer Aided 3D Model From Computed Tomography Images and its Finite Element Analysis for Lumbar Interbody Fusion with Instrumentation

  • Deoghare, Ashish;Padole, Pramod
    • International Journal of CAD/CAM
    • /
    • v.9 no.1
    • /
    • pp.121-128
    • /
    • 2010
  • The purpose of this study is to clarify the mechanical behavior of the human lumbar vertebrae (L3/L4), with and without fusion bone, under physiological axial compression. The authors developed program code to build a patient-specific three-dimensional geometric model from computed tomography (CT) images. The three-dimensional model provides the information physicians and surgeons need to interact visually with the model and, if needed, plan the surgery in advance. The processed model data are versatile and compatible with commercial computer-aided design (CAD) and finite element analysis (FEA) software and with rapid prototyping technology. A physical model was manufactured using rapid prototyping to confirm the executable competence of the data produced by the developed code. The patient-specific model of the L3/L4 vertebrae was then analyzed under compressive loading using the FEA approach. By varying the spacer position and the fusion bone, with and without pedicle instrumentation, simulations were carried out to find the increase in axial stiffness and thereby assess the success of the fusion technique. The findings help in positioning the fusion bone graft and in predicting the mechanical stress and deformation at the critical section.

3D micro-CT analysis of void formations and push-out bonding strength of resin cements used for fiber post cementation

  • Uzun, Ismail Hakki;Malkoc, Meral Arslan;Keles, Ali;Ogreten, Ayse Tuba
    • The Journal of Advanced Prosthodontics
    • /
    • v.8 no.2
    • /
    • pp.101-109
    • /
    • 2016
  • PURPOSE. To investigate the void parameters within resin cements used for fiber post cementation by micro-CT (μCT) and regional push-out bonding strength. MATERIALS AND METHODS. Twenty-one single, round-shaped roots were enlarged with a low-speed drill following endodontic treatment. The roots were divided into three groups (n=7), and fiber posts were cemented with Maxcem Elite, Multilink N, and Super-Bond C&B resin cements. Specimens were scanned using a μCT scanner at a resolution of 13.7 μm. The number, area, and volume of voids between dentin and post were evaluated. A method of analysis based on post segmentation was used, with the coronal, middle, and apical thirds considered separately. After the μCT analysis, roots were embedded in epoxy resin and sectioned into 2 mm thick slices (63 sections in total). Push-out testing was performed with a universal testing device at 0.5 mm/min cross-head speed. Data were analyzed with Kruskal-Wallis and Mann-Whitney U tests (α=.05). RESULTS. Overall, significant differences between the resin cements and among post levels were observed in void number, area, and volume (P<.05). Super-Bond C&B showed the most void formation (44.86±22.71). Multilink N showed the least void surface (3.51±2.24 mm²) and volume (0.01±0.01 mm³). Regional push-out bond strength did not differ between the cements (P>.05). CONCLUSION. μCT proved to be a powerful non-destructive 3D analysis tool for visualizing void parameters. Multilink N had the lowest void parameters. When the efficiency of all cements was evaluated, no direct relationship between post region and push-out bonding strength was observed.

Matching for the Elbow Cylinder Shape in the Point Cloud Using the PCA (주성분 분석을 통한 포인트 클라우드 굽은 실린더 형태 매칭)

  • Jin, YoungHoon
    • Journal of KIISE
    • /
    • v.44 no.4
    • /
    • pp.392-398
    • /
    • 2017
  • The point-cloud representation of an object is obtained by scanning a space with a laser scanner, which extracts a set of points; the points are then integrated into a common coordinate system through registration. The registered, integrated point cloud is classified into meaningful regions, shapes, and noise through mathematical analysis. In this paper, the aim is to match a curved area, such as a bent cylinder shape, in 3D point-cloud data. The matching procedure obtains center and radius data by extracting cylinder-shape candidates from spheres fitted with RANdom SAmple Consensus (RANSAC) in the point cloud, and then completes the match of the curved region with a Catmull-Rom spline through the extracted center points, using Principal Component Analysis (PCA). The proposed method is expected not only to produce fast estimates of straight and bent cylinders after a center-axis estimation, without constraints or segmentation, but also to increase the efficiency of reverse-engineering work.
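
The role PCA plays here, estimating a cylinder's center axis from scattered surface points, can be sketched as taking the first principal component of the point set (a NumPy illustration; the names and data are not from the paper):

```python
import numpy as np

def principal_axis(points):
    """Estimate a cylinder's centre-line direction as the first principal
    component of its surface points.  points: (N, 3) array of xyz samples."""
    centred = points - points.mean(axis=0)
    cov = centred.T @ centred / len(points)
    # Eigenvector of the covariance matrix with the largest eigenvalue
    eigvals, eigvecs = np.linalg.eigh(cov)
    return eigvecs[:, np.argmax(eigvals)]

# Synthetic unit-radius cylinder along z: the recovered axis
# should be close to ±(0, 0, 1).
t = np.linspace(0, 10, 200)
theta = np.linspace(0, 12 * np.pi, 200)
pts = np.stack([np.cos(theta), np.sin(theta), t], axis=1)
axis = principal_axis(pts)
print(axis)
```

For a bent cylinder, the same estimate would be applied per local segment, with the resulting centers joined by the spline.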

Three-dimensional analysis of soft and hard tissue changes after mandibular setback surgery in skeletal Class III patients (골격성 3급 부정교합 환자의 하악골 후퇴술 시행후 안모변화에 대한 3차원적 연구)

  • Park, Jae-Woo;Kim, Nam-Kug;Kim, Myung-Jin;Chang, Young-Il
    • The korean journal of orthodontics
    • /
    • v.35 no.4 s.111
    • /
    • pp.320-329
    • /
    • 2005
  • The three-dimensional (3D) changes of bone and soft tissue, and the ratio of soft tissue to bony movement, were investigated in 8 skeletal Class III patients treated by mandibular setback surgery. CT scans of each patient were taken pre- and post-operatively. Each scan was segmented by a threshold value and registered to a universal three-dimensional coordinate system consisting of the FH plane, a mid-sagittal plane, and a coronal plane defined by PNS. A grid parallel to the coronal plane was proposed for comparing the changes: the bone or soft tissue was intersected by the line projected from each point on the grid, and the coordinate values of the intersection points were measured and compared between the pre- and post-operative models. The facial surface changes after setback surgery occurred not only in the mandible but also in the mouth corner region. The soft tissue changes of the mandibular area were measured relative to the bone changes as proportional ratios. The ratios at the mid-sagittal plane were 77~102% (p<0.05); the ratios at all other sagittal planes showed similar patterns, but with decreased values. The changes in the maxillary region were calculated as ratios relative to the movement of a point representing mandibular movement: with B point as the representative point the ratios were 14~29%, and with Pog they were 17~37% (p<0.05). For the 83rd point of the grid, the ratios were 11~22% (p<0.05).

A Study on the YCbCr Color Model and the Rough Set for a Robust Face Detection Algorithm (강건한 얼굴 검출 알고리즘을 위한 YCbCr 컬러 모델과 러프 집합 연구)

  • Byun, Oh-Sung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.7
    • /
    • pp.117-125
    • /
    • 2011
  • In this paper, the face color distribution is segmented using the YCbCr color model, one of the feature-based methods, and a quantization preprocessing stage makes the method insensitive to lighting, one of the main weaknesses of feature-based approaches. In addition, the accuracy of image synthesis is raised by using rough set theory to select the object whose shape pattern best matches the image. In simulations, the detection rate of the proposed face detection algorithm was confirmed to be about 2~3% better than that of conventional algorithms, regardless of the size and orientation of the various faces.
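
The YCbCr skin-color segmentation step can be sketched as a BT.601 conversion followed by a box test in the CbCr plane. The 77-127/133-173 ranges below are commonly cited in the skin-detection literature, not necessarily the thresholds used in this paper:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range ITU-R BT.601 RGB -> YCbCr conversion."""
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b):
    """Coarse skin-tone test: luma is ignored (lighting insensitivity),
    only the chroma components Cb and Cr are thresholded."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return 77 <= cb <= 127 and 133 <= cr <= 173

print(is_skin(220, 170, 140))  # light skin tone → True
print(is_skin(30, 60, 200))    # saturated blue  → False
```

Dropping the Y channel from the decision is what gives chroma-based detectors their tolerance to illumination changes.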

Collision Avoidance Sensor System for Mobile Crane (전지형 크레인의 인양물 충돌방지를 위한 환경탐지 센서 시스템 개발)

  • Kim, Ji-Chul;Kim, Young Jea;Kim, Mingeuk;Lee, Hanmin
    • Journal of Drive and Control
    • /
    • v.19 no.4
    • /
    • pp.62-69
    • /
    • 2022
  • Construction machinery is exposed to accidents such as collisions, pinching, and overturning during operation. In particular, a mobile crane is operated with only the driver's vision and the limited information of an assistant worker, so the risk of an accident is high. Recently, collision avoidance devices using sensors such as cameras and LiDAR have been applied, but they are still insufficient to prevent collisions in omnidirectional 3D space. In this study, a rotating LiDAR device was developed and applied to a 250-ton crane to obtain a full-space point cloud, and an algorithm was developed to provide distance information and safety status to the driver. A deep-learning segmentation algorithm was also used to classify human workers. The developed device could recognize obstacles within 100 m over a 360-degree range. In the experiment, the safety distance was calculated with an error of 10.3 cm at 30 m, giving the operator an accurate distance and collision alarm.
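
The distance-and-alarm logic described above can be sketched as a nearest-neighbor query over the point cloud plus a thresholded safety state. The thresholds and positions are illustrative, not those used on the 250-ton crane:

```python
import math

def nearest_obstacle(points, load_pos):
    """Distance from the crane load to the closest point in the cloud."""
    return min(math.dist(load_pos, p) for p in points)

def safety_state(distance_m, warn_m=5.0, stop_m=2.0):
    """Map a clearance distance to a driver alarm level."""
    if distance_m < stop_m:
        return "STOP"
    if distance_m < warn_m:
        return "WARN"
    return "SAFE"

# Hypothetical obstacle points (metres) around a load at the origin
cloud = [(10.0, 2.0, 1.0), (3.0, 1.0, 0.0), (7.0, 7.0, 2.0)]
d = nearest_obstacle(cloud, (0.0, 0.0, 0.0))
print(round(d, 2), safety_state(d))  # → 3.16 WARN
```

A production system would additionally exempt points segmented as the crane's own structure and escalate the alarm for points classified as human workers.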