• Title/Abstract/Keyword: Detect3D

824 search results (processing time: 0.034 s)

VRML을 이용한 3차원 Brain-endoscopy와 2차원 단면 영상 (3D Brain-Endoscopy Using VRML and 2D CT images)

  • 김동욱;안진영;이동혁;김남국;김종효;민병구
    • 대한의용생체공학회:학술대회논문집 / 대한의용생체공학회 1998년도 추계학술대회 / pp.285-286 / 1998
  • Virtual brain-endoscopy is an effective method for detecting lesions in the brain. The brain is among the most important parts of the human body and is not an easy part to operate on, so reconstructing it in 3D can be very helpful to doctors. In this paper, we suggest a method of matching the 3D object with 2D CT slices to increase reliability. The 3D brain-endoscopy is reconstructed from 35 slices of 2D CT images. A plate in the 3D brain-endoscopy can be dragged upward or downward to bring up the corresponding 2D CT image; the relevant CT image lets the user recognize the exact part he or she is investigating. A VRML Script node is used to switch images, and a PlaneSensor node transmits the y coordinate associated with each CT image. The result was tested on a PC with a 400 MHz CPU, 512 MB of RAM, and a FireGL 3000 3D accelerator. The VRML file size is 3.83 MB. There was no delay in navigating the 3D world and no glitch in changing the CT images. This brain-endoscopy can also be put to practical use in medical education over the Internet.
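The plate-drag interaction described above amounts to mapping the PlaneSensor's reported y coordinate to one of the 35 CT slices. A minimal sketch in Python (the clamping and the linear mapping are assumptions; the paper implements this in a VRML Script node):

```python
def slice_index(y, y_min=0.0, y_max=1.0, n_slices=35):
    """Clamp y to [y_min, y_max] and map it linearly to a slice index 0..n_slices-1."""
    y = max(y_min, min(y_max, y))
    t = (y - y_min) / (y_max - y_min)          # normalized drag position in [0, 1]
    return min(n_slices - 1, int(t * n_slices))

print(slice_index(0.5))   # middle of the drag range -> slice 17
```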


Feature Detection and Simplification of 3D Face Data with Facial Expressions

  • Kim, Yong-Guk;Kim, Hyeon-Joong;Choi, In-Ho;Kim, Jin-Seo;Choi, Soo-Mi
    • ETRI Journal / Vol. 34, No. 5 / pp.791-794 / 2012
  • We propose an efficient framework to realistically render 3D faces with a reduced set of points. First, a robust active appearance model is presented to detect facial features in the projected faces under different illumination conditions. Then, an adaptive simplification of 3D faces is proposed to reduce the number of points, yet preserve the detected facial features. Finally, the point model is rendered directly, without such additional processing as parameterization of skin texture. This fully automatic framework is very effective in rendering massive facial data on mobile devices.
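The adaptive simplification step can be pictured as keeping every detected feature point while thinning the rest. A toy sketch (uniform thinning here is an assumption; the paper adapts the reduction rate to preserve detected features):

```python
def simplify(points, feature_indices, keep_every=4):
    """Keep all facial-feature points; keep only every `keep_every`-th other point."""
    features = set(feature_indices)
    return [p for i, p in enumerate(points)
            if i in features or i % keep_every == 0]

pts = list(range(10))          # stand-ins for 3D face points
print(simplify(pts, {3}))      # -> [0, 3, 4, 8]
```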

운전자 눈 위치를 이용한 사이드미러와 룸미러 자동조절시스템 (Automatic Side Mirror and Room Mirror Adjustment System using 3D Location of Driver's Eyes)

  • 노광현;박기현;한민홍
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2000년도 제15차 학술회의논문집 / pp.7-7 / 2000
  • This paper describes a mirror control system that automatically adjusts the side and room mirrors of a vehicle using the 3D coordinates of the driver's eyes. By analyzing the images captured by two B/W CCD cameras, with infrared lamps installed on top of the dashboard, we estimate the 3D coordinates of the driver's eyes. Using these values, the system determines the appropriate orientation of each mirror and drives each actuator to that position. Thanks to the infrared lamps, the stereo vision system can detect the driver's eyes by day or night. We tested this system with 10 drivers who currently drive, and most of them were satisfied with its convenience.
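Once the eyes' 3D position is known, the required mirror orientation follows from the law of reflection: the mirror normal must bisect the direction from the mirror to the eyes and the direction to the desired view. A sketch under that geometric assumption (function names and the coordinate setup are illustrative, not the paper's implementation):

```python
import math

def unit(v):
    """Normalize a 3D vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def mirror_normal(eye, mirror, target):
    """Normal that reflects a ray from `target` off the mirror center into `eye`."""
    to_eye = unit(tuple(e - m for e, m in zip(eye, mirror)))
    to_target = unit(tuple(t - m for t, m in zip(target, mirror)))
    # The angle bisector of the two unit directions is their normalized sum.
    return unit(tuple(a + b for a, b in zip(to_eye, to_target)))

# Eye up-right of the mirror, target up-left: the normal points straight up.
print(mirror_normal((1, 1, 0), (0, 0, 0), (-1, 1, 0)))
```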


Fusion of LIDAR Data and Aerial Images for Building Reconstruction

  • Chen, Liang-Chien;Lai, Yen-Chung;Rau, Jiann-Yeou
    • 대한원격탐사학회:학술대회논문집 / 대한원격탐사학회 2003년도 Proceedings of ACRS 2003 ISRS / pp.773-775 / 2003
  • From the viewpoint of data fusion, we integrate LIDAR data and digital aerial images to perform 3D building modeling in this study. The proposed scheme comprises two major parts: (1) building block extraction and (2) building model reconstruction. In the first step, height differences are analyzed to detect above-ground areas, and color analysis is then performed to exclude tree areas; potential building blocks are selected first, followed by refinement of the building areas. In the second step, accurate 3D edges in object space are calculated through edge detection and the height information extracted from the LIDAR data. These accurate 3D edges are combined with the previously developed SMS method for building modeling. LIDAR data acquired by a Leica ALS 40 over Hsin-Chu Science-based Industrial Park in northern Taiwan are used in the test.
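The first-step height analysis is essentially a normalized-surface threshold: cells where the LIDAR surface stands well above the terrain become candidate building/tree areas, which color analysis then separates. A minimal sketch (the 2.5 m threshold and grid representation are assumptions, not the paper's values):

```python
def above_ground_mask(dsm, dtm, h_min=2.5):
    """True where the surface model exceeds the terrain model by more than h_min metres."""
    return [[(s - t) > h_min for s, t in zip(srow, trow)]
            for srow, trow in zip(dsm, dtm)]

dsm = [[10.0, 3.0], [12.5, 2.9]]    # surface heights from LIDAR (m)
dtm = [[2.0, 2.8], [2.0, 2.8]]      # bare-terrain heights (m)
print(above_ground_mask(dsm, dtm))  # -> [[True, False], [True, False]]
```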


An Evaluation Method of Taekwondo Poomsae Performance

  • Thi Thuy Hoang;Heejune Ahn
    • Journal of Information and Communication Convergence Engineering / Vol. 21, No. 4 / pp.337-345 / 2023
  • In this study, we formulated a method that evaluates Taekwondo Poomsae performance using a series of choreographed training movements. Despite recent achievements in 3D human pose estimation (HPE), the analysis of human actions remains challenging. In particular, Taekwondo Poomsae action analysis is difficult owing to the absence of time-synchronization data and the necessity of comparing postures, rather than relying directly on joint locations, because of differences in human shapes. To address these challenges, we first decomposed the human joint representation into joint rotation (posture) and limb length (body shape), then synchronized the comparison between the test and reference pose sequences using dynamic time warping (DTW), and finally compared the pose angles for each joint. Experimental results demonstrate that our method successfully synchronizes test action sequences with the reference sequence and reflects the considerable gap in performance between practitioners and professionals. Thus, our method can detect incorrect poses and help practitioners improve the accuracy, balance, and speed of their movements.
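The DTW synchronization step can be sketched with the classic dynamic-programming recurrence (a generic scalar version shown here as an assumption; the paper compares per-joint rotation angles rather than raw values):

```python
def dtw(a, b, dist=lambda x, y: abs(x - y)):
    """Minimal-cost alignment of sequences a and b in O(len(a)*len(b))."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Cost of matching a[i-1] with b[j-1], plus the cheapest way to get here.
            D[i][j] = dist(a[i - 1], b[j - 1]) + min(
                D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# A held pose (repeated sample) aligns at zero cost despite different lengths.
print(dtw([1, 2, 3], [1, 2, 2, 3]))  # -> 0.0
```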

Wire bonding 자동 전단력 검사를 위한 wire의 3차원 위치 측정 시스템 개발 (3D Measurement System of Wire for Automatic Pull Test of Wire Bonding)

  • 고국원;김동현;이지연;이상준
    • 제어로봇시스템학회논문지 / Vol. 21, No. 12 / pp.1130-1135 / 2015
  • The bond pull test is the most widely used technique for the evaluation and control of wire bond quality: the wire being tested is pulled upward until the wire, or its bond to the die or substrate, breaks. An inspector tests the strength of each wire manually, and it takes around 3 minutes to perform the test. In this paper, we develop a 3D vision system that measures the 3D position of a wire, providing the position data needed to move a hook onto the wire. The 3D measurement method used here is a confocal imaging system. A conventional confocal imaging system is a spot-scanning method with high resolution and good illumination efficiency; however, it has the disadvantage of requiring XY-axis scanning to acquire 3D data over a given FOV (field of view). We propose an improved parallel-mode confocal system that uses a micro-lens and pin-hole array to remove the XY scan. A 2D imaging system detects the 2D location of the wire, which reduces the time needed to measure its 3D position. In the experimental results, the proposed system measures the 3D position of a wire with reasonable accuracy.
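The confocal principle above — each lateral position responds most strongly at its in-focus depth — reduces, per pixel, to an argmax over the axial stack. A toy sketch (the discrete argmax and data layout are assumptions, not the paper's processing chain):

```python
def depth_from_focus(stack, z_values):
    """Per pixel, return the z whose confocal intensity response is maximal."""
    rows, cols = len(stack[0]), len(stack[0][0])
    return [[z_values[max(range(len(stack)), key=lambda k: stack[k][r][c])]
             for c in range(cols)]
            for r in range(rows)]

# Two pixels: the left one peaks at z=0.0, the right one at z=10.0.
stack = [
    [[9, 1]],   # responses at z = 0.0
    [[2, 3]],   # responses at z = 5.0
    [[1, 8]],   # responses at z = 10.0
]
print(depth_from_focus(stack, [0.0, 5.0, 10.0]))  # -> [[0.0, 10.0]]
```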

3차원 선소의 Grouping에 의한 3차원 건물 모델 발생 (Generation of 3D Building Model by Grouping of 3D Line Segments)

  • 강연욱;우동민
    • 전기전자학회논문지 / Vol. 10, No. 1 / pp.40-48 / 2006
  • This paper proposes a new technique for estimating a building's rooftop planes from 3D line segments. The 3D rooftop plane estimation is based on hierarchical grouping of 3D line segments and starts by merging broken 3D line segments. The merged 3D line segments are applied to rooftop detection by a plane estimation technique, in which reliable junction points are obtained through the detection of T-shaped and L-shaped corners. From the obtained junction points, hypothesized rooftop planes can be generated; these are finally verified against the properties of building planes to determine the rooftop model of the building. Experiments were performed on simulated images derived from the Avenches aerial image dataset. The resulting rooftop plane models had errors of 0.4-1.3 meters, an accuracy improvement of about 2.5 times over the elevation obtained by conventional area-based stereo.
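Generating a hypothesized rooftop plane from junction points can be pictured with the basic construction: three non-collinear junctions determine a plane via a cross product. A minimal sketch (an illustrative construction, not the paper's estimator):

```python
def plane_from_points(p1, p2, p3):
    """Return (normal, d) with normal . x + d = 0 for the plane through p1, p2, p3."""
    u = [b - a for a, b in zip(p1, p2)]
    v = [b - a for a, b in zip(p1, p3)]
    normal = (u[1] * v[2] - u[2] * v[1],   # cross product u x v
              u[2] * v[0] - u[0] * v[2],
              u[0] * v[1] - u[1] * v[0])
    d = -sum(n * p for n, p in zip(normal, p1))
    return normal, d

# A flat roof at height 5: normal (0, 0, 1), plane z - 5 = 0.
print(plane_from_points((0, 0, 5), (1, 0, 5), (0, 1, 5)))  # -> ((0, 0, 1), -5)
```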


Enhanced 3D Residual Network for Human Fall Detection in Video Surveillance

  • Li, Suyuan;Song, Xin;Cao, Jing;Xu, Siyang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 12 / pp.3991-4007 / 2022
  • In public healthcare, a computational system that can automatically and efficiently detect and classify falls from a video sequence has significant potential. With the advancement of deep learning, 3D CNNs, which can extract temporal and spatial information, have become more widespread. However, traditional 3D CNNs usually adopt shallow networks and cannot reach the recognition accuracy of deeper networks; moreover, experience with neural networks shows that gradient problems such as explosions occur as network layers are added. As a result, an enhanced three-dimensional ResNet-based method for fall detection (3D-ERes-FD) is proposed to directly extract spatio-temporal features and address these issues. In our method, a 50-layer 3D residual network deepens the network to improve fall recognition accuracy. Furthermore, enhanced residual units with four convolutional layers are developed to efficiently reduce the number of parameters while increasing the depth of the network. According to the experimental results, the proposed method outperformed several state-of-the-art methods.
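The core of any residual unit, enhanced or not, is the identity shortcut y = F(x) + x, which is what keeps gradients flowing as depth grows. A framework-free sketch of just that shortcut (an illustration of the principle, not the paper's four-convolution unit):

```python
def residual_block(x, transform):
    """y = F(x) + x: add the block's learned transform to its input, elementwise."""
    fx = transform(x)
    return [f + xi for f, xi in zip(fx, x)]

# With a zero transform the block is exactly the identity map,
# so stacking such blocks can never make the network worse by construction.
print(residual_block([1.0, 2.0], lambda v: [0.0] * len(v)))  # -> [1.0, 2.0]
```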

3D WALK-THROUGH ENVIRONMENTAL MODEL FOR VISUALIZATION OF INTERIOR CONSTRUCTION PROGRESS MONITORING

  • Seungjun Roh;Feniosky Pena-Mora
    • 국제학술발표논문집 / The 3rd International Conference on Construction Engineering and Project Management / pp.920-927 / 2009
  • Many schedule delays and cost overruns in interior construction are caused by a lack of understanding of detailed and complicated interior works. To minimize these potential impacts, a systematic approach is required that lets project managers detect discrepancies at early stages and take corrective action through visualized data. Such an implementation is still challenging: monitoring is time-consuming because of the significant amount of as-built data that must be collected and evaluated, and current interior construction progress reports are visually limited in providing spatial context and in representing the complexity of interior components. To overcome these issues, this research focuses on visualization and computer vision techniques that represent interior construction progress with photographs. The as-planned 3D models and as-built photographs are visualized in a 3D walk-through model. Within this environment, as-built interior construction elements are detected through computer vision techniques to automatically extract progress data linked with Building Information Modeling (BIM). This allows a comparison between the as-planned model and the as-built elements, superimposed over the 3D environment, to represent interior construction progress. This paper presents the process of representing and detecting interior construction components, along with results from an ongoing construction project, and discusses implementation and future enhancement of these techniques.
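Once detection has run, the as-planned vs. as-built comparison reduces to set operations over BIM element identifiers. A sketch under that assumption (the identifiers and the binary detected/not-detected matching are hypothetical simplifications):

```python
def progress(planned_ids, detected_ids):
    """Return (fraction built, missing element ids) for a monitored space."""
    planned, detected = set(planned_ids), set(detected_ids)
    return len(planned & detected) / len(planned), sorted(planned - detected)

# 3 of 4 planned interior elements were detected in the as-built photographs.
print(progress(["wall-1", "wall-2", "duct-1", "door-1"],
               ["wall-1", "wall-2", "door-1"]))  # -> (0.75, ['duct-1'])
```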


Automatic Camera Pose Determination from a Single Face Image

  • Wei, Li;Lee, Eung-Joo;Ok, Soo-Yol;Bae, Sung-Ho;Lee, Suk-Hwan;Choo, Young-Yeol;Kwon, Ki-Ryong
    • 한국멀티미디어학회논문지 / Vol. 10, No. 12 / pp.1566-1576 / 2007
  • Camera pose information from a 2D face image is very important for synchronizing a virtual 3D face model with the real face, as well as for other uses such as human-computer interfaces, 3D object estimation, and automatic camera control. In this paper, we present a camera pose determination algorithm for a single 2D face image that uses the relationship between the mouth position and the face region boundary. Our algorithm first corrects color bias with a lighting compensation algorithm, then nonlinearly transforms the image into the YCbCr color space and uses the distinctive chrominance features of the face in this space to detect the face region. For each face candidate, the nearly inverse relationship between the Cb and Cr clusters of facial features is used to detect the mouth position. The geometric relationship between the mouth position and the face region boundary then determines the rotation angles about the camera's x- and y-axes, and the relationship between the face region size and the camera-face distance determines that distance. Experimental results demonstrate the validity of our algorithm, and the correct determination rate is high enough for practical application.
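The face-region step gates pixels by their chrominance. The ranges below are the commonly cited literature values for skin in 8-bit YCbCr and are an assumption here; the paper derives its own cluster model:

```python
def is_skin(cb, cr):
    """Coarse YCbCr skin test on the chrominance pair (8-bit values, assumed ranges)."""
    return 77 <= cb <= 127 and 133 <= cr <= 173

print(is_skin(105, 150))  # typical skin chrominance -> True
print(is_skin(50, 200))   # far outside the skin cluster -> False
```

A real detector would follow this per-pixel gate with connected-component analysis to isolate face candidate regions.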
