• Title/Abstract/Keyword: 3D chain code

Application of 3D Chain Code for Object Recognition and Analysis (객체인식과 분석을 위한 3D 체인코드의 적용)

  • Park, So-Young; Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.29 no.5, pp.459-469, 2011
  • There are various factors that determine object shape, such as size, slope and its direction, curvature, length, surface, angles between lines or planes, distribution of model key points, and so on. Most object description and recognition methods operate in 2D image space, not in the 3D object space where the objects actually exist. In this study, a 3D chain code operator, which is essentially an extension of the 2D chain code, was proposed for object description and analysis in 3D space. Results show that the sequence of 3D chain codes could be the basis of a top-down approach for object recognition and modeling. In addition, the proposed method could be applied to segment point cloud data such as LiDAR data.
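
As a rough illustration of the idea in this abstract (not the authors' actual operator), a 3D chain code can extend the 2D Freeman code by quantizing each step between consecutive 3D points to one of the 26 voxel-neighborhood directions. The code ordering and the nearest-direction quantization below are assumptions made for the sketch.

```python
import numpy as np

# All 26 unit-step directions of the 3D voxel neighborhood (the 2D Freeman
# chain code uses the analogous 8 directions in the plane).
DIRECTIONS = np.array([(dx, dy, dz)
                       for dx in (-1, 0, 1)
                       for dy in (-1, 0, 1)
                       for dz in (-1, 0, 1)
                       if (dx, dy, dz) != (0, 0, 0)], dtype=float)
UNIT_DIRS = DIRECTIONS / np.linalg.norm(DIRECTIONS, axis=1, keepdims=True)

def chain_code_3d(points):
    """Encode consecutive 3D points as indices of the closest of the
    26 neighborhood directions (the code ordering here is arbitrary)."""
    codes = []
    for p, q in zip(points[:-1], points[1:]):
        step = np.asarray(q, dtype=float) - np.asarray(p, dtype=float)
        norm = np.linalg.norm(step)
        if norm == 0:          # skip repeated points
            continue
        codes.append(int(np.argmax(UNIT_DIRS @ (step / norm))))
    return codes

# Example: a short staircase-like path in 3D.
print(chain_code_3d([(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]))
```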

Line Segments Extraction by using Chain Code Tracking of Edge Map from Aerial Images (항공영상으로부터 에지 맵의 체인코드 추적에 의한 선소추출)

  • Lee Kyu-won; Woo Dong-min
    • Journal of the Korean Institute of Intelligent Systems, v.15 no.6, pp.709-713, 2005
  • A new algorithm is proposed for extracting line segments to construct 3D wire-frame models of buildings from high-resolution aerial images. The purpose of this study is the accurate and effective extraction of line segments, addressing problems such as line discordance and blurred edges found in conventional methods. Chain code tracking was performed on the edge map extracted from the aerial images, and line segments were then extracted considering the strength and direction of the edges. The SUSAN (Smallest Uni-value Segment Assimilating Nucleus) algorithm proposed by Smith was used to extract the edge map. The proposed algorithm consists of 4 steps: removal of horizontal, vertical, and diagonal edge components to reduce non-candidate points of line segments based on chain code tracking of the edge map, removal of contiguous points, removal of same-angle points, and extraction of the start and end points of line segments. Compared with the Boldt algorithm, the proposed algorithm extracted the representative line segments of buildings better while producing fewer unnecessary line segments.
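
A minimal sketch of the core mechanism, under assumed conventions: an 8-direction Freeman chain code is tracked through a binary edge map, and runs of a constant direction code become candidate straight segments. This is not the paper's 4-step algorithm; the greedy single-pass tracker and the neighbor ordering are illustrative choices.

```python
import numpy as np

# 8-direction Freeman neighborhood: code 0 = step right (east), counted counter-clockwise.
NEIGHBORS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def track_chain(edge_map, start):
    """Greedy tracker: from `start`, repeatedly move to the first unvisited
    8-neighbor that is an edge pixel, recording the direction codes taken."""
    r, c = start
    visited = {start}
    codes = []
    while True:
        for code, (dr, dc) in enumerate(NEIGHBORS):
            nr, nc = r + dr, c + dc
            if (0 <= nr < edge_map.shape[0] and 0 <= nc < edge_map.shape[1]
                    and edge_map[nr, nc] and (nr, nc) not in visited):
                visited.add((nr, nc))
                codes.append(code)
                r, c = nr, nc
                break
        else:                      # no unvisited edge neighbor: chain ends
            return codes

def straight_runs(codes):
    """Split a chain into runs of identical direction codes; long runs are
    candidate straight line segments."""
    runs, length = [], 1
    for prev, cur in zip(codes, codes[1:]):
        if cur == prev:
            length += 1
        else:
            runs.append((prev, length))
            length = 1
    if codes:
        runs.append((codes[-1], length))
    return runs                    # list of (direction code, run length)

edges = np.zeros((5, 8), dtype=bool)
edges[2, 1:7] = True               # a short horizontal edge
print(straight_runs(track_chain(edges, (2, 1))))   # -> [(0, 5)]
```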

Recognition of Fighting Motion using a 3D-Chain Code and HMM (3차원 체인코드와 은닉마르코프 모델을 이용한 권투모션 인식)

  • Han, Chang-Ho; Oh, Choon-Suk; Choi, Byung-Wook
    • Journal of Institute of Control, Robotics and Systems, v.16 no.8, pp.756-760, 2010
  • In this paper, a new method to recognize various fighting motions with the aid of an HMM is proposed. Four kinds of fighting motions are considered: hook, jab, uppercut, and straight. A motion graph is generalized to define each motion in the motion data, and a new 3D chain code is used to convert the motion data into motion graphs. Recognition experiments were performed by applying the HMM algorithm to the motion graphs. The motion data were captured from five actors with a motion capture system developed in this study. Experimental results show a relatively high recognition rate of at least 85%.
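
The recognition stage can be pictured as scoring the discrete symbol sequence of a motion (its chain codes) against one HMM per motion class and choosing the most likely class. The scaled forward algorithm below is standard; the two hand-made toy models and the 4-symbol alphabet are assumptions, not the authors' trained HMMs.

```python
import numpy as np

def log_likelihood(obs, start_p, trans_p, emit_p):
    """Scaled forward algorithm for a discrete HMM: log P(obs | model).
    obs: symbol indices; start_p: (S,); trans_p: (S, S); emit_p: (S, V)."""
    alpha = start_p * emit_p[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans_p) * emit_p[:, o]
        scale = alpha.sum()
        log_p += np.log(scale)     # accumulate scaling factors to avoid underflow
        alpha = alpha / scale
    return log_p

# Two toy 2-state models over a 4-symbol alphabet (pretend coarsened chain codes):
# the "jab"-like model favors symbols 0/1, the "hook"-like model favors 2/3.
models = {
    "jab":  (np.array([0.6, 0.4]),
             np.array([[0.7, 0.3], [0.4, 0.6]]),
             np.array([[0.60, 0.30, 0.05, 0.05], [0.30, 0.60, 0.05, 0.05]])),
    "hook": (np.array([0.5, 0.5]),
             np.array([[0.6, 0.4], [0.3, 0.7]]),
             np.array([[0.05, 0.05, 0.60, 0.30], [0.05, 0.05, 0.30, 0.60]])),
}

observed = [0, 1, 1, 0, 1]         # chain-code symbols from an unknown motion
print(max(models, key=lambda name: log_likelihood(observed, *models[name])))  # -> jab
```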

Hand Gesture Recognition Method based on the MCSVM for Interaction with 3D Objects in Virtual Reality (가상현실 3D 오브젝트와 상호작용을 위한 MCSVM 기반 손 제스처 인식)

  • Kim, Yoon-Je; Koh, Tack-Kyun; Yoon, Min-Ho; Kim, Tae-Young
    • Proceedings of the Korea Information Processing Society Conference, 2017.11a, pp.1088-1091, 2017
  • Recently, with the advance of and growing interest in graphics-based virtual reality technology, hand gesture recognition has been actively studied as a way of interacting naturally with 3D objects. This paper proposes MCSVM-based hand gesture recognition for interaction with 3D objects in virtual reality. First, various hand gestures are captured through a Leap Motion device, preprocessed, and passed on as hand data. The hand data are then classified in a first stage with a binary decision tree, resampled, and converted into a chain code, and feature data are constructed as a histogram of the chain code. Based on these features, a second-stage classification through MCSVM learning recognizes the gesture. Experiments showed an average recognition rate of 99.2% for 16 command gestures for interacting with 3D objects. An affective evaluation against a mouse interface showed that the proposed method allows more intuitive and user-friendly interaction than mouse input, so it can serve as an input interface in many virtual reality applications such as games, learning simulations, design, and medicine, and it helps increase immersion in virtual reality.
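
The feature stage described above (resample, chain-code, histogram) could look roughly like the sketch below; the 8-direction planar code, the histogram normalization, and the circular test gesture are assumptions rather than the paper's exact parameters.

```python
import numpy as np

def chain_code_histogram(points, bins=8):
    """Turn a resampled 2D gesture path into a normalized histogram of
    8-direction Freeman chain codes (the feature layout is an assumption)."""
    pts = np.asarray(points, dtype=float)
    steps = np.diff(pts, axis=0)
    angles = np.arctan2(steps[:, 1], steps[:, 0])          # -pi..pi
    codes = np.round(angles / (2 * np.pi / bins)).astype(int) % bins
    hist = np.bincount(codes, minlength=bins).astype(float)
    return hist / hist.sum()                               # scale-invariant feature

# A circular gesture spreads its codes over all 8 directions.
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
print(chain_code_histogram(np.column_stack([np.cos(t), np.sin(t)])))
```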

Virtual Block Game Interface based on the Hand Gesture Recognition (손 제스처 인식에 기반한 Virtual Block 게임 인터페이스)

  • Yoon, Min-Ho; Kim, Yoon-Jae; Kim, Tae-Young
    • Journal of Korea Game Society, v.17 no.6, pp.113-120, 2017
  • With the development of virtual reality technology, user-friendly hand gesture interfaces have recently been studied more actively for natural interaction with virtual 3D objects. Most earlier studies on hand gesture interfaces use relatively simple hand gestures. In this paper, we suggest an intuitive hand gesture interface for interaction with 3D objects in virtual reality applications. For hand gesture recognition, we first preprocess various hand data and classify the data with a binary decision tree. The classified data are resampled and converted into a chain code, and hand feature data are then constructed from histograms of the chain code. Finally, the input gesture is recognized from the feature data by MCSVM-based machine learning. To test the proposed hand gesture interface, we implemented a 'Virtual Block' game. Our experiments showed a recognition rate of about 99.2% for 16 kinds of command gestures, and the interface proved more intuitive and user-friendly than a conventional mouse interface.
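
For the final classification stage, a multiclass SVM can be trained directly on such chain-code histograms. In the sketch below, scikit-learn's SVC (one-vs-one multiclass) stands in for the paper's MCSVM, and the synthetic histograms and gesture names are made-up stand-ins for real Leap Motion data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fake_histograms(dominant_bin, n, bins=8):
    """Synthetic 8-bin chain-code histograms peaking at one direction,
    standing in for real per-gesture features."""
    h = rng.random((n, bins)) * 0.2
    h[:, dominant_bin] += 1.0
    return h / h.sum(axis=1, keepdims=True)

# Three pretend gesture classes, each dominated by a different direction.
X = np.vstack([fake_histograms(b, 30) for b in (0, 2, 5)])
y = np.repeat(["swipe_right", "swipe_up", "circle"], 30)

clf = SVC(kernel="rbf", gamma="scale")   # one-vs-one multiclass under the hood
clf.fit(X, y)
print(clf.predict(fake_histograms(2, 1)))   # expected: ['swipe_up']
```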

Gesture Recognition Method using Tree Classification and Multiclass SVM (다중 클래스 SVM과 트리 분류를 이용한 제스처 인식 방법)

  • Oh, Juhee; Kim, Taehyub; Hong, Hyunki
    • Journal of the Institute of Electronics and Information Engineers, v.50 no.6, pp.238-245, 2013
  • Gesture recognition has been widely studied as a basis for natural user interfaces. This paper presents a novel gesture recognition method using tree classification and a multiclass SVM (Support Vector Machine). In the learning step, 3D trajectories of human gestures obtained by a Kinect sensor are classified into tree nodes according to their distributions. The gestures are resampled, and the histogram of the chain code is obtained from the normalized data. A multiclass SVM is then applied to the gestures classified into each node. An input gesture classified with the constructed tree is recognized by the multiclass SVM.
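
The resampling step mentioned in the abstract can be sketched as arc-length resampling of a 3D trajectory to a fixed number of evenly spaced points before chain coding, so gestures performed at different speeds become comparable; the point count and the linear interpolation below are assumptions.

```python
import numpy as np

def resample_trajectory(points, n=32):
    """Resample a 3D trajectory to n points spaced evenly along its arc
    length, so gestures of different speeds yield comparable chain codes."""
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    dist = np.concatenate([[0.0], np.cumsum(seg)])       # cumulative arc length
    targets = np.linspace(0.0, dist[-1], n)
    # Interpolate each coordinate over cumulative arc length.
    return np.column_stack([np.interp(targets, dist, pts[:, k])
                            for k in range(pts.shape[1])])

# A jerky, unevenly sampled path becomes 32 evenly spaced points.
path = [(0, 0, 0), (0.1, 0, 0), (2, 0, 0), (2, 3, 0), (2, 3, 5)]
print(resample_trajectory(path).shape)   # (32, 3)
```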

High-resolution 3D Object Reconstruction using Multiple Cameras (다수의 카메라를 활용한 고해상도 3차원 객체 복원 시스템)

  • Hwang, Sung Soo; Yoo, Jisung; Kim, Hee-Dong; Kim, Sujung; Paeng, Kyunghyun; Kim, Seong Dae
    • Journal of the Institute of Electronics and Information Engineers, v.50 no.10, pp.150-161, 2013
  • This paper presents a new system that produces high-resolution 3D contents by capturing multiview images of an object with multiple cameras and estimating the geometric and texture information of the object from the captured images. Although a variety of multiview image-based 3D reconstruction systems have been proposed, generating high-resolution 3D contents has been difficult because multiview image-based 3D reconstruction requires a large amount of memory and computation. To reduce the computational complexity and memory size of 3D reconstruction, the proposed system predetermines the regions in the input images where an object can exist in order to extract object boundaries quickly. For fast computation of the visual hull, the system represents silhouettes and 3D-2D projection/back-projection relations by chain codes and 1D homographies, respectively. The geometric data of the reconstructed object are compactly represented in a 3D segment-based data format called DoCube, and the 3D object is finally reconstructed after 3D mesh generation and texture mapping. Experimental results show that the proposed system produces 3D object contents of 800×800×800 resolution at a rate of 2.2 seconds per frame.
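
One reason chain codes help here is memory: a silhouette boundary can be stored as a start pixel plus one 3-bit direction code per boundary pixel and decoded back on demand. The encode/decode convention below is only a sketch of that idea, not the paper's visual-hull machinery or its 1D homographies.

```python
# 8-direction offsets, code 0 = step right, counted counter-clockwise.
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def decode_chain(start, codes):
    """Recover boundary pixel coordinates from a start pixel and a chain code;
    this is the compact form a silhouette boundary can be stored in
    (one 3-bit code per boundary pixel instead of full coordinates)."""
    r, c = start
    pixels = [(r, c)]
    for code in codes:
        dr, dc = OFFSETS[code]
        r, c = r + dr, c + dc
        pixels.append((r, c))
    return pixels

# Boundary of a small axis-aligned rectangle, written directly as a chain code:
# right x3, up x2, left x3, down x2.
rect = [0, 0, 0, 2, 2, 4, 4, 4, 6, 6]
pixels = decode_chain((4, 1), rect)
print(pixels[0], pixels[-1])   # a closed boundary returns to its start pixel
```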

Numerical modeling of tidal discharge through a permeable dyke from varying surface gradients (내·외 수위차를 이용한 투수성 제체의 조류량 모델링)

  • Hong, Seong Soo; Kim, Tae In; Nguyen, Thao Thi Hoang; Gu, Jeong Bon
    • Proceedings of the Korea Water Resources Association Conference, 2021.06a, pp.219-219, 2021
  • The Inner Harbor District 2 area of Pyeongtaek-Dangjin Port, a 6.9 km² body of water located inside Asan Bay on the central west coast and slated for future development, is enclosed by the Inner Harbor District 2 outer revetment, the temporary inner-harbor revetment, and the Inner Harbor District 2 central dividing revetment; seawater passes through the rubble pores of the permeable temporary inner-harbor revetment, so a tide is observed inside. Tide observations on both sides of the District 2 outer revetment over two months in August-September 2020 showed a maximum tidal range of 1.97 m inside the District 2 waters, 20.1% of the maximum range of 9.79 m in the outer sea, while the instantaneous inside-outside water level difference reached up to 5.82 m. Since the temporary inner-harbor revetment will remain in place until District 2 development is nearly complete, accurately predicting how the inner tidal range and the inside-outside water level difference change as development proceeds is very important for the safety of the revetment body. The purpose of this study is, prior to predicting those changes for each future development stage, to reproduce the observed two-month time series of the real-time inner tide and the inside-outside water level difference in an existing Asan Bay numerical model built with Delft3D-Flow. The discharge through the revetment body was assumed to be proportional to the inside-outside water level difference, and a water-level-difference versus discharge relation was derived. The water level difference was computed from tide levels observed at 1-minute intervals at the Pyeongtaek tide station and in the District 2 waters, and the discharge through the revetment was computed from the relation between the inner water level (z, m above Pyeongtaek Port DL) and the water volume (V, 10^6 m³). The water level-volume relation obtained from bathymetric survey results is V = 0.28z^2 + 3.73z + 2.96 (r^2 = 1.00). After examining the suitability of various functional forms, the following relations between the water level difference (Δz, m) and the discharge through the revetment (Q, m³/s) were derived:
    [inflow toward the inside of the temporary revetment]
    $Q_{IN} = \begin{cases} \exp\{0.54\,\ln(\Delta z) + 6.00\}, & \Delta z \le 1.8 \\ 219.82\,\Delta z + 158.56, & \Delta z > 1.8 \end{cases} \quad (r^2 = 0.86)$
    [outflow toward the outside of the temporary revetment]
    $Q_{OUT} = -\exp\{0.44\,\ln(-\Delta z) + 5.70\} \quad (r^2 = 0.59)$
    An algorithm that computes the discharge through the revetment at every Δt was added to the Delft3D source code, and a two-month tidal simulation was carried out with a composite tide of eight constituents (M2, S2, K1, O1, N2, K2, P1, Q1) as the external forcing. Comparing hourly observed and modeled tide levels in the District 2 waters, the error ranged from -0.37 m to 0.37 m, the mean error was 0.02 m, and the mean absolute error was 0.08 m, so the real-time tidal variation was simulated quite accurately. Using this calibrated and verified model, prediction runs will be carried out for the changes in the inner tide and the inside-outside water level difference caused by future Inner Harbor District 2 development.
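
The fitted relations quoted in the abstract can be wrapped in a small function evaluated at every model time step; the coefficients are those reported above, while the zero-discharge case at Δz = 0 and the sign convention (positive Δz drives inflow) are assumptions of this sketch.

```python
import math

def dyke_discharge(dz):
    """Discharge Q (m^3/s) through the permeable revetment for a given
    inside-outside water level difference dz (m), using the fitted relations
    quoted in the abstract. Positive dz is assumed to drive inflow."""
    if dz > 1.8:
        return 219.82 * dz + 158.56                            # linear inflow branch
    if dz > 0:
        return math.exp(0.54 * math.log(dz) + 6.00)            # inflow branch, dz <= 1.8
    if dz < 0:
        return -math.exp(0.44 * math.log(-dz) + 5.70)          # outflow branch
    return 0.0                                                 # no head difference

for dz in (0.5, 1.8, 3.0, -0.5):
    print(f"dz = {dz:+.1f} m  ->  Q = {dyke_discharge(dz):8.1f} m^3/s")
```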
