• Title/Summary/Keyword: feature reconstruction

Search Results: 218

A 3D Face Reconstruction Based on the Symmetrical Characteristics of Side View 2D Face Images (측면 2차원 얼굴 영상들의 대칭성을 이용한 3차원 얼굴 복원)

  • Lee, Sung-Joo;Park, Kang-Ryoung;Kim, Jai-Hie
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.1
    • /
    • pp.103-110
    • /
    • 2011
  • A widely used 3D face reconstruction method, structure from motion (SfM), performs robustly when frontal, left, and right face images are available. However, it cannot correctly reconstruct a self-occluded facial part when only side-view face images are used, because only partial facial feature points are available in that case. To solve this problem, the proposed method exploits the bilateral symmetry of human faces as a constraint: it generates the mirrored facial feature points and uses both the input feature points and the generated ones to reconstruct a 3D face. For a quantitative evaluation, 3D faces were obtained with a 3D face scanner and compared with the reconstructed 3D faces. The experimental results show that the proposed 3D face reconstruction method based on both sets of facial feature points outperforms the previous method based on only partial facial feature points.
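
The symmetry constraint described above amounts to reflecting the visible feature points across the face's symmetry plane. A minimal numpy sketch (the coordinates and the choice of x = 0 as the symmetry plane are illustrative, not from the paper):

```python
import numpy as np

def mirror_feature_points(points, plane_normal=np.array([1.0, 0.0, 0.0]),
                          plane_point=np.zeros(3)):
    """Reflect 3D feature points across a symmetry plane (Householder reflection)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = (points - plane_point) @ n            # signed distance to the plane
    return points - 2.0 * np.outer(d, n)      # reflected points

# Hypothetical feature points visible on the left half of a face
left_points = np.array([[-30.0, 10.0, 5.0],    # e.g. left eye corner
                        [-15.0, -20.0, 8.0]])  # e.g. left mouth corner
right_points = mirror_feature_points(left_points)
all_points = np.vstack([left_points, right_points])  # both halves feed the SfM step
```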

Relative Localization for Mobile Robot using 3D Reconstruction of Scale-Invariant Features (스케일불변 특징의 삼차원 재구성을 통한 이동 로봇의 상대위치추정)

  • Kil, Se-Kee;Lee, Jong-Shill;Ryu, Je-Goon;Lee, Eung-Hyuk;Hong, Seung-Hong;Shen, Dong-Fan
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.55 no.4
    • /
    • pp.173-180
    • /
    • 2006
  • A key component of the autonomous navigation of an intelligent home robot is localization and map building using features recognized from the environment. For this, accurate measurement of the relative location between the robot and the features is essential. In this paper, we propose a relative localization algorithm based on the 3D reconstruction of scale-invariant features from two images captured by two parallel cameras. We capture two images from the parallel cameras mounted on the front of the robot, detect scale-invariant features in each image using SIFT (Scale-Invariant Feature Transform), match the feature points between the two images, and obtain the relative location by 3D reconstruction of the matched points. A conventional stereo camera requires highly precise extrinsic calibration and pixel matching between the two camera images; because we use two separate cameras together with scale-invariant feature points, the extrinsic parameters are easy to set up. Furthermore, the 3D reconstruction requires no additional sensor, and its results can simultaneously be used for obstacle avoidance, map building, and localization. We set the distance between the two cameras to 20 cm and captured 3 frames per second. The experimental results show a maximum error of ±6 cm at ranges below 2 m and a maximum error of ±15 cm at ranges between 2 m and 4 m.
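
For a parallel (rectified) camera pair like the one described, triangulating a matched feature point reduces to the disparity-depth relation Z = fB/d. A minimal numpy sketch, with assumed intrinsics (focal length and principal point are hypothetical; the 0.2 m baseline follows the abstract):

```python
import numpy as np

def triangulate_parallel(uv_left, uv_right, f, cx, cy, baseline):
    """Triangulate matched pixels from two parallel, identical cameras.

    For a rectified parallel pair, the disparity d = u_left - u_right gives
    depth Z = f * B / d directly; no full extrinsic calibration is needed.
    """
    uv_left = np.asarray(uv_left, float)
    uv_right = np.asarray(uv_right, float)
    d = uv_left[:, 0] - uv_right[:, 0]             # disparity in pixels
    Z = f * baseline / d
    X = (uv_left[:, 0] - cx) * Z / f
    Y = (uv_left[:, 1] - cy) * Z / f
    return np.stack([X, Y, Z], axis=1)

# One matched SIFT point; 0.2 m baseline, f = 500 px, principal point (320, 240)
pts = triangulate_parallel([[340, 240]], [[290, 240]],
                           f=500, cx=320, cy=240, baseline=0.2)
# disparity 50 px -> Z = 500 * 0.2 / 50 = 2.0 m
```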

3D Shape Reconstruction of Cross-sectional Images using Image Processing Technology and B-spline Approximation (영상 처리 기법과 B-spline 근사화를 이용한 단면영상의 3차원 재구성)

  • 임오강;이진식;김종구
    • Proceedings of the Computational Structural Engineering Institute Conference
    • /
    • 2001.10a
    • /
    • pp.93-100
    • /
    • 2001
  • Three-dimensional (3D) reconstruction from two-dimensional (2D) image data is used in many fields such as RPD (Rapid Product Development) and reverse engineering. In this paper, the 3D reconstruction consists of two main steps: an image processing step and a B-spline surface approximation step. In the image processing step, the feature points of each cross-section are obtained by means of several image processing techniques. In the B-spline surface approximation step, the control points of the B-spline surface are computed from these feature points and used to generate an IGES file of the 3D CAD model.
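
The B-spline approximation step can be illustrated with SciPy's smoothing-spline routines. A sketch for a single cross-section contour; the noisy circular points below are synthetic stand-ins for what the image processing step would extract:

```python
import numpy as np
from scipy.interpolate import splprep, splev

# Synthetic feature points of one (roughly circular) cross-section contour
theta = np.linspace(0, 2 * np.pi, 24, endpoint=False)
x = 10 * np.cos(theta) + 0.1 * np.random.default_rng(0).normal(size=24)
y = 10 * np.sin(theta) + 0.1 * np.random.default_rng(1).normal(size=24)

# Fit a smoothing B-spline to the closed contour; s bounds the approximation
# error, and tck holds the knots, control points, and spline degree
tck, u = splprep([x, y], s=1.0, per=True)
xs, ys = splev(np.linspace(0, 1, 200), tck)   # resampled smooth contour
ctrl_x, ctrl_y = tck[1][0], tck[1][1]         # control points -> CAD/IGES data
```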

Facial Expression Recognition Method Based on Residual Masking Reconstruction Network

  • Jianing Shen;Hongmei Li
    • Journal of Information Processing Systems
    • /
    • v.19 no.3
    • /
    • pp.323-333
    • /
    • 2023
  • Facial expression recognition can aid the development of fatigue-driving detection, teaching quality evaluation, and other fields. In this study, a facial expression recognition method with a residual masking reconstruction network as its backbone is proposed to achieve more efficient expression recognition and classification. The residual layer acquires and captures the information features of the input image, and the masking layer assigns weight coefficients to the different information features, enabling accurate and effective analysis of images of different sizes. To further improve expression analysis, the loss function of the model is optimized along two aspects, the feature dimension and the data dimension, to strengthen the mapping between facial features and emotion labels. The simulation results show that the ROC of the proposed method remained above 0.9995, accurately distinguishing different expressions, and the precision was 75.98%, indicating the excellent performance of the facial expression recognition model.
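
The residual-masking idea, as described, amounts to re-weighting residual features with learned coefficients before the skip connection. A toy numpy sketch on a flattened feature vector (the weights and layer shapes are illustrative, not the paper's architecture):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def residual_masking_block(x, w_res, w_mask):
    """One residual-masking step on a flattened feature vector.

    The residual branch (w_res) extracts features; the masking branch
    (w_mask) produces per-feature weights in (0, 1) that re-weight the
    residual before it is added back to the input.
    """
    residual = np.maximum(w_res @ x, 0.0)       # ReLU residual features
    mask = sigmoid(w_mask @ x)                  # learned weight coefficients
    return x + mask * residual                  # masked residual connection

rng = np.random.default_rng(0)
x = rng.normal(size=8)
out = residual_masking_block(x, rng.normal(size=(8, 8)), rng.normal(size=(8, 8)))
```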

A Study on Real-Time Localization and Map Building of Mobile Robot using Monocular Camera (단일 카메라를 이용한 이동 로봇의 실시간 위치 추정 및 지도 작성에 관한 연구)

  • Jung, Dae-Seop;Choi, Jong-Hoon;Jang, Chul-Woong;Jang, Mun-Suk;Kong, Jung-Shik;Lee, Eung-Hyuk;Shim, Jae-Hong
    • Proceedings of the KIEE Conference
    • /
    • 2006.10c
    • /
    • pp.536-538
    • /
    • 2006
  • The most important capabilities of a mobile robot are building a map of the surrounding environment and estimating its own location. This paper proposes a real-time localization and map building method based on the 3-D reconstruction of scale-invariant features from a monocular camera. As the robot follows a wall, the wall-facing monocular camera extracts scale-invariant features from each image using SIFT (Scale-Invariant Feature Transform). The extracted features are matched against a feature map that is transformed into absolute coordinates through 3-D reconstruction of the points and geometrical analysis of the surrounding environment, and the map is stored in a database. After the feature map is built, the robot finds points matched with the previous feature map and estimates its pose in real time from the affine parameters. The maximum position error of the proposed method was 8 cm, and the angle error was within 10°.
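
The affine-parameter pose step can be illustrated as a least-squares fit of a 2D affine transform to matched feature points. A minimal numpy sketch (the points and the transform below are synthetic):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst points.

    Solves dst ≈ A @ src + t for the 6 affine parameters; at least 3
    non-collinear correspondences are required.
    """
    src = np.asarray(src, float)
    n = len(src)
    M = np.zeros((2 * n, 6))
    M[0::2, 0:2] = src; M[0::2, 4] = 1.0     # rows for the x equations
    M[1::2, 2:4] = src; M[1::2, 5] = 1.0     # rows for the y equations
    b = np.asarray(dst, float).ravel()
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    A = np.array([[p[0], p[1]], [p[2], p[3]]])
    t = p[4:6]
    return A, t

# Recover a known rotation + translation from 4 matched feature points
src = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
c, s = np.cos(0.3), np.sin(0.3)
A_true = np.array([[c, -s], [s, c]]); t_true = np.array([2.0, -1.0])
dst = src @ A_true.T + t_true
A_est, t_est = fit_affine(src, dst)
```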

SIFT-based Stereo Matching to Compensate Occluded Regions and Remove False Matching for 3D Reconstruction

  • Shin, Do-Kyung;Lee, Jeong-Ho;Moon, Young-Shik
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.418-422
    • /
    • 2009
  • Generally, algorithms for generating disparity maps can be classified into two categories: region-based methods and feature-based methods. The main focus of this research is to generate a disparity map with accurate depth information for 3-dimensional reconstruction. The proposed algorithm combines a region-based method and a feature-based method so that the existing problems of false matching and occlusion can be effectively solved. As the region-based component, regions of false matching are extracted by the proposed MMAD (Modified Mean of Absolute Differences) algorithm, a modification of the existing MAD (Mean of Absolute Differences) algorithm. As the feature-based component, the proposed method eliminates false matching errors by computing vectors with SIFT and compensates the occluded regions by using pairs of adjacent SIFT matching points, so that the errors are reduced and the disparity map becomes more accurate.
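
The baseline MAD criterion that MMAD modifies can be sketched as plain block matching: for each candidate disparity, average the absolute differences over a block and keep the minimizer. A minimal numpy example on a synthetic stereo pair (the MMAD modification itself is not reproduced here):

```python
import numpy as np

def mad_disparity(left, right, row, col, block=5, max_disp=16):
    """Disparity of one pixel by minimising the Mean of Absolute Differences.

    Compares a block around (row, col) in the left image with blocks shifted
    left by d in the right image; the d with the smallest MAD wins.
    """
    h = block // 2
    patch_l = left[row - h:row + h + 1, col - h:col + h + 1].astype(float)
    best_d, best_mad = 0, np.inf
    for d in range(max_disp + 1):
        c = col - d
        if c - h < 0:
            break                                  # block would leave the image
        patch_r = right[row - h:row + h + 1, c - h:c + h + 1].astype(float)
        mad = np.abs(patch_l - patch_r).mean()
        if mad < best_mad:
            best_mad, best_d = mad, d
    return best_d

# Synthetic pair: the right image is the left image shifted 4 px to the left
rng = np.random.default_rng(0)
left = rng.integers(0, 255, size=(40, 60))
right = np.roll(left, -4, axis=1)
d = mad_disparity(left, right, row=20, col=30)   # -> 4
```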

A Study on the Source Reconstruction Feature Using the Extended Prony Method (확장 Prony법을 이용한 음원 재구성특성에 관한 연구)

  • 이금원;김경기
    • Journal of Biomedical Engineering Research
    • /
    • v.11 no.2
    • /
    • pp.289-294
    • /
    • 1990
  • In this paper, the extended Prony method is proposed for acoustic source reconstruction using the angular frequency propagation method; it is useful because it does not suffer from the inherent limitations of the DFT. A simulation was carried out, and the improved results are shown explicitly by comparison with the DFT case.
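
The classic Prony fit underlying the extended method can be sketched in a few lines of numpy: linear prediction coefficients by least squares, exponential bases as polynomial roots, then amplitudes by a Vandermonde fit. The signal below is synthetic, and the extended method's refinements are not reproduced:

```python
import numpy as np

def prony(x, p):
    """Classic Prony fit of p complex exponentials, x[n] ≈ Σ A_k z_k**n.

    Step 1: linear prediction coefficients by least squares,
    Step 2: exponential bases z_k as roots of the prediction polynomial,
    Step 3: complex amplitudes A_k by a Vandermonde least-squares fit.
    """
    x = np.asarray(x, float)
    N = len(x)
    # Prediction model: x[n] = -a1*x[n-1] - ... - ap*x[n-p]
    T = np.column_stack([x[p - 1 - k:N - 1 - k] for k in range(p)])
    a, *_ = np.linalg.lstsq(T, -x[p:], rcond=None)
    z = np.roots(np.r_[1.0, a])                      # exponential bases
    V = np.vander(z, N, increasing=True).T           # V[n, k] = z_k**n
    A, *_ = np.linalg.lstsq(V.astype(complex), x.astype(complex), rcond=None)
    return z, A

# Two real damped exponentials: x[n] = 2*(0.9)**n + 1*(0.5)**n
n = np.arange(32)
x = 2.0 * 0.9 ** n + 1.0 * 0.5 ** n
z, A = prony(x, p=2)
```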

3D Point Cloud Reconstruction Technique from 2D Image Using Efficient Feature Map Extraction Network (효율적인 feature map 추출 네트워크를 이용한 2D 이미지에서의 3D 포인트 클라우드 재구축 기법)

  • Kim, Jeong-Yoon;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.26 no.3
    • /
    • pp.408-415
    • /
    • 2022
  • In this paper, we propose a 3D point cloud reconstruction technique from 2D images using an efficient feature map extraction network. The originality of the proposed method is as follows. First, we use a new feature map extraction network that is about 27% more memory-efficient than existing techniques. The proposed network does not reduce the image size in the middle of the deep learning network, so important information required for 3D point cloud reconstruction is not lost; we offset the memory increase caused by the non-reduced image size by reducing the number of channels and by configuring the deep learning network to be efficiently shallow. Second, by preserving the high-resolution features of the 2D image, the accuracy can be improved over the conventional technique: the feature map extracted from the non-reduced image contains more detailed information than in existing methods, which further improves the reconstruction accuracy of the 3D point cloud. Third, we use a divergence loss that does not require shooting information. Because learning normally requires not only the 2D image but also the shooting angle, the dataset must contain this detailed information, which makes it difficult to construct; here, the reconstruction accuracy of the 3D point cloud is instead increased by increasing the diversity of information through randomness, without additional shooting information. For an objective evaluation on the ShapeNet dataset, using the same protocol as the comparative papers, the proposed method achieves a CD value of 5.87, an EMD value of 5.81, and 2.9G FLOPs. Lower CD and EMD values mean that the reconstructed 3D point cloud is closer to the original, and a lower FLOPs count means that the deep learning network requires less memory. The CD, EMD, and FLOPs results therefore show about a 27% improvement in memory and a 6.3% improvement in accuracy compared to the methods of other papers, demonstrating objective performance.
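
The CD (Chamfer Distance) metric used in the evaluation can be sketched directly in numpy. This is the common symmetric squared-distance form; the paper's exact variant may differ, and the point clouds below are synthetic:

```python
import numpy as np

def chamfer_distance(P, Q):
    """Symmetric Chamfer Distance between two point clouds.

    For each point, take the squared distance to its nearest neighbour in
    the other cloud, and average both directions; lower means the
    reconstruction is closer to the ground truth.
    """
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

rng = np.random.default_rng(0)
gt = rng.normal(size=(128, 3))                 # ground-truth cloud
rec = gt + 0.01 * rng.normal(size=(128, 3))    # slightly perturbed reconstruction
cd = chamfer_distance(rec, gt)                 # small positive value
```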

Study on Three-dimension Reconstruction to Low Resolution Image of Crops (작물의 저해상도 이미지에 대한 3차원 복원에 관한 연구)

  • Oh, Jang-Seok;Hong, Hyung-Gil;Yun, Hae-Yong;Cho, Yong-Jun;Woo, Seong-Yong;Song, Su-Hwan;Seo, Kap-Ho;Kim, Dae-Hee
    • Journal of the Korean Society of Manufacturing Process Engineers
    • /
    • v.18 no.8
    • /
    • pp.98-103
    • /
    • 2019
  • A more accurate method of feature point extraction and matching for three-dimensional reconstruction using low-resolution images of crops is proposed herein. This is an important problem in basic computer vision: in addition to three-dimensional reconstruction from exact matches, map building and camera location information, as in simultaneous localization and mapping, can be calculated. The results of this study suggest methods applicable to low-resolution images that produce accurate results, which is expected to contribute to a system that measures crop growth conditions.