• Title/Summary/Keyword: 3-D feature extraction

Recognition and Modeling of 3D Environment based on Local Invariant Features (지역적 불변특징 기반의 3차원 환경인식 및 모델링)

  • Jang, Dae-Sik
    • Journal of the Korea Society of Computer and Information
    • /
    • v.11 no.3
    • /
    • pp.31-39
    • /
    • 2006
  • This paper presents a novel approach to real-time recognition of 3D environments and objects for applications such as intelligent robots, intelligent vehicles, and intelligent buildings. First, we establish three fundamental principles that humans use for recognizing and interacting with the environment. These principles have led to an integrated approach to real-time 3D recognition and modeling, as follows: 1) It starts with a rapid but approximate characterization of the geometric configuration of the workspace by identifying global plane features. 2) It quickly recognizes known objects in the environment and replaces them with their database models based on 3D registration. 3) It models geometric details on the fly, adapting to the needs of the given task, based on a multi-resolution octree representation. SIFT features with their 3D position data, referred to here as stereo-sis SIFT, are used extensively, together with point clouds, for fast extraction of global plane features, fast recognition of objects, and fast registration of scenes, as well as for overcoming the incomplete and noisy nature of point clouds.

  • PDF
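
The global-plane-feature step described in the abstract can be illustrated with a generic RANSAC plane fit over a synthetic point cloud. This is a minimal sketch of the standard technique, not the authors' implementation; all names and data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_plane(pts):
    """Least-squares plane through the points: unit normal n and offset d with n.x + d = 0."""
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    n = vt[-1]                          # direction of least variance
    return n, -n @ centroid

def ransac_plane(points, iters=200, tol=0.02):
    """Find the dominant plane in a point cloud by RANSAC; returns (n, d, inlier_mask)."""
    best_mask, best = None, -1
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n, d = fit_plane(sample)
        mask = np.abs(points @ n + d) < tol
        if mask.sum() > best:
            best, best_mask = mask.sum(), mask
    n, d = fit_plane(points[best_mask])  # refit on all inliers
    return n, d, best_mask

# Synthetic scene: a floor plane z ~ 0 with noise, plus random clutter.
floor = np.column_stack([rng.uniform(-1, 1, 300), rng.uniform(-1, 1, 300),
                         rng.normal(0, 0.005, 300)])
clutter = rng.uniform(-1, 1, (60, 3))
n, d, mask = ransac_plane(np.vstack([floor, clutter]))
```

In a full pipeline such a dominant-plane fit would be run repeatedly, removing inliers each time, to obtain the set of global plane features of the workspace.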

Prototype Extraction for the Categorization of Lotus and Crane Patterns Using Qualitative and Quantitative Approaches (질적, 양적 접근방법에 의한 연화문, 사문의 분류원형 추출)

  • 장수경;김재숙
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.20 no.6
    • /
    • pp.1016-1026
    • /
    • 1996
  • The purpose of this study was to extract prototypes from features and concrete images of Lotus and Crane patterns. Both qualitative and quantitative methods were used. Qualitative information was obtained from in-depth interviews for pattern selection and feature extraction, and quantitative information from a quasi-experiment for pattern categorization. The subjects were 20 female design students and non-design students in Taejon. The results were summarized into a similarity matrix, which was interpreted by cluster analysis and multi-dimensional scaling (MDS). The patterns for the study were grouped into 8 clusters, and four dimensions were chosen for the MDS. The location of each pattern was visualized in a 2-dimensional space and the location of each cluster in a 3-dimensional space. The first dimension, "Lotus" vs. "Crane", referred to pattern type; the second dimension, "realistic" vs. "transformable", to transformability; the third dimension, "simple" vs. "complex", to the degree of simplification; and the fourth dimension, "continuous" vs. "discontinuous", to continuity. The results of the quantitative analysis could be summarized into a 3-level prototype hierarchy. In the first level, the patterns were divided clearly into two groups, Lotus and Crane, by pattern type. In the second level, each group was divided into two subgroups by continuity. In the third level, each subgroup was divided into four subgroups by transformability and degree of simplification. Four prototypes, the final targets of the present study, were extracted from the third level: the Stylized, Realistic, Decorative, and Abstract types.

  • PDF
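
The quantitative pipeline in this abstract (similarity matrix interpreted by MDS) can be sketched with classical (Torgerson) MDS in plain NumPy. This is a generic illustration of the technique on toy data, not the study's actual analysis.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (Torgerson) MDS: embed points in k dimensions from a distance matrix D."""
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, v = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]         # top-k eigenpairs
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Toy check: recover a 2-D configuration from its pairwise distances.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
emb = classical_mds(D, k=2)
D_rec = np.linalg.norm(emb[:, None] - emb[None, :], axis=-1)
```

On exact Euclidean distances the embedding reproduces the configuration up to rotation, so the reconstructed distance matrix matches the input.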

Face Recognition Using Local Statistics of Gradients and Correlations (그래디언트와 상관관계의 국부통계를 이용한 얼굴 인식)

  • Ju, Yingai;So, Hyun-Joo;Kim, Nam-Chul
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.3
    • /
    • pp.19-29
    • /
    • 2011
  • Many face recognition methods have been proposed to date; most use a 1-dimensional feature vector obtained by vectorizing the input image without a feature extraction process, or use the input image itself as a feature matrix. Methods using the raw image are known to perform poorly on databases with severe illumination changes. In this paper, we propose a face recognition method using local statistics of gradients and correlations, which are robust to illumination changes. BDIP (block difference of inverse probabilities) is chosen as a local statistic of gradients, and two types of BVLC (block variation of local correlation coefficients) are chosen as local statistics of correlations. When an input image enters the system, the method extracts the BDIP, BVLC1, and BVLC2 feature images, fuses them, obtains a feature matrix by the $(2D)^2$ PCA transformation, and classifies it against the training feature matrices with a nearest-neighbor classifier. Experimental results on four face databases (FERET, Weizmann, Yale B, and Yale) show that the proposed method is more reliable than six other methods under lighting and facial expression changes.
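
BDIP has a commonly cited closed form: for each block, sum (block maximum - pixel) / block maximum. A minimal NumPy sketch assuming that formulation (not this paper's code; the image is made up):

```python
import numpy as np

def bdip(img, bs=2):
    """BDIP per non-overlapping bs x bs block:
    sum over the block of (block max - pixel) / block max,
    a local gradient statistic invariant to illumination scaling."""
    h, w = (np.array(img.shape) // bs) * bs
    blocks = img[:h, :w].reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2)
    bmax = blocks.max(axis=(2, 3), keepdims=True)
    return ((bmax - blocks) / np.maximum(bmax, 1e-8)).sum(axis=(2, 3))

img = np.array([[10., 10.,  0., 40.],
                [10., 10., 20., 40.]])
f = bdip(img, bs=2)   # flat block -> 0; high-contrast block -> large value
```

Note that scaling the image intensity (e.g. a global illumination change) leaves the BDIP feature image unchanged, which is the property the abstract exploits.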

Extraction of the 3-Dimensional Information Using Relaxation Technique (Relaxation Technique을 이용한 3차원 정보의 추출)

  • Kim, Yeong-Gu;Cho, Dong-Uk;Choi, Byeong-Uk
    • Proceedings of the KIEE Conference
    • /
    • 1987.07b
    • /
    • pp.1077-1080
    • /
    • 1987
  • Images are 2-dimensional projections of 3-dimensional scenes, and many problems of scene analysis arise from the inherent depth ambiguity of a monocular 2-D image. Depth recovery is therefore a crucial problem in image understanding. This paper proposes a modified algorithm focused on accurate correspondence in stereo vision. The features we use are zero-crossing points, and a similarity measure with two property-evaluation functions is used to estimate the initial probabilities. We then introduce a relaxation technique for accurate and global correspondence.

  • PDF
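
A toy version of relaxation labeling for match probabilities, assuming a simple neighbour-support update (illustrative only; the paper's property-evaluation functions are not reproduced here):

```python
import numpy as np

def relax(p, iters=10, a=0.3):
    """p[i, d] is the probability that feature i has disparity d.
    Each iteration reinforces disparities supported by neighbouring
    features (a smoothness assumption), then renormalizes."""
    for _ in range(iters):
        support = np.zeros_like(p)
        support[1:] += p[:-1]          # support from the left neighbour
        support[:-1] += p[1:]          # support from the right neighbour
        q = p * (1 + a * support)      # reinforce supported hypotheses
        p = q / q.sum(axis=1, keepdims=True)
    return p

# Three features, two disparity hypotheses; the middle one starts ambiguous.
p0 = np.array([[0.9, 0.1],
               [0.5, 0.5],
               [0.9, 0.1]])
p = relax(p0)
```

After a few iterations the ambiguous middle feature is pulled toward the disparity its confident neighbours agree on, which is exactly the behaviour relaxation is introduced for.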

2D-MELPP: A two dimensional matrix exponential based extension of locality preserving projections for dimensional reduction

  • Xiong, Zixun;Wan, Minghua;Xue, Rui;Yang, Guowei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.9
    • /
    • pp.2991-3007
    • /
    • 2022
  • Two-dimensional locality preserving projections (2D-LPP) is an improved, 2D-image-based version of locality preserving projections (LPP) that solves the small sample size (SSS) problem LPP meets. It finds a low-dimensional manifold mapping that not only preserves local information but also detects the manifold embedded in the original data space, and it is simple and elegant. However, inspired by comparison experiments between two-dimensional linear discriminant analysis (2D-LDA) and linear discriminant analysis (LDA), which indicated that matrix-based methods do not always perform better even when training samples are limited, we surmise that 2D-LPP may meet the same limitation as 2D-LDA, and we propose a novel matrix exponential method, 2D-MELPP, to enhance the performance of 2D-LPP. 2D-MELPP is equivalent to employing distance diffusion mapping to transform the original images into a new space in which the margins between labels are broadened, which is beneficial for classification problems. However, the computational time complexity of 2D-MELPP is extremely high. In this paper, we replace some of the matrix multiplications with multiple multiplications to save memory and provide an efficient way of solving 2D-MELPP. We test it on public databases (a random 3D data set, ORL, the AR face database, and the PolyU Palmprint database) and compare it with other 2D methods (2D-LDA, 2D-LPP) and 1D methods (LPP and exponential locality preserving projections, ELPP), finding that it outperforms the others in recognition accuracy. We also compare different projection-vector dimensions and record the running time on ORL, the AR face database, and the PolyU Palmprint database. These experimental results show that the proposed algorithm performs better on three independent public databases.
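
The core trick of matrix-exponential embedding methods is that exp(S) of a symmetric scatter-like matrix is always positive definite even when S itself is singular (the SSS situation). A sketch, computing exp(S) by eigendecomposition; the matrix is a made-up stand-in for a scatter matrix:

```python
import numpy as np

def expm_sym(S):
    """Matrix exponential of a symmetric matrix via eigendecomposition:
    exp(S) = V diag(exp(w)) V^T. The result is always positive definite,
    which sidesteps the small-sample-size singularity."""
    w, v = np.linalg.eigh(S)
    return (v * np.exp(w)) @ v.T

# A singular (rank-deficient) scatter-like matrix ...
S = np.array([[1.0, 1.0],
              [1.0, 1.0]])
E = expm_sym(S)   # ... whose exponential is invertible
```

Zero eigenvalues of S map to eigenvalue exp(0) = 1 rather than 0, so E can be inverted where S cannot; that is what lets exponential variants of LPP/LDA operate when training samples are limited.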

Computer Vision Platform Design with MEAN Stack Basis (MEAN Stack 기반의 컴퓨터 비전 플랫폼 설계)

  • Hong, Seonhack;Cho, Kyungsoon;Yun, Jinseob
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.11 no.3
    • /
    • pp.1-9
    • /
    • 2015
  • In this paper, we implemented a computer vision platform based on the MEAN stack on a Raspberry Pi 2, an open-source platform. We experimented with face recognition and with temperature and humidity sensor data logging over WiFi on the Raspberry Pi 2, and we fabricated the platform enclosure directly with 3D printing. We used a face recognition algorithm built on the OpenCV library with the Haar-cascade feature-extraction machine learning algorithm, and extended the wireless communication capability with Bluetooth to interface with Android mobile devices. We thus implemented the vision platform functions for identifying faces scanned with the Pi camera while gathering temperature and humidity sensor data in an IoT environment, and built the platform body with 3D printing technology. We used MongoDB to improve the platform's performance, because working with MongoDB is more akin to working with objects in a programming language than with a conventional database. In future work, we will enhance the performance of the vision platform with cloud functionality.

3D Face Recognition using Projection Vectors for the Area in Contour Lines (등고선 영역의 투영 벡터를 이용한 3차원 얼굴 인식)

  • 이영학;심재창;이태홍
    • Journal of Korea Multimedia Society
    • /
    • v.6 no.2
    • /
    • pp.230-239
    • /
    • 2003
  • This paper presents a face recognition algorithm using projection vectors that reflect local features of the areas within contour lines. The outline shape of a face alone makes it difficult to distinguish people, because human face shapes are similar. Since 3-dimensional (3D) face images include depth information, different face shapes can be extracted from the nose tip using depth values of a face image. This paper deals with 3D face images, because extracting contour lines from 2-dimensional face images is difficult. After finding the nose tip, we extract two areas within the contour lines at given depth values from a 3D face image obtained by a 3D laser scanner. We then propose a projection-vector method to localize the characteristics of the image and reduce the number of index data in the database. Euclidean distance is used to compare the similarity between two images. The proposed algorithm achieves a recognition rate of 94.3% for face shapes using depth information.

  • PDF
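
Projection vectors of a contour-line area can be illustrated as row and column sums of a binary mask selected by a depth band. A toy sketch; the depth values and band limits are hypothetical, not from the paper:

```python
import numpy as np

def projection_vectors(region):
    """Row and column projection vectors (sums) of a binary region mask:
    a compact shape descriptor suitable for Euclidean-distance matching."""
    return region.sum(axis=1), region.sum(axis=0)

# Toy range image: depth measured from the nose tip; a contour-line area
# is the set of pixels lying between two depth contours.
depth = np.array([[5., 3., 5.],
                  [2., 0., 2.],
                  [5., 3., 5.]])
band = (depth >= 2) & (depth <= 3)           # area between two contour lines
rows, cols = projection_vectors(band.astype(int))
```

Concatenating the row and column vectors gives a short index vector per area, so comparing two faces reduces to a Euclidean distance between small vectors rather than between full images.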

ACCURACY ASSESSMENT BY REFINING THE RATIONAL POLYNOMIALS COEFFICIENTS(RPCs) OF IKONOS IMAGERY

  • LEE SEUNG-CHAN;JUNG HYUNG-SUP;WON JOONG-SUN
    • Proceedings of the KSRS Conference
    • /
    • 2004.10a
    • /
    • pp.344-346
    • /
    • 2004
  • IKONOS 1 m satellite imagery is particularly well suited for 3-D feature extraction and 1:5,000-scale topographic mapping. Because the image line and sample calculated from the given RPCs have errors of more than 11 m, the rational polynomial coefficients (RPCs) camera model, which is derived from the very complex IKONOS sensor model to describe the object-image geometry, must be refined by several ground control points (GCPs) before feature extraction and topographic mapping can be performed. This paper presents a quantitative evaluation of the geometric accuracy that can be achieved with IKONOS imagery by refining the offset and scale factors of the RPCs using several GCPs. If only two GCPs are available, the offsets and scale factors of image line and sample are updated. If more than three GCPs are available, the four parameters of the offsets and scale factors of image line and sample are refined first, and then the six parameters of the offsets and scale factors of latitude, longitude, and height are updated. Stereo images acquired by the IKONOS satellite were tested using six ground points. First, the RPC model was refined using 2 GCPs and 4 check points acquired by GPS; the RMSEs of the check points in the left and right images are 1.021 m and 1.447 m. We then updated the RPC model using 4 GCPs and 2 check points; the RMSE of geometric accuracy is 0.621 m in the left image and 0.816 m in the right image.

  • PDF
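
The image-side refinement described here (updating the offsets and scale factors of line and sample) amounts to a per-coordinate linear fit between RPC-predicted and GCP-measured image coordinates. A hedged sketch with made-up numbers, not the paper's data:

```python
import numpy as np

def refine_offset_scale(pred, meas):
    """Fit measured = scale * predicted + offset by least squares,
    independently for image line (col 0) and sample (col 1):
    the image-side offset/scale refinement of an RPC model."""
    params = []
    for j in range(2):
        A = np.column_stack([pred[:, j], np.ones(len(pred))])
        (scale, offset), *_ = np.linalg.lstsq(A, meas[:, j], rcond=None)
        params.append((scale, offset))
    return params

# Hypothetical data: RPC predictions biased by ~11 px and slightly scaled.
true_scale, true_offset = 1.001, 11.3
pred = np.array([[100.0, 200.0], [4000.0, 800.0], [8000.0, 3500.0]])
meas = true_scale * pred + true_offset
params = refine_offset_scale(pred, meas)   # recovers (scale, offset) per axis
```

With exactly two GCPs the fit is determined; with more, least squares averages the residuals, which is why the paper's accuracy improves from 2 to 4 GCPs.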

Parameter Extraction and Simulation in order to Manufacture Ready-made Ear Shell for CIC Type Hearing Aids (CIC형 보청기용 범용 이어쉘 제작을 위한 파라미터 추출 및 시뮬레이션)

  • U, Erdenebayar.;Jeon, Y.Y.;Park, G.S.;Song, Y.R.;Lee, S.M.
    • Journal of Biomedical Engineering Research
    • /
    • v.31 no.4
    • /
    • pp.321-327
    • /
    • 2010
  • Most ear shells for hearing aids are manufactured manually, which is one reason the cost of custom-made hearing aids can be high. It is therefore desirable to manufacture ready-made ear shells for ease of manufacturing and lower cost. In this study, we extract parameters for manufacturing ready-made ear shells for CIC-type hearing aids and run simulations reconstructing ear shells from the extracted parameters. For parameter extraction, we set up eleven parameters for the ready-made ear shell based on anatomical characteristics of the ear canal, and obtained their values from twenty-one impressions from subjects in their 20s and twelve impressions from subjects in their 60s using aperture-detection and feature-detection algorithms. Classifying the parameters by size, we grouped the ready-made ear shell parameters into three types for people in their 20s and two types for people in their 60s. Each ready-made ear shell was reconstructed in simulation from the derived parameters and evaluated for agreement with impressions not used in setting the parameters. To evaluate a ready-made ear shell, we calculate the volume ratio and the intersection volume between each impression and the ready-made ear shell, and the intersection ratio from the intersection volume and the ready-made ear shell volume. As a result, the volume ratio was about 70%, and the volume match ratio was also up to 70%, which means that the simulated ready-made ear shell matches the impressions significantly.
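
The volume-agreement measures used in the evaluation can be illustrated on boolean voxel (occupancy) grids. A toy sketch with hypothetical box shapes, not the study's scan data:

```python
import numpy as np

def intersection_ratios(shell, impression):
    """Agreement between a ready-made shell and an ear impression,
    both given as boolean occupancy grids: the shared volume divided
    by each shape's own volume."""
    inter = np.logical_and(shell, impression).sum()
    return inter / shell.sum(), inter / impression.sum()

# Hypothetical 3-D occupancy grids: a shell and a slightly larger impression.
shell = np.zeros((20, 20, 20), bool)
shell[5:15, 5:15, 5:15] = True
impression = np.zeros_like(shell)
impression[4:15, 5:15, 5:15] = True
r_shell, r_imp = intersection_ratios(shell, impression)
```

Here the shell sits entirely inside the impression, so its own ratio is 1.0 while the impression-side ratio reflects the unfilled extra volume, the kind of asymmetry the ~70% match figures in the abstract summarize.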

3D Object's shape and motion recovery using stereo image and Paraperspective Camera Model (스테레오 영상과 준원근 카메라 모델을 이용한 객체의 3차원 형태 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.10B no.2
    • /
    • pp.135-142
    • /
    • 2003
  • Robust extraction of a 3D object's features, shape, and global motion information from a 2D image sequence is described. The object's 21 feature points on a pyramid-type synthetic object are extracted automatically using a color transform technique. The extracted features are used to recover the 3D shape and global motion of the object using a stereo paraperspective camera model and a sequential SVD (Singular Value Decomposition) factorization method. An inherent depth-recovery error of the paraperspective camera model is removed by using stereo image analysis. A 3D synthetic object with 21 features at various positions was designed and tested to demonstrate the performance of the proposed algorithm by comparing the recovered shape and motion data with the measured values.
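
The SVD factorization step follows the classic Tomasi-Kanade pattern: stack the 2-D feature tracks into a measurement matrix W, which for a rigid scene has rank 3 and factors into motion times shape. A noise-free sketch of that generic idea (not the paper's stereo paraperspective variant; all dimensions and data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# 21 feature points tracked over 6 frames, stacked into a 2F x P matrix.
P, F = 21, 6
S = rng.normal(size=(3, P))          # 3-D shape (centred feature points)
M = rng.normal(size=(2 * F, 3))      # stacked 2x3 camera rows per frame
W = M @ S                            # noise-free measurement matrix, rank 3

U, s, Vt = np.linalg.svd(W, full_matrices=False)
M_hat = U[:, :3] * s[:3]             # recovered motion, up to an invertible
S_hat = Vt[:3]                       # 3x3 ambiguity A: (M A, A^{-1} S)
```

The factorization recovers shape and motion only up to a 3x3 linear ambiguity; in practice metric constraints on the camera rows (and, in this paper, the stereo analysis) resolve it.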