• Title/Summary/Keyword: 3D object


The Accuracy Analysis of 3D Image Generation by Digital Photogrammetry (수치사진측량 기반 3차원영상생성 정확도 분석)

  • 강준묵;엄대용;임영빈
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference / 2003.10a / pp.157-162 / 2003
  • A 3D image, which embodies a real object in the 3D space of a computer, enables various geometric analyses as well as visualization of complex 3D shapes by providing the sense of reality and depth that a 2D image cannot offer. By assigning the same physical properties to objects in the computer's imagined 3D space, a stronger human-computer interface is expected that lets users observe objects under realistic conditions and obtain various kinds of information. In this study, a formalized routine for 3D image generation based on digital photogrammetry was designed for more practical and highly reliable 3D image creation, and the system was implemented using object-oriented techniques that strengthen the user interface. In addition, discontinuity information about a rock slope, such as orientation, persistence, spacing, and aperture, was acquired from the 3D image.


Measurement of 3-D range-image of object diagonally moving against semiconductor laser light beam

  • Shinohara, Shigenobu;Ichioka, Yoshiyuki;Ikeda, Hiroaki;Yoshida, Hirofumi;Sumi, Masao
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 1995.10a / pp.299-302 / 1995
  • Recently, we proposed a 3-D range-image measuring system for a slowly moving object that mechanically scans a laser beam emitted from a self-mixing laser diode. In this paper, each object moves along a straight-line course set diagonally against the semiconductor laser beam, so that the shape and size parameters of each object can be recognized separately from the acquired 3-D range-image. A square mesa on a square plane was measured as a test object. The measured velocities were 4.44 mm/s and 4.63 mm/s, with errors between 0.37 mm/s and 0.56 mm/s. The thickness error of the mesa, obtained from the 3-D range-images of the standstill and moving object with a thickness of 17.0 mm, was 0.5 mm to 0.6 mm.


Design and Analysis of Smile Detection Applying the AAM Algorithm to SPFACS Video Objects Using a Multi-faceted Technique (다면기법 SPFACS 영상객체를 이용한 AAM 알고리즘 적용 미소검출 설계 분석)

  • Choi, Byungkwan
    • Journal of Korea Society of Digital Industry and Information Management / v.11 no.3 / pp.99-112 / 2015
  • Digital imaging technology has advanced beyond the limits of the multimedia industry through IT convergence and has developed into a complex industry; in the field of object recognition in particular, various smart-phone applications related to face recognition are being actively researched. Recently, face recognition has been evolving into intelligent object recognition through image recognition and detection technology, and face recognition applying 3D image object recognition to IP cameras has been actively studied. In this paper, we first examine the essential human factors, technical factors, and trends in human object recognition, and then study smile detection based on SPFACS (Smile Progress Facial Action Coding System), which recognizes objects from multiple facets. Study method: 1) an imaging system for analyzing 3D objects with the necessary human cognitive skills was designed; 2) a method for identifying face detection parameters and optimal measurement in 3D object recognition using the AAM algorithm was proposed; and 3) applying the results to face recognition, the effect of expression recognition was demonstrated by extracting feature points and detecting the person's teeth area.

Application of 3D Chain Code for Object Recognition and Analysis (객체인식과 분석을 위한 3D 체인코드의 적용)

  • Park, So-Young;Lee, Dong-Cheon
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.29 no.5 / pp.459-469 / 2011
  • Various factors determine object shape, such as size, slope and its direction, curvature, length, surface, angles between lines or planes, and the distribution of model key points. Most object description and recognition methods operate in 2D space, not in the 3D object space where the objects actually exist. In this study, a 3D chain code operator, essentially an extension of the 2D chain code, is proposed for object description and analysis in 3D space. Results show that sequences of 3D chain codes could be the basis of a top-down approach to object recognition and modeling. In addition, the proposed method could be applied to segment point cloud data such as LiDAR data.
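The extension from 2D to 3D chain codes can be illustrated with a small sketch. The paper's exact operator definition is not reproduced in the abstract, so the encoding below is a hypothetical minimal version: each step between neighboring 3D grid points maps to one of the 26 voxel-neighbor directions, the 3D analogue of Freeman's 8-direction 2D chain code.

```python
# The 26 unit moves between neighboring voxels (3D analogue of the
# 8 directions of the classic 2D chain code).
DIRECTIONS = [(dx, dy, dz)
              for dx in (-1, 0, 1)
              for dy in (-1, 0, 1)
              for dz in (-1, 0, 1)
              if (dx, dy, dz) != (0, 0, 0)]

def chain_code_3d(points):
    """Encode a polyline of integer 3D grid points as direction indices (0..25)."""
    def sign(v):
        return (v > 0) - (v < 0)
    codes = []
    for p, q in zip(points[:-1], points[1:]):
        step = tuple(sign(qc - pc) for qc, pc in zip(q, p))
        codes.append(DIRECTIONS.index(step))
    return codes

path = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]
print(chain_code_3d(path))  # [21, 15, 13]
```

The resulting code sequence is translation-invariant, which is what makes it usable as a compact shape signature for matching in a top-down recognition scheme.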

LSG (Local Surface Group): A Generalized Local Feature Structure for Model-Based 3D Object Recognition (LSG: 모델 기반 3차원 물체 인식을 위한 정형화된 국부적인 특징 구조)

  • Lee, Jun-Ho
    • The KIPS Transactions: Part B / v.8B no.5 / pp.573-578 / 2001
  • This research proposes a generalized local feature structure named LSG (Local Surface Group) for model-based 3D object recognition. An LSG consists of a surface and its immediately adjacent surfaces that are simultaneously visible from a given viewpoint. That is, an LSG is not a simple feature but a viewpoint-dependent feature structure containing several attributes such as surface type, color, area, radius, and the simultaneously visible adjacent surfaces. In addition, we have developed a new method based on Bayesian theory that computes how distinct an LSG is compared to other LSGs for the purpose of object recognition. We evaluated the proposed methods on an object database composed of twenty 3D objects. The experimental results show that the LSG and the Bayesian computing method can be successfully employed to achieve rapid 3D object recognition.
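The Bayesian distinctiveness idea can be sketched in outline. The paper's exact formulation is not given in the abstract; the hypothetical version below scores an observed LSG by how peaked the posterior over model objects is, given assumed per-object likelihoods and priors.

```python
def lsg_distinctiveness(likelihoods, priors):
    """Posterior over model objects given an observed LSG (Bayes' rule);
    a peaked posterior means the LSG strongly discriminates one object."""
    joint = {obj: likelihoods[obj] * priors[obj] for obj in priors}
    total = sum(joint.values())
    posterior = {obj: p / total for obj, p in joint.items()}
    # Distinctiveness: probability mass captured by the best-matching object.
    return max(posterior.values())

# An LSG observed almost exclusively on one of three equally likely
# objects is highly distinctive (values here are illustrative).
score = lsg_distinctiveness({"cup": 0.8, "box": 0.1, "can": 0.1},
                            {"cup": 1 / 3, "box": 1 / 3, "can": 1 / 3})
print(round(score, 2))  # 0.8
```

Ranking LSGs by such a score lets a recognizer try the most discriminative features first, which is one way the reported speed-up could arise.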


Research on 3-D Information Reconstruction by Applying a Composite Focus Measure Function to Time-series Images (복합초점함수의 시간열 영상적용을 통한 3차원정보복원에 관한 연구)

  • 김정길;한영준;한헌수
    • Proceedings of the Korean Society of Precision Engineering Conference / 2004.10a / pp.426-429 / 2004
  • To reconstruct the 3-D information of an irregular object, this paper proposes a new method that applies a composite focus measure to time-series images. The focus measure function must be selected carefully, because focus measures are easily affected by the working environment and the characteristics of the object. The proposed focus measure combines the variance measure, which is robust to noise, and the Laplacian measure, which performs well regardless of object shape. A time-series image that takes the object shape into account is also proposed so that the window of interest can be applied efficiently. The method first divides the image frame into windows; second, the composite focus measure function is applied to each window and the time-series image is constructed; finally, the 3-D information of the object is reconstructed from the time-series images considering the object shape. Experimental results show that the proposed method is suitable for 3-D reconstruction of irregular objects.
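The variance-plus-Laplacian combination can be sketched as follows. The blending weight and window handling are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def variance_measure(window):
    # Gray-level variance: high for in-focus texture, robust to noise.
    return float(np.var(window))

def laplacian_measure(window):
    # Sum of squared 4-neighbor Laplacian responses over the interior.
    lap = (np.roll(window, 1, 0) + np.roll(window, -1, 0)
           + np.roll(window, 1, 1) + np.roll(window, -1, 1)
           - 4.0 * window)
    return float(np.sum(lap[1:-1, 1:-1] ** 2))

def composite_focus(window, alpha=0.5):
    # alpha is an assumed blending weight, not taken from the paper.
    return alpha * variance_measure(window) + (1 - alpha) * laplacian_measure(window)

def best_focus_index(stack, y, x, size=8):
    """Depth from focus: index of the frame where the window at (y, x)
    is sharpest under the composite measure."""
    scores = [composite_focus(frame[y:y + size, x:x + size]) for frame in stack]
    return int(np.argmax(scores))
```

Running `best_focus_index` over every window position yields a per-window depth index, which is the raw material for the 3-D reconstruction step.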


Efficient Generation of Computer-generated Hologram Patterns Using Spatially Redundant Data on a 3D Object and the Novel Look-up Table Method

  • Kim, Seung-Cheol;Kim, Eun-Soo
    • Journal of Information Display / v.10 no.1 / pp.6-15 / 2009
  • In this paper, a new approach is proposed for the efficient generation of computer-generated holograms (CGHs) using the spatially redundant data of a 3D object and the novel look-up table (N-LUT) method. First, N-point principal fringe patterns (PFPs) are pre-calculated from the 1-point PFP of the N-LUT. Second, spatially redundant data of the 3D object are extracted and re-grouped into an N-point redundancy map using run-length encoding (RLE). Then, CGH patterns are generated using the spatial redundancy map and the N-LUT method, and finally the generated hologram patterns are reconstructed. In this approach, the number of object points involved in calculating the CGH patterns is dramatically reduced, which increases the computational speed. Experiments with a test 3D object were carried out and the results were compared with those of conventional methods.
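The redundancy-extraction step can be illustrated with plain run-length encoding. This is a generic sketch of RLE as named in the abstract, not the paper's N-point redundancy map format.

```python
def run_length_encode(values):
    """Group consecutive equal values into (value, run_length) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [tuple(r) for r in runs]

# A row of object-point intensities: 8 points collapse into 4 runs, so
# a run of length N can be served by one N-point PFP look-up instead of
# N separate 1-point look-ups.
row = [3, 3, 3, 7, 7, 1, 3, 3]
print(run_length_encode(row))  # [(3, 3), (7, 2), (1, 1), (3, 2)]
```

The speed-up reported in the paper comes from exactly this collapse: the cost of CGH accumulation scales with the number of runs rather than the number of raw object points.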

Separation of the Occluding Object from the Stack of 3D Objects Using a 2D Image (겹쳐진 3차원 물체의 2차원 영상에서 가리는 물체의 구분기법)

  • 송필재;홍민철;한헌수
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.2 / pp.11-22 / 2004
  • Conventional algorithms for separating overlapped objects are mostly based on template matching, so their application domain is restricted to 2D objects and the processing time grows with the number of templates (object models). To solve these problems, this paper proposes a new approach that separates the occluding object from a stack of 3D objects using the relationships between surfaces, without any prior information about the objects. The proposed algorithm treats an object as a combination of surfaces, each bounded by a set of edges. Overlap of 3D objects appears as overlap of surfaces, and thus as crossings of edges, in the 2D image. Based on this observation, the types of edge crossings are classified, from which the types of overlap between 3D objects can be identified. The relationships between surfaces are represented by an attributed graph in which the types of overlap are encoded as relation values. Using these relation values, the surfaces belonging to the same object are discerned and the occluding object on top of the stack can be separated. The performance of the proposed algorithm has been demonstrated in experiments using overlapped images of 3D objects selected from standard industrial parts.
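The final separation step can be sketched with a toy attributed graph. The relation values and grouping rule below are hypothetical stand-ins for the paper's edge-crossing classification.

```python
def topmost_object(surfaces, occludes, same_object):
    """Surfaces of the topmost object: those never occluded by another
    surface, merged with surfaces linked to them by same-object relations.

    occludes: directed pairs (a, b) meaning surface a occludes surface b.
    same_object: undirected pairs of surfaces judged to share one object.
    """
    occluded = {b for _, b in occludes}
    top = {s for s in surfaces if s not in occluded}
    changed = True
    while changed:
        changed = False
        for a, b in same_object:
            if (a in top) != (b in top):
                top |= {a, b}
                changed = True
    return top

surfaces = {"S1", "S2", "S3", "S4"}
occludes = [("S1", "S3"), ("S2", "S4")]      # S1 and S2 lie on top
same_object = [("S1", "S2"), ("S3", "S4")]   # two two-surface objects
print(sorted(topmost_object(surfaces, occludes, same_object)))  # ['S1', 'S2']
```

Once the topmost surfaces are identified and removed, the same procedure can be repeated on the remaining graph to peel the stack object by object.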

Attention based Feature-Fusion Network for 3D Object Detection (3차원 객체 탐지를 위한 어텐션 기반 특징 융합 네트워크)

  • Sang-Hyun Ryoo;Dae-Yeol Kang;Seung-Jun Hwang;Sung-Jun Park;Joong-Hwan Baek
    • Journal of Advanced Navigation Technology / v.27 no.2 / pp.190-196 / 2023
  • Recently, with the development of LiDAR technology, which can measure the distance to objects, interest in LiDAR-based 3D object detection networks has been growing. Previous networks produce inaccurate localization results because spatial information is lost during voxelization and downsampling. In this study, we propose an attention-based fusion method and a camera-LiDAR fusion system to obtain high-level features and high positional accuracy. First, by introducing an attention mechanism into Voxel-RCNN, a grid-based 3D object detection network, multi-scale sparse 3D convolution features are effectively fused to improve 3D detection performance. Additionally, we propose a late-fusion mechanism that combines the outputs of the 3D and 2D object detection networks to remove false positives. Comparative experiments with existing algorithms were performed on the KITTI dataset, which is widely used in the field of autonomous driving. The proposed method improved performance in both 2D object detection on BEV and 3D object detection; in particular, precision improved by about 0.54% for the car moderate class compared to Voxel-RCNN.
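The late-fusion step can be sketched as a simple cross-check between the two detectors. The box format and the IoU threshold below are assumptions for illustration, not the paper's settings.

```python
def iou(a, b):
    """IoU of axis-aligned boxes (x1, y1, x2, y2) in the image plane."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def late_fusion(projected_3d, boxes_2d, thresh=0.5):
    """Keep a 3D detection only if its image-plane projection overlaps
    some 2D detection; unmatched 3D boxes are dropped as false positives."""
    return [b for b in projected_3d
            if any(iou(b, b2) >= thresh for b2 in boxes_2d)]

kept = late_fusion([(0, 0, 10, 10), (50, 50, 60, 60)], [(1, 1, 11, 11)])
print(kept)  # [(0, 0, 10, 10)]
```

Because each detector must independently confirm a detection, this kind of fusion trades a little recall for the precision gain the abstract reports.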

CAD-Based 3-D Object Recognition Using the Robust Stereo Vision and Hough Transform (강건 스테레오 비전과 허프 변환을 이용한 캐드 기반 삼차원 물체인식)

  • 송인호;정성종
    • Proceedings of the Korean Society of Precision Engineering Conference / 1997.10a / pp.500-503 / 1997
  • In this paper, a method for recognizing 3-D objects using the 3-D Hough transform and robust stereo vision is studied. A 3-D object is recognized in two steps: a modeling step and a matching step. In the modeling step, features of the object are extracted by analyzing its IGES file. In the matching step, values derived from the sensed image are compared, in the 3-D Hough transform domain, with those of the IGES model at assumed locations and orientations. Since the 3-D Hough transform domain of the input image is used directly, the sensitivity to noise and the high computational complexity can be significantly alleviated. Cost efficiency is also improved by using robust stereo vision to obtain the depth-map image needed for the 3-D Hough transform. To verify the proposed method, a real telephone model was recognized, and the resulting location and orientation of the model are presented.
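A plane-voting version of the 3-D Hough transform can be sketched as follows. The parameterization (spherical normal angles plus distance) and the bin counts are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def hough_planes(points, n_theta=18, n_phi=18, n_rho=20, rho_max=10.0):
    """Vote each 3D point into a (theta, phi, rho) accumulator, where a
    plane is n(theta, phi) . (x, y, z) = rho with unit normal
    n = (sin t cos p, sin t sin p, cos t)."""
    acc = np.zeros((n_theta, n_phi, n_rho), dtype=int)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    phis = np.linspace(0.0, np.pi, n_phi, endpoint=False)
    for x, y, z in points:
        for i, t in enumerate(thetas):
            for j, p in enumerate(phis):
                rho = (x * np.sin(t) * np.cos(p)
                       + y * np.sin(t) * np.sin(p)
                       + z * np.cos(t))
                k = int(rho / rho_max * n_rho)
                if 0 <= k < n_rho:
                    acc[i, j, k] += 1
    return acc

# Nine depth-map points on the plane z = 5 all vote into the theta = 0
# bins (normal pointing straight up) at rho = 5, giving a 9-vote peak.
pts = [(x, y, 5.0) for x in range(3) for y in range(3)]
acc = hough_planes(pts)
print(int(acc[0, 0, 10]), int(acc.max()))  # 9 9
```

In the recognition setting, peaks in such an accumulator built from the stereo depth map are matched against the accumulator predicted from the IGES model at candidate poses.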
