• Title/Summary/Keyword: 3D Object Recognition

Search results: 268

3D Shape Descriptor for Segmenting Point Cloud Data

  • Park, So Young;Yoo, Eun Jin;Lee, Dong-Cheon;Lee, Yong Wook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.6_2 / pp.643-651 / 2012
  • Object recognition is a high-level task and one of the difficult and challenging problems in computer vision. Digital photogrammetry based on the computer vision paradigm began to emerge in the mid-1980s. However, the ultimate goal of digital photogrammetry - intelligent and autonomous surface reconstruction - has not yet been achieved. Object recognition requires a robust shape description of objects, but most shape descriptors are designed for 2D image data. Such descriptors therefore have to be extended to handle 3D data such as LiDAR (Light Detection and Ranging) data obtained from an ALS (Airborne Laser Scanner) system. This paper introduces an extension of the chain code to 3D object space with a hierarchical approach for segmenting point cloud data. The experiments demonstrate the effectiveness and robustness of the proposed method for shape description and point cloud segmentation. The geometric characteristics of various roof types are described well and will eventually serve as the basis for object modeling. Segmentation accuracy on the simulated data was evaluated by measuring the coordinates of the corners on the segmented patch boundaries; the overall RMSE (Root Mean Square Error) is equivalent to the average distance between points, i.e., the GSD (Ground Sampling Distance).
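
The 3D chain code itself is not detailed in this summary. As a rough illustration of the general idea, the hedged sketch below assigns one of the 26 voxel-neighbor directions to each step along a path of adjacent grid points; the direction table and function are hypothetical, not the paper's formulation.

```python
import numpy as np

# Illustrative 3D "chain code": each step between neighboring voxels is
# encoded as an index into the 26 possible unit directions.
DIRECTIONS = np.array([(dx, dy, dz)
                       for dx in (-1, 0, 1)
                       for dy in (-1, 0, 1)
                       for dz in (-1, 0, 1)
                       if (dx, dy, dz) != (0, 0, 0)])  # 26 unit steps

def chain_code_3d(path):
    """Encode a sequence of adjacent voxel coordinates as direction indices."""
    path = np.asarray(path)
    codes = []
    for step in np.diff(path, axis=0):
        # pick the direction whose unit step matches this move
        idx = int(np.argmin(np.abs(DIRECTIONS - np.sign(step)).sum(axis=1)))
        codes.append(idx)
    return codes

# Example: a short path climbing along a (made-up) roof edge
print(chain_code_3d([(0, 0, 0), (1, 0, 0), (2, 1, 1), (3, 2, 1)]))
```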

Cylindrical Object Recognition using Sensor Data Fusion (센서데이터 융합을 이용한 원주형 물체인식)

  • Kim, Dong-Gi;Yun, Gwang-Ik;Yun, Ji-Seop;Gang, Lee-Seok
    • Journal of Institute of Control, Robotics and Systems / v.7 no.8 / pp.656-663 / 2001
  • This paper presents a sensor fusion method for recognizing a cylindrical object using a CCD camera, a laser slit beam, and ultrasonic sensors mounted on a pan/tilt device. For object recognition with the vision sensor, an active light source projects a stripe pattern of light onto the object surface. The 2D image data are transformed into 3D data using the geometry between the camera and the laser slit beam. The ultrasonic sensor uses an ultrasonic transducer array mounted horizontally on the pan/tilt device. The time of flight is estimated by finding the maximum correlation between the received ultrasonic pulse and a set of stored templates, i.e., a matched filter. The distance of flight is calculated by multiplying the time of flight by the speed of sound, and the maximum amplitude of the filtered signal is used to determine the face angle to the object. To determine the position and the radius of cylindrical objects, statistical sensor fusion is applied. Experimental results show that the fused data increase the reliability of object recognition. (A minimal sketch of matched-filter time-of-flight estimation follows this entry.)

  • PDF
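
As referenced in the abstract above, the time of flight is found from the peak of a cross-correlation with stored templates (a matched filter) and converted to distance via the speed of sound. The sketch below is a minimal NumPy illustration under assumed values (sampling rate, pulse shape, in-air speed of sound); it is not the paper's implementation.

```python
import numpy as np

FS = 1_000_000          # sampling rate in Hz (assumed)
SPEED_OF_SOUND = 343.0  # m/s in air (assumed)

def time_of_flight(received, template, fs=FS):
    """Matched filter: the lag of maximum cross-correlation gives the echo delay."""
    corr = np.correlate(received, template, mode="full")
    lag = int(np.argmax(corr)) - (len(template) - 1)  # samples until echo starts
    return max(lag, 0) / fs, float(np.max(corr))

# Toy example: a windowed 40 kHz burst embedded in noise after a 2 ms delay
t = np.arange(200) / FS
template = np.sin(2 * np.pi * 40_000 * t) * np.hanning(t.size)
received = np.zeros(5000)
received[2000:2200] += template            # echo begins at sample 2000 (2 ms)
received += 0.05 * np.random.randn(5000)

tof, peak = time_of_flight(received, template)
distance = tof * SPEED_OF_SOUND            # distance of flight, as in the abstract
print(f"TOF = {tof * 1e3:.2f} ms, distance = {distance:.3f} m, peak = {peak:.1f}")
```

The peak amplitude returned alongside the delay is the quantity the abstract says is used to judge the face angle to the object.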

Determination of a holdsite of a curved object using range data

  • Yang, Woo-Suk;Jang, Jong-Whan
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1992.10b / pp.399-404 / 1992
  • Curved 3D objects represented by range data contain a large amount of information compared with planar objects, but they lack distinct features to match against object models. This makes it difficult to represent and identify a general 3D curved object. This paper introduces a new approach to representing and finding a holdsite of general 3D curved objects using range data. We develop a three-dimensional generalized Hough transform that can also be applied to general 3D curved object recognition and that reduces both computation time and storage requirements. Our approach makes use of the relative geometric differences between particular points on the object surface and model points that are prespecified arbitrarily in a task-dependent manner. (An illustrative voting sketch follows this entry.)

  • PDF
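
The paper's 3D generalized Hough transform is only summarized above. The hedged sketch below shows the basic R-table voting idea for the simplified translation-only case; the function name, bin size, and toy data are assumptions.

```python
import numpy as np
from collections import Counter

def ght_translation(model_pts, scene_pts, ref_point, bin_size=0.05):
    """Vote for the translation that best aligns model points to scene points.

    Each model point stores its offset to a reference point (the 'R-table');
    each scene point then votes for candidate reference-point locations.
    """
    offsets = ref_point - model_pts               # R-table: offsets to the reference
    votes = Counter()
    for p in scene_pts:
        for off in offsets:
            cand = tuple(np.round((p + off) / bin_size).astype(int))
            votes[cand] += 1
    best_bin, count = votes.most_common(1)[0]
    return np.array(best_bin) * bin_size, count

# Toy example: the scene is the model shifted by (1.0, 2.0, 0.5)
model = np.random.rand(50, 3)
scene = model + np.array([1.0, 2.0, 0.5])
ref = model.mean(axis=0)
est, n_votes = ght_translation(model, scene, ref)
print("estimated reference location:", est, "with", n_votes, "votes")
```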

Image Processing-based Object Recognition Approach for Automatic Operation of Cranes

  • Zhou, Ying;Guo, Hongling;Ma, Ling;Zhang, Zhitian
    • International Conference on Construction Engineering and Project Management / 2020.12a / pp.399-408 / 2020
  • The construction industry suffers from an aging workforce, frequent accidents, and low productivity. With the rapid development of information technologies in recent years, automated construction, and automated cranes in particular, is regarded as a promising solution to these problems and is attracting more and more attention. In practice, however, limited by the complexity and dynamics of the construction environment, time-consuming and error-prone manual inspection is still the only way to recognize the target object for crane operation. To solve this problem, an image-processing-based automated object recognition approach is proposed in this paper, which fuses Convolutional-Neural-Network (CNN)-based and traditional object detection. The search object is first extracted from the background by a trained Faster R-CNN. Then, through a series of image-processing steps including Canny edge detection, the Hough transform, and endpoint clustering analysis, the vertices of the search object are determined so that it can be located uniquely in 3D space. Finally, the features (e.g., centroid coordinates, size, and color) of the search object are extracted for further recognition. The approach presented in this paper was implemented in OpenCV, and the prototype was written in Microsoft Visual C++. The proposed approach shows great potential for the automatic operation of cranes; further research and more extensive field experiments will follow. (A sketch of the classical image-processing stage follows this entry.)

  • PDF
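
The abstract above describes a pipeline of Faster R-CNN detection followed by Canny edge detection, the Hough transform, and endpoint clustering. The sketch below covers only the classical stage on an already-cropped region using standard OpenCV calls; the thresholds, the use of k-means as a stand-in for the clustering step, and the file name are assumptions.

```python
import cv2
import numpy as np

def find_line_endpoints(cropped_bgr):
    """Canny edges + probabilistic Hough lines; returns candidate endpoints."""
    gray = cv2.cvtColor(cropped_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)                      # assumed thresholds
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=40, minLineLength=30, maxLineGap=5)
    points = []
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            points.extend([(x1, y1), (x2, y2)])
    return points

def cluster_endpoints(points, k=4):
    """Rough stand-in for the endpoint clustering step: k-means on endpoints."""
    pts = np.float32(points)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1.0)
    _, _, centers = cv2.kmeans(pts, k, None, criteria, 10, cv2.KMEANS_PP_CENTERS)
    return centers   # approximate object vertices in image coordinates

# Usage (file name hypothetical; the crop would come from the Faster R-CNN box):
# crop = cv2.imread("crop.png")
# vertices = cluster_endpoints(find_line_endpoints(crop), k=4)
```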

3-D Underwater Object Recognition Using Ultrasonic Transducer Fabricated with Porous Piezoelectric Resonator (다공질 압전 초음파 트랜스튜서를 이용한 3차원 수중 물체인식)

  • 조현철;이수호;박정학;사공건
    • Proceedings of the Korean Institute of Electrical and Electronic Material Engineers Conference / 1996.11a / pp.316-319 / 1996
  • In this study, the characteristics of an ultrasonic transducer fabricated with a porous piezoelectric resonator are investigated, and 3-D underwater object recognition using the self-made ultrasonic transducer and an SOFM (Self-Organizing Feature Map) neural network is presented. The self-made transducer satisfied the requirements of an ultrasonic transducer in water, and the recognition rates for the training data and the testing data were 100% and 95.3%, respectively. The experimental results show that an ultrasonic transducer fabricated with a porous piezoelectric resonator could be applied to a sonar system. (A generic SOFM sketch follows this entry.)

  • PDF
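
The SOFM configuration used in the paper is not given in this summary. The sketch below is a generic, minimal 1-D self-organizing feature map trained on fabricated echo-feature vectors, shown only to illustrate the mapping idea; none of the dimensions or parameters come from the paper.

```python
import numpy as np

class SOFM:
    """Minimal 1-D self-organizing feature map (illustrative only)."""
    def __init__(self, n_nodes, n_features, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.random((n_nodes, n_features))

    def train(self, data, epochs=50, lr=0.5, radius=2):
        for epoch in range(epochs):
            a = lr * (1 - epoch / epochs)           # decaying learning rate
            for x in data:
                bmu = int(np.argmin(np.linalg.norm(self.w - x, axis=1)))
                for j in range(len(self.w)):
                    if abs(j - bmu) <= radius:      # simple fixed neighborhood
                        self.w[j] += a * (x - self.w[j])

    def map(self, x):
        return int(np.argmin(np.linalg.norm(self.w - x, axis=1)))

# Toy usage with fabricated 8-dimensional echo features for two object classes
rng = np.random.default_rng(1)
class_a = rng.normal(0.2, 0.05, (20, 8))
class_b = rng.normal(0.8, 0.05, (20, 8))
som = SOFM(n_nodes=6, n_features=8)
som.train(np.vstack([class_a, class_b]))
print(som.map(class_a[0]), som.map(class_b[0]))   # the two classes map to different nodes
```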

An Automatic Camera Tracking System for Video Surveillance

  • Lee, Sang-Hwa;Sharma, Siddharth;Lin, Sang-Lin;Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2010.07a / pp.42-45 / 2010
  • This paper proposes an intelligent video surveillance system for tracking human objects. The proposed system integrates object extraction, human object recognition, face detection, and camera control. First, objects in the video signal are extracted using background subtraction. Then, each object region is examined to decide whether it is human. For this recognition, the region-based shape descriptor angular radial transform (ART) in MPEG-7 is used to learn and train the shapes of human bodies. When the object is judged to be human, or otherwise something to be investigated, the face region is detected. Finally, the face or object region is tracked in the video, and a pan/tilt/zoom (PTZ) controllable camera follows the moving object using its motion information. The simulation is performed with real CCTV cameras and their communication protocol. According to the experiments, the proposed system is able to track a moving object (human) automatically, not only in the image domain but also in real 3-D space. The proposed system reduces the need for human supervisors and improves surveillance efficiency with computer vision techniques. (A background-subtraction sketch follows this entry.)

  • PDF
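
The object-extraction step above relies on background subtraction. The sketch below illustrates that step with OpenCV's MOG2 subtractor, which is an assumption since the exact subtraction method is not stated in this summary; the resulting regions would then go to the shape-based human classifier and the PTZ control.

```python
import cv2

def extract_moving_objects(video_path, min_area=500):
    """Yield bounding boxes of moving regions found by background subtraction."""
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # clean up speckle noise before extracting regions
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) >= min_area]
        yield frame, boxes     # boxes go on to the human/shape classifier
    cap.release()

# Usage (path is hypothetical):
# for frame, boxes in extract_moving_objects("cctv_feed.mp4"):
#     pass  # classify each box (e.g., with an ART descriptor), then steer the PTZ camera
```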

A method of improving the quality of 3D images acquired from RGB-depth camera (깊이 영상 카메라로부터 획득된 3D 영상의 품질 향상 방법)

  • Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.5 / pp.637-644 / 2021
  • In the fields of computer vision, robotics, and augmented reality, 3D object detection and recognition in 3D space have become increasingly important. In particular, since RGB images and depth images can be acquired in real time through image sensors such as the Microsoft Kinect, studies on object detection, tracking, and recognition have changed considerably. In this paper, we propose a method to improve the quality of 3D reconstructed images by processing images acquired from RGB-Depth cameras in a multi-view camera system. The method removes noise outside the object by applying a mask obtained from the color image, and applies a combined filtering operation based on the differences in depth between pixels inside the object. The experimental results confirm that the proposed method effectively removes noise and improves the quality of the 3D reconstructed image.
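
The two operations described above (masking out depth noise outside the object using the color image, then filtering depth inside it) might look roughly like the sketch below; the HSV threshold, morphological cleanup, and median filter are assumptions standing in for the paper's unspecified operations.

```python
import cv2
import numpy as np

def clean_depth(color_bgr, depth_raw,
                lower_hsv=(35, 60, 60), upper_hsv=(85, 255, 255)):
    """Mask depth outside the object via the color image, then smooth inside it."""
    hsv = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2HSV)
    # object mask from the color image (assumed HSV range)
    mask = cv2.inRange(hsv, np.array(lower_hsv, np.uint8), np.array(upper_hsv, np.uint8))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

    depth = depth_raw.astype(np.float32)
    depth[mask == 0] = 0                  # remove noise outside the object
    # smooth depth differences between neighboring pixels inside the object
    smoothed = cv2.medianBlur(depth, 5)
    smoothed[mask == 0] = 0
    return smoothed, mask

# Usage (file names are hypothetical):
# color = cv2.imread("view0_color.png")
# depth = cv2.imread("view0_depth.png", cv2.IMREAD_UNCHANGED)
# clean, mask = clean_depth(color, depth)
```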

Characteristics of 3-D Underwater Object Recognition Independent of Translation Using Ultrasonic Sensor Fabricated with Porous Piezoelectric Resonator (다공질 압전소자로 제작한 초음파 센서의 물체변위에 무관한 3차원 수중 물체인식 특성)

  • 조현철;이기성;박정학;이수호;사공건
    • Electrical & Electronic Materials / v.10 no.9 / pp.916-921 / 1997
  • In this study, the characteristics of translation-independent 3-D underwater object recognition using a self-made ultrasonic sensor fabricated with a porous piezoelectric resonator are presented. The sensor satisfied the requirements of an ultrasonic sensor. Using the self-made sensor and an SCL (Simple Competitive Learning) neural network, the recognition rates for the training data and the testing data are 97.45% and 91.25%, respectively. According to the experimental results, it is believed that the self-made ultrasonic sensor can be applied as the sensor of a SONAR system.

  • PDF

The Detection of the Stereo Viewing Points for 3D Object Recognition (2차원 물체 인식을 위한 입체 시각 포인트의 추출)

  • Seo, Choon-Weon;Won, Young-Jin
    • Proceedings of the IEEK Conference / 2007.07a / pp.451-452 / 2007
  • A new feature is needed for a more stable recognition system, ideally one that, like human eyes, exploits two viewpoints. This paper therefore proposes a new feature obtained with a stereo camera. Different features are extracted from the left and right input images by the stereo vision system, which is expected to benefit 3-D recognition.

  • PDF

3D Data Dimension Reduction for Efficient Feature Extraction in Posture Recognition (포즈 인식에서 효율적 특징 추출을 위한 3차원 데이터의 차원 축소)

  • Kyoung, Dong-Wuk;Lee, Yun-Li;Jung, Kee-Chul
    • The KIPS Transactions: Part B / v.15B no.5 / pp.435-448 / 2008
  • 3D posture recognition is a way to overcome the limitations of 2D posture recognition, and many studies have been carried out on 3D posture recognition using 3D data. The 3D data consist of a massive number of surface points that are rich in information, but it is difficult to extract the features important for posture recognition, and processing them consumes a great deal of time. In this paper, we introduce a dimension-reduction method that transforms the 3D surface points of an object into a 2D data representation in order to overcome the feature-extraction and time-complexity issues of 3D posture recognition. For better feature extraction and matching, a cylindrical boundary is introduced into the meshless parameterization; it offers fast dimension reduction, and the output is applicable for recognition. The proposed approach is applied to hand and human posture recognition to verify the efficiency of the feature extraction.
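
The meshless parameterization with a cylindrical boundary is only summarized above. A crude way to picture the dimension reduction is to unwrap each 3D surface point onto a cylinder around an assumed vertical axis, yielding a 2D (angle, height) map; the sketch below does exactly that and is not the paper's algorithm.

```python
import numpy as np

def unwrap_to_cylinder(points, bins_theta=64, bins_z=64):
    """Project 3D surface points onto a 2D (angle, height) grid around the z-axis."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    x, y, z = (pts - center).T
    theta = np.arctan2(y, x)                        # angle around the assumed axis
    r = np.hypot(x, y)                              # radius, stored as the pixel value
    ti = ((theta + np.pi) / (2 * np.pi) * (bins_theta - 1)).astype(int)
    zi = ((z - z.min()) / (np.ptp(z) + 1e-9) * (bins_z - 1)).astype(int)
    image = np.zeros((bins_z, bins_theta))
    np.maximum.at(image, (zi, ti), r)               # keep the outermost point per cell
    return image                                    # 2D representation for feature extraction

# Toy usage: points sampled from a unit cylinder should give a nearly flat map
angles = np.random.rand(2000) * 2 * np.pi
heights = np.random.rand(2000)
cloud = np.c_[np.cos(angles), np.sin(angles), heights]
print(unwrap_to_cylinder(cloud).shape)   # (64, 64)
```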