• Title/Summary/Keyword: 3D object view


A Shadow Mapping Technique Separating Static and Dynamic Objects in Games using Multiple Render Targets (다중 렌더 타겟을 사용하여 정적 및 동적 오브젝트를 분리한 게임용 그림자 매핑 기법)

  • Lee, Dongryul;Kim, Youngsik
    • Journal of Korea Game Society / v.15 no.5 / pp.99-108 / 2015
  • To identify the locations of objects and improve realism in 3D games, shadow mapping is widely used; it computes the depth values of vertices as seen from the light position. Since the depth values in the shadow map are calculated in world coordinates, the depth values of static objects do not need to be updated. In this paper, (1) to improve rendering speed, multiple render targets are used to separate the depth values of static objects, which are stored only once, from those of dynamic objects, which are stored every frame. And (2) to improve shadow quality in quarter-view 3D games, the light is positioned close to the dynamic objects and moved along with the camera each frame. The effectiveness of the proposed method is verified by experiments with different static and dynamic object configurations in a 3D game.
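The static/dynamic separation described above can be sketched in plain NumPy, with toy depth buffers standing in for the light-space render targets; the buffer layout and the merge-by-minimum step are illustrative assumptions, not the paper's actual GPU implementation:

```python
import numpy as np

def depth_pass(objects, size=(4, 4)):
    """Rasterize a toy light-space depth buffer: each 'object' is (row, col, depth)."""
    buf = np.full(size, np.inf)
    for r, c, d in objects:
        buf[r, c] = min(buf[r, c], d)
    return buf

# Static scene geometry: its light-space depth never changes,
# so this pass runs once (first render target).
static_objects = [(0, 0, 2.0), (1, 1, 3.0)]
static_depth = depth_pass(static_objects)

def shadow_map(dynamic_objects):
    """Per-frame pass: only dynamic depths are re-rendered (second
    render target), then merged with the cached static depths."""
    dynamic_depth = depth_pass(dynamic_objects)
    return np.minimum(static_depth, dynamic_depth)

frame1 = shadow_map([(0, 0, 1.0)])   # dynamic object in front of a static one
assert frame1[0, 0] == 1.0           # dynamic wins where it is closer to the light
assert frame1[1, 1] == 3.0           # static depth reused without re-rendering
```

Only the dynamic pass runs per frame; the static buffer is reused, which is where the rendering-speed gain comes from.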

EFFICIENT MULTIVIEW VIDEO CODING BY OBJECT SEGMENTATION

  • Boonthep, Narasak;Chiracharit, Werapon;Chamnongthai, Kosin;Ho, Yo-Sung
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.294-297 / 2009
  • Multi-view video consists of a set of video sequences captured from multiple viewpoints or view directions of the same scene. It contains an extremely large amount of data, plus extra information to be stored or transmitted to the user. This paper exploits inter-view correlations among video objects and the background to reduce prediction complexity while achieving high coding efficiency in multi-view video coding. Our proposed algorithm is based on an object-based segmentation scheme that utilizes video object information obtained from the coded base view. This information helps predict disparity vectors and motion vectors in the enhancement views by employing object registration, which leads to a high-compression, low-complexity coding scheme for the enhancement views. Experimental results show a PSNR gain of 2.5.3 dB compared to the simulcast.
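The core prediction step can be illustrated with a minimal NumPy sketch: a segmented object from the coded base view is shifted by a per-object disparity to predict the enhancement view. The one-pixel "object" and horizontal-only disparity are simplifying assumptions for illustration:

```python
import numpy as np

def predict_enhancement_view(base_view, object_mask, disparity):
    """Predict an enhancement view by shifting base-view object pixels
    horizontally by a per-object disparity (background copied as-is)."""
    pred = np.where(object_mask, 0, base_view)   # background kept, object cleared
    rows, cols = np.nonzero(object_mask)
    shifted = cols + disparity
    valid = (shifted >= 0) & (shifted < base_view.shape[1])
    pred[rows[valid], shifted[valid]] = base_view[rows[valid], cols[valid]]
    return pred

base = np.zeros((3, 6), dtype=int)
base[1, 1] = 9                      # one segmented object pixel in the base view
mask = base > 0
pred = predict_enhancement_view(base, mask, disparity=2)
assert pred[1, 3] == 9              # object reappears displaced by the disparity
assert pred[1, 1] == 0              # original object location is vacated
```

Registering whole objects this way replaces a per-block disparity search, which is where the complexity reduction comes from.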


3-D Object Recognition Using Surface Normal Images (면 법선 영상을 이용한 3차원 물체 인식)

  • 박종훈;장태규;최종수
    • Journal of the Korean Institute of Telematics and Electronics B / v.28B no.9 / pp.727-738 / 1991
  • This paper presents a new approach that explicitly uses surface normal images (SNIs) in the 3-D object model description and recognition procedure. The surface normal images of an object are defined as the projected images obtained from view angles facing normal to each surface of the object. The proposed approach can significantly alleviate the difficulty of obtaining correspondence between models and scene objects by explicitly providing a transform for the matching. The proposed approach is applied to the construction of a model-based 3-D object recognition system for five selected objects. Synthetic images are used in the experiments to show the operation of the overall recognition system.
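A surface-normal-facing view implies a rotation that takes the viewing axis onto the surface normal. A minimal sketch of that transform, using the standard Rodrigues construction (the abstract does not specify how the transform is computed, so this is an assumed formulation):

```python
import numpy as np

def rotation_to_normal(n):
    """Rodrigues rotation taking the viewing axis z = (0, 0, 1) onto the unit
    surface normal n, i.e. the pose used to render a normal-facing view."""
    z = np.array([0.0, 0.0, 1.0])
    n = n / np.linalg.norm(n)
    v = np.cross(z, n)                  # rotation axis (unnormalized)
    c = np.dot(z, n)                    # cosine of the rotation angle
    if np.isclose(c, -1.0):             # opposite direction: 180-degree flip
        return np.diag([1.0, -1.0, -1.0])
    K = np.array([[0, -v[2], v[1]],
                  [v[2], 0, -v[0]],
                  [-v[1], v[0], 0]])    # cross-product matrix of v
    return np.eye(3) + K + K @ K / (1.0 + c)

n = np.array([1.0, 1.0, 1.0])
R = rotation_to_normal(n)
assert np.allclose(R @ np.array([0, 0, 1.0]), n / np.linalg.norm(n))
assert np.isclose(np.linalg.det(R), 1.0)   # a proper rotation
```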


3D Object Recognition and Accurate Pose Calculation Using a Neural Network (인공신경망을 이용한 삼차원 물체의 인식과 정확한 자세계산)

  • Park, Gang
    • Transactions of the Korean Society of Mechanical Engineers A / v.23 no.11 s.170 / pp.1929-1939 / 1999
  • This paper presents a neural network approach, named PRONET, to 3D object recognition and pose calculation. 3D objects are represented using a set of centroidal profile patterns that describe the boundaries of 2D views taken from evenly distributed viewpoints. PRONET consists of a training stage and an execution stage. In the training stage, a three-layer feed-forward neural network is trained on the centroidal profile patterns using an error back-propagation method. In the execution stage, by matching the centroidal profile pattern of the given image against the best-fitting centroidal profile pattern using the neural network, the identity and approximate orientation of the real object, such as a workpiece in an arbitrary pose, are obtained. In the matching procedure, line-to-line correspondences between image features and 3D CAD features are also obtained. An iterative model-posing method then calculates a more exact pose of the object based on the initial orientation and the correspondences.
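The boundary signature used here can be sketched as follows: distances from the centroid to the boundary, resampled at evenly spaced angles. The sample count and interpolation are illustrative assumptions rather than the paper's exact parameters:

```python
import numpy as np

def centroidal_profile(boundary, n_samples=36):
    """Centroid-to-boundary distance sampled at evenly spaced angles -- the
    kind of boundary signature a network like PRONET would be trained on."""
    boundary = np.asarray(boundary, dtype=float)
    centroid = boundary.mean(axis=0)
    rel = boundary - centroid
    angles = np.arctan2(rel[:, 1], rel[:, 0])
    dists = np.hypot(rel[:, 0], rel[:, 1])
    order = np.argsort(angles)
    sample_angles = np.linspace(-np.pi, np.pi, n_samples, endpoint=False)
    # period=2*pi makes the interpolation wrap around the angular axis
    return np.interp(sample_angles, angles[order], dists[order],
                     period=2 * np.pi)

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
profile = centroidal_profile(circle)
assert profile.shape == (36,)
assert np.allclose(profile, 1.0, atol=1e-3)   # a unit circle gives a flat profile
```

The profile is translation-invariant by construction, and a rotation of the view only cyclically shifts it, which keeps the matching problem tractable.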

Multi-View Image Deblurring for 3D Shape Reconstruction (3차원 형상 복원을 위한 다중시점 영상 디블러링)

  • Choi, Ho Yeol;Park, In Kyu
    • Journal of the Institute of Electronics and Information Engineers / v.49 no.11 / pp.47-55 / 2012
  • In this paper, we propose a method to reconstruct an accurate 3D object shape from multi-view images degraded by motion blur. In multi-view deblurring, more precise PSF estimation is possible by exploiting the geometric relationship between the views. The proposed method first estimates initial 2D PSFs from the individual input images. Then 3D PSF candidates are projected onto the input images one by one to find the candidate most consistent with the initial 2D PSFs. A 3D PSF consists of a direction and a density, and represents the 3D trajectory of the object's motion. To restore the 3D shape from the multi-view images, the method computes a similarity map and estimates the positions of 3D points. The estimated 3D PSF is again projected onto the input images, and its projections replace the initial 2D PSFs, which are finally used in image deblurring. Experimental results show that the quality of both image deblurring and 3D reconstruction improves significantly compared with deblurring each input image independently.
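The candidate-selection step can be sketched as scoring each 3D motion direction by how well its per-view projections agree with the initially estimated 2D PSF directions. The orthographic projection, the direction-only PSF, and the noiseless 2D estimates are all simplifying assumptions:

```python
import numpy as np

def project_direction(R, d3):
    """Project a 3D motion direction into a camera's image plane (first two
    rows of the camera rotation; orthographic simplification)."""
    d2 = (R @ d3)[:2]
    n = np.linalg.norm(d2)
    return d2 / n if n > 0 else d2

def best_3d_psf(candidates, rotations, initial_2d):
    """Pick the 3D PSF direction whose per-view projections agree most
    with the initially estimated 2D PSF directions."""
    def score(d3):
        return sum(abs(np.dot(project_direction(R, d3), p))
                   for R, p in zip(rotations, initial_2d))
    return max(candidates, key=score)

views = [np.eye(3),
         np.array([[0, 0, 1.0], [0, 1, 0], [-1, 0, 0]])]  # second camera yawed 90 deg
true_dir = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
initial = [project_direction(R, true_dir) for R in views]  # noiseless 2D estimates
cands = [true_dir, np.array([0, 1.0, 0]), np.array([1.0, 0, 0])]
best = best_3d_psf(cands, views, initial)
assert np.allclose(best, true_dir)
```

A single view cannot disambiguate motion along its optical axis; combining views is what makes the 3D PSF recoverable.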

PointNet and RandLA-Net Algorithms for Object Detection Using 3D Point Clouds (3차원 포인트 클라우드 데이터를 활용한 객체 탐지 기법인 PointNet과 RandLA-Net)

  • Lee, Dong-Kun;Ji, Seung-Hwan;Park, Bon-Yeong
    • Journal of the Society of Naval Architects of Korea / v.59 no.5 / pp.330-337 / 2022
  • Research on object detection algorithms using 2D data has already reached the level of commercialization and is being applied in various manufacturing industries. Although object detection using 2D data has practical advantages, there are technical limitations to accurate data generation and analysis: since 2D data has only two axes and no sense of depth, ambiguity arises in practical applications. Advanced countries such as the United States are leading 3D data collection and research using 3D laser scanners. Existing processing and detection algorithms such as ICP and RANSAC show high accuracy but suffer from processing-speed problems on large-scale point cloud data. In this study, PointNet, a representative technique for detecting objects in widely used 3D point cloud data, is analyzed and described. RandLA-Net, which overcomes the limitations of PointNet in performance and object-prediction accuracy, is then described, and a review of detection technology using point cloud data is conducted.
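PointNet's central idea is a shared per-point MLP followed by a symmetric max-pool, which makes the global feature invariant to the ordering of the unordered point set. A minimal NumPy sketch of that idea (random weights; the layer sizes are arbitrary, not PointNet's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 16)), rng.normal(size=(16, 32))

def pointnet_global_feature(points):
    """PointNet core idea: a shared per-point MLP followed by a symmetric
    max-pool, so the feature is invariant to point ordering."""
    h = np.maximum(points @ W1, 0)      # shared MLP layer 1 (ReLU)
    h = np.maximum(h @ W2, 0)           # shared MLP layer 2 (ReLU)
    return h.max(axis=0)                # symmetric function over the point set

cloud = rng.normal(size=(128, 3))       # an unordered 3D point cloud
shuffled = cloud[rng.permutation(128)]
assert np.allclose(pointnet_global_feature(cloud),
                   pointnet_global_feature(shuffled))
```

RandLA-Net keeps this shared-MLP backbone but adds random point sampling with local feature aggregation, which is what lets it scale to large clouds where PointNet struggles.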

A Study on the 3D Video Generation Technique using Multi-view and Depth Camera (다시점 카메라 및 depth 카메라를 이용한 3 차원 비디오 생성 기술 연구)

  • Um, Gi-Mun;Chang, Eun-Young;Hur, Nam-Ho;Lee, Soo-In
    • Proceedings of the IEEK Conference / 2005.11a / pp.549-552 / 2005
  • This paper presents a 3D video content generation technique and system that uses multi-view images and a depth map. The proposed system uses 3-view video and depth inputs from a 3-view video camera and a depth camera for 3D video content production. Each camera is calibrated using Tsai's calibration method, and its parameters are used to rectify the multi-view images for multi-view stereo matching. The depth and disparity maps for the center view are obtained from both the depth camera and the multi-view stereo matching technique. These two maps are fused to obtain a more reliable depth map. The obtained depth map is not only used to insert a virtual object into the scene based on the depth key, but also to synthesize virtual-viewpoint images. Some preliminary test results are given to show the functionality of the proposed technique.
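The fusion step can be sketched as a per-pixel confidence-weighted average of the two depth sources; the confidence maps and weighting scheme below are illustrative assumptions, since the abstract does not specify the fusion rule:

```python
import numpy as np

def fuse_depth(depth_cam, depth_stereo, conf_cam, conf_stereo):
    """Confidence-weighted fusion of a depth-camera map and a stereo-matched
    map into one more reliable center-view depth map."""
    w = conf_cam + conf_stereo
    fused = (conf_cam * depth_cam + conf_stereo * depth_stereo) / np.maximum(w, 1e-9)
    return np.where(w > 0, fused, 0.0)   # no source at all -> unknown (0)

cam = np.array([[2.0, 0.0], [2.0, 2.0]])      # 0.0 = no depth-camera return
stereo = np.array([[2.2, 3.0], [2.0, 2.4]])
c_cam = np.array([[1.0, 0.0], [1.0, 3.0]])    # zero confidence in the sensor hole
c_st = np.array([[1.0, 1.0], [1.0, 1.0]])
fused = fuse_depth(cam, stereo, c_cam, c_st)
assert fused[0, 1] == 3.0                      # stereo fills the sensor hole
assert np.isclose(fused[0, 0], 2.1)            # equal confidence -> average
```

Each source covers the other's failure mode: the active sensor is dense but range-limited and noisy at edges, while stereo fails on textureless regions.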


Object Recognition-based Global Localization for Mobile Robots (이동로봇의 물체인식 기반 전역적 자기위치 추정)

  • Park, Soon-Yong;Park, Mignon;Park, Sung-Kee
    • The Journal of Korea Robotics Society / v.3 no.1 / pp.33-41 / 2008
  • Based on object recognition technology, we present a new global localization method for robot navigation. To do so, we model an indoor environment using the following visual cues from a stereo camera: view-based image features for object recognition and their 3D positions for object pose estimation. We also use the depth information at the horizontal centerline of the image, where the optical axis passes through, which is similar to the data of a 2D laser range finder. We can therefore build a hybrid local node for a topological map, composed of a metric map of the indoor environment and an object location map. Based on such modeling, we suggest a coarse-to-fine strategy for estimating the global localization of a mobile robot. The coarse pose is obtained by means of object recognition and SVD-based least-squares fitting, and its refined pose is then estimated with a particle filtering algorithm. With real experiments, we show that the proposed method can be an effective vision-based global localization algorithm.
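The SVD-based least-squares fitting for the coarse pose is, in the standard formulation, the Kabsch rigid alignment between recognized model points and their observed 3D positions. A sketch under that assumption:

```python
import numpy as np

def svd_pose_fit(model_pts, scene_pts):
    """SVD-based least-squares rigid fit (Kabsch): recover the pose (R, t)
    mapping recognized model points onto their observed 3D positions."""
    mc, sc = model_pts.mean(axis=0), scene_pts.mean(axis=0)
    H = (model_pts - mc).T @ (scene_pts - sc)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = sc - R @ mc
    return R, t

rng = np.random.default_rng(1)
model = rng.normal(size=(10, 3))
angle = 0.7
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
scene = model @ R_true.T + t_true                   # noiseless observations
R, t = svd_pose_fit(model, scene)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

This closed-form coarse pose then seeds the particle filter, which handles observation noise and ambiguity.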


An Approach to 3D Object Localization Based on Monocular Vision

  • Jung, Sung-Hoon;Jang, Do-Won;Kim, Min-Hwan
    • Journal of Korea Multimedia Society / v.11 no.12 / pp.1658-1667 / 2008
  • Reconstruction of 3D objects from a single-view image is generally an ill-posed problem because of projection distortion. A monocular-vision-based 3D object localization method is proposed in this paper, which approximates an object on the ground by a simple bounding solid and works automatically without any prior information about the object. A spherical or cylindrical object, determined by a circularity measure, is approximated by a bounding cylinder, while other general free-shaped objects are approximated by a bounding box or a bounding cylinder as appropriate. For a general object, its silhouette on the ground is first computed by back-projecting its projected image in the image plane onto the ground plane, and a base rectangle on the ground is then determined using the intuition that the parts of the object touching the ground should appear at the lower part of the silhouette. The base rectangle is adjusted and extended until the bounding box derived from it encloses the general object sufficiently. The height of the bounding box is likewise determined so as to enclose the general object. When the general object looks round, a bounding cylinder that minimally encloses the bounding box is selected instead. A bounding solid can be used to localize a 3D object on the ground and to roughly estimate its volume. The usefulness of our approach is demonstrated with experimental results on real images, and its limitations are discussed.
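The back-projection onto the ground plane is a ray-plane intersection. A minimal sketch, assuming a level camera at known height with the image y axis pointing down (the paper's actual camera model may differ):

```python
import numpy as np

def backproject_to_ground(u, v, K, cam_height):
    """Back-project pixel (u, v) onto the ground plane for a level camera at
    height cam_height (image y axis pointing down, ground below the camera).
    Silhouette points of the object's base are recovered this way."""
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing-ray direction
    lam = cam_height / d[1]                        # hit the plane y = cam_height
    return lam * d                                 # 3D point on the ground

K = np.array([[500.0, 0, 320],
              [0, 500.0, 240],
              [0, 0, 1]])                          # assumed intrinsics
p = backproject_to_ground(320, 490, K, cam_height=1.5)
assert np.isclose(p[1], 1.5)            # the point lies on the ground plane
assert np.isclose(p[2], 3.0)            # depth = f * h / (v - cy) = 500*1.5/250
```

Pixels above the horizon (v ≤ cy here) give no ground intersection, which is consistent with only the lower part of the silhouette touching the ground.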


Experimental results on Shape Reconstruction of Underwater Object Using Imaging Sonar (영상 소나를 이용한 수중 물체 외형 복원에 관한 기초 실험)

  • Lee, Yeongjun;Kim, Taejin;Choi, Jinwoo;Choi, Hyun-Taek
    • Journal of the Institute of Electronics and Information Engineers / v.53 no.10 / pp.116-122 / 2016
  • This paper proposes a practical object shape reconstruction method using an underwater imaging sonar. To reconstruct the object shape, three methods are used. First, the vertical field of view of the imaging sonar is narrowed to reduce the uncertainty of the estimated 3D position; a wide vertical field of view leads to incorrect estimates of the 3D position of the underwater object. Second, simple noise filtering and range detection methods are designed to extract distances from the sonar image. Last, a low-pass filter is adopted to estimate the probability of voxel occupancy. To demonstrate the proposed methods, object shape reconstruction was performed for three sample objects in a basin, and the results are discussed.
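The occupancy low-pass filter can be sketched as a first-order exponential update per voxel, so that a single noisy sonar return barely moves the estimate while repeated consistent returns drive it toward occupied. The filter gain below is an illustrative assumption:

```python
def update_occupancy(prob, hit, alpha=0.2):
    """First-order low-pass filter on a voxel's occupancy probability:
    each sonar return nudges the estimate toward 1 (hit) or 0 (miss)."""
    return (1 - alpha) * prob + alpha * (1.0 if hit else 0.0)

p = 0.0
for _ in range(30):                  # repeated consistent hits
    p = update_occupancy(p, hit=True)
assert p > 0.95                      # converges toward occupied

p_noise = update_occupancy(0.0, hit=True)   # a single spurious return
assert p_noise <= 0.2                       # barely moves the estimate
```

Thresholding the filtered probability then yields the reconstructed voxel surface while suppressing the speckle noise typical of sonar imagery.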