• Title/Summary/Keyword: object-based 3-D model

Search Results: 334

Video Augmentation by Image-based Rendering

  • Seo, Yong-Duek; Kim, Seung-Jin; Sang, Hong-Ki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1998.06b / pp.147-153 / 1998
  • This paper provides a method for video augmentation using image interpolation. In computer graphics and augmented reality, 3D information about a model object is usually needed to generate 2D views of the model, which are then inserted into or overlaid on environment views or real video frames. Our method, however, requires no three-dimensional model; it needs only images of the model object taken at a few locations to render views consistent with the motion of the video camera, which is computed by an SFM algorithm from point matches under a weak-perspective (scaled-orthographic) projection model. A linear view interpolation algorithm, rather than 3D ray tracing, is therefore applied to synthesize views of the model at viewpoints different from those of the model views. To obtain novel views that agree with the camera motion, the camera coordinate system is embedded into the model coordinate system at initialization time, based on the 3D information recovered from the video images and the model views, respectively. During the sequence, motion parameters estimated from the video frames are used to compute interpolation parameters, and the rendered model views are overlaid on the corresponding video frames. Experimental results for real video frames and model views are given, followed by a discussion of the method's limitations and directions for future research.

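The linear view interpolation described in the abstract above can be illustrated with a minimal sketch: given two reference views of the model object with known point correspondences and an interpolation parameter derived from the estimated camera motion, intermediate point positions are obtained by linear blending. The function name and the example coordinates are illustrative assumptions, not the authors' implementation, which additionally warps and overlays full images.

```python
import numpy as np

def interpolate_view_points(pts_a, pts_b, t):
    """Linearly interpolate matched 2D feature positions between two
    reference model views (reasonable under weak-perspective assumptions).

    pts_a, pts_b : (N, 2) arrays of corresponding image points
    t            : interpolation parameter in [0, 1] derived from the
                   estimated camera motion (0 -> view A, 1 -> view B)
    """
    pts_a = np.asarray(pts_a, dtype=float)
    pts_b = np.asarray(pts_b, dtype=float)
    return (1.0 - t) * pts_a + t * pts_b

# Hypothetical usage: two matched feature points and a motion-derived weight.
view_a = np.array([[100.0, 120.0], [180.0, 90.0]])
view_b = np.array([[110.0, 118.0], [195.0, 96.0]])
print(interpolate_view_points(view_a, view_b, t=0.4))
```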

Similarity Search in 3D Object using Minimum Bounding Cover (3D 오브젝트의 외피를 이용한 유사도 검색)

  • Kim, A-Mi; Song, Ju-Hwan; Gwun, Ou-Bong
    • Proceedings of the IEEK Conference / 2008.06a / pp.759-760 / 2008
  • In this paper, we propose a feature-based 3D model retrieval system. 3D models are represented as triangle meshes. A first, simple feature vector is calculated from the minimum bounding cover (hull). After finding the meshes intersected by the hull, we compute the curvature of those meshes, and these curvatures are used as the model descriptor.

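As a rough illustration of a curvature-based mesh descriptor of the kind described above, the sketch below estimates per-vertex discrete Gaussian curvature with the angle-deficit approximation and summarizes it as a histogram that can be compared between models. The angle-deficit formula and the histogram distance are generic choices, not necessarily the descriptor the authors compute from the bounding cover.

```python
import numpy as np

def angle_deficit_curvature(vertices, faces):
    """Discrete Gaussian curvature per vertex: 2*pi minus the sum of the
    incident triangle angles (angle-deficit approximation)."""
    deficit = np.full(len(vertices), 2.0 * np.pi)
    for tri in faces:
        p = vertices[np.asarray(tri)]
        for i in range(3):
            u = p[(i + 1) % 3] - p[i]
            v = p[(i + 2) % 3] - p[i]
            c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)
            deficit[tri[i]] -= np.arccos(np.clip(c, -1.0, 1.0))
    return deficit

def curvature_descriptor(vertices, faces, bins=16):
    """Histogram of per-vertex curvatures, normalized so that meshes with
    different vertex counts remain comparable."""
    hist, _ = np.histogram(angle_deficit_curvature(vertices, faces),
                           bins=bins, range=(-np.pi, np.pi), density=True)
    return hist

def descriptor_distance(d1, d2):
    # Smaller distance = more similar shapes under this descriptor.
    return float(np.linalg.norm(d1 - d2))
```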

Pointwise CNN for 3D Object Classification on Point Cloud

  • Song, Wei; Liu, Zishu; Tian, Yifei; Fong, Simon
    • Journal of Information Processing Systems / v.17 no.4 / pp.787-800 / 2021
  • Three-dimensional (3D) object classification on point clouds is widely used in 3D modeling, face recognition, and robotic missions. However, processing raw point clouds directly is problematic for traditional convolutional networks because of the irregular format of point cloud data. This paper proposes a pointwise convolutional neural network (CNN) structure that can process point cloud data directly, without preprocessing. First, a 2D convolutional layer is introduced to perceive the coordinate information of each point. Then, multiple 2D convolutional layers and a global max-pooling layer are applied to extract global features. Finally, fully connected layers predict the class labels of objects from the extracted features. We evaluated the proposed pointwise CNN structure on the ModelNet10 dataset, where it obtained higher accuracy than existing methods. The experiments on ModelNet10 also show that differences in the number of points per cloud do not significantly influence the proposed pointwise CNN structure.
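
A minimal PyTorch sketch of the pointwise architecture described above: each point's (x, y, z) coordinates pass through 1x1 2D convolutions so every point is processed independently, a global max pooling collects an order-invariant feature, and fully connected layers predict class scores. The layer widths and other hyperparameters are assumptions for illustration, not the published configuration.

```python
import torch
import torch.nn as nn

class PointwiseCNN(nn.Module):
    """Pointwise CNN sketch: the point cloud is shaped (B, 3, N, 1) so that
    1x1 Conv2d layers act on every point independently."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.point_features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=1), nn.ReLU(),
            nn.Conv2d(128, 1024, kernel_size=1), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Linear(1024, 256), nn.ReLU(),
            nn.Linear(256, num_classes),
        )

    def forward(self, points):                      # points: (B, N, 3)
        x = points.transpose(1, 2).unsqueeze(-1)    # -> (B, 3, N, 1)
        x = self.point_features(x)                  # -> (B, 1024, N, 1)
        x = torch.amax(x, dim=2).squeeze(-1)        # global max pool over points
        return self.classifier(x)                   # -> (B, num_classes)

# Hypothetical usage on a batch of 8 clouds with 1024 points each.
logits = PointwiseCNN(num_classes=10)(torch.randn(8, 1024, 3))
```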

A Robust Object Detection and Tracking Method using RGB-D Model (RGB-D 모델을 이용한 강건한 객체 탐지 및 추적 방법)

  • Park, Seohee; Chun, Junchul
    • Journal of Internet Computing and Services / v.18 no.4 / pp.61-67 / 2017
  • Recently, CCTV has been combined with fields such as big data, artificial intelligence, and image analysis to detect various abnormal behaviors and to detect and analyze the overall situation of objects such as people, and image analysis research for this intelligent video surveillance function is progressing actively. However, CCTV images based only on 2D information generally suffer from limitations such as object misrecognition due to the lack of topological information. This problem can be solved by adding depth information of the object, obtained with two cameras, to the image. In this paper, we perform background modeling using the Mixture of Gaussians technique and detect moving objects by segmenting the foreground from the modeled background. To perform depth-based segmentation on top of the RGB-based segmentation results, stereo depth maps are generated from the two cameras. The RGB-segmented region is then set as the domain for extracting depth information, and depth-based segmentation is performed within that domain. To detect the center point of the robustly segmented object and track its direction of movement, we apply the CAMShift technique, a basic object-tracking method. The experiments demonstrate the efficiency of the proposed object detection and tracking method based on the RGB-D model.
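
The detection-and-tracking pipeline above can be sketched with OpenCV primitives: Mixture-of-Gaussians background subtraction yields a foreground region, and CAMShift then tracks the detected window from frame to frame. The depth-based refinement step is only indicated by a comment, since it depends on the stereo setup, and the histogram and termination parameters are illustrative assumptions rather than the authors' settings.

```python
import cv2

def track_moving_object(video_path):
    cap = cv2.VideoCapture(video_path)
    backsub = cv2.createBackgroundSubtractorMOG2()   # Mixture-of-Gaussians model
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    window, hist = None, None

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        fg = backsub.apply(frame)                    # foreground mask (moving pixels)
        # (A depth-map-based refinement of the foreground region would go here.)
        if window is None:
            # OpenCV 4.x: findContours returns (contours, hierarchy).
            contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if not contours:
                continue
            x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
            window = (x, y, w, h)
            hsv = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
            hist = cv2.calcHist([hsv], [0], None, [16], [0, 180])
            cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
        else:
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
            _, window = cv2.CamShift(backproj, window, term)  # track the object
    cap.release()
```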

Development and Evaluation of System for 3D Visualization Model of Biological Objects (3차원 생물체 가시화 모델 구축장치 개발 및 성능평가)

  • Hwang, H.; Choi, T. H.; Kim, C. H.; Lee, S. H.
    • Journal of Biosystems Engineering / v.26 no.6 / pp.545-552 / 2001
  • Nondestructive methods such as ultrasonic and magnetic resonance imaging systems have many advantages but are still very expensive; they also do not provide exact color information and may miss some details. If a biological object may be destroyed to obtain interior and exterior information, a 3D visualization model built from a series of sliced sectional images gives more useful information at relatively low cost. In this paper, a PC-based automatic 3D visualization system is presented. The system is composed of three modules. The first is the handling and image acquisition module: the handling part feeds and slices a cylindrical block of paraffin holding the biological object, which is kept solid by cooling during handling, while the image acquisition part consecutively captures sectional images of the object embedded in the paraffin. The second is the system control and interface module, which controls the actuators for feeding, slicing, and image capturing. The last is the image processing and visualization module, which processes the series of acquired sectional images and generates a 3D volumetric model. To verify the conditions for uniform slicing, the normal forces on the cutting edge at various cutting angles were measured with a strain gauge, and the amount of sliced chips was weighed and analyzed. Once the 3D model was constructed on the computer, the user could manipulate it with transformations such as translation, rotation, and scaling, including arbitrary sectional views.

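A rough sketch of the final reconstruction step described above: the consecutively captured sectional images are stacked into a volumetric array, and an isosurface is extracted with marching cubes so the resulting model can be rotated, scaled, or resliced. The file pattern, threshold level, and the use of scikit-image are assumptions for illustration, not the authors' software.

```python
import glob
import numpy as np
from skimage import io, measure

def build_volume(pattern="slices/slice_*.png"):
    """Stack consecutively captured sectional images into a 3D volume."""
    slices = [io.imread(p, as_gray=True) for p in sorted(glob.glob(pattern))]
    return np.stack(slices, axis=0)          # shape: (num_slices, H, W)

def extract_surface(volume, level=0.5):
    """Extract a triangle mesh of the object boundary with marching cubes."""
    verts, faces, normals, _ = measure.marching_cubes(volume, level=level)
    return verts, faces, normals

# Hypothetical usage with a folder of numbered slice images.
volume = build_volume()
verts, faces, _ = extract_surface(volume)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```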

Region-Based 3D Image Registration Technique for TKR (전슬관절치환술을 위한 3차원 영역기반 영상정합 기술)

  • Key, J.H.; Seo, D.C.; Park, H.S.; Youn, I.C.; Lee, M.K.; Yoo, S.K.; Choi, K.W.
    • Journal of Biomedical Engineering Research / v.27 no.6 / pp.392-401 / 2006
  • Image-guided surgery (IGS) systems, which have been widely explored in medical engineering, can provide a surgeon with objective information for the operation process, such as decision making and surgical planning. This information is displayed through 3D images acquired pre-operatively from imaging modalities such as CT and MRI. Image registration is necessary to construct an IGS system; it means matching the 3D model and the object operated on by the surgeon in a common frame. The major registration techniques in IGS systems rely on recognizing fiducial markers placed on the object. However, this approach has been criticized because its invasive protocol of inserting fiducial markers into the patient's bone causes additional trauma, and because many markers are made of metal and generate noise when the 2D slice images are acquired. This paper therefore develops a shape-based registration technique to overcome the limitations of fiducial-marker-based IGS systems. The Iterative Closest Point (ICP) algorithm was used to match corresponding points, and a quaternion-based closed-form solution for rotation and translation was applied to minimize the transformation cost function. We assumed that this algorithm would be used in a total knee replacement (TKR) operation. Accordingly, we developed a region-based 3D registration technique based on anatomical landmarks and evaluated the registration algorithm on a femur model. The region-based algorithm was found to improve the accuracy of 3D registration.
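
The registration core described above, ICP correspondences combined with a quaternion-based closed-form estimate of rotation and translation, can be sketched as follows. This is a generic textbook formulation (Horn-style closed form, nearest-neighbor matching with a k-d tree), not the authors' region-based variant with anatomical landmarks.

```python
import numpy as np
from scipy.spatial import cKDTree

def closed_form_rigid(P, Q):
    """Quaternion-based closed-form estimate of R, t minimizing
    ||R P + t - Q||^2 for paired point sets P, Q of shape (N, 3)."""
    mp, mq = P.mean(axis=0), Q.mean(axis=0)
    S = (P - mp).T @ (Q - mq)                    # cross-covariance matrix
    d = np.array([S[1, 2] - S[2, 1], S[2, 0] - S[0, 2], S[0, 1] - S[1, 0]])
    N = np.zeros((4, 4))
    N[0, 0] = np.trace(S)
    N[0, 1:] = N[1:, 0] = d
    N[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
    q = np.linalg.eigh(N)[1][:, -1]              # eigenvector of largest eigenvalue
    w, x, y, z = q
    R = np.array([[w*w + x*x - y*y - z*z, 2*(x*y - w*z),         2*(x*z + w*y)],
                  [2*(x*y + w*z),         w*w - x*x + y*y - z*z, 2*(y*z - w*x)],
                  [2*(x*z - w*y),         2*(y*z + w*x),         w*w - x*x - y*y + z*z]])
    t = mq - R @ mp
    return R, t

def icp(source, target, iterations=30):
    """Iteratively match closest points and re-estimate the rigid transform."""
    tree, src = cKDTree(target), source.copy()
    for _ in range(iterations):
        _, idx = tree.query(src)                 # closest-point correspondences
        R, t = closed_form_rigid(src, target[idx])
        src = src @ R.T + t
    return src
```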

Fast Digital Hologram Generation Using True 3D Object (실물에 대한 디지털 홀로그램 고속 생성)

  • Kang, Hoon-Jong; Lee, Gang-Sung; Lee, Seung-Hyun
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.11B / pp.1283-1288 / 2009
  • In general, a 3D computer graphics model is used as the input for generating a digital hologram because the 3D information of an object can easily be extracted from such a model. The 3D information of a real scene can instead be extracted with a depth camera: a point cloud corresponding to the real scene is obtained from the captured image pair, a gray texture and a depth map. The extracted point cloud is then used as the input for hologram generation. The digital hologram is generated with the coherent holographic stereogram, a fast, segmentation-based hologram generation algorithm, and the hologram generated from the image pair captured by the depth camera is reconstructed with the Fresnel approximation. In this way, a digital hologram of a real scene or a real object can be generated with the fast generation algorithm, and the experimental results are satisfactory.
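
As a simplified illustration of generating a hologram from a point cloud, the sketch below accumulates the spherical-wave contribution of every 3D point on the hologram plane. This is the basic point-source method, not the coherent-holographic-stereogram algorithm used in the paper, and the wavelength, pixel pitch, and resolution are arbitrary example values.

```python
import numpy as np

def point_source_hologram(points, amplitudes, res=(512, 512),
                          pitch=8e-6, wavelength=633e-9):
    """Accumulate the complex field of spherical waves emitted by each
    object point (x, y, z in meters) on a hologram plane at z = 0."""
    k = 2.0 * np.pi / wavelength
    h, w = res
    ys = (np.arange(h) - h / 2) * pitch
    xs = (np.arange(w) - w / 2) * pitch
    X, Y = np.meshgrid(xs, ys)
    field = np.zeros(res, dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r      # spherical wave contribution
    return field

# Hypothetical two-point "scene" 0.2 m behind the hologram plane.
H = point_source_hologram([(0.0, 0.0, 0.2), (1e-3, 0.0, 0.2)], [1.0, 1.0])
fringe = np.real(H)                              # interference pattern to encode
```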

Feature Extraction of 3-D Object Using Halftoning Image (Halftoning 영상을 이용한 3차원 특징 추출)

  • Kim, D.N.; Kim, S.Y.; Cho, D.S.
    • Proceedings of the KIEE Conference / 1992.07a / pp.465-467 / 1992
  • This paper presents a 3D vision system based on halftone image analysis. Each halftone patch corresponds to the normal vector of its surface patch. To classify the given 3D images, all patches on the 3D object are transformed into black/white halftones. First, we extract general learning patterns that represent the required slopes and their attributes; next, we propose a 3D segmentation based on intensity, slope, and density. An artificial neural network is found to be well suited to this approach because of its strong learning ability and noise tolerance. In this study, the 3D shape is reconstructed using a pyramidal model, and the results are evaluated to assess the quality of the reconstruction.

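Producing the black/white halftone patches that the paper analyzes can be illustrated with Floyd-Steinberg error-diffusion halftoning plus a local dot-density measure; the fraction of black dots in a window is the kind of cue a slope classifier could use. This is a generic halftoning routine offered as a sketch, not the 1992 system itself.

```python
import numpy as np

def floyd_steinberg_halftone(gray):
    """Convert a grayscale image in [0, 1] to a binary halftone by
    diffusing the quantization error to neighbouring pixels."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            new = 1.0 if img[y, x] >= 0.5 else 0.0
            err = img[y, x] - new
            out[y, x] = new
            if x + 1 < w:               img[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               img[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: img[y + 1, x + 1] += err * 1 / 16
    return out

def dot_density(halftone, y, x, size=8):
    """Fraction of black dots in a local window, a simple feature that
    varies with the shading (and hence the slope) of the surface patch."""
    patch = halftone[y:y + size, x:x + size]
    return 1.0 - patch.mean()
```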

INTEGRATED CONSTRUCTION PROJECT PLANNING USING 3D INFORMATION MODELS

  • Chang-Su Shim; Kwang-Myong Lee; Deok-Won Kim; Yoon-Bum Lee; Kyoung-Lae Park
    • International conference on construction engineering and project management / 2009.05a / pp.928-934 / 2009
  • Although the evolution and deployment of information technologies will undoubtedly play an important role in the construction industry, many engineers are still unsure of the economic value of using these technologies. Especially in the planning of a construction project, a collaboration system that utilizes all available resources is an essential tool for a successful outcome. A detailed, authoritative, and readily accessible information model is needed to enable engineers to make cost-effective decisions among established and innovative plan alternatives; most engineers rely on limited personal experience when they create solutions or design alternatives. Initial planning is crucial to the success of a construction project, and most projects are carried out through collaboration among engineers with different specialized knowledge. Information technologies can dramatically enhance this collaboration, but information delivery requires a mediator between engineers. Object-based 3-D models are useful for communication and decision support in intelligent project design. In this paper, basic guidelines for 3-D design according to different construction processes are suggested. Adequate interoperability of 3-D objects from any CAD system is essential for collaboration, so basic architectures for the geometry models and their information layers were established to enable interoperability for design checks, estimation, and simulation. A typical international roadway project was chosen as the pilot project; a 3-D GIS model and bridge information models were created considering several requirements for project planning and decision making. Based on the pilot test, integrated construction project planning using 3-D information models is discussed and several guidelines are suggested.

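The idea of an object-based 3-D model carrying an information layer alongside its geometry, as used above for design checks, estimation, and simulation, can be sketched as a small data structure. The field names and the quantity-takeoff example are hypothetical illustrations, not the schema developed in the paper.

```python
from dataclasses import dataclass, field

@dataclass
class InfoObject3D:
    """An object-based 3-D model element: geometry plus an information layer."""
    name: str
    geometry_file: str                               # reference to the CAD geometry
    attributes: dict = field(default_factory=dict)   # information layer

    def quantity(self, key):
        """Look up a quantity (volume, length, ...) for estimation."""
        return self.attributes.get(key)

# Hypothetical bridge girder with attributes used for estimation and simulation.
girder = InfoObject3D(
    name="Girder-G1",
    geometry_file="girder_g1.ifc",
    attributes={"material": "concrete", "volume_m3": 42.5, "span_m": 30.0},
)
cost = girder.quantity("volume_m3") * 180.0          # illustrative unit price per m^3
```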

3D object generation based on the depth information of an active sensor (능동형 센서의 깊이 정보를 이용한 3D 객체 생성)

  • Kim, Sang-Jin; Yoo, Ji-Sang; Lee, Seung-Hyun
    • Journal of the Korea Computer Industry Society / v.7 no.5 / pp.455-466 / 2006
  • In this paper, 3D objects are created from a real scene using an active sensor, which acquires depth and RGB information. To obtain the depth information, this paper uses the $Zcam^{TM}$ camera, which has a built-in active sensor module. [abridged] Third, the detailed parameters are calibrated and a 3D mesh model is created from the depth information, with neighboring points connected to complete the mesh. Finally, the color image data are applied to the mesh model and texture mapping is carried out to create the 3D object. Experiments show that creating 3D objects from the data of a camera with an active sensor is feasible, and that this method is easier and more practical than using a 3D range scanner.

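A minimal sketch of turning a depth map into a 3D mesh as described above: each pixel is back-projected with the camera intrinsics, and neighbouring grid points are connected into two triangles per pixel square. The intrinsic parameters and the synthetic depth map are placeholders; color mapping would simply assign each vertex the RGB value at the same pixel.

```python
import numpy as np

def depth_to_mesh(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into 3D vertices and connect
    neighbouring pixels into a triangle mesh."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    vertices = np.stack([x, y, z], axis=-1).reshape(-1, 3)

    faces = []
    for r in range(h - 1):
        for c in range(w - 1):
            i = r * w + c
            faces.append([i, i + 1, i + w])          # upper-left triangle
            faces.append([i + 1, i + w + 1, i + w])  # lower-right triangle
    return vertices, np.array(faces)

# Hypothetical usage with placeholder intrinsics and a flat synthetic depth map.
verts, faces = depth_to_mesh(np.full((120, 160), 1.5), fx=200, fy=200, cx=80, cy=60)
```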