• Title/Summary/Keyword: 3D View

Implementation of an User Interface Developing Tool for 3D Simulator (3차원 시뮬레이터의 사용자 인터페이스 개발 도구 구현)

  • Yoon, Ga-Rim;Jeon, Jun-Young;Kim, Young-Bong
    • Journal of Korea Multimedia Society, v.19 no.2, pp.504-511, 2016
  • 3D simulation programs and games on smartphones and personal computers often employ 3D graphics processing techniques and 3D graphical views. However, the user interfaces of those 3D programs have stuck to a typical 2D-style interface, so the combination of a 2D user interface view and a 3D simulation view gives a mismatched impression. Because a 2D user interface is built on Windows controls, it sometimes causes DC (device context) conflicts between the simulation view and the interface view. We therefore implement a UI development tool that can be inserted into the pipeline for developing 3D simulation software and that follows the view-handler design pattern of the Microsoft Windows system. It provides various graphical effects, such as deforming the UI according to the view direction of the simulation view and the sitting pose of the user. This development tool yields a natural user interface that heightens the sense of unity with a given 3D simulation view.
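
The view-dependent UI deformation described in this abstract can be pictured with a small sketch. The following is a hypothetical illustration only (the function names and the billboarding approach are assumptions, not the paper's tool): it re-orients a 2D UI quad toward the camera so the interface follows the 3D view direction.

```python
# Hypothetical sketch: orient a 2D UI quad toward the camera so the interface
# "deforms" with the 3D view direction. Not the paper's implementation.
import numpy as np

def billboard_matrix(quad_center, camera_pos, world_up=np.array([0.0, 1.0, 0.0])):
    """Build a rotation that turns the quad's +Z normal toward the camera.
    Assumes the camera is not directly above or below the quad."""
    forward = camera_pos - quad_center
    forward = forward / np.linalg.norm(forward)
    right = np.cross(world_up, forward)
    right = right / np.linalg.norm(right)
    up = np.cross(forward, right)
    # Columns are the quad's new local axes expressed in world space.
    return np.stack([right, up, forward], axis=1)

def transform_quad(quad_local_verts, quad_center, camera_pos):
    """Apply the billboard rotation to local-space quad vertices."""
    R = billboard_matrix(quad_center, camera_pos)
    return (R @ quad_local_verts.T).T + quad_center

# Example: a unit quad in the XY plane, viewed from an off-axis camera.
quad = np.array([[-0.5, -0.5, 0.0], [0.5, -0.5, 0.0],
                 [0.5, 0.5, 0.0], [-0.5, 0.5, 0.0]])
print(transform_quad(quad, np.array([0.0, 0.0, 0.0]), np.array([2.0, 1.0, 3.0])))
```

In a real pipeline this rotation would be applied before the quad is submitted to the 3D renderer, so the UI shares the simulation view's perspective.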

3DTV System Adaptive to User's Environment (사용자 환경에 적응적인 3DTV 시스템)

  • Baek, Yun-Ki;Choi, Mi-Nam;Park, Se-Whan;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences, v.32 no.10C, pp.982-989, 2007
  • In this paper, we propose a 3DTV system that considers the user's viewpoint and display environment. The proposed system consists of three parts: a multi-view encoder/decoder, a face tracker, and a 2D/3D converter. The system encodes a multi-view sequence and decodes it in accordance with the user's viewpoint, and it also gives stereopsis to the multi-view image by means of 2D/3D conversion, which converts a decoded two-dimensional (2D) image into a three-dimensional (3D) image. Experimental results show that the system correctly reconstructs a stereoscopic view that exactly corresponds to the user's viewpoint.
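
A tiny sketch of how a tracked face position could drive view selection, in the spirit of the face-tracker component described above; the function and its normalized input are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: map a tracked face position to the nearest camera view
# so the decoder reconstructs the view that matches the user's viewpoint.
def select_view(face_x_normalized, num_views):
    """face_x_normalized: horizontal face position in [0, 1] from the tracker."""
    index = int(face_x_normalized * (num_views - 1) + 0.5)
    return max(0, min(num_views - 1, index))

# Example: 8 coded views, face slightly left of centre -> view 3.
print(select_view(0.45, 8))
```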

Coding Technique using Depth Map in 3D Scalable Video Codec (확장된 스케일러블 비디오 코덱에서 깊이 영상 정보를 활용한 부호화 기법)

  • Lee, Jae-Yung;Lee, Min-Ho;Chae, Jin-Kee;Kim, Jae-Gon;Han, Jong-Ki
    • Journal of Broadcast Engineering, v.21 no.2, pp.237-251, 2016
  • Conventional 3D-HEVC uses the depth data of the other view instead of that of the current view, because the texture data must be encoded before the corresponding depth data of the current view is encoded; the other view's depth then serves as the predicted depth for the current view. Whereas conventional 3D-HEVC has no candidate for the predicted depth other than the other view's depth, the scalable 3D-HEVC can use the depth data of the lower spatial layer whose view ID equals that of the current picture. The depth data of the lower spatial layer is up-scaled to the resolution of the current picture, and the enlarged depth data is used as the predicted depth. Because the quality of the enlarged depth is much higher than that of the other view's depth, the proposed scheme increases the coding efficiency of the scalable 3D-HEVC codec. Computer simulation results show that scalable 3D-HEVC is useful and that the proposed scheme of using the enlarged depth data for the current picture provides a significant coding gain.
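
A minimal sketch of the up-scaled-depth prediction idea described in this abstract, assuming nearest-neighbour up-scaling and a simple residual; the actual codec syntax and interpolation filters are not reproduced here.

```python
# Sketch (assumptions, not the codec's actual design): up-scale the lower
# spatial layer's depth map to the current layer's resolution, use it as the
# depth predictor, and code only the residual.
import numpy as np

def upscale_depth(base_depth, scale=2):
    """Nearest-neighbour up-scaling of a lower-layer depth map."""
    return np.repeat(np.repeat(base_depth, scale, axis=0), scale, axis=1)

def depth_residual(current_depth, base_depth, scale=2):
    """Residual between the current-layer depth and the up-scaled predictor."""
    predicted = upscale_depth(base_depth, scale)
    return current_depth.astype(np.int32) - predicted.astype(np.int32)

# Toy example: a 4x4 base-layer depth predicting an 8x8 enhancement-layer depth.
base = np.arange(16, dtype=np.uint8).reshape(4, 4)
curr = upscale_depth(base) + 1            # pretend the true depth differs slightly
print(depth_residual(curr, base))         # small residual -> cheaper to encode
```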

Web-based Real-time 3D Video Communication System for Reality Teleconferencing

  • Ko, Jung-Hwan;Kim, Dong-Kyu;Hwang, Dong-Chun;Kim, Eun-Soo
    • Proceedings of the Korean Information Display Society Conference, 2005.07b, pp.1611-1614, 2005
  • In this paper, a new multi-view 3D video communication system for real-time reality teleconferencing is proposed using IEEE 1394 digital cameras, an Intel Xeon server computer, and Microsoft's DirectShow programming library, and its performance is analyzed in terms of image-grabbing frame rate and number of views. The captured two-view image data are compressed by extracting the disparity between them and transmitted to a client system over the communication network, where multiple views can be synthesized from the received two-view data using an intermediate view reconstruction technique and displayed on a multi-view 3D display system. Experimental results show that the proposed system can display 16-view 3D images with 8-bit grayscale at a frame rate of 15 fps.
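
The intermediate view reconstruction step mentioned above can be sketched as follows. This is a generic, hedged illustration of disparity-based view interpolation, not the authors' DirectShow implementation; function names are assumptions.

```python
# Sketch: shift left-view pixels by a fraction of the disparity to place a
# virtual camera between the two captured views.
import numpy as np

def intermediate_view(left, disparity, alpha=0.5):
    """left: (H, W, 3) image, disparity: (H, W) int pixels,
    alpha in [0, 1] (0 = left camera position, 1 = right camera position)."""
    h, w, _ = left.shape
    view = np.zeros_like(left)
    for y in range(h):
        for x in range(w):
            xv = x - int(round(alpha * disparity[y, x]))
            if 0 <= xv < w:
                view[y, xv] = left[y, x]
    return view  # remaining holes are typically filled from the right view

# Toy usage: a 16-view display could be fed by alpha = 0/15, 1/15, ..., 15/15.
left = np.random.randint(0, 255, (48, 64, 3), dtype=np.uint8)
disp = np.full((48, 64), 4, dtype=np.int32)
views = [intermediate_view(left, disp, a / 15.0) for a in range(16)]
```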

Adaptive Spatio-Temporal Prediction for Multi-view Coding in 3D-Video (3차원 비디오 압축에서의 다시점 부호화를 위한 적응적 시공간적 예측 부호화)

  • 성우철;이영렬
    • Journal of Broadcast Engineering, v.9 no.3, pp.214-224, 2004
  • In this paper, an adaptive spatio-temporal predictive coding scheme based on H.264 is proposed for encoding 3D immersive media such as 3D image processing, 3DTV, and 3D videoconferencing. First, we propose spatio-temporal predictive coding that uses same-view and inter-view images for the two GOP (group of pictures) structures TPPP and IBBP, unlike the conventional simulcast method. Second, a 2D inter-view direct mode is proposed for efficient prediction when the proposed spatio-temporal prediction uses the IBBP structure. The 2D inter-view direct mode is applied when the temporal direct mode of a B (bi-predictive) picture in H.264 refers to an inter-view image, since the temporal direct mode of the H.264 standard cannot be applied to an inter-view image. The proposed method is compared with the conventional simulcast method in terms of PSNR (peak signal-to-noise ratio) on various 3D test video sequences and shows better PSNR results than the simulcast mode.
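
As a rough illustration of adaptive spatio-temporal prediction, the sketch below chooses, per block, between a temporal reference (same view, previous frame) and an inter-view reference (neighbouring view, same instant) by comparing SAD costs. This is a deliberately simplified assumption; the paper's actual mode decision and the 2D inter-view direct mode are defined within H.264 syntax.

```python
# Sketch: per-block adaptive choice between temporal and inter-view predictors.
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return int(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

def choose_reference(current_block, temporal_block, interview_block):
    """Return the predictor a spatio-temporal encoder would select for this block."""
    cost_t = sad(current_block, temporal_block)
    cost_v = sad(current_block, interview_block)
    return ("temporal", cost_t) if cost_t <= cost_v else ("inter-view", cost_v)

# Toy example with 8x8 blocks: the temporal block is almost identical, so it wins.
rng = np.random.default_rng(0)
cur = rng.integers(0, 255, (8, 8))
print(choose_reference(cur, cur + 2, rng.integers(0, 255, (8, 8))))
```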

Multi-view Semi-supervised Learning-based 3D Human Pose Estimation (다시점 준지도 학습 기반 3차원 휴먼 자세 추정)

  • Kim, Do Yeop;Chang, Ju Yong
    • Journal of Broadcast Engineering, v.27 no.2, pp.174-184, 2022
  • 3D human pose estimation models can be classified into multi-view models and single-view models. In general, a multi-view model shows superior pose estimation performance compared to a single-view model. For a single-view model, improving 3D pose estimation performance requires a large amount of training data, yet annotations for training 3D pose estimation models are not easy to obtain. To address this problem, we propose a method that generates pseudo ground-truths for multi-view human pose data from a multi-view model and exploits these pseudo ground-truths to train a single-view model. In addition, we propose a multi-view consistency loss function that considers the consistency of poses estimated from multi-view images, and we show that the proposed loss helps the effective training of single-view models. Experiments on the Human3.6M and MPI-INF-3DHP datasets show that the proposed method is effective for training single-view 3D human pose estimation models.
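
A hedged sketch of a multi-view consistency term in the spirit of this abstract; the exact loss and pseudo ground-truth generation in the paper may differ, and the camera-extrinsics convention used here is an assumption.

```python
# Sketch: 3D poses predicted independently from each view are mapped to a
# common world frame with the camera extrinsics, and their deviation from the
# mean (consensus) pose is penalized.
import numpy as np

def to_world(pose_cam, R, t):
    """pose_cam: (J, 3) joints in camera coords; R: (3, 3) camera-to-world; t: (3,)."""
    return pose_cam @ R.T + t

def multiview_consistency_loss(poses_cam, rotations, translations):
    """poses_cam: list of (J, 3) per-view predictions for the same person."""
    world_poses = np.stack([to_world(p, R, t)
                            for p, R, t in zip(poses_cam, rotations, translations)])
    mean_pose = world_poses.mean(axis=0)                    # (J, 3) consensus pose
    return float(np.mean(np.linalg.norm(world_poses - mean_pose, axis=-1)))

# Toy usage with two identical views -> loss is zero.
J = 17
pose = np.random.rand(J, 3)
I, z = np.eye(3), np.zeros(3)
print(multiview_consistency_loss([pose, pose], [I, I], [z, z]))
```

The consensus (mean) pose computed here also suggests how pseudo ground-truths for a single-view model could be formed from multi-view predictions, in the manner the abstract describes.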

A Study on FOV for developing 3D Game Contents (3D게임 콘텐츠 개발을 위한 시야각(FOV) 연구)

  • Lee, Hwan-joong;Kim, Young-Bong
    • Proceedings of the Korea Contents Association Conference, 2009.05a, pp.163-168, 2009
  • Because 3D gamers can freely control the point of view and the field of view, 3D games produce a strong sense of realism and immersion. In most 3D games, the point of view and the field of view are determined by the camera's position and FOV (field of view). Although the FOV is a simple technical factor, it can distort the graphics, affect the game's immersion, and cause physical discomfort for some players. Therefore, by examining and analysing rendering results under varying FOV settings, together with real FOV choices from published games in the same 3D modeling environment, we suggest guidelines for handling the FOV in 3D games.
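
The distortion and comfort issues above come down to basic projection geometry. The sketch below shows standard FOV math (generic projection relations, not results from the paper): the horizontal FOV implied by a chosen vertical FOV and aspect ratio.

```python
# Standard relation: tan(h/2) = aspect * tan(v/2).
import math

def horizontal_fov(vertical_fov_deg, aspect_ratio):
    """aspect_ratio = width / height, e.g. 16/9."""
    v = math.radians(vertical_fov_deg)
    h = 2.0 * math.atan(aspect_ratio * math.tan(v / 2.0))
    return math.degrees(h)

# Example: the same 60-degree vertical FOV gives noticeably different horizontal
# coverage on 4:3 and 16:9 displays, one reason FOV choices affect distortion
# and comfort.
print(round(horizontal_fov(60.0, 4.0 / 3.0), 1))   # ~75.2 degrees
print(round(horizontal_fov(60.0, 16.0 / 9.0), 1))  # ~91.5 degrees
```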

A Study on 3D View Design of Images and Voices Integration for Effective Information Transfer (효과적 정보전달을 위한 영상정보의 3D 뷰 및 음성정보와의 융합 연구)

  • Shin, C.H.;Lee, J.S.
    • The Journal of Korean Institute of Communications and Information Sciences, v.35 no.1B, pp.35-41, 2010
  • In this paper, we propose a 3D view design scheme that arranges 2D image information in a 3D virtual space together with a flexible interface and voice information. The scheme lets the user interact with the 2D images in the 3D virtual space at any time and from any viewpoint, and voice information can be easily attached. This simple and efficient arrangement of image and voice information in a 3D virtual space improves information transfer.
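
A hypothetical sketch of arranging 2D information panels in a 3D virtual space, in the spirit of the scheme above; the cylindrical layout, function name, and parameters are illustrative assumptions, not the paper's design.

```python
# Sketch: space image panels evenly on a cylinder around the viewer so each
# panel can be faced from any viewpoint.
import math

def cylindrical_layout(num_panels, radius=3.0, height=1.5):
    """Return (x, y, z, yaw_degrees) for each panel around a viewer at the origin."""
    placements = []
    for i in range(num_panels):
        angle = 2.0 * math.pi * i / num_panels
        x, z = radius * math.sin(angle), radius * math.cos(angle)
        yaw = math.degrees(angle) + 180.0      # rotate the panel to face inward
        placements.append((x, height, z, yaw))
    return placements

# Example: six image panels, each of which could also carry an attached voice clip.
for p in cylindrical_layout(6):
    print(tuple(round(v, 2) for v in p))
```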

Convenient View Calibration of Multiple RGB-D Cameras Using a Spherical Object (구형 물체를 이용한 다중 RGB-D 카메라의 간편한 시점보정)

  • Park, Soon-Yong;Choi, Sung-In
    • KIPS Transactions on Software and Data Engineering, v.3 no.8, pp.309-314, 2014
  • To generate a complete 3D model from the depth images of multiple RGB-D cameras, the 3D transformations between the cameras must be found. This paper proposes a convenient view calibration technique that uses a spherical object. Conventional view calibration methods use either planar checkerboards or 3D objects with coded patterns, and detecting and matching the pattern features and codes takes significant time. In this paper, we propose a convenient view calibration method that uses the 3D depth and 2D texture images of a spherical object simultaneously. First, while the spherical object is moved freely through the modeling space, depth and texture images of the object are acquired from all RGB-D cameras simultaneously. Then the external parameters of each RGB-D camera are calibrated so that the coordinates of the sphere center coincide in the world coordinate system.
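
The final calibration step described above reduces to rigid registration of the sphere-centre sets observed by each camera. Below is a minimal Kabsch-style sketch under that assumption; variable names and the synthetic check are illustrative, not the paper's procedure.

```python
# Sketch: recover the rotation and translation mapping camera B's frame to
# camera A's from matched sphere-centre observations.
import numpy as np

def rigid_transform(centers_b, centers_a):
    """Find R, t with centers_a ~= R @ centers_b + t (Kabsch algorithm).
    centers_*: (N, 3) sphere centres of the same N ball positions, one per camera."""
    mu_a, mu_b = centers_a.mean(axis=0), centers_b.mean(axis=0)
    H = (centers_b - mu_b).T @ (centers_a - mu_a)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))                 # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_a - R @ mu_b
    return R, t

# Synthetic check: build a known rotation/translation and recover it.
rng = np.random.default_rng(1)
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:
    R_true[:, 0] = -R_true[:, 0]                           # force a proper rotation
t_true = np.array([0.5, -0.2, 1.0])
centers_b = rng.random((10, 3))
centers_a = centers_b @ R_true.T + t_true
R_est, t_est = rigid_transform(centers_b, centers_a)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))   # True True
```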

3D Contents Based Work Process Simulation Development (3D 콘텐츠 기반 작업 프로세스 시뮬레이션 개발)

  • Kim, Gui-Jung;Han, Jung-Soo
    • The Journal of the Korea Contents Association, v.11 no.7, pp.30-37, 2011
  • In this paper, we implemented a 3D-contents-based work process simulation for 3D view contents, and the 3D view technique used is explained. Automobile and PC assembly processes following a virtual scenario demonstrate how the technique assists workers through the 3D view. In addition, for 3D information visualization, MAXScript content-modeling functions were developed with 3D MAX; these functions are designed to customize coordinates, edit materials on models, perform rendering, and handle 3D object files through MAXScript.