• Title/Summary/Keyword: 3D방송 (3D broadcasting)

Search Results: 1,332

Selecting Representative Views of 3D Objects By Affinity Propagation for Retrieval and Classification (검색과 분류를 위한 친근도 전파 기반 3차원 모델의 특징적 시점 추출 기법)

  • Lee, Soo-Chahn;Park, Sang-Hyun;Yun, Il-Dong;Lee, Sang-Uk
    • Journal of Broadcast Engineering
    • /
    • v.13 no.6
    • /
    • pp.828-837
    • /
    • 2008
  • We propose a method to select representative views of single objects and of classes of objects for 3D object retrieval and classification. Our method is based on projected 2D shapes, or views, of the 3D objects, where the representative views are selected by applying affinity propagation to cluster uniformly sampled views. Affinity propagation assigns a prototype to each cluster during the clustering process, thereby providing a natural criterion for selecting views. We recursively apply affinity propagation to the selected views of objects classified into the same class to obtain representative views of classes of objects. Supporting classification as well as retrieval improves the management of large-scale retrieval databases, since exhaustive search over all objects can be avoided by first classifying the query object. We demonstrate the effectiveness of the proposed method for both retrieval and classification with experimental results on the Princeton benchmark database [16].
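
Affinity propagation designates an exemplar for each cluster, and those exemplars are what serve as the representative views. A minimal sketch of that selection step, using scikit-learn with random vectors standing in for the paper's projected 2D view descriptors (an assumption), might look like this:

```python
# Cluster view descriptors with affinity propagation and keep the exemplars
# as representative views. Random vectors stand in for real view features.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
view_descriptors = rng.normal(size=(60, 32))      # 60 uniformly sampled views, 32-D features

ap = AffinityPropagation(random_state=0).fit(view_descriptors)
representative_idx = ap.cluster_centers_indices_  # indices of the exemplar (representative) views
print("representative views:", representative_idx)
```

Running the same clustering again over the exemplars collected from all objects of one class would then yield class-level representative views, as the abstract describes.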

Multi-Depth Map Fusion Technique from Depth Camera and Multi-View Images (깊이정보 카메라 및 다시점 영상으로부터의 다중깊이맵 융합기법)

  • 엄기문;안충현;이수인;김강연;이관행
    • Journal of Broadcast Engineering
    • /
    • v.9 no.3
    • /
    • pp.185-195
    • /
    • 2004
  • This paper presents a multi-depth map fusion method for 3D scene reconstruction. It fuses depth maps obtained from a stereo matching technique and from a depth camera. Traditional stereo matching techniques that estimate disparities between two images often produce inaccurate depth maps because of occlusion and homogeneous areas. The depth map obtained from the depth camera is globally accurate but noisy and provides a limited depth range. In order to obtain better depth estimates than either of these two conventional techniques, we propose a fusion method that combines the multiple depth maps from stereo matching and the depth camera. We first obtain two depth maps generated by stereo matching of 3-view images, and an additional depth map is obtained from the depth camera for the center-view image. After preprocessing each depth map, we select a depth value for each pixel among them. Simulation results showed improvements in some background regions with the proposed fusion technique.
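
The abstract does not spell out the per-pixel selection rule, so the following is only a generic winner-take-all sketch: each pixel takes the depth value of the candidate map with the highest confidence, where the confidence maps themselves (e.g., matching cost or a depth-camera noise model) are assumed inputs.

```python
# Per-pixel fusion of several candidate depth maps by confidence.
import numpy as np

def fuse_depth_maps(depth_maps, confidences):
    """depth_maps, confidences: arrays of shape (K, H, W)."""
    best = np.argmax(confidences, axis=0)                         # (H, W) index of best candidate
    return np.take_along_axis(depth_maps, best[None], axis=0)[0]  # (H, W) fused depth

# Example with three candidates (two stereo maps and one depth-camera map).
rng = np.random.default_rng(1)
depths = rng.uniform(0.5, 5.0, size=(3, 4, 4))
confs = rng.uniform(size=(3, 4, 4))
fused = fuse_depth_maps(depths, confs)
```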

Real-time Markerless Facial Motion Capture of Personalized 3D Real Human Research

  • Hou, Zheng-Dong;Kim, Ki-Hong;Lee, David-Junesok;Zhang, Gao-He
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.14 no.1
    • /
    • pp.129-135
    • /
    • 2022
  • Real-human digital models appear more and more frequently in VR/AR application scenarios, where real-time markerless face-capture animation of personalized virtual human faces is an important research topic. The traditional way to achieve personalized real-human facial animation requires several experienced animators, and in practice the complex process and difficult technology can be an obstacle for inexperienced users. This paper proposes a new process for this kind of work that costs less and takes less time than the traditional production method. Starting from a personalized real-human face model obtained by 3D reconstruction, we first retopologize the model with R3ds Wrap, then use Avatary to create the 52 blend-shape model files required by ARKit, and finally realize real-time markerless facial motion capture of the 3D real human on the UE4 platform. This study makes rational use of the strengths of each software package and proposes a more efficient workflow for real-time markerless facial motion capture of personalized 3D real-human models; the proposed process can be helpful to other researchers working on this kind of task.
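
The mechanism underlying the 52 ARKit-style blend shapes is a weighted sum of per-shape vertex offsets added to a neutral mesh. The sketch below illustrates only that arithmetic; the capture (ARKit) and rendering (UE4) layers are not shown, and the mesh sizes and zeroed arrays are placeholders, not assets from the described workflow.

```python
# Apply 52 blend-shape weights to a neutral face mesh:
# deformed = neutral + sum_i(w_i * delta_i).
import numpy as np

NUM_SHAPES, NUM_VERTS = 52, 5000                 # assumed mesh size
neutral = np.zeros((NUM_VERTS, 3))               # placeholder neutral vertices
deltas = np.zeros((NUM_SHAPES, NUM_VERTS, 3))    # placeholder per-shape vertex offsets

def apply_blendshapes(weights):
    """weights: 52 values in [0, 1], streamed per frame from the face tracker."""
    w = np.clip(np.asarray(weights, dtype=float), 0.0, 1.0)
    return neutral + np.tensordot(w, deltas, axes=1)   # (NUM_VERTS, 3) deformed mesh
```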

Analysis of Relationship between Objective Performance Measurement and 3D Visual Discomfort in Depth Map Upsampling (깊이맵 업샘플링 방법의 객관적 성능 측정과 3D 시각적 피로도의 관계 분석)

  • Gil, Jong In;Mahmoudpour, Saeed;Kim, Manbae
    • Journal of Broadcast Engineering
    • /
    • v.19 no.1
    • /
    • pp.31-43
    • /
    • 2014
  • A depth map is an important component of stereoscopic image generation. Since the depth map acquired from a depth camera has a low resolution, upsampling a low-resolution depth map to a high-resolution one has been studied for the past decades. Upsampling methods are evaluated with objective tools such as PSNR, sharpness degree, and blur metric, while the subjective quality is compared using virtual views generated by DIBR (depth image based rendering). However, works analyzing the relation between depth map upsampling and stereoscopic images are relatively few. In this paper, we investigate the relationship between the subjective evaluation of stereoscopic images and the objective performance of upsampling methods using cross-correlation and linear regression. Experimental results demonstrate that edge PSNR has the highest correlation with visual fatigue and the blur metric the lowest. Further, from the linear regression we obtain the relative weights of the objective measurements, and we introduce a formula that can estimate the 3D performance of conventional or new upsampling methods.
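
The correlation and linear-regression analysis the abstract refers to can be sketched as follows; the metric and score arrays are random placeholders, not the paper's data, and the metric names are taken from the abstract.

```python
# Correlate each objective upsampling metric with subjective discomfort scores,
# then fit a linear regression to obtain relative weights.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
subjective = rng.uniform(1.0, 5.0, size=10)          # placeholder discomfort scores
metrics = {
    "edge_psnr": rng.uniform(28.0, 36.0, size=10),   # placeholder metric values
    "sharpness": rng.uniform(0.4, 0.7, size=10),
    "blur":      rng.uniform(0.3, 0.4, size=10),
}

for name, values in metrics.items():
    r, _ = stats.pearsonr(values, subjective)        # Pearson cross-correlation
    print(f"{name}: r = {r:.3f}")

# Multiple linear regression: subjective ≈ X @ w, solved by least squares.
X = np.column_stack([*metrics.values(), np.ones_like(subjective)])
weights, *_ = np.linalg.lstsq(X, subjective, rcond=None)
print("relative weights:", weights[:-1])
```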

3D world space recognition system using stereo camera (스테레오 카메라를 이용한 3차원 공간 인식 시스템)

  • Lee, Dong-Seok;Kim, Su-Dong;Lee, Dong-Wook;Yoo, Ji-Sang
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2008.11a
    • /
    • pp.215-218
    • /
    • 2008
  • In this paper, we propose a real-time 3D space recognition system that estimates the disparity between the left and right images acquired from a stereo camera to obtain 3D space coordinates (x, y, z), and that provides the user with a sense of realism through distance measurement and virtual-space control. Because of the difference in viewpoint, the left and right images acquired from the stereo camera show a difference in the coordinates of the same object; this difference is defined as the disparity. When estimating the disparity of a region of interest, the disparity of every pixel in the region is usually estimated, but the proposed algorithm estimates only the disparity of the 2D center coordinate (x, y) of the region, reducing the computational load and enabling real-time processing. Depth information is obtained from the estimated disparity using the camera parameters, and the 3D space coordinates are then acquired. In a system where the hand is set as the region of interest, the 3D space coordinates are obtained in real time from the movement of the user's hand and applied to a virtual space, giving the user the feeling of manipulating that space. Experiments confirmed that the proposed algorithm has an average error of 0.68 cm when measuring depth within a distance of 1.5 m.
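
The disparity-to-depth conversion mentioned above follows the standard rectified-stereo relation Z = f·B/d. A small sketch with assumed focal length, baseline, and matched centre coordinates:

```python
# Convert the disparity of the ROI centre into depth using camera parameters.
def disparity_to_depth(f_px, baseline_m, x_left, x_right):
    """f_px: focal length in pixels; baseline_m: camera baseline in metres."""
    d = x_left - x_right                  # disparity of the ROI centre (pixels)
    if d <= 0:
        raise ValueError("non-positive disparity")
    return f_px * baseline_m / d          # depth in metres

depth = disparity_to_depth(f_px=800.0, baseline_m=0.06, x_left=412.0, x_right=380.0)
print(f"estimated depth: {depth:.2f} m")  # 800 * 0.06 / 32 = 1.50 m
```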

Depth compression method for 3D video (3차원 영상을 위한 깊이 영상 압축 방법)

  • Nam, Jung-Hak;Hwang, Neung-Joo;Cho, Gwang-Shin;Sim, Dong-Gyu;Lee, Soo-Youn;Bang, Gun;Hur, Nam-Ho
    • Journal of Broadcast Engineering
    • /
    • v.15 no.5
    • /
    • pp.703-706
    • /
    • 2010
  • Recently, the need to encode depth images has been rising with the deployment of 3D video services. The 3DV/FTV group in MPEG has been standardizing the compression of depth map images. Because conventional depth map coding methods encode the depth independently, without referencing the color image, their coding performance is poor. In this letter, we propose a novel method that rearranges the modes of depth blocks according to the modes of the corresponding color blocks, exploiting the correlation between color and depth images. In experimental results, the proposed method achieves a bit reduction of 2.2% compared with a coding method based on JSVM.
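
A toy sketch of the mode-rearrangement idea: the candidate mode list of a depth block is reordered so that the mode of the co-located colour block comes first and therefore receives the shortest index/codeword. The mode names below are illustrative, not the actual JSVM mode set.

```python
# Reorder depth-block candidate modes according to the co-located colour block's mode.
CANDIDATE_MODES = ["INTRA_16x16", "INTER_16x16", "INTER_8x8", "SKIP"]  # illustrative

def rearranged_modes(color_block_mode, candidates=CANDIDATE_MODES):
    """Put the colour block's mode at the front; keep the remaining order."""
    if color_block_mode not in candidates:
        return list(candidates)
    return [color_block_mode] + [m for m in candidates if m != color_block_mode]

print(rearranged_modes("INTER_8x8"))
# ['INTER_8x8', 'INTRA_16x16', 'INTER_16x16', 'SKIP']
```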

AR Tourism Service Framework Using YOLOv3 Object Detection (YOLOv3 객체 검출을 이용한 AR 관광 서비스 프레임워크)

  • Kim, In-Seon;Jeong, Chi-Seo;Jung, Kye-Dong
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.21 no.1
    • /
    • pp.195-200
    • /
    • 2021
  • With the development of transportation and mobile devices, demand for tourism travel is increasing and related industries are also developing significantly. The combination of augmented reality and tourism content, one of the areas of digital media technology, is also being studied actively, and artificial intelligence is already combined with the tourism industry in various ways, enriching tourists' travel experiences. In this paper, we propose a system that scans miniature models that are scaled-down reproductions of tourist areas, identifies the corresponding tourist sites with models trained in advance using deep learning, and provides the relevant information and 3D models as an AR service. Because model training and object detection are carried out with the YOLOv3 network, one of several deep learning architectures, objects can be detected quickly enough to provide a real-time service.
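
A minimal sketch of the YOLOv3 detection step using OpenCV's DNN module; the weight/config file names, input size, and confidence threshold are assumptions, and the AR overlay and tourist-information lookup are omitted.

```python
# Run YOLOv3 on a frame and return (class_id, confidence, box) detections.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")  # assumed file paths
layer_names = net.getUnconnectedOutLayersNames()

def detect(frame, conf_thresh=0.5):
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    detections = []
    for output in net.forward(layer_names):
        for row in output:                      # row = [cx, cy, w, h, objectness, class scores...]
            scores = row[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence > conf_thresh:
                detections.append((class_id, confidence, row[:4]))
    return detections
```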

A wavelet-based fast motion estimation (웨이블릿 기반의 고속 움직임 예측 기법)

  • 배진우;선동우;유지상
    • Journal of Broadcast Engineering
    • /
    • v.8 no.3
    • /
    • pp.297-305
    • /
    • 2003
  • In this paper, we propose a wavelet-based fast motion estimation algorithm for very low bit-rate video encoding. By using the multi-resolution analysis (MRA) property of the wavelet transform together with spatial interpolation of an image, we reduce both the prediction error and the computational complexity at the same time. In particular, by defining a significant block (SB) based on the differential information of wavelet coefficients between successive frames, the proposed algorithm overcomes a drawback of the multi-resolution motion estimation (MRME) algorithm, namely the increased number of motion vectors. Experimental results show that, compared with the MRME algorithm, the proposed algorithm not only reduces the computational load by up to 70% but also improves PSNR by about 0.1 to 1.2 dB.
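
The significant-block idea can be sketched as follows: decompose two consecutive frames with a two-level wavelet transform and flag the blocks whose coefficient-difference energy exceeds a threshold, so that only those blocks would receive refined motion search. The wavelet, block size, and threshold are assumptions, not the paper's settings.

```python
# Flag "significant blocks" from the difference of wavelet coefficients
# between two successive frames.
import numpy as np
import pywt

def significant_blocks(prev_frame, curr_frame, level=2, block=8, thresh=50.0):
    prev_ll = pywt.wavedec2(prev_frame.astype(float), "haar", level=level)[0]
    curr_ll = pywt.wavedec2(curr_frame.astype(float), "haar", level=level)[0]
    diff = np.abs(curr_ll - prev_ll)
    flags = []
    for y in range(0, diff.shape[0] - block + 1, block):
        for x in range(0, diff.shape[1] - block + 1, block):
            if diff[y:y + block, x:x + block].sum() > thresh:
                flags.append((y, x))          # top-left corner of a significant block
    return flags
```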

Analysis on the Impact of UWB Sensor on Broadband Wireless Communication System (UWB 센서에 의한 광대역 무선 시스템의 간섭 영향 분석)

  • Cheng, Yan-Ming;Lee, Il-Kyoo;Lee, Yong-Woo;Oh, Seung-Hyeub;Cha, Jae-Sang
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.10 no.3
    • /
    • pp.83-89
    • /
    • 2010
  • This paper presents the impact of an Ultra Wide-Band (UWB) sensor operating at 4.5 GHz on a broadband wireless communication system that also uses the 4.5 GHz frequency. The Minimum Coupling Loss (MCL) method and the Spectrum Engineering Advanced Monte Carlo Analysis Tool (SEAMCAT) are used to evaluate the interference effects of the UWB sensor on the broadband wireless communication system. The minimum protection distance between a single UWB sensor and a mobile station of the broadband wireless communication system should be more than 1.2 m to guarantee coexistence. In the case of multiple UWB sensors, a UWB transmit PSD of around -68.5 dBm/MHz or below is required to keep the interference probability below 5% for the mobile station of the broadband wireless communication system.
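
The MCL reasoning can be illustrated with a free-space back-of-the-envelope: the protection distance is the distance at which the path loss equals the required minimum coupling loss, using FSPL(dB) = 20·log10(d_m) + 20·log10(f_MHz) − 27.55. The required-MCL value used below is an assumed example, not the paper's figure.

```python
# Protection distance from a required minimum coupling loss under free-space path loss.
import math

def protection_distance_m(required_mcl_db, freq_mhz=4500.0):
    """Distance (m) at which free-space path loss equals the required coupling loss."""
    return 10 ** ((required_mcl_db + 27.55 - 20 * math.log10(freq_mhz)) / 20)

print(f"{protection_distance_m(48.0):.2f} m")   # example: about 1.3 m at 4.5 GHz
```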

Synthesis Method for Stereoscopic Still Pictures and Moving Pictures (실사 양안식 정지영상 및 동영상 콘텐츠 지원을 위한 합성 방법 연구)

  • Lee Injae;Jeong Seyoon;Kim Kyuheon
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2003.11a
    • /
    • pp.153-156
    • /
    • 2003
  • As there is a growing tendency to present 3D content instead of 2D content, research on stereoscopic images and video is under way in a variety of fields such as acquisition, compression, transmission, authoring, and display. Authoring techniques for stereoscopic content have so far emphasized virtual stereoscopic content, so authoring techniques for captured stereoscopic pictures remain insufficient. When we compose a stereo scene from stereoscopic pictures, the content may not match the scene because each picture may have been captured under different camera conditions. Until now, stereoscopic pictures have been modified manually to solve this problem, which is laborious, time-consuming, and difficult for users without basic knowledge of stereopsis. In this paper, we propose a synthesis method for composing a natural stereo scene from stereoscopic still pictures and moving pictures. Experimental results show that the proposed method allows a user to synthesize stereoscopic content easily and compose a stereo scene conveniently.
