• Title/Summary/Keyword: 2D Video

Multi-focus 3D Display (다초점 3차원 영상 표시 장치)

  • Kim, Seong-Gyu; Kim, Dong-Uk; Gwon, Yong-Mu; Son, Jeong-Yeong
    • Proceedings of the Optical Society of Korea Conference / 2008.07a / pp.119-120 / 2008
  • An HMD-type multi-focus 3D display system is developed, and its ability to satisfy eye accommodation is verified. Four LEDs (light-emitting diodes) and a DMD are used to generate four parallax images for a single eye, and the system contains no mechanical moving parts. Multi-focus refers to the ability to provide monocular depth cues at various depth levels. By achieving this multi-focus function, we developed a 3D display system for a single eye that can satisfy accommodation to displayed virtual objects within a defined depth range. Focus adjustment was achieved at five sequential depth steps within a 2 m depth range for a single eye. In addition, the change in blurring with focusing depth was examined using photographs and video recordings as well as several human subjects. The HMD-type multi-focus 3D display can be applied to monocular 3D displays and monocular AR 3D displays.

Implementation of Integrated Player System based on Free-Viewpoint Video Service according to User Selection (사용자 선택에 따른 자유 시점 비디오 서비스 기반의 통합 플레이어 시스템 구현)

  • Yang, Ji-hee; Song, Min-ki; Park, Gooman
    • Journal of Broadcast Engineering / v.25 no.2 / pp.265-274 / 2020
  • Free-viewpoint video service is a technology that allows users to watch from any angle, position, and distance through interaction. In this paper, free-viewpoint video services are defined in terms of four viewing modes: inward view, outward view, 3D object view, and first-person view, and we developed and implemented a new integrated player that supports all of them. For girl-band performance and basketball-game content, multi-view cameras suited to each viewing mode were installed to acquire media, and the data stored on a server is streamed over the network for viewing. Users can freely choose among the four viewing modes as well as the spatial position and angle, and media such as images and sound are rendered appropriately for the selected viewpoint. Our system is expected to serve as a scalable free-viewpoint video player and to provide users with immersion and presence by combining the various viewing modes.
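
Purely to illustrate the kind of mode dispatch such an integrated player needs (the names and placeholder renderers below are hypothetical, not from the paper), a minimal Python sketch:

```python
from enum import Enum, auto

# The four viewing modes described in the abstract.
class ViewMode(Enum):
    INWARD = auto()        # cameras arranged around the subject, looking inward
    OUTWARD = auto()       # cameras at the center of the scene, looking outward
    OBJECT_3D = auto()     # free rotation around a reconstructed 3D object
    FIRST_PERSON = auto()  # view from a performer's or player's position

# Hypothetical renderer registry: each mode maps to its own rendering routine.
RENDERERS = {
    ViewMode.INWARD: lambda frames, vp: f"inward view rendered at {vp}",
    ViewMode.OUTWARD: lambda frames, vp: f"outward view rendered at {vp}",
    ViewMode.OBJECT_3D: lambda frames, vp: f"3D object view rendered at {vp}",
    ViewMode.FIRST_PERSON: lambda frames, vp: f"first-person view rendered at {vp}",
}

def render(mode: ViewMode, frames, viewpoint):
    """Dispatch the streamed multi-view frames to the renderer for the user-selected mode."""
    return RENDERERS[mode](frames, viewpoint)

print(render(ViewMode.OBJECT_3D, frames=None, viewpoint=(1.5, 0.0, 30.0)))
```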

The long-term effect of Interactive Video Game on Cognitive Information Processing the elderly: P300 (장기간의 상호작용적 비디오 게임이 노인의 인지정보처리에 미치는 영향: P300)

  • Kim, Sung-Woon; Kim, Han-Cheol
    • Journal of Digital Convergence / v.18 no.8 / pp.493-504 / 2020
  • The objective of this study was to examine the effect of interactive video games on cognitive information processing in the elderly. Sixty elderly adults participated; their ages ranged from 65 to 70, with a mean age of 67.60 years. The subjects were randomly assigned to one of three experimental conditions: (1) an interactive video game group (n=20), (2) an aerobic exercise group (n=20), and (3) a control group (n=20). The data were analyzed using two-way ANOVAs with repeated measures on group and time. Cognitive function was assessed by neuroelectrical responses and ERP analysis. The results showed no statistically significant differences between the interactive video game group and the aerobic exercise group in response time, response accuracy, or the amplitude and potential measures of the cognitive-function and ERP analyses, but both groups improved relative to the control group. It was concluded that long-term aerobic exercise such as interactive video games is associated with attenuation of cognitive decline in the elderly.
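
For reference only (not code from the study; the column names and data below are made up), one way to run the group-by-time repeated-measures design described above is a mixed ANOVA, for example with the pingouin package:

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Dummy long-format data: 60 subjects in 3 groups, each measured pre and post.
rng = np.random.default_rng(0)
rows = []
for subject, group in enumerate(["game"] * 20 + ["aerobic"] * 20 + ["control"] * 20):
    for time in ["pre", "post"]:
        boost = 0.5 if (time == "post" and group != "control") else 0.0  # assumed effect
        rows.append({"subject": subject, "group": group, "time": time,
                     "p300_amplitude": rng.normal(4.0 + boost, 0.5)})
df = pd.DataFrame(rows)

# Two-way (group x time) ANOVA with repeated measures on time.
print(pg.mixed_anova(data=df, dv="p300_amplitude",
                     within="time", subject="subject", between="group"))
```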

A Study of Utilizing 2D Photo Scan Technology to Efficiently Design 3D Models (2D 포토 스캔 기술을 활용한 효율적인 3D 모델링 제작방법 연구)

  • Guo, Dawei; Chung, Jeanhun
    • Journal of Digital Convergence / v.15 no.7 / pp.393-400 / 2017
  • Generally, in visual-effects video and 3D animation production, the 3D models of characters and backgrounds are built manually in 3D software such as MAYA or 3DS MAX. This manual modeling approach, however, requires a great deal of time and money. In this paper, two experimental groups are set up to show that building 3D models with the 2D photo-scan modeling approach is effective and superior. The first experimental group models the same object under different experimental settings, and the second experimental group models the same background under different experimental settings. Through these two experimental groups, we seek an efficient design method and the points that require attention when using the photo-scan approach. From the whole experiment we aim to obtain usable models and to show that the photo-scan modeling approach is effective and superior.
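
For orientation only, a minimal sketch of what a 2D photo-scan (photogrammetry) pass can look like in practice, assuming the open-source COLMAP tool is installed and on the PATH (the paper does not prescribe a particular tool, and the paths below are hypothetical):

```python
import subprocess
from pathlib import Path

photos = Path("photos")          # hypothetical folder of overlapping source photographs
workspace = Path("scan_output")  # hypothetical output workspace for the reconstruction
workspace.mkdir(exist_ok=True)

# Run COLMAP's end-to-end pipeline (feature extraction, matching, reconstruction).
subprocess.run(
    ["colmap", "automatic_reconstructor",
     "--workspace_path", str(workspace),
     "--image_path", str(photos)],
    check=True,
)
# The resulting reconstruction can then be meshed and cleaned up in a DCC tool
# such as MAYA or 3DS MAX, as in the workflow the paper compares against.
```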

Hardware Channel Decoder for Holographic WORM Storage (홀로그래픽 WORM의 하드웨어 채널 디코더)

  • Hwang, Eui-Seok; Yoon, Pil-Sang; Kim, Hak-Sun; Park, Joo-Youn
    • Transactions of the Society of Information Storage Systems / v.1 no.2 / pp.155-160 / 2005
  • In this paper, a channel decoder that provides reliable data retrieval over a noisy holographic channel is developed for a holographic WORM (write once read many) system. It comprises various DSP (digital signal processing) blocks, such as an align-mark detector, an adaptive channel equalizer, a modulation decoder, and an ECC (error correction code) decoder. The specific DSP schemes are designed to reduce the effect of noise in the holographic WORM (H-WORM) system, particularly in the prototype of Daewoo Electronics (DEPROTO). For real-time data retrieval, the channel decoder is redesigned as FPGA (field programmable gate array) based hardware, in which the DSP blocks operate in parallel, with memory buffers between blocks and controllers for driving the FPGA peripherals. As the input source for the experiments, MPEG-2 TS (transport stream) data was recorded to the DEPROTO system. During retrieval, the CCD (charge-coupled device) capture device of the DEPROTO detects the retrieved images and transmits their signals to the FPGA of the hardware channel decoder. Finally, the output data stream of the channel decoder is transferred to an MPEG decoding board for monitoring the video signal. The experimental results showed an error-corrected BER (bit error rate) of less than $10^{-9}$, from a raw DEPROTO BER of about $10^{-3}$. With the developed hardware channel decoder, real-time video demonstration was possible during the experiments. The FPGA operating clock was 60 MHz, which supported decoding of up to 120 megabits of channel data per second.
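
As a side note on how BER figures like those above are measured (an illustrative sketch, not the paper's hardware or code), the raw and error-corrected bit error rates are simply the fraction of retrieved bits that differ from the recorded bit stream:

```python
import numpy as np

def bit_error_rate(reference_bits: np.ndarray, received_bits: np.ndarray) -> float:
    """Fraction of positions where the retrieved bits differ from the recorded bits."""
    return float(np.count_nonzero(reference_bits != received_bits)) / reference_bits.size

rng = np.random.default_rng(0)
recorded = rng.integers(0, 2, size=1_000_000, dtype=np.uint8)    # recorded channel bits
flips = (rng.random(recorded.size) < 1e-3).astype(np.uint8)      # ~1e-3 raw flip probability
retrieved_raw = recorded ^ flips                                  # bits before ECC decoding
print(bit_error_rate(recorded, retrieved_raw))                    # roughly 1e-3
```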

How to Acquire the Evidence Capability of Video Images Taken by Drone (드론으로 촬영한 영상물의 증거능력 확보방안)

  • Kim, Yong-Jin; Song, Jae-Keun; Lee, Gyu-An
    • The Journal of the Korea institute of electronic communication sciences / v.13 no.1 / pp.163-168 / 2018
  • With the advent of the fourth industrial revolution era, the use of drones has spread rapidly in various fields, and drones will now be used extensively in criminal investigation. Whereas crime-scene photography has so far remained limited to 2D digital images, it is becoming possible not only to reproduce 3D images but also to recreate a crime scene with a 3D printer. First, video images taken by an investigative agency using drones are digital image evidence, and the requirements for securing their evidentiary capability are no different from the conditions for admitting other digital evidence. However, as drones become a new area of scientific investigation, it is essential to systematize the authentication of images taken by drones so that they can be used as evidence. In this paper, I propose a method to secure the evidentiary capability of digital images taken by drones.

Bit-plane based Lossless Depth Map Coding Method (비트평면 기반 무손실 깊이정보 맵 부호화 방법)

  • Kim, Kyung-Yong; Park, Gwang-Hoon; Suh, Doug-Young
    • Journal of Broadcast Engineering / v.14 no.5 / pp.551-560 / 2009
  • This paper proposes an efficient lossless depth-map coding method for MPEG 3D-Video coding. In general, conventional video coding methods such as H.264 have been used for depth-map coding, but they do not take the image characteristics of the depth map into account. Therefore, as a lossless depth-map coding method, this paper proposes a bit-plane based lossless depth-map coding method using the MPEG-4 Part 2 shape coding scheme. Simulation results show that the proposed method achieves a compression ratio of 28.91:1. In intra-only coding, the proposed method reduces the bitrate by 24.84% compared with the JPEG-LS scheme, by 39.35% compared with the JPEG-2000 scheme, by 30.30% compared with the H.264 (CAVLC mode) scheme, and by 16.65% compared with the H.264 (CABAC mode) scheme. In addition, in intra-and-inter coding the proposed method reduces the bitrate by 36.22% compared with the H.264 (CAVLC mode) scheme and by 23.71% compared with the H.264 (CABAC mode) scheme.
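
To make the bit-plane idea concrete (an illustrative sketch only, not the proposed codec), an 8-bit depth map can be split losslessly into eight binary planes, which is the representation a shape-coding-style bit-plane coder would then compress:

```python
import numpy as np

# Dummy 8-bit depth map (QCIF-sized) standing in for a real depth sequence frame.
depth = np.random.randint(0, 256, size=(144, 176), dtype=np.uint8)

# Decompose into 8 binary bit-planes, from LSB (b=0) to MSB (b=7).
planes = [((depth >> b) & 1).astype(np.uint8) for b in range(8)]

# Perfect-reconstruction check: summing the weighted planes recovers the map losslessly.
recon = sum(p.astype(np.uint16) << b for b, p in enumerate(planes)).astype(np.uint8)
assert np.array_equal(recon, depth)
```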

Transform domain Wyner-Ziv Coding based on the frequency-adaptive channel noise modeling (주파수 적응 채널 잡음 모델링에 기반한 변환영역 Wyner-Ziv 부호화 방법)

  • Kim, Byung-Hee; Ko, Bong-Hyuck; Jeon, Byeung-Woo
    • Journal of Broadcast Engineering / v.14 no.2 / pp.144-153 / 2009
  • Recently, as the need for lightweight video encoding has grown for applications such as UCC (User Created Contents) and multiview video, Distributed Video Coding (DVC), in which the decoder rather than the encoder performs the motion estimation/compensation that accounts for most of the computational complexity, has been investigated vigorously. Wyner-Ziv coding reconstructs an image by using a channel code to eliminate the noise in the side information, which is the decoder-side prediction of the original image. The side information in Wyner-Ziv coding is generally generated by frame interpolation between key frames, and a channel code whose performance approaches the Shannon limit, such as a turbo code or an LDPC code, is employed. The noise model used for channel decoding in Wyner-Ziv coding is called the virtual channel noise and is generally modeled by a Laplacian or Gaussian distribution. In this paper, we propose a Wyner-Ziv coding method based on frequency-adaptive channel noise modeling in the transform domain. Experimental results on various sequences show that the proposed method makes the channel noise model more accurate than the conventional scheme, improving the rate-distortion performance by up to 0.52 dB.
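
As an illustration of frequency-adaptive noise modeling (a sketch under assumed details, not the paper's exact estimator), a separate Laplacian parameter can be fitted per transform band from the coefficient residual between the WZ frame and its side information:

```python
import numpy as np

# For a Laplacian density f(x) = (alpha/2) * exp(-alpha * |x|), the maximum-likelihood
# estimate of alpha from samples x is 1 / mean(|x|).

def per_band_laplacian_alpha(wz_coeffs: np.ndarray, si_coeffs: np.ndarray) -> np.ndarray:
    """wz_coeffs, si_coeffs: arrays of shape (num_blocks, 4, 4) of 4x4 DCT coefficients.
    Returns a (4, 4) array with one Laplacian parameter per frequency band."""
    residual = wz_coeffs - si_coeffs                    # virtual channel noise samples
    mean_abs = np.abs(residual).mean(axis=0) + 1e-9     # per-band mean |residual|
    return 1.0 / mean_abs

# Dummy usage with random coefficients standing in for real DCT data.
rng = np.random.default_rng(0)
alpha = per_band_laplacian_alpha(rng.laplace(size=(500, 4, 4)), np.zeros((500, 4, 4)))
print(alpha)
```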

Adaptive Quantization for Transform Domain Wyner-Ziv Residual Coding of Video (변환 영역 Wyner-Ziv 잔차 신호 부호화를 위한 적응적 양자화)

  • Cho, Hyon-Myong; Shim, Hiuk-Jae; Jeon, Byeung-Woo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.4 / pp.98-106 / 2011
  • Since prediction processes such as motion estimation and motion compensation are performed at the WZ video decoder rather than at the encoder, WZ video compression cannot outperform conventional video encoders. To implement a low-complexity prediction process at the encoder, WZ residual coding was proposed: instead of the original WZ frames, it encodes the residual signal between the key frames and the WZ frames. Although WZ residual coding performs well in the pixel domain, it brings no improvement in the transform domain compared with transform-domain WZ coding. Transform-domain WZ residual coding has difficulty achieving better performance because the quantization matrices pre-defined for WZ coding are not well matched to WZ residual coding. In this paper, we propose a new quantization method that adaptively modifies the quantization matrix and the quantization step size for transform-domain WZ residual coding. Experimental results show a gain of 22% in BDBR and 1.2 dB in BDPSNR.
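
To illustrate the general idea of adapting quantization to the residual statistics (an assumed scheme for illustration, not the paper's exact rule), the step size of each transform band can be derived from the dynamic range actually observed in that band and the number of levels allocated to it:

```python
import numpy as np

def adaptive_steps(residual_coeffs: np.ndarray, levels_per_band: np.ndarray) -> np.ndarray:
    """residual_coeffs: (num_blocks, 4, 4) DCT coefficients of the WZ residual signal.
    levels_per_band:   (4, 4) number of quantization levels allocated to each band.
    Returns a (4, 4) array of per-band quantization step sizes."""
    dyn_range = 2.0 * np.abs(residual_coeffs).max(axis=0)   # per-band peak-to-peak range
    steps = dyn_range / np.maximum(levels_per_band, 1)      # step = range / allocated levels
    return np.maximum(steps, 1.0)                           # avoid degenerate zero steps

# Dummy usage: coarse allocation giving more levels to low-frequency bands.
rng = np.random.default_rng(0)
levels = np.array([[16, 8, 4, 2], [8, 4, 2, 1], [4, 2, 1, 1], [2, 1, 1, 1]])
print(adaptive_steps(rng.laplace(scale=3.0, size=(500, 4, 4)), levels))
```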

A Frame-based Coding Mode Decision for Temporally Active Video Sequence in Distributed Video Coding (분산비디오부호화에서 동적비디오에 적합한 프레임별 모드 결정)

  • Hoangvan, Xiem; Park, Jong-Bin; Shim, Hiuk-Jae; Jeon, Byeung-Woo
    • Journal of Broadcast Engineering / v.16 no.3 / pp.510-519 / 2011
  • Intra mode decision is a useful coding tool in Distributed Video Coding (DVC) for improving coding efficiency on video sequences with fast motion. A major limitation of existing intra mode decision methods, however, is that their efficiency depends heavily on user-specified thresholds or modeling parameters. This paper proposes an entropy-based method to address this problem. The probabilities of the intra and Wyner-Ziv (WZ) modes are first determined by examining the correlation of pixels in the spatial and temporal directions. Based on these probabilities, the entropies of the intra and WZ modes are computed, and a comparison of the entropy values decides the coding mode between intra coding and WZ coding without relying on any user-specified thresholds or modeling parameters. Experimental results show superior rate-distortion performance, with PSNR improvements of up to 2 dB over conventional Wyner-Ziv coding without intra mode decision. Furthermore, since the proposed method does not require any thresholds or modeling parameters from the user, it is very attractive for practical applications.
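
A minimal sketch of an entropy-based, threshold-free mode decision in this spirit (an assumed formulation for illustration, not the paper's exact algorithm): compare the empirical entropies of the spatial (intra) and temporal (WZ) prediction residuals and pick the lower-entropy mode.

```python
import numpy as np

def empirical_entropy(samples: np.ndarray, num_bins: int = 256) -> float:
    """Empirical entropy (bits per sample) of a residual signal, from its histogram."""
    hist, _ = np.histogram(samples, bins=num_bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def decide_mode(frame: np.ndarray, prev_frame: np.ndarray) -> str:
    spatial_res = np.diff(frame.astype(np.int16), axis=1)      # horizontal intra residual
    temporal_res = frame.astype(np.int16) - prev_frame          # residual vs. previous frame
    h_intra = empirical_entropy(spatial_res)
    h_wz = empirical_entropy(temporal_res)
    return "intra" if h_intra < h_wz else "wz"                  # no user-tuned threshold

# Dummy usage with random 8-bit frames standing in for real video.
rng = np.random.default_rng(0)
f0 = rng.integers(0, 256, size=(144, 176), dtype=np.uint8)
f1 = np.clip(f0.astype(np.int16) + rng.integers(-2, 3, size=f0.shape), 0, 255).astype(np.uint8)
print(decide_mode(f1, f0))
```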