• Title/Summary/Keyword: 2D Video

Digital Holographic Display System with Large Screen Based on Viewing Window Movement for 3D Video Service

  • Park, Minsik;Chae, Byung Gyu;Kim, Hyun-Eui;Hahn, Joonku;Kim, Hwi;Park, Cheong Hee;Moon, Kyungae;Kim, Jinwoong
    • ETRI Journal / v.36 no.2 / pp.232-241 / 2014
  • A holographic display system with a 22-inch LCD panel is developed to provide a wide viewing angle and a large holographic 3D image. It is realized by steering the narrow viewing window that results from a pixel pitch that is very large compared to the wavelength of the laser light. Point light sources and a lens array make it possible to arbitrarily control the position of the viewing window for a moving observer. The display provides both eyes of the observer with a holographic 3D image using two vertically placed LCD panels and a beam splitter to support a holographic stereogram.
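
For a rough sense of why a large pixel pitch forces a narrow viewing window, the diffraction-limited window width at the observer plane can be estimated as wavelength × viewing distance / pixel pitch. The sketch below uses this standard relation with assumed values; it is not taken from the paper.

```python
# Rough sketch: diffraction-limited viewing-window width of a holographic display.
# W ~ wavelength * viewing_distance / pixel_pitch (standard relation; all values assumed).
wavelength = 532e-9      # green laser, metres (assumed)
pixel_pitch = 250e-6     # LCD pixel pitch, metres (assumed; far larger than the wavelength)
viewing_distance = 1.0   # observer distance, metres (assumed)

window_width = wavelength * viewing_distance / pixel_pitch
print(f"Viewing window width: {window_width * 1e3:.2f} mm")  # ~2.13 mm, much narrower than the eye spacing
```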

Producing a Virtual Object with Realistic Motion for a Mixed Reality Space

  • Daisuke Hirohashi;Tan, Joo-Kooi;Kim, Hyoung-Seop;Seiji Ishikawa
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.153.2-153 / 2001
  • A technique is described for producing a virtual object with realistic motion. A 3-D human motion model is obtained by applying a newly developed motion-capture technique to a real human in motion. The factorization method is a technique for recovering the 3-D shape of a rigid object from a single video image stream without using camera parameters; here it is extended to recover 3-D human motion. The proposed system consists of three fixed cameras that capture video of the moving human. The three image sequences are analyzed to yield measurement matrices at individual sampling times, which are merged into a single measurement matrix to which the factorization is applied, recovering the 3-D human motion ...
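
The factorization step the abstract refers to can be sketched as the classic rank-3 SVD factorization of a measurement matrix (Tomasi-Kanade style). The code below is a minimal illustration under an orthographic camera assumption; it omits the metric-upgrade step and the merging of the three camera sequences.

```python
# Minimal sketch of the factorization idea referenced in the abstract
# (orthographic camera model assumed; metric upgrade omitted).
import numpy as np

def factorize(W):
    """W: (2F x P) measurement matrix of tracked image points (x rows then y rows per frame)."""
    # Register the measurements to their centroid (removes translation).
    W_centered = W - W.mean(axis=1, keepdims=True)
    # Rank-3 factorization via SVD: W ~ M @ S
    U, s, Vt = np.linalg.svd(W_centered, full_matrices=False)
    sqrt_s = np.sqrt(np.diag(s[:3]))
    M = U[:, :3] @ sqrt_s        # (2F x 3) camera/motion matrix
    S = sqrt_s @ Vt[:3, :]       # (3 x P) recovered 3-D shape (up to an affine ambiguity)
    return M, S

# Synthetic example, for shape only: P points tracked over F frames.
F, P = 3, 20
W = np.random.rand(2 * F, P)
M, S = factorize(W)
print(M.shape, S.shape)  # (6, 3) (3, 20)
```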

Real-time Identification of Traffic Light and Road Sign for the Next Generation Video-Based Navigation System (차세대 실감 내비게이션을 위한 실시간 신호등 및 표지판 객체 인식)

  • Kim, Yong-Kwon;Lee, Ki-Sung;Cho, Seong-Ik;Park, Jeong-Ho;Choi, Kyoung-Ho
    • Journal of Korea Spatial Information System Society / v.10 no.2 / pp.13-24 / 2008
  • Next-generation video-based car navigation is being researched to overcome the drawbacks of existing 2D-based navigation and to provide various services for safe driving. The components of such a navigation system include a road object database, a road-line (lane) identification module, and a crossroad identification module. In this paper, we propose a traffic-light and road-sign recognition method that can be effectively exploited for crossroad recognition in video-based car navigation systems. The method uses object color information and other spatial features in the video image. The results show an average recognition rate of 90% at distances of 30 m to 60 m for traffic lights and 97% at 40-90 m for road signs. The algorithm also achieves a processing time of 46 ms/frame, which indicates its suitability for real-time processing.
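
As a hedged illustration of color-based candidate detection of the kind the abstract mentions, the sketch below thresholds a frame in HSV space with OpenCV; the color ranges, blob-size limits, and morphology are assumptions for demonstration, not the paper's parameters.

```python
# Illustrative color-based traffic-light candidate detection (assumed thresholds).
import cv2
import numpy as np

def detect_red_light_candidates(bgr_frame):
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges (assumed values).
    lower = cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))
    upper = cv2.inRange(hsv, (170, 120, 120), (180, 255, 255))
    mask = cv2.bitwise_or(lower, upper)
    # Remove small noise before extracting candidate blobs.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep blobs of plausible size as traffic-light candidates (assumed area limits).
    return [cv2.boundingRect(c) for c in contours if 20 < cv2.contourArea(c) < 2000]
```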

2D/3D conversion algorithm on broadcast and mobile environment and the platform (방송 및 모바일 실감형 2D/3D 컨텐츠 변환 방법 및 플랫폼)

  • Song, Hyok;Bae, Jin-Woo;Yoo, Ji-Sang;Choi, Byeoung-Ho
    • Korea Institute of Information and Communication Facilities Engineering: Conference Proceedings / 2007.08a / pp.386-389 / 2007
  • TV technology started with black-and-white TV. Color TV was then invented, and users now demand more realistic TV technology; the next step is 3DTV. 3DTV requires 3D display technology, 3D coding technology, digital mux/demux technology for broadcast, and 3D video acquisition. Moreover, almost all existing content is 2D, which creates the need for 2D-to-3D conversion. This article describes a 2D/3D conversion algorithm and a hardware platform on an FPGA board. A time difference between frames produces the 3D effect, and a convolution filter strengthens it: the distorted (shifted and filtered) image and the original image together give the 3D effect. The algorithm is demonstrated on a 3D display that produces the 3D effect with a parallax-barrier method and incorporates the FPGA board.
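
A minimal sketch of the "time difference plus convolution filter" idea, assuming the second view is produced by a horizontal shift of the frame followed by a sharpening convolution; the shift amount and kernel are illustrative assumptions only.

```python
# Illustrative 2D-to-3D view synthesis: pair the frame with a shifted, filtered copy.
import numpy as np
import cv2

def make_stereo_pair(frame, shift_px=8):
    left = frame
    # Shift the frame horizontally to act as the other-eye view (crude disparity).
    right = np.roll(frame, shift_px, axis=1)
    # A simple sharpening convolution to strengthen the perceived depth cue (assumed kernel).
    kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=np.float32)
    right = cv2.filter2D(right, -1, kernel)
    return left, right

# A parallax-barrier display would then interleave the two views column by column.
```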

A Study on the Development for 3D Audio Generation Machine

  • Kim Sung-Eun;Kim Myong-Hee;Park Man-Gon
    • Journal of Korea Multimedia Society / v.8 no.6 / pp.807-813 / 2005
  • The production and authoring of digital multimedia content are among the most important fields in multimedia technology. Web-based technology and related multimedia software are growing in the IT industry and evolving rapidly in our daily lives. Digital audio and video processing technology is being used to improve quality of life, and interest in high-fidelity, artistic sound in the music and entertainment areas continues to grow through three-dimensional (3D) digital audio technology as well as 3D digital video technology. The digital audio content service field is expanding rapidly through the Internet, and Internet users want audio content services of better quality. Recently, users are no longer satisfied with 2-channel stereo sound and are seeking higher-quality sound, such as the 5.1-channel 3D audio of movie films, but proper hardware equipment may be needed to provide such 3D sound. In this paper, we extend a previously developed simple 3D audio generator and propose a web-based music bank built on a software 3D audio generation player that creates a 3D sound environment with only two speakers, minimizing hardware requirements. We believe that this study can contribute greatly to high-quality digital 3D sound services for music and entertainment enthusiasts.
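
As a rough illustration of positioning a sound source with only two speakers, the sketch below pans a mono signal using interaural time and level differences; this is a generic technique, not the authors' 3D audio generator, and the delay constant is an assumption.

```python
# Generic two-speaker spatialization sketch using ITD/ILD cues (assumed constants).
import numpy as np

def pan_3d(mono, sample_rate, azimuth_deg):
    """Return a stereo signal approximating a source at the given azimuth (degrees, + = right)."""
    azimuth = np.radians(azimuth_deg)
    itd = 0.0006 * np.sin(azimuth)                  # ~0.6 ms maximum interaural delay (assumed)
    delay = int(abs(itd) * sample_rate)
    gain_r = np.sqrt(0.5 * (1 + np.sin(azimuth)))   # constant-power level difference
    gain_l = np.sqrt(0.5 * (1 - np.sin(azimuth)))
    left, right = gain_l * mono, gain_r * mono
    if itd > 0:   # source to the right: delay the left channel
        left = np.concatenate([np.zeros(delay), left])[: len(mono)]
    else:         # source to the left: delay the right channel
        right = np.concatenate([np.zeros(delay), right])[: len(mono)]
    return np.stack([left, right], axis=1)
```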

2D Human Pose Estimation based on Object Detection using RGB-D information

  • Park, Seohee;Ji, Myunggeun;Chun, Junchul
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.2 / pp.800-816 / 2018
  • In recent years, video surveillance research has been able to recognize various pedestrian behaviors and analyze the overall situation of objects by combining image analysis technology and deep learning methods. Human Activity Recognition (HAR), an important issue in video surveillance research, aims to detect abnormal pedestrian behavior in CCTV environments. To recognize human behavior, it is necessary to detect the human in the image and to estimate the pose of the detected human. In this paper, we propose a novel approach to 2D human pose estimation based on object detection using RGB-D information. By adding depth information to RGB information, which is limited in detecting objects due to its lack of topological information, we improve detection accuracy. Subsequently, the rescaled region of the detected object is fed to Convolutional Pose Machines (CPM), a sequential prediction structure based on a convolutional neural network. We use CPM to generate belief maps that predict the positions of keypoints representing human body parts, estimating the human pose by detecting 14 key body points. The experimental results show that the proposed method detects target objects robustly under occlusion, and that 2D human pose estimation can be performed by providing an accurately detected region as the input to CPM. As future work, we will estimate the 3D human pose by mapping the 2D coordinates of the body parts onto 3D space, which can provide useful human behavior information for HAR research.
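
A hedged sketch of the pipeline shape described above: a depth-based refinement of the detected person region, followed by per-keypoint argmax over belief maps. `pose_net` is a hypothetical CPM-like model returning one belief map per keypoint, and the depth heuristic is an assumption, not the paper's method.

```python
# Sketch: depth-refined person region + per-keypoint argmax over belief maps.
import numpy as np

NUM_KEYPOINTS = 14  # the abstract estimates 14 body keypoints

def refine_box_with_depth(box, depth_map, tolerance_mm=300):
    """Tighten an (x, y, w, h) box to pixels near the person's median depth (assumed heuristic)."""
    x, y, w, h = box
    roi = depth_map[y:y + h, x:x + w]
    person_depth = np.median(roi[roi > 0])
    ys, xs = np.where(np.abs(roi - person_depth) < tolerance_mm)
    return (x + xs.min(), y + ys.min(),
            xs.max() - xs.min() + 1, ys.max() - ys.min() + 1)

def estimate_pose(crop, pose_net):
    belief_maps = pose_net(crop)              # shape (NUM_KEYPOINTS, H, W); hypothetical model
    keypoints = []
    for k in range(NUM_KEYPOINTS):
        idx = np.argmax(belief_maps[k])       # most confident location for this body part
        y, x = np.unravel_index(idx, belief_maps[k].shape)
        keypoints.append((x, y))
    return keypoints
```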

Fast Extraction of Objects of Interest from Images with Low Depth of Field

  • Kim, Chang-Ick;Park, Jung-Woo;Lee, Jae-Ho;Hwang, Jenq-Neng
    • ETRI Journal / v.29 no.3 / pp.353-362 / 2007
  • In this paper, we propose a novel unsupervised video object extraction algorithm for individual images and image sequences with low depth of field (DOF). Low DOF is a popular photographic technique that conveys the photographer's intention by placing a clear focus only on an object of interest (OOI). We first describe a fast and efficient scheme for extracting OOIs from individual low-DOF images and then extend it to image sequences with low DOF. The basic algorithm unfolds into three modules. In the first module, a higher-order statistics map representing the spatial distribution of the high-frequency components is obtained from the input low-DOF image. The second module locates a block-based OOI for further processing. Using the block-based OOI, the final OOI is then obtained with pixel-level accuracy. We also present an algorithm that extends the extraction scheme to image sequences with low DOF. The proposed system requires no user assistance to determine the initial OOI, which is possible because of the use of low-DOF images. The experimental results indicate that the proposed algorithm can serve as an effective tool for applications such as 2D-to-3D conversion and photo-realistic video scene generation.
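
One way to sketch a higher-order-statistics focus map of the kind described: high-pass the image and take a local fourth-order moment as a per-pixel focus measure. The Laplacian high-pass, window size, and threshold are assumptions, and the paper's block-level and pixel-level refinement stages are omitted.

```python
# Illustrative higher-order-statistics (HOS) focus map for low-DOF images (assumed parameters).
import numpy as np
import cv2

def hos_focus_map(gray, win=7):
    high = cv2.Laplacian(gray.astype(np.float32), cv2.CV_32F)   # high-frequency components
    mean = cv2.blur(high, (win, win))
    fourth_moment = cv2.blur((high - mean) ** 4, (win, win))    # local 4th-order central moment
    hos = np.clip(fourth_moment, 0, np.percentile(fourth_moment, 99))
    return hos / (hos.max() + 1e-9)

def rough_ooi_mask(gray, thresh=0.1):
    # Block-based localization and pixel-level refinement would follow in the full scheme.
    return (hos_focus_map(gray) > thresh).astype(np.uint8)
```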

An Objective No-Reference Perceptual Quality Assessment Metric based on Temporal Complexity and Disparity for Stereoscopic Video

  • Ha, Kwangsung;Bae, Sung-Ho;Kim, Munchurl
    • IEIE Transactions on Smart Processing and Computing / v.2 no.5 / pp.255-265 / 2013
  • 3DTV is expected to be a promising next-generation broadcasting service. However, the visual discomfort and fatigue caused by viewing 3D videos have become an important issue. This paper proposes a perceptual quality assessment metric for stereoscopic video (SV-PQAM). To model the SV-PQAM, the paper presents the following features: temporal variance, disparity variation within frames, disparity variation between frames, and the disparity distribution of frame boundary areas, all of which affect human depth perception and visual discomfort for stereoscopic views. The four features are combined into the SV-PQAM, which then serves as a no-reference stereoscopic video quality perception model and an objective quality assessment metric. The proposed SV-PQAM does not require a depth map; instead, it uses disparity information obtained by a simple estimation. The model parameters were estimated by linear regression against mean opinion score values obtained from subjective perceptual quality assessments. The experimental results showed that the proposed SV-PQAM is highly consistent with the subjective assessments, with a Pearson correlation coefficient of 0.808, and the prediction performance also showed good consistency, with an outlier ratio of zero.
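
The feature-combination step can be sketched as ordinary linear regression from the four features to subjective MOS values, followed by a Pearson-correlation check; the code below is a generic illustration with placeholder inputs, not the paper's model fit.

```python
# Sketch: combine four per-video features into a quality score via linear regression.
import numpy as np

def fit_quality_model(features, mos):
    """features: (N, 4) matrix of the four features per video; mos: (N,) subjective scores."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])   # add a bias term
    weights, *_ = np.linalg.lstsq(X, mos, rcond=None)            # least-squares linear regression
    predicted = X @ weights
    pearson = np.corrcoef(predicted, mos)[0, 1]                  # consistency with subjective MOS
    return weights, pearson
```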

FFT Based Information Concealing Method for Video Copyright Protection (동영상 저작권보호를 위한 FFT 기반 정보 은닉 기법)

  • Choi, Il-Mok;Hwang, Seon-Cheol
    • The Transactions of the Korean Institute of Electrical Engineers P / v.62 no.4 / pp.204-209 / 2013
  • An FFT-based fingerprinting method that conceals additional information has been developed for video copyright protection. More complex information must be embedded in a video in invisible form to prove ownership and legal distribution. This paper describes a method for inserting such information and detecting it. A 3×3 point structure is used to represent the information: with one of the nine points always turned on, the remaining points encode 8 bits (2^8 = 256 values). The points are marked in the frequency domain by modifying both the real and imaginary parts of the coefficients. Five successive frames of the same scene are marked, because frames of the same scene have very similar FFT shapes while their detailed coefficient values differ enough from frame to frame for the marked points to be recognized. The paper also describes a detection method based on averaging and a correlation algorithm. The PSNRs of images marked by our method ranged from 51.138 dB to 51.143 dB, and the correlation values ranged from 0.79 to 0.87.
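
A hedged sketch of the general embedding/detection idea: add energy at chosen FFT coefficients of a frame (modifying real and imaginary parts), then average the FFT magnitude over several frames of the same scene and threshold. The coefficient positions, embedding strength, and threshold are assumptions, not the paper's exact layout.

```python
# Generic frequency-domain mark embedding and averaging-based detection (assumed layout).
import numpy as np

def embed_bits(frame, bits, positions, strength=50.0):
    """frame: 2-D grayscale array; bits: 8 payload bits; positions: 9 (row, col) FFT locations."""
    F = np.fft.fft2(frame.astype(np.float32))
    pattern = [1] + list(bits)                 # one reference point always on, plus 8 data points
    for on, (r, c) in zip(pattern, positions):
        if on:
            F[r, c] += strength * (1 + 1j)     # modify both real and imaginary parts
            F[-r, -c] += strength * (1 - 1j)   # keep the spectrum conjugate-symmetric (real image)
    return np.real(np.fft.ifft2(F))

def detect_bits(frames, positions, threshold):
    """Average the FFT magnitude over successive frames of the same scene, then threshold."""
    avg = np.mean([np.abs(np.fft.fft2(f.astype(np.float32))) for f in frames], axis=0)
    return [int(avg[r, c] > threshold) for r, c in positions[1:]]
```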

Point Cloud Video Codec using 3D DCT based Motion Estimation and Motion Compensation (3D DCT를 활용한 포인트 클라우드의 움직임 예측 및 보상 기법)

  • Lee, Minseok;Kim, Boyeun;Yoon, Sangeun;Hwang, Yonghae;Kim, Junsik;Kim, Kyuheon
    • Journal of Broadcast Engineering / v.26 no.6 / pp.680-691 / 2021
  • Due to recent developments in acquiring 3D content with devices such as 3D scanners, the diversity of content used in AR (Augmented Reality)/VR (Virtual Reality) fields is increasing significantly. There are several ways to represent 3D data, and point clouds are one of them. A point cloud is a set of points and has the advantage of capturing actual 3D data with high precision. However, representing 3D content requires much more data than 2D images; the data needed to represent dynamic 3D point cloud objects consisting of multiple frames is especially large, which is why an efficient compression technology for this kind of data must be developed. In this paper, a motion estimation and compensation method for dynamic point cloud objects using the 3D DCT is proposed. This converts the 3D video frames into I-frames and P-frames, which ensures a higher compression ratio. We then confirm the compression efficiency of the proposed technology by comparing it with the anchor technology, an intra-frame-based compression method, and with 2D-DCT-based V-PCC.
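
A minimal sketch of applying a separable 3D DCT to one voxelized block of a point cloud, the transform at the core of the described codec; the block size and quantization step are assumptions, and the motion estimation/compensation and entropy-coding stages are not shown.

```python
# Sketch: separable 3-D DCT of a voxelized point-cloud block (assumed 8x8x8 block, coarse quantization).
import numpy as np
from scipy.fft import dctn, idctn

def transform_block(voxel_block, q_step=4.0):
    """voxel_block: (8, 8, 8) occupancy/attribute block from a voxelized point cloud."""
    coeffs = dctn(voxel_block.astype(np.float32), norm="ortho")   # separable 3-D DCT
    return np.round(coeffs / q_step)                              # coarse quantization (assumed step)

def inverse_block(quantized, q_step=4.0):
    return idctn(quantized * q_step, norm="ortho")                # dequantize and invert the 3-D DCT

# Synthetic occupancy block, for shape only.
block = (np.random.rand(8, 8, 8) > 0.8).astype(np.float32)
rec = inverse_block(transform_block(block))
print(np.abs(rec - block).mean())   # reconstruction error introduced by quantization
```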