• Title/Summary/Keyword: 2D Video

Search Results: 910

A Low-Power 2-D DCT/IDCT Architecture through Dynamic Control of Data Driven and Fine-Grain Partitioned Bit-Slices (데이터에 의한 구동과 세분화된 비트-슬라이스의 동적제어를 통한 저전력 2-D DCT/IDCT 구조)

  • Kim Kyeounsoo;Ryu Dae-Hyun
    • Journal of Korea Multimedia Society / v.8 no.2 / pp.201-210 / 2005
  • This paper proposes a power-efficient 2-dimensional DCT/IDCT architecture driven by the input data to be processed. The architecture achieves low power by exploiting the typically large fraction of zero and small-valued input data in video and image compression. In particular, it skips multiplications by zero and dynamically activates or deactivates only the required bit-slices of the fine-grain bit-partitioned adders within the multipliers and accumulators, using simple input ANDing and bit-slice MASKing. The intermediate results of the 1-D DCT/IDCT carry no unnecessary sign extension bits (SEBs), which enables further power reduction in the matrix transposer. Bit-level transition activity simulations indicate a significant power reduction compared to conventional designs.
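
The following Python sketch is a behavioral illustration of the two power-saving ideas in the abstract, zero-input skipping and fine-grain bit-slice activation; the 4-bit slice width, 16-bit datapath, and the coefficient/sample values are assumptions for illustration, not the paper's hardware design.

```python
SLICE_BITS = 4          # width of one fine-grain bit-slice (assumed)
WORD_BITS = 16          # datapath word width (assumed)

def active_slices(value: int) -> int:
    """Number of bit-slices that must be switched on to represent |value|."""
    magnitude = abs(value)
    slices = 0
    while magnitude:
        slices += 1
        magnitude >>= SLICE_BITS
    return slices

def mac_row(coeffs, samples):
    """Multiply-accumulate one 1-D DCT row, counting 'activated' bit-slices
    as a crude proxy for switching activity."""
    acc, activated = 0, 0
    for c, x in zip(coeffs, samples):
        if x == 0:                      # input ANDing: zero input -> skip the multiply
            continue
        activated += active_slices(x)   # only these slices toggle (bit-slice MASKing)
        acc += c * x
    return acc, activated

if __name__ == "__main__":
    coeffs = [23, -12, 7, -3, 3, -7, 12, -23]   # illustrative values only
    samples = [0, 0, 5, 0, -2, 0, 0, 118]       # mostly zero or small-valued data
    result, slices_on = mac_row(coeffs, samples)
    print(f"accumulated value = {result}, bit-slices activated = {slices_on} "
          f"of {len(samples) * (WORD_BITS // SLICE_BITS)}")
```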

Three Dimensional Tracking of Road Signs based on Stereo Vision Technique (스테레오 비전 기술을 이용한 도로 표지판의 3차원 추적)

  • Choi, Chang-Won;Choi, Sung-In;Park, Soon-Yong
    • Journal of Institute of Control, Robotics and Systems / v.20 no.12 / pp.1259-1266 / 2014
  • Road signs provide drivers with important safety information about road and traffic conditions. They include not only common traffic signs but also warnings about unexpected obstacles and road construction. Accurate detection and identification of road signs is therefore one of the most important research topics for safe driving. In this paper, we propose a 3-D vision technique that automatically detects and tracks road signs in a video sequence acquired from a stereo vision camera mounted on a vehicle. First, color information is used to detect initial sign candidates. Second, an SVM (Support Vector Machine) is employed to select true signs from the candidates. Once a road sign is detected in a video frame, it is tracked in subsequent frames until it disappears. The 2-D position of a detected sign in the next frame is predicted from the 3-D motion of the vehicle, which is in turn obtained from the 3-D pose of the detected sign. Finally, the predicted 2-D position is corrected by template matching of a scaled template of the detected sign within a window around the predicted position. Experimental results show that the proposed method can detect and track many types of road signs successfully, and tracking comparisons with two other methods are presented.
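
A minimal Python sketch of the predict-then-correct tracking step described above: the sign's 3-D position is advanced by the vehicle motion, projected to 2-D with a pinhole model, and the prediction is refined by normalized cross-correlation template matching in a small window. The intrinsic matrix, window size, and synthetic data are assumptions; the paper's SVM detection and pose estimation stages are not reproduced.

```python
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])                      # assumed camera intrinsics

def predict_2d(sign_xyz, R, t):
    """Project the sign's 3-D position after applying the vehicle motion (R, t)."""
    moved = R @ sign_xyz + t
    uvw = K @ moved
    return uvw[:2] / uvw[2]

def refine_by_template(frame, template, center, search=12):
    """Normalized cross-correlation search in a window around the predicted position."""
    th, tw = template.shape
    cx, cy = int(center[0]), int(center[1])
    tz = (template - template.mean()) / (template.std() + 1e-9)
    best, best_pos = -np.inf, (cx, cy)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y0, x0 = cy + dy - th // 2, cx + dx - tw // 2
            if y0 < 0 or x0 < 0 or y0 + th > frame.shape[0] or x0 + tw > frame.shape[1]:
                continue
            patch = frame[y0:y0 + th, x0:x0 + tw]
            pz = (patch - patch.mean()) / (patch.std() + 1e-9)
            score = float((tz * pz).mean())
            if score > best:
                best, best_pos = score, (cx + dx, cy + dy)
    return best_pos

if __name__ == "__main__":
    frame = np.random.rand(480, 640)
    template = frame[200:216, 300:316].copy()         # pretend this patch is the sign
    predicted = predict_2d(np.array([2.0, -1.0, 20.0]), np.eye(3), np.zeros(3))
    print("predicted 2-D position:", predicted)
    print("refined position:", refine_by_template(frame, template, (308, 208)))
```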

3D Depth Information Extraction Algorithm Based on Motion Estimation in Monocular Video Sequence (단안 영상 시퀸스에서 움직임 추정 기반의 3차원 깊이 정보 추출 알고리즘)

  • Park, Jun-Ho;Jeon, Dae-Seong;Yun, Yeong-U
    • The KIPS Transactions:PartB / v.8B no.5 / pp.549-556 / 2001
  • The general problem of recovering 3D structure from 2D imagery requires depth information for each picture element, and the manual creation of such 3D models is time-consuming and expensive. The goal of this paper is to simplify the depth estimation algorithm that extracts depth information for every region of a monocular image sequence captured under camera translation, so that 3D video can be produced in real time. The work is based on the property that the motion of each point in an image captured under camera translation depends on its depth. Full-search block-matching motion estimation is applied in the first step, and the resulting motion vectors are then compensated for the effects of camera rotation and zooming. We introduce an algorithm that estimates object motion by analyzing the monocular video and computes the average depth of each frame and the depth of each region relative to that average. Simulation results show that the estimated depth of regions belonging to near and distant objects agrees with the relative depth perceived by the human visual system.
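
A minimal sketch of the core relationship the abstract relies on: under camera translation, nearer regions exhibit larger block motion, so relative depth can be taken as inversely proportional to motion magnitude. The block size, search range, and depth normalization are assumptions, and the rotation/zoom compensation step is omitted.

```python
import numpy as np

def block_motion(prev, curr, block=8, search=4):
    """Full-search block matching; returns one (dy, dx) motion vector per block.
    Block size and search range are illustrative defaults."""
    h, w = prev.shape
    mvs = np.zeros((h // block, w // block, 2))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            cur_blk = curr[y:y + block, x:x + block]
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    sad = np.abs(prev[yy:yy + block, xx:xx + block] - cur_blk).sum()
                    if sad < best:
                        best, best_mv = sad, (dy, dx)
            mvs[by, bx] = best_mv
    return mvs

def relative_depth(mvs, eps=1e-3):
    """Depth proxy: large motion -> near object; each block's depth is then
    expressed relative to the frame-average depth."""
    depth = 1.0 / (np.linalg.norm(mvs, axis=2) + eps)
    return depth / depth.mean()

if __name__ == "__main__":
    # Random frames and a 2-pixel horizontal shift stand in for real video.
    prev = np.random.rand(64, 64)
    curr = np.roll(prev, shift=2, axis=1)
    print(np.round(relative_depth(block_motion(prev, curr)), 2))
```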

A Study on Face Detection Parameter Measurement Technology through 3D Image Object Recognition (3D영상 객체인식을 통한 얼굴검출 파라미터 측정기술에 대한 연구)

  • Choi, Byung-Kwan;Moon, Nam-Mee
    • Journal of the Korea Society of Computer and Information / v.16 no.10 / pp.53-62 / 2011
  • With the advance of high-tech IT convergence, video object recognition, once regarded mainly as a smartphone technology, has developed rapidly alongside personal portable terminals. In particular, face detection based on 3D object recognition has evolved quickly on top of intelligent video recognition technology. This paper applies object recognition image processing to human face detection using an IP camera and proposes measurement techniques for face recognition parameters. The study 1) develops and applies face-model-based face tracking, 2) measures basic parameters such as the CPU load when the tracking algorithm runs on a PC, and 3) shows that the bilateral (inter-eye) distance and gaze angle can be tracked in real time, demonstrating the effectiveness of the approach.
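
As a hypothetical illustration of the two quantities the abstract says are tracked, the sketch below computes an inter-eye (bilateral) distance and an in-plane head angle from two eye coordinates; the coordinates and the angle definition are assumptions, not output of the paper's tracker.

```python
import math

def eye_metrics(left_eye, right_eye):
    """Return the inter-eye (bilateral) pixel distance and the in-plane head
    angle in degrees, given two eye-centre coordinates from a face tracker."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.hypot(dx, dy), math.degrees(math.atan2(dy, dx))

if __name__ == "__main__":
    # Illustrative coordinates only; real values would come from the tracker.
    dist, angle = eye_metrics((220.0, 240.0), (300.0, 252.0))
    print(f"inter-eye distance = {dist:.1f} px, in-plane angle = {angle:.1f} deg")
```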

A Beamforming-Based Video-Zoom Driven Audio-Zoom Algorithm for Portable Digital Imaging Devices

  • Park, Nam In;Kim, Seon Man;Kim, Hong Kook;Kim, Myeong Bo;Kim, Sang Ryong
    • IEIE Transactions on Smart Processing and Computing / v.2 no.1 / pp.11-19 / 2013
  • A video-zoom driven audio-zoom algorithm is proposed to provide audio zooming effects that match the degree of video zoom. The proposed algorithm is based on a super-directive beamformer operating on a 4-channel microphone array, in conjunction with a soft masking process that uses the phase differences between microphones. The audio-zoomed signal is obtained by multiplying the masked signal by an audio gain derived from the video-zoom level. The algorithm is implemented on a portable digital imaging device with a clock speed of 600 MHz after several levels of optimization, including algorithm-level, C-code, and memory optimization. As a result, the proposed audio-zoom algorithm consumes at most 14.6% of the device's processing capacity. A performance evaluation conducted in a semi-anechoic chamber shows that signals from the front direction are amplified by approximately 10 dB relative to the other directions.
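
A simplified two-microphone sketch of the processing chain described above: a soft mask derived from the inter-microphone phase difference attenuates off-axis content, and a gain mapped from the video-zoom level scales the masked signal. The mask shape, zoom-to-gain mapping, frame length, and the reduction from four microphones to two are all assumptions rather than the paper's design.

```python
import numpy as np

FRAME = 512                                           # frame length (assumed)
FS = 16000                                            # sampling rate (assumed)

def zoom_gain(zoom_level, max_zoom=10.0, max_gain_db=10.0):
    """Map the video-zoom level to a linear amplitude gain (assumed mapping)."""
    gain_db = max_gain_db * min(zoom_level, max_zoom) / max_zoom
    return 10.0 ** (gain_db / 20.0)

def audio_zoom_frame(mic1, mic2, zoom_level, sharpness=4.0):
    """Soft-mask one frame: a small inter-mic phase difference suggests the front
    direction (mask near 1); off-axis content is attenuated, then the gain is applied."""
    win = np.hanning(FRAME)
    X1, X2 = np.fft.rfft(mic1 * win), np.fft.rfft(mic2 * win)
    phase_diff = np.angle(X1 * np.conj(X2))
    mask = np.exp(-sharpness * np.abs(phase_diff) / np.pi)
    zoomed = np.fft.irfft(mask * (X1 + X2) / 2.0, n=FRAME)
    return zoom_gain(zoom_level) * zoomed

if __name__ == "__main__":
    t = np.arange(FRAME) / FS
    front = np.sin(2 * np.pi * 440 * t)               # identical on both mics (front source)
    out = audio_zoom_frame(front, front, zoom_level=5.0)
    print("output RMS:", round(float(np.sqrt(np.mean(out ** 2))), 3))
```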

Video Error Concealment using Neighboring Motion Vectors (주변의 움직임 벡터를 사용한 비디오 에러 은닉 기법)

  • 임유두;이병욱
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.3C / pp.257-263 / 2003
  • Error control and concealment in video communication are becoming increasingly important because transmission errors can cause the loss of single or multiple macroblocks when video is delivered over unreliable channels such as wireless networks and the Internet. This paper describes a temporal error concealment method applied as postprocessing at the decoder: lost image blocks are reconstructed by overlapped block motion compensation (OBMC) using the median of the motion vectors of adjacent blocks. The results show a significant improvement over zero-motion error concealment and other temporal concealment methods such as motion vector rational interpolation and side-match-criterion OBMC, with PSNR gains of 1.4 to 3.5 dB. We present experimental results showing improvements in both PSNR and computational complexity.
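
A minimal sketch of the concealment step described above: the motion vector of a lost macroblock is estimated as the component-wise median of its neighbors' vectors, and the block is filled from the previous frame at the displaced position. Plain motion compensation is used here instead of the paper's OBMC weighting, and the block size and neighbor set are assumptions.

```python
import numpy as np

BLOCK = 16                                             # macroblock size (assumed)

def conceal_block(prev_frame, curr_frame, top_left, neighbor_mvs):
    """Fill the lost BLOCK x BLOCK area of curr_frame in place using the
    median motion vector of the neighboring blocks."""
    mv = np.median(np.asarray(neighbor_mvs, dtype=float), axis=0).round().astype(int)
    y, x = top_left
    h, w = prev_frame.shape
    sy = int(np.clip(y + mv[0], 0, h - BLOCK))
    sx = int(np.clip(x + mv[1], 0, w - BLOCK))
    curr_frame[y:y + BLOCK, x:x + BLOCK] = prev_frame[sy:sy + BLOCK, sx:sx + BLOCK]

if __name__ == "__main__":
    prev = np.random.rand(96, 96)
    curr = np.roll(prev, shift=(3, -2), axis=(0, 1))   # simulate global motion
    curr[32:48, 32:48] = 0.0                           # simulate a lost macroblock
    neighbors = [(-3, 2), (-3, 2), (-2, 2), (-3, 1)]   # (dy, dx) MVs of adjacent blocks
    conceal_block(prev, curr, (32, 32), neighbors)
    truth = np.roll(prev, (3, -2), (0, 1))[32:48, 32:48]
    print("max error in concealed block:", float(np.abs(curr[32:48, 32:48] - truth).max()))
```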

Fast Algorithm for 360-degree Videos Based on the Prediction of CU Depth Range and Fast Mode Decision

  • Zhang, Mengmeng;Zhang, Jing;Liu, Zhi;Mao, Fuqi;Yue, Wen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.6 / pp.3165-3181 / 2019
  • Spherical videos, also called 360-degree videos, have become increasingly popular due to the rapid development of virtual reality technology. However, the large amount of data in such videos poses a huge challenge for existing transmission systems. To reuse existing encoding frameworks, a spherical video must be converted to a 2D image plane with a specific projection format, e.g., the equirectangular projection (ERP) format. The High Efficiency Video Coding (HEVC) standard can compress video content effectively, but its enormous computational complexity makes the time spent compressing high-frame-rate, high-resolution 360-degree videos disproportionate to the benefits of compression. Focusing on the characteristics of the ERP format, this work develops a fast algorithm that predicts the coding unit (CU) depth interval and adaptively decides the intra prediction mode. The algorithm makes full use of the ERP format by treating the polar and equatorial areas separately, setting different reference blocks and decision conditions according to the degree of stretching, which reduces coding time while preserving quality. Compared with the original reference software HM-16.16, the proposed algorithm reduces encoding time by 39.3% in the all-intra configuration while increasing the BD-rate by only 0.84%.
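
A minimal sketch of the latitude-dependent CU depth restriction the abstract describes for ERP video: heavily stretched polar rows search only shallow CU depths, while equatorial rows keep the full range. The latitude thresholds and depth intervals below are illustrative assumptions, not the thresholds derived in the paper.

```python
def cu_depth_range(ctu_row, total_ctu_rows):
    """Return the (min_depth, max_depth) interval searched for a CTU row."""
    # Normalized distance from the equator: 0 at the middle row, 1 at the poles.
    latitude = abs((ctu_row + 0.5) / total_ctu_rows - 0.5) * 2.0
    if latitude > 0.8:        # polar area: stretched and smooth, large CUs suffice
        return (0, 1)
    if latitude > 0.5:        # mid latitudes: moderate range
        return (0, 2)
    return (0, 3)             # equatorial area: full depth range

if __name__ == "__main__":
    rows = 17                 # e.g. a 1088-pixel-high ERP frame with 64x64 CTUs
    for r in (0, 4, 8, 12, 16):
        print(f"CTU row {r:2d}: depth range {cu_depth_range(r, rows)}")
```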

Use of Visual Digital Media to Develop Creativity: The Example of Video Games

  • Zabolotnyuk, V.;Khrypko, S.;Ostashchuk, I.;Chornomordenko, D.;Timchenko, A.;Motruk, T.;Pasko, K.;Lobanchuk, O.
    • International Journal of Computer Science & Network Security / v.22 no.12 / pp.13-18 / 2022
  • In the post-information era, most technologies have a visual component, or at least some functions related to visualization. Visualization is also a popular means of presenting material in education. Despite this popularity, however, its impact on the effectiveness of learning remains controversial, and even more controversial is its usefulness for developing creativity, one of the most important skills for today's employee. The authors examine the use of visualization as a tool for developing children's creativity using educational video games, in particular ClassCraft, as an example, in order to identify features that, from a psychological point of view, may foster creativity even when they are not useful for educational purposes. It is concluded that video games useful for learning may have features that are inappropriate in a formal educational context but important for developing creative thinking.

Moving Human Shape and Pose Reconstruction from Video (비디오로부터의 움직이는 3D 인체 형상 및 자세 복원)

  • Han, Ji Soo;Cho, Myung Rai;Park, In Kyu
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.11a / pp.66-68 / 2018
  • This paper proposes a method that reconstructs a 3D human body model from the frames extracted from a video and corrects the result so that it plays back smoothly. Pose and body shape are recovered using a parametric model: the parametric human body model is built by learning from a variety of body data, and reconstruction is performed by finding the pose and shape parameter values that best fit the input image. For pose reconstruction, a CNN estimates the joint locations of the body in the image, and the parameter values are found that minimize the distance between these joints and the joints of the 3D model projected into 2D. Shape reconstruction is performed by matching the person's silhouette extracted from the 2D image against the silhouette of the 3D model. Extending from a single input image to multiple input images such as video, a Kalman filter is applied to detect erroneous frames, and interpolation with the parameters of the preceding and following frames produces a more natural and accurate model.
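
A minimal sketch of the temporal correction step described above: each recovered pose parameter is run through a constant-velocity Kalman filter, frames with an unusually large innovation are flagged as erroneous, and their values are replaced by interpolating the neighboring frames. The noise settings and the outlier threshold are assumptions.

```python
import numpy as np

def smooth_parameter(values, q=1e-3, r=1e-2, gate=3.0):
    """Kalman-filter one pose parameter over frames; q, r, and gate are assumed."""
    values = np.asarray(values, dtype=float)
    x = np.array([values[0], 0.0])                  # state: [value, velocity]
    P = np.eye(2)
    F = np.array([[1.0, 1.0], [0.0, 1.0]])
    Q = q * np.eye(2)
    out, bad = values.copy(), []
    for k in range(1, len(values)):
        x = F @ x                                   # predict
        P = F @ P @ F.T + Q
        S = P[0, 0] + r                             # innovation variance (H = [1, 0])
        innovation = values[k] - x[0]
        if abs(innovation) > gate * np.sqrt(S):
            bad.append(k)                           # erroneous frame: skip the update
        else:
            K = P[:, 0] / S                         # Kalman gain
            x = x + K * innovation
            P = P - np.outer(K, P[0, :])
        out[k] = x[0]
    for k in bad:                                   # interpolate flagged frames
        out[k] = 0.5 * (out[k - 1] + out[min(k + 1, len(out) - 1)])
    return out, bad

if __name__ == "__main__":
    seq = [0.0, 0.1, 0.2, 1.5, 0.4, 0.5]            # frame 3 is an outlier
    smoothed, flagged = smooth_parameter(seq)
    print("flagged frames:", flagged)
    print("smoothed:", np.round(smoothed, 2))
```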

Analysis of Stereoscopic Video Conversion Process and Design of Stereoscopic Conversion Tools (2D 동영상의 3D 입체 변환 절차 분석 및 입체변환 전용 도구 제작을 위한 기능 설계)

  • Lee, Won-Jae;Choi, Yoo-Joo
    • Annual Conference of KIPS / 2011.11a / pp.431-433 / 2011
  • This paper analyzes the procedure for converting 2D video into 3D stereoscopic video using existing image and video editing tools, and classifies the inefficient elements and the elements that can be automated in the conversion process. It also analyzes the types and limitations of existing 3D stereoscopic conversion tools and, based on this analysis, specifies the essential functions of a dedicated 3D stereoscopic conversion tool.