• Title/Summary/Keyword: 초당 프레임 수 (frames per second)


Hardware Implementation of a Fast Inter Prediction Engine for MPEG-4 AVC (MPEG-4 AVC를 위한 고속 인터 예측기의 하드웨어 구현)

  • Lim Young hun;Lee Dae joon;Jeong Yong jin
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.3C
    • /
    • pp.102-111
    • /
    • 2005
  • In this paper, we propose an advanced hardware architecture for the fast inter prediction engine of the video coding standard MPEG-4 AVC. We describe the algorithm and derive the hardware architecture, emphasizing real-time operation of the quarter-pel based motion estimation. The fast inter prediction engine is composed of block segmentation, motion estimation, motion compensation, and the fast quarter-pel calculator. The proposed architecture has been verified on an ARM-interfaced emulation board using Excalibur and Virtex2 FPGAs, and also by synthesis on Samsung 0.18 μm CMOS technology. The synthesis result shows that the proposed hardware can operate at 62.5 MHz, in which case it can process about 88 QCIF video frames per second. The hardware is being used as a core module in the implementation of a complete MPEG-4 AVC video encoder chip for real-time multimedia applications.
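
The reported throughput can be sanity-checked with a simple cycle-budget calculation. The sketch below (Python, illustrative only) relates clock frequency, macroblocks per QCIF frame, and an assumed per-macroblock cycle budget; the cycle budget is an inference from the reported numbers, not a figure given in the paper.

```python
# QCIF (176x144) contains 11 x 9 = 99 macroblocks, so ~88 frames/s at 62.5 MHz
# implies roughly 62.5e6 / (88 * 99) ~= 7,200 cycles per macroblock.
# The cycle budgets below are therefore inferences, not numbers from the paper.

CLOCK_HZ = 62.5e6                                   # synthesized clock (from the abstract)
MB_PER_QCIF_FRAME = (176 // 16) * (144 // 16)       # 99 macroblocks per frame

def frames_per_second(clock_hz, cycles_per_mb, mbs_per_frame):
    """Frames/s sustained by a pipeline spending a fixed cycle budget per macroblock."""
    return clock_hz / (cycles_per_mb * mbs_per_frame)

for cycles_per_mb in (4000, 7200, 12000):           # hypothetical budgets
    fps = frames_per_second(CLOCK_HZ, cycles_per_mb, MB_PER_QCIF_FRAME)
    print(f"{cycles_per_mb:6d} cycles/MB -> {fps:6.1f} QCIF frames/s")
```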

VTF: A Timer Hypercall to Support Real-time of Guest Operating Systems (VIT: 게스트 운영체제의 실시간성 지원을 위한 타이머 하이퍼콜)

  • Park, Mi-Ri;Hong, Cheol-Ho;Yoo, See-Hwan;Yoo, Chuck
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.37 no.1
    • /
    • pp.35-42
    • /
    • 2010
  • Guest operating systems running over virtual machines share a variety of resources. Since the CPU is allocated in a time-division manner, a guest OS cannot keep track of physical time. This is not regarded as a serious problem in the server virtualization field, but it becomes critical in embedded systems because it prevents a guest OS from executing real-time tasks while it does not occupy the CPU. In this paper, we propose a hypercall that registers a timer service to notify the hypervisor of real-time timer requests. It enables the hypervisor to schedule a virtual machine that has real-time tasks to execute, and allows the guest OS to take the CPU on time to support real-time behavior. The experiments show its implementation on Xen-ARM and para-virtualized Linux. We also analyze the real-time performance in terms of the response time of a test application and the frames per second of MPlayer.
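
As a rough illustration of the scheduling idea in this abstract, the sketch below simulates a hypervisor that lets guests register timer deadlines through a hypercall-like call and then favors the VM whose deadline has expired. All names (hypercall_register_timer, pick_next_vcpu) are hypothetical and do not correspond to the actual Xen-ARM interface.

```python
import heapq
import time
from dataclasses import dataclass, field

# Toy model of the idea above: a guest registers the wall-clock time at which its
# next real-time task must run (the "timer hypercall"), and the hypervisor-side
# scheduler favors the VM with the earliest expired deadline. All names are
# hypothetical; this is not the Xen-ARM hypercall interface.

@dataclass(order=True)
class TimerRequest:
    deadline: float
    vm_id: int = field(compare=False)

class ToyHypervisor:
    def __init__(self):
        self._timers = []                      # min-heap of pending TimerRequests

    def hypercall_register_timer(self, vm_id, deadline):
        """Guest-visible call: 'wake my real-time task no later than <deadline>'."""
        heapq.heappush(self._timers, TimerRequest(deadline, vm_id))

    def pick_next_vcpu(self, runnable_vms):
        """Prefer the runnable VM whose registered timer has already expired."""
        while self._timers and self._timers[0].deadline <= time.monotonic():
            due = heapq.heappop(self._timers)
            if due.vm_id in runnable_vms:
                return due.vm_id               # real-time VM gets the CPU on time
        return min(runnable_vms) if runnable_vms else None  # fallback policy

hv = ToyHypervisor()
hv.hypercall_register_timer(vm_id=2, deadline=time.monotonic())  # already due
print(hv.pick_next_vcpu({1, 2, 3}))                              # -> 2
```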

A Realtime Music Editing and Playback System in An Augmented Reality Environments (증강 현실 기반의 실시간 음악 편집 및 재생 시스템)

  • Kim, Eun-Young;Oh, Dong-Yeol
    • Journal of the Korea Society of Computer and Information
    • /
    • v.16 no.6
    • /
    • pp.79-88
    • /
    • 2011
  • In this paper, we propose a real-time music editing and playback system based on augmented reality. The proposed system is composed of music markers, which are based on AR markers, and a music board. Using the contents of a music marker, the system selects the kind of musical instrument and a pre-defined MIDI track, and by calculating the relative location of the music marker on the 2-dimensional plane, we set the spatial parameters of the MIDI track. For performance evaluation, we measured the jitter at various resolutions using a camera that supports a maximum resolution of 1600×1200. As a result, when the camera is configured at 860×600 pixels and processes two frames per minute, the recognition ratio of the music markers and the jitter values are acceptable. The system can be utilized in the field of music-based alternative medicine and also for educational purposes, because children or the elderly who do not know much music theory can easily handle it.
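
A minimal sketch of the kind of marker-to-MIDI mapping this abstract describes is shown below; the particular mapping (marker id selects a track, horizontal position controls pan, vertical position controls velocity) is an assumption for illustration, not the paper's actual design.

```python
# Hypothetical mapping from a music marker's 2-D board position to MIDI parameters;
# the specific choices (x -> pan, y -> velocity, marker id -> track) are assumptions.

def marker_to_midi(marker_id, x, y, board_w, board_h):
    """Map a marker id and its board coordinates to (track, pan, velocity)."""
    track = marker_id                              # each marker selects a pre-defined MIDI track
    pan = round(127 * x / board_w)                 # left-right position -> MIDI pan (0..127)
    velocity = round(127 * (1 - y / board_h))      # nearer the top of the board -> louder
    return track, pan, velocity

# A marker detected at (430, 150) on an 860x600 board:
print(marker_to_midi(marker_id=3, x=430, y=150, board_w=860, board_h=600))
```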

Intelligent Mobile Surveillance System Based on Wireless Communication (무선통신에 기반한 지능형 이동 감시 시스템 개발)

  • Jang, Jae-Hyuk;Sim, Gab-Sig
    • The Journal of the Korea Contents Association
    • /
    • v.15 no.2
    • /
    • pp.11-20
    • /
    • 2015
  • In this paper, we develop an intelligent mobile surveillance system based on binary CDMA for unmanned automatic tracking and surveillance. That is, we implement an intelligent surveillance system using binary CDMA wireless communication technology, which combines the merits of CDMA and TDMA. The system is able to monitor the site of an incident over the network in real time and handle various situations through the security surveillance functions. The system tracks an object through 360 degrees using a camera, magnifies the image using the zoom function of a PTZ (Pan/Tilt/Zoom) camera, identifies moving objects within the frame, and transfers the identified images to a remote site. Finally, we show the efficiency of the implemented system through simulations of controlled situations, such as object tracking coverage, object magnification, the number of detected objects, monitoring of remotely transferred images, and the number of frames per second of the image output signal.
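
The PTZ tracking step can be pictured with a small proportional-control sketch: the detected object's offset from the image center is converted into pan/tilt commands so the camera keeps the object centered. The gain and dead-zone values below are assumptions, not parameters from the paper.

```python
# Illustrative pan/tilt computation for keeping a tracked object centered;
# deg_per_px and dead_zone are made-up tuning values.

def ptz_command(obj_cx, obj_cy, frame_w, frame_h, deg_per_px=0.06, dead_zone=10):
    """Return (pan_deg, tilt_deg) needed to re-center the object, or (0, 0) if centered."""
    dx = obj_cx - frame_w / 2
    dy = obj_cy - frame_h / 2
    pan = deg_per_px * dx if abs(dx) > dead_zone else 0.0
    tilt = -deg_per_px * dy if abs(dy) > dead_zone else 0.0   # image y grows downward
    return pan, tilt

# Object detected at (500, 180) in a 640x480 frame -> pan right, tilt up slightly.
print(ptz_command(500, 180, 640, 480))
```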

VLSI Design of H.263 Video Codec Based on Modular Architecture (모듈화된 구조에 기반한 H.263 비디오 코덱 VLSI의 설계)

  • Kim, Myung-Jin;Lee, Sang-Hee;Kim, Keun-Bae
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.5
    • /
    • pp.477-485
    • /
    • 2002
  • In this paper, we present an efficient hardware architecture for the H.263 video codec and its VLSI implementation. This architecture is based on a unified interface by which the internal hardware engines and an internal RISC processor are connected to one another. The unified interface enables the modular design of internal blocks, efficient hardware/software partitioning, and pipelined parallel operations. The developed VLSI supports the H.263 version 2 profile 3 @ level 10 and, moreover, both the control protocol H.245 and the multiplexing protocol H.223. Therefore, it can be used as a complete ITU-T H.324 or 3GPP 3G-324M multimedia processor with the help of an external audio codec. Simultaneous encoding and decoding of QCIF format images at a rate greater than 15 frames per second is achieved at a 40 MHz clock frequency.
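
The "unified interface" can be illustrated with a small software analogy: every engine is driven through the same start/busy handshake, so the controlling code stays the same as blocks are added or replaced. The handshake below is invented for illustration and is not the chip's actual bus protocol.

```python
from abc import ABC, abstractmethod

# Software analogy of the unified interface: each hardware engine is programmed
# through the same start()/busy() handshake, so the RISC control loop can treat
# DCT, motion estimation, VLC, ... uniformly. The handshake itself is invented.

class Engine(ABC):
    @abstractmethod
    def start(self, macroblocks): ...
    @abstractmethod
    def busy(self): ...

class DctEngine(Engine):
    def start(self, macroblocks):
        self._cycles_left = macroblocks
    def busy(self):
        self._cycles_left -= 1
        return self._cycles_left > 0

class MotionEstEngine(Engine):
    def start(self, macroblocks):
        self._cycles_left = 2 * macroblocks        # assume ME needs more cycles
    def busy(self):
        self._cycles_left -= 1
        return self._cycles_left > 0

def run_frame(engines, macroblocks=99):            # 99 macroblocks in a QCIF frame
    """Drive each engine through the identical interface; return polling iterations."""
    ticks = 0
    for engine in engines:
        engine.start(macroblocks)
        while engine.busy():
            ticks += 1
    return ticks

print(run_frame([MotionEstEngine(), DctEngine()]))  # same control code for both engines
```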

Real-Time Indirect Illumination using a Light Quad-Tree (광원 트리를 사용한 간접 조명의 실시간 렌더링)

  • Ki, Hyun-Woo;Oh, Kyoung-Su
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.34 no.4
    • /
    • pp.158-167
    • /
    • 2007
  • Indirect illumination plays an important role in realistic image synthesis. We present a novel real-time indirect illumination rendering technique using image pyramids. Hundreds of thousands of indirect point light sources are stored in images, and then hierarchically clustered into quad-tree image pyramids. We also introduce a GPU-based top-down, breadth-first traversal of the quad-trees to approximate the illumination with clusters (sets of lights). All steps run entirely on the GPU in real time. Result images demonstrate that our method represents diffuse interreflection well, especially the color bleeding effect. We achieved interactive frame rates of tens to hundreds of frames per second without any preprocessing. We can avoid artifacts caused by sampling, and our method is seven times faster than a recently proposed sampling-based method.
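
The clustering-and-traversal idea can be sketched on the CPU as below (the paper runs it on the GPU): point lights stored in an image are averaged into a quad-tree pyramid, and gathering descends breadth-first only into clusters that are too close to be treated as a single light. The refinement criterion and constants are assumptions.

```python
import numpy as np

# CPU-side sketch of the light quad-tree (the paper does this on the GPU):
# indirect point lights stored in an image are averaged into coarser levels, and
# gathering walks the pyramid top-down/breadth-first, descending only into
# clusters that are too close to the shaded point to be treated as one light.
# The refinement test (distance vs. cluster footprint, factor k) is an assumption.

def build_pyramid(light_pos, light_flux):
    """light_pos, light_flux: (N, N, 3) arrays. Returns levels, coarsest first."""
    levels = [(light_pos, light_flux)]
    while levels[-1][0].shape[0] > 1:
        p, f = levels[-1]
        n = p.shape[0] // 2
        p2 = p.reshape(n, 2, n, 2, 3).mean(axis=(1, 3))   # average positions per 2x2 block
        f2 = f.reshape(n, 2, n, 2, 3).sum(axis=(1, 3))    # sum flux per 2x2 block
        levels.append((p2, f2))
    return levels[::-1]

def gather(levels, x, cell_size, k=4.0):
    """Approximate irradiance at point x, refining clusters that are too close."""
    radiance = np.zeros(3)
    queue = [(0, 0, 0)]                                   # (level, i, j); level 0 = coarsest
    while queue:
        lvl, i, j = queue.pop(0)
        pos, flux = levels[lvl]
        d = np.linalg.norm(pos[i, j] - x) + 1e-6
        footprint = cell_size * 2 ** (len(levels) - 1 - lvl)
        if lvl == len(levels) - 1 or d > k * footprint:
            radiance += flux[i, j] / (d * d)              # far enough: one virtual light
        else:
            queue += [(lvl + 1, 2 * i + a, 2 * j + b) for a in (0, 1) for b in (0, 1)]
    return radiance

lights_p = np.random.rand(8, 8, 3) * 10.0
lights_f = np.full((8, 8, 3), 0.1)
print(gather(build_pyramid(lights_p, lights_f), x=np.array([5.0, 5.0, 5.0]), cell_size=10 / 8))
```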

Interactive Visualization Technique for Adaptive Mesh Refinement Data Using Hierarchical Data Structures and Graphics Hardware (계층적 자료구조와 그래픽스 하드웨어를 이용한 적응적 메쉬 세분화 데이타의 대화식 가시화)

  • ;Chandrajit Bajaj
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.31 no.5_6
    • /
    • pp.360-370
    • /
    • 2004
  • Adaptive mesh refinement (AMR) is one of the popular computational simulation techniques used in various scientific and engineering fields. Although AMR data is organized in a hierarchical multi-resolution data structure, traditional volume visualization algorithms such as ray-casting and splatting cannot handle the form without converting it to a sophisticated data structure. In this paper, we present a hierarchical multi-resolution splatting technique using k-d trees and octrees for AMR data that is suitable for implementation on the latest consumer PC graphics hardware. We describe a graphical user interface to set the transfer function and viewing/rendering parameters interactively. Experimental results obtained on a general-purpose PC equipped with an nVIDIA GeForce3 card are presented to demonstrate that the proposed techniques can interactively render AMR data (over 20 frames per second). Our scheme can easily be applied to parallel rendering of time-varying AMR data.
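
One way to picture the hierarchical splatting is the traversal sketched below: descend the spatial hierarchy and stop at cells whose projected screen footprint falls below a pixel threshold, so distant regions are drawn from coarse levels. The footprint estimate and the threshold are illustrative assumptions, not values from the paper.

```python
from dataclasses import dataclass, field

# Illustrative view-dependent traversal for multi-resolution splatting of AMR data
# (an octree stands in for the AMR hierarchy): stop at cells whose projected
# footprint is below max_px pixels. The pinhole footprint estimate and the
# threshold are assumptions, not the paper's actual criteria.

@dataclass
class Cell:
    center: tuple          # (x, y, z) cell center
    size: float            # cell edge length
    children: list = field(default_factory=list)

def cells_to_splat(cell, eye, focal_px=800.0, max_px=4.0):
    """Return the list of cells to render for the given eye position."""
    dist = max(1e-6, sum((c - e) ** 2 for c, e in zip(cell.center, eye)) ** 0.5)
    footprint_px = focal_px * cell.size / dist          # rough pinhole projection
    if not cell.children or footprint_px <= max_px:
        return [cell]                                   # coarse enough for this view
    out = []
    for child in cell.children:
        out += cells_to_splat(child, eye, focal_px, max_px)
    return out

root = Cell(center=(0, 0, 0), size=8.0,
            children=[Cell(center=(x, y, z), size=4.0)
                      for x in (-2, 2) for y in (-2, 2) for z in (-2, 2)])
print(len(cells_to_splat(root, eye=(0, 0, 2000))))      # far viewpoint  -> 1 coarse cell
print(len(cells_to_splat(root, eye=(0, 0, 30))))        # near viewpoint -> 8 finer cells
```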

Detection of the co-planar feature points in the three dimensional space (3차원 공간에서 동일 평면 상에 존재하는 특징점 검출 기법)

  • Seok-Han Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.6
    • /
    • pp.499-508
    • /
    • 2023
  • In this paper, we propose a technique to estimate the coordinates of feature points that lie on a 2D planar object in three-dimensional space. The proposed method detects multiple 3D features from the image and excludes those that are not located on the plane. The proposed technique estimates the planar homography between the planar object in 3D space and the camera image plane, and computes the back-projection error of each feature point on the planar object. Any feature point that has a large error is then considered an off-plane point and is excluded from the feature estimation phase. The proposed method is achieved on the basis of the planar homography alone, without any additional sensors or optimization algorithms. In the experiments, it was confirmed that the proposed method runs at more than 40 frames per second. In addition, compared to an RGB-D camera, there was no significant difference in processing speed, and it was verified that the frame rate was unaffected even when the number of detected feature points continuously increased.
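
Under the assumption that point correspondences between a reference view of the planar object and the current frame are available (e.g. from any feature matcher), the filtering step can be sketched with OpenCV as below. The 3-pixel error threshold is an assumption, and this is not the paper's exact implementation.

```python
import numpy as np
import cv2

# Sketch of the back-projection filtering step: estimate a homography from the
# matches, back-project the reference points, and drop points whose error is
# large (treated as off-plane). Threshold and synthetic data are assumptions.

def keep_coplanar(ref_pts, img_pts, thresh_px=3.0):
    """ref_pts, img_pts: (N, 2) float32 arrays of matched points. Returns a boolean mask."""
    H, _ = cv2.findHomography(ref_pts, img_pts, cv2.RANSAC, thresh_px)
    proj = cv2.perspectiveTransform(ref_pts.reshape(-1, 1, 2), H).reshape(-1, 2)
    err = np.linalg.norm(proj - img_pts, axis=1)        # per-point back-projection error
    return err < thresh_px

ref = (np.random.rand(50, 2) * 640).astype(np.float32)
H_true = np.array([[1.0, 0.02, 5], [0.01, 1.0, -3], [1e-4, 0, 1]], dtype=np.float32)
img = cv2.perspectiveTransform(ref.reshape(-1, 1, 2), H_true).reshape(-1, 2)
img[:5] += 40                                           # make the first 5 points off-plane
print(keep_coplanar(ref, img).sum(), "of", len(ref), "points kept as co-planar")
```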

Eye Tracking Using Neural Network and Mean-shift (신경망과 Mean-shift를 이용한 눈 추적)

  • Kang, Sin-Kuk;Kim, Kyung-Tai;Shin, Yun-Hee;Kim, Na-Yeon;Kim, Eun-Yi
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.44 no.1
    • /
    • pp.56-63
    • /
    • 2007
  • In this paper, an eye tracking method is presented using a neural network (NN) and the mean-shift algorithm that can accurately detect and track a user's eyes against a cluttered background. In the proposed method, to deal with rigid head motion, the facial region is first obtained using a skin-color model and connected-component analysis. Thereafter, the eye regions are localized using an NN-based texture classifier that discriminates the facial region into eye and non-eye classes, which enables our method to accurately detect users' eyes even if they wear glasses. Once the eye regions are localized, they are continuously and correctly tracked by the mean-shift algorithm. To assess the validity of the proposed method, it is applied to an interface system using eye movement and is tested with a group of 25 users playing 'aligns games.' The results show that the system processes more than 30 frames/sec on a PC for 320×240 input images and provides user-friendly and convenient access to a computer in real-time operation.
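
The mean-shift tracking stage can be sketched with OpenCV as below, assuming the eye region has already been localized (the skin-color model and NN texture classifier are not reproduced here). The hue-histogram back-projection and the initial window are illustrative assumptions.

```python
import numpy as np
import cv2

# Sketch of the tracking stage only: once an eye region has been localized, it is
# followed frame-to-frame with mean-shift on a color-histogram back-projection.
# The initial window and the hue-histogram choice are assumptions for illustration.

def track_eye(frames, init_window):
    """frames: list of BGR images; init_window: (x, y, w, h) of the detected eye."""
    x, y, w, h = init_window
    roi_hsv = cv2.cvtColor(frames[0][y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi_hsv], [0], None, [32], [0, 180])   # hue histogram of the eye ROI
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)

    window = init_window
    for frame in frames[1:]:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, window = cv2.meanShift(backproj, window, criteria)   # shift window to the mode
        yield window

# Usage with synthetic frames (stand-ins for a 320x240 camera stream):
frames = [np.full((240, 320, 3), 128, np.uint8) for _ in range(3)]
for win in track_eye(frames, init_window=(100, 80, 40, 20)):
    print(win)
```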

A Real-time SoC Design of Foreground Object Segmentation (Foreground 객체 추출을 위한 실시간 SoC 설계)

  • Kim Ji-Su;Lee Tae-Ho;Lee Hyuk-Jae
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.43 no.9 s.351
    • /
    • pp.44-52
    • /
    • 2006
  • The recently developed MPEG-4 Part 2 compression standard provides a novel capability to handle arbitrary video objects. To support this capability, an efficient object segmentation technique is required. This paper proposes a real-time algorithm for foreground object segmentation in video sequences. The proposed algorithm consists of two steps: the first step segments a video frame into multiple sub-regions using a spatio-temporal watershed transform, and the second step extracts a foreground object from the sub-regions generated in the first step. For real-time processing, the algorithm is partitioned into hardware and software parts so that computationally expensive parts are off-loaded from the processor and executed by hardware accelerators. Simulation results show that the proposed implementation can handle QCIF-size video at 15 fps and extracts an accurate foreground object.
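
A rough software sketch of the two-step structure is shown below, with OpenCV's marker-based watershed standing in for the spatio-temporal watershed transform and a simple frame-difference mask standing in for the temporal cue; thresholds and the merging rule are assumptions, not the paper's algorithm.

```python
import numpy as np
import cv2

# Step 1: over-segment the current frame into sub-regions (plain watershed here).
# Step 2: keep sub-regions that mostly overlap an inter-frame motion mask.
# Thresholds and the overlap rule are made-up stand-ins for the paper's method.

def foreground_mask(prev_bgr, curr_bgr, motion_thresh=25, overlap=0.5):
    gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, markers = cv2.connectedComponents(bw)
    labels = cv2.watershed(curr_bgr, markers + 1)        # sub-region label per pixel

    diff = cv2.absdiff(cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY), gray)
    moving = diff > motion_thresh
    fg = np.zeros(gray.shape, np.uint8)
    for lab in np.unique(labels):
        if lab <= 0:
            continue                                     # skip watershed boundary pixels
        region = labels == lab
        if moving[region].mean() > overlap:
            fg[region] = 255
    return fg

prev = np.zeros((144, 176, 3), np.uint8)                 # QCIF-sized synthetic frames
curr = prev.copy()
cv2.rectangle(curr, (60, 50), (110, 100), (200, 200, 200), -1)  # a "moving" object
print(cv2.countNonZero(foreground_mask(prev, curr)), "foreground pixels detected")
```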