• Title/Summary/Keyword: Video Processing

125 search results

3D Video Processing for 3DTV

  • Sohn, Kwang-Hoon
    • 한국정보디스플레이학회:학술대회논문집 / 2007.08b / pp.1231-1234 / 2007
  • This paper presents an overview of 3D video processing technologies for 3DTV, such as 3D content generation, 3D video codecs and video processing techniques for 3D displays. Some experimental results for 3D content generation are shown for 3D mixed reality and 2D/3D conversion.

A review of missing video frame estimation techniques for their suitability analysis in NPP

  • Chaubey, Mrityunjay;Singh, Lalit Kumar;Gupta, Manjari
    • Nuclear Engineering and Technology / v.54 no.4 / pp.1153-1160 / 2022
  • Video processing techniques support the safety of nuclear power plants (NPPs), for example by tracking staff in video to estimate the radiation dose they receive during work. Nuclear reactors are also monitored remotely by video to evaluate the plant's condition. Internal reactor components should be inspected frequently, but current practice relies on human technicians who review inspection videos to identify cracks on the metallic surfaces of underwater components, a costly, time-consuming and subjective process. If any frame of an inspection video is degraded, corrupted or missing due to noise or other factors, it may cause a serious safety issue. Estimating missing, degraded or corrupted video frames remains a challenging problem to date. In this paper, a systematic literature review of video processing techniques is carried out to analyze their suitability for NPP applications. The limitations of existing approaches are identified, along with a roadmap to overcome them.
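As a baseline for the missing-frame problem surveyed here, a lost frame can be estimated by temporal interpolation of its neighbours. The sketch below is a generic baseline with hypothetical function names, not a method drawn from the review: it averages the two adjacent frames per pixel.

```python
import numpy as np

def interpolate_missing_frame(prev_frame: np.ndarray, next_frame: np.ndarray) -> np.ndarray:
    """Estimate a missing frame as the per-pixel mean of its temporal neighbours."""
    avg = (prev_frame.astype(np.float32) + next_frame.astype(np.float32)) / 2.0
    return avg.astype(prev_frame.dtype)

# Tiny grayscale example: the lost frame is reconstructed halfway between its neighbours.
prev_f = np.full((4, 4), 100, dtype=np.uint8)
next_f = np.full((4, 4), 120, dtype=np.uint8)
estimated = interpolate_missing_frame(prev_f, next_f)
```

Such simple averaging fails under fast motion, which is one reason motion-compensated estimators are studied instead.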

Implementation of Video Processing Module for Integrated Modular Avionics System (모듈통합형 항공전자시스템을 위한 Video Processing Module 구현)

  • Jeon, Eun-Seon;Kang, Dae-Il;Ban, Chang-Bong;Yang, Seong-Yul
    • Journal of Advanced Navigation Technology / v.18 no.5 / pp.437-444 / 2014
  • The integrated modular avionics (IMA) system has quite a number of line replaceable modules (LRMs) in a cabinet. An LRM performs functions like the line replaceable units (LRUs) of a federated architecture. The video processing module (VPM), an LRM in the IMA core system, acts as a video bus bridge and gateway for the ARINC 818 Avionics Digital Video Bus (ADVB). The ARINC 818 video interface and protocol standard was developed for high-bandwidth, low-latency, uncompressed digital video transmission. The FPGAs of the VPM provide video processing functions such as ARINC 818 to DVI and DVI to ARINC 818 conversion, video decoding and overlay. In this paper we explain how the VPM hardware is implemented, and we present verification results for the VPM functions and IP core performance.

Design and Implementation of Emergency Recognition System based on Multimodal Information (멀티모달 정보를 이용한 응급상황 인식 시스템의 설계 및 구현)

  • Kim, Eoung-Un;Kang, Sun-Kyung;So, In-Mi;Kwon, Tae-Kyu;Lee, Sang-Seol;Lee, Yong-Ju;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information / v.14 no.2 / pp.181-190 / 2009
  • This paper presents a multimodal emergency recognition system based on visual, audio and gravity sensor information. It consists of a video processing module, an audio processing module, a gravity sensor processing module and a multimodal integration module. The video processing module and the gravity sensor processing module each detect actions such as moving, stopping and fainting and transfer them to the multimodal integration module. The multimodal integration module detects an emergency by fusing the transferred information and verifies it by asking a question and recognizing the answer via the audio channel. The experimental results show that the recognition rate of the video processing module alone is 91.5% and that of the gravity sensor processing module alone is 94%, but when both sources of information are combined the recognition rate reaches 100%.
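The fusion flow described above can be sketched as a simple decision rule. The function below is a hypothetical illustration of that flow (suspect a fall from either the video or the gravity sensor, then verify via an audio question), not the authors' implementation.

```python
from typing import Optional

def detect_emergency(video_fall: bool, gravity_fall: bool,
                     audio_answer: Optional[str]) -> bool:
    """Fuse video and gravity-sensor decisions; verify a suspected fall via audio.

    audio_answer is the recognized reply to a spoken question such as
    "Are you OK?" (None means no reply was heard)."""
    suspected = video_fall or gravity_fall      # either modality can raise a suspicion
    if not suspected:
        return False
    # A clear "I'm fine" reply cancels the alarm; silence or any other reply confirms it.
    return audio_answer is None or audio_answer.strip().lower() != "i'm fine"
```

The OR-fusion of the two sensing modalities is what lifts the combined recognition rate above either module alone, while the audio check suppresses false alarms.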

Smart Camera Technology to Support High Speed Video Processing in Vehicular Network (차량 네트워크에서 고속 영상처리 기반 스마트 카메라 기술)

  • Son, Sanghyun;Kim, Taewook;Jeon, Yongsu;Baek, Yunju
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.1 / pp.152-164 / 2015
  • The rapid development of semiconductor, sensor and mobile network technologies has enabled embedded devices for the vehicular environment to include high-sensitivity sensors, wireless communication modules and a video processing module, and many researchers have been actively studying smart car technology built on such high-performance embedded devices. The number of vehicles grows as society develops, and the risk of accidents is increasing gradually. Thus, advanced driver assistance systems that provide the driver with the vehicle's status and its surrounding environment using various sensor data are actively studied. In this paper, we design and implement a smart vehicular camera device that provides V2X communication and gathers environment information, and we study a method to create metadata from the received video and sensor data using a video analysis algorithm. In addition, we propose the S-ROI and D-ROI methods, which set a region of interest in a video frame to improve computational performance. We evaluated both ROI methods and confirmed that video processing with S-ROI is 3.0 times and with D-ROI 4.8 times faster than full-frame analysis.
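A static region of interest can be illustrated by cropping each frame before analysis. The exact S-ROI and D-ROI definitions belong to the paper, so the crop below is only an assumed sketch: with a crop covering one third of the frame, the pixel count to analyze drops by 3x, in line with the reported 3.0x S-ROI speed-up.

```python
import numpy as np

def static_roi(frame: np.ndarray, top: int, left: int,
               height: int, width: int) -> np.ndarray:
    """Return a fixed region of interest; downstream analysis runs only on this crop."""
    return frame[top:top + height, left:left + width]

# Hypothetical example: keep only the middle horizontal band of a VGA frame,
# e.g. where the road ahead usually appears.
frame = np.zeros((480, 640), dtype=np.uint8)
roi = static_roi(frame, top=200, left=0, height=160, width=640)
```

A dynamic ROI would instead move or resize the crop per frame based on earlier detections, trading bookkeeping for an even smaller analysis area.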

A Parallelization Technique with Integrated Multi-Threading for Video Decoding on Multi-core Systems

  • Hong, Jung-Hyun;Kim, Won-Jin;Chung, Ki-Seok
    • KSII Transactions on Internet and Information Systems (TIIS) / v.7 no.10 / pp.2479-2496 / 2013
  • Increasing demand for Full High-Definition (FHD) and Ultra High-Definition (UHD) video services has led to active research on high-speed video processing. The widespread deployment of multi-core systems has accelerated studies on high-resolution video processing based on the parallelization of multimedia software. Even if parallelizing a specific decoding step improves decoding performance partially, such partial parallelization may not yield sufficient overall improvement. In particular, entropy decoding has often been considered separately from the other decoding steps because it cannot be parallelized easily. In this paper, we propose a parallelization technique called Integrated Multi-Threaded Parallelization (IMTP), which considers the parallelization of the entropy decoding step together with the other decoding steps in an integrated fashion. We used the Simultaneous Multi-Threading (SMT) technique with appropriate thread scheduling to achieve the best performance for the entire decoding process. The proposed IMTP method is up to 3.35 times faster, with respect to the entire decoding time, than a conventional decoding technique for H.264/AVC videos.
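The idea of overlapping entropy decoding with the remaining decoding steps can be sketched as a two-thread pipeline. The `entropy_decode` and `reconstruct` functions below are hypothetical stand-ins (not the IMTP implementation): while frame i is being reconstructed, the entropy decoding of frame i+1 already runs on the pool thread.

```python
from concurrent.futures import ThreadPoolExecutor

def entropy_decode(bitstream_chunk: bytes) -> list:
    # Stand-in for CABAC/CAVLC parsing: turn bytes into "syntax elements".
    return list(bitstream_chunk)

def reconstruct(syntax_elements: list) -> int:
    # Stand-in for inverse transform + motion compensation: summarize a frame.
    return sum(syntax_elements)

def decode_pipelined(chunks: list) -> list:
    """Overlap entropy decoding of frame i+1 with reconstruction of frame i."""
    frames = []
    with ThreadPoolExecutor(max_workers=2) as pool:
        pending = pool.submit(entropy_decode, chunks[0])
        for nxt in chunks[1:]:
            elements = pending.result()
            pending = pool.submit(entropy_decode, nxt)  # runs while we reconstruct
            frames.append(reconstruct(elements))
        frames.append(reconstruct(pending.result()))
    return frames
```

In a real decoder the payoff depends on the two stages having comparable cost; the paper's contribution is scheduling them jointly rather than treating entropy decoding as an unparallelizable prefix.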

A Study of ATM filter for Resolving the Over Segmentation in Image Segmentation of Region-based method (영역기반 방법의 영상 분할에서 과분할 방지를 위한 Adaptive Trimmed Mean 필터에 관한 연구)

  • Lee, Wan-Bum
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.3 / pp.42-47 / 2007
  • Video segmentation is an essential part of region-based video coding and other fields of video processing. Among the many methods proposed so far, the watershed method, in which region growing is performed on the gradient image, can produce globally well-partitioned regions without being influenced by local noise, and it extracts accurate boundaries. However, it generates a great number of small regions, which is called the over-segmentation problem. We therefore propose an adaptive trimmed mean (ATM) filter to resolve over-segmentation. Simulation results confirm that the proposed ATM filter improves noise removal and reduces the loss of image clarity at noise ratios of 20% and above.
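A plain (non-adaptive) trimmed mean filter can be sketched as follows: sort each 3x3 window, drop the extreme samples, and average the rest. The adaptive step of the paper's ATM filter, which would vary the trimming with local noise, is omitted in this assumed sketch.

```python
import numpy as np

def trimmed_mean_filter(img: np.ndarray, k: int = 3, trim: int = 2) -> np.ndarray:
    """For each k x k window, sort the samples, drop the `trim` smallest and
    `trim` largest, and output the mean of the remainder."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.empty(img.shape, dtype=np.float64)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            window = np.sort(padded[i:i + k, j:j + k].ravel())
            out[i, j] = window[trim:window.size - trim].mean()
    return out

# A single impulse-noise pixel is rejected entirely by the trimming step.
img = np.full((5, 5), 10.0)
img[2, 2] = 255.0
out = trimmed_mean_filter(img)
```

Applied before gradient computation, such a filter suppresses the noise-induced local minima that seed spurious watershed regions.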

Digital Video Warping for Convergence of Projection TV Receivers (프로젝션 TV에서의 광학적 왜곡 보정 알고리즘)

  • Hwang, Kyu-Young;Shin, Hyun-Chool;Seo, Woong;Song, Woo-Jin
    • Proceedings of the IEEK Conference / 2001.09a / pp.535-538 / 2001
  • In this paper, we present a novel method to solve the inevitable RGB beam mismatch problem in projection TV receivers. Conventional methods solve the mismatch by directly controlling the cathode ray tube (CRT) using the convergence yoke (CY). Unlike these, the proposed method is based on digital video processing using image warping techniques. First, the RGB beam projection paths are mathematically modeled. Then, based on this modeling, the input video signal to the CRT is prewarped so that the RGB beams land at the same point on the screen. Since the proposed method relies on digital video processing instead of a CY, it can outperform the conventional method in terms of quality and cost. Experimental results with a real 60″ projection TV demonstrate that the proposed method indeed produces converged images on the projection TV screen.
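The prewarping idea can be illustrated in one dimension: if the optics displace a channel by +dx pixels on screen, pre-shifting the signal by -dx makes the displayed image land where intended. The toy model below uses hypothetical functions and integer shifts; a real system applies a full 2D warp with sub-pixel interpolation derived from the beam-path model.

```python
import numpy as np

def prewarp_channel(channel: np.ndarray, dx: int) -> np.ndarray:
    """If the beam path displaces this channel by +dx pixels on screen,
    pre-shift the signal by -dx so the displayed image is aligned."""
    return np.roll(channel, -dx, axis=1)

def simulate_display(channel: np.ndarray, dx: int) -> np.ndarray:
    """Toy model of the optical path: the beam lands +dx pixels to the right."""
    return np.roll(channel, dx, axis=1)

red = np.zeros((1, 8), dtype=np.uint8)
red[0, 3] = 255                      # a single lit pixel in the red channel
shown = simulate_display(prewarp_channel(red, dx=2), dx=2)
```

Applying an independent prewarp per color channel is what replaces the analog convergence-yoke adjustment.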

Multi-View Video Processing: IVR, Graphics Composition, and Viewer

  • Kwon, Jun-Sup;Hwang, Won-Young;Choi, Chang-Yeol;Chang, Eun-Young;Hur, Nam-Ho;Kim, Jin-Woong;Kim, Man-Bae
    • Journal of Broadcast Engineering / v.12 no.4 / pp.333-341 / 2007
  • Multi-view video has recently gained much attention from academic and commercial fields because it can deliver the immersive viewing of natural scenes. This paper presents multi-view video processing composed of intermediate view reconstruction (IVR), graphics composition, and a multi-view video viewer. First, we generate virtual views between the multi-view cameras using the depth and texture images of the input videos. Then we compose graphic objects into the generated view images. The multi-view video viewer is developed to examine the reconstructed and composite images; it can also provide users with special effects for multi-view video. We present experimental results that validate the proposed method and show that graphic objects can become an integral part of the multi-view video.
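The blending stage of intermediate view reconstruction can be sketched as a weighted average of two rectified views at a virtual camera position alpha between them. Real IVR also warps each pixel by an alpha-scaled disparity derived from the depth maps, which this assumed sketch omits.

```python
import numpy as np

def intermediate_view(left: np.ndarray, right: np.ndarray, alpha: float) -> np.ndarray:
    """Blend two rectified views into a virtual view at position alpha in [0, 1]
    (0 = left camera, 1 = right camera). Depth-based pixel warping, needed for
    non-zero disparities, is omitted in this sketch."""
    return (1.0 - alpha) * left.astype(np.float64) + alpha * right.astype(np.float64)

# Hypothetical example: a virtual view halfway between the two cameras.
left_view = np.zeros((2, 2))
right_view = np.full((2, 2), 100.0)
mid_view = intermediate_view(left_view, right_view, alpha=0.5)
```

Graphics composition then renders synthetic objects into each generated view with matching camera parameters, so they appear consistent across viewpoints.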