• Title/Abstract/Keywords: real-time video

무인 항공기 촬영 동영상을 위한 실시간 안정화 기법 (Real-time Stabilization Method for Video acquired by Unmanned Aerial Vehicle)

  • 조현태;배효철;김민욱;윤경로
    • 반도체디스플레이기술학회지, Vol. 13, No. 1, pp. 27-33, 2014
  • Video from an unmanned aerial vehicle (UAV) is strongly affected by the natural environment, especially wind, because the airframe is lightweight; the UAV's shaking motion therefore makes the captured video shake. The objective of this paper is to produce a stabilized video by removing the shakiness of video acquired by a UAV. The stabilizer estimates the camera's motion by calculating the optical flow between two successive frames. The estimated camera movement contains both intended motion and unintended shaking; the unintended component is eliminated by a smoothing process. Experimental results show that the proposed method performs almost as well as other offline stabilizers. However, estimating the camera's movement, i.e., calculating the optical flow, is the bottleneck for real-time stabilization. To solve this problem, we parallelize the stabilizer, producing stabilized video at an average of 30 frames per second. The proposed method can be applied to video acquired by UAVs as well as to shaky video from non-professional users, and to any other field that requires object tracking or accurate image analysis and representation.
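
Below is a minimal sketch of the estimate-then-smooth loop this abstract describes, using OpenCV's pyramidal Lucas-Kanade optical flow; the function names, the median motion estimate, and the moving-average smoother are illustrative assumptions, not the authors' exact method.

```python
# Hedged sketch: optical-flow motion estimation plus trajectory smoothing.
import cv2
import numpy as np

def estimate_motion(prev_gray, curr_gray):
    """Estimate inter-frame camera translation from sparse optical flow."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=20)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    flow = nxt[status == 1] - pts[status == 1]
    return np.median(flow, axis=0)          # robust (dx, dy) estimate

def smooth_trajectory(trajectory, radius=15):
    """Moving-average smoothing; the residual is the unintended shake
    that gets subtracted out when warping each frame."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.column_stack([np.convolve(trajectory[:, i], kernel, mode='same')
                            for i in range(trajectory.shape[1])])

# Usage: accumulate per-frame motions into a trajectory, smooth it, and
# warp each frame by (smoothed - raw) to cancel the shake.
```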

Exploiting Packet Semantics in Real-time Multimedia Streaming

  • Hong, Sung-Woo;Won, You-Jip
    • 한국방송∙미디어공학회:학술대회논문집, 한국방송공학회 2009년도 IWAIT, pp. 118-123, 2009
  • In this paper, we propose a packet selection and significance-based interval allocation algorithm for real-time streaming services. In real-time streaming of inter-frame (and layer) coded video, minimizing packet loss does not imply maximizing QoS. Packet loss certainly degrades QoS, but a single packet can have more impact than several others. We exploit the fact that the significance of each packet loss differs according to the frame type the packet belongs to and its position within the GoP. Using packet dependency and the PSNR degradation imposed on the video by the corresponding packet loss, we compute each packet's significance value. Based on this significance, the proposed algorithm determines which packets to send and when to send them. The algorithm is tested using publicly available MPEG-4 video traces, and our scheduling brings significant improvement in user-perceivable QoS. We foresee that the proposed algorithm is most effective in the last-mile connection of the network, where the intervals between successive packets from source to destination are well preserved.
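
The packet-significance idea lends itself to a short sketch: weight each packet by the frame type it carries, how early it sits in the GoP (earlier frames propagate errors further), and its measured PSNR impact, then fill the sending budget greedily. The weights and dependency model below are illustrative assumptions, not the paper's values.

```python
# Hedged sketch of significance-based packet selection.
FRAME_WEIGHT = {'I': 3.0, 'P': 2.0, 'B': 1.0}   # hypothetical base weights

def packet_significance(frame_type, pos_in_gop, gop_size, psnr_drop):
    """Packets earlier in the GoP affect more dependent frames."""
    dependency = (gop_size - pos_in_gop) / gop_size
    return FRAME_WEIGHT[frame_type] * dependency * psnr_drop

def select_packets(packets, budget_bytes):
    """Greedily send the most significant packets that fit the budget."""
    ranked = sorted(packets, key=lambda p: p['significance'], reverse=True)
    chosen, used = [], 0
    for p in ranked:
        if used + p['size'] <= budget_bytes:
            chosen.append(p)
            used += p['size']
    return chosen
```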

HSDPA 기반 실시간 영상 전송 및 위치 인식 시스템 (A Real-time Video Transferring and Localization System in HSDPA Network)

  • 곽성우;최홍;양정민
    • 한국전자통신학회논문지, Vol. 7, No. 1, pp. 21-26, 2012
  • In this paper, we propose a system that transmits video data and recognizes its location in real time over a commercial HSDPA wireless network. In this work, we developed a new MPEG-4-based video compression algorithm and achieved a QVGA video transmission rate of 30 fps within a 130 kbps bandwidth. The system was miniaturized and made power-efficient for mounting on a moving vehicle, and it was designed to be robust against disturbances. We verify the performance of the developed system by presenting video capture screens and localization data obtained from actual operation. The system is targeted at patrol cars and public transportation, and it can also be applied wherever real-time video must be acquired in remote areas where wired transmission is difficult.
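
The reported figures imply a very aggressive compression ratio, which a quick back-of-the-envelope check makes concrete (the arithmetic below uses only the numbers quoted in the abstract):

```python
# 130 kbps channel, 30 fps QVGA: how hard must the codec compress?
bandwidth_bps = 130_000
fps = 30
width, height = 320, 240                 # QVGA

bits_per_frame = bandwidth_bps / fps     # ~4,333 bits (~542 bytes) per frame
raw_bits = width * height * 12           # raw YUV 4:2:0 frame
print(f"required compression: {raw_bits / bits_per_frame:.0f}:1")
# => roughly 213:1, which is why MPEG-4-style inter-frame prediction is
#    needed rather than intra-only coding.
```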

Sub-Frame Analysis-based Object Detection for Real-Time Video Surveillance

  • Jang, Bum-Suk;Lee, Sang-Hyun
    • International Journal of Internet, Broadcasting and Communication, Vol. 11, No. 4, pp. 76-85, 2019
  • We introduce a vision-based object detection method for real-time video surveillance systems in low-end edge computing environments. Recently, object detection accuracy has improved thanks to deep learning approaches such as the Region Convolutional Neural Network (R-CNN), which uses a two-stage inference pipeline. One-stage detection algorithms such as single-shot detection (SSD) and You Only Look Once (YOLO), developed at the expense of some accuracy, can be used for real-time systems. However, high-performance hardware such as general-purpose computing on graphics processing units (GPGPU) is still required to achieve excellent detection performance and speed. To address this hardware requirement, which is burdensome for low-end edge computing environments, we propose a sub-frame analysis method for object detection. Specifically, we divide the whole image frame into smaller sub-frames and run inference on them with a Convolutional Neural Network (CNN)-based detection network, which is much faster than a conventional network designed for full-frame images. With the proposed method, we reduce the computational requirement significantly without losing throughput or detection accuracy.
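
A rough sketch of the sub-frame idea follows: tile the frame, run a lightweight detector per tile, and map the boxes back to full-frame coordinates. The `detect` callable stands in for any CNN-based detector and is an assumption here; a real system would also add tile overlap so objects straddling boundaries are not missed.

```python
# Hedged sketch of sub-frame analysis for detection.
def detect_by_subframes(frame, detect, rows=2, cols=2):
    h, w = frame.shape[:2]
    th, tw = h // rows, w // cols
    results = []
    for r in range(rows):
        for c in range(cols):
            tile = frame[r * th:(r + 1) * th, c * tw:(c + 1) * tw]
            for (x, y, bw, bh, score) in detect(tile):
                # Offset tile-local boxes into full-frame coordinates.
                results.append((x + c * tw, y + r * th, bw, bh, score))
    return results
```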

Real-time multi-GPU-based 8KVR stitching and streaming on 5G MEC/Cloud environments

  • Lee, HeeKyung;Um, Gi-Mun;Lim, Seong Yong;Seo, Jeongil;Gwak, Moonsung
    • ETRI Journal, Vol. 44, No. 1, pp. 62-72, 2022
  • In this study, we propose a multi-GPU-based 8KVR stitching system that operates in real time in both local and cloud machine environments. The proposed system first obtains multiple 4K video inputs, decodes them, and generates a stitched 8KVR video stream in real time. The generated stream can be downloaded and rendered omnidirectionally in player apps on smartphones, tablets, and head-mounted displays. To speed up processing, we adopt group-of-pictures-based distributed decoding/encoding and buffering with the NV12 format, along with multi-GPU-based parallel processing. Furthermore, we develop several algorithms, such as equirectangular-projection-based color correction, real-time CG overlay, and object-motion-based seam estimation and correction, to improve stitching quality. From experiments in both local and cloud machine environments, we confirm the feasibility of the proposed 8KVR stitching system, with stitching speeds of up to 83.7 fps for six-channel and 62.7 fps for eight-channel inputs. In addition, in an 8KVR live streaming test on the 5G MEC/cloud, the proposed system achieves stable performance at 8K@30 fps in both indoor and outdoor environments, even during motion.
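
Group-of-pictures-based distribution works because each GoP starts with an intra frame and can be decoded independently; a toy sketch of that dispatch pattern is below (plain worker processes stand in for the paper's GPUs, and `decode_gop` is a placeholder).

```python
# Hedged sketch of GoP-parallel processing.
from concurrent.futures import ProcessPoolExecutor

def decode_gop(gop_bytes):
    """Placeholder for per-GoP decode/stitch work on one device."""
    return len(gop_bytes)                 # stand-in result

def process_stream(gops, num_workers=4):
    with ProcessPoolExecutor(max_workers=num_workers) as pool:
        # map() returns results in submission order, preserving display order.
        return list(pool.map(decode_gop, gops))
```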

Performance Study on ZigBee-Based Wireless Personal Area Networks for Real-Time Health Monitoring

  • Koh, Bernard Kai-Ping;Kong, Peng-Yong
    • ETRI Journal, Vol. 28, No. 4, pp. 537-540, 2006
  • When multiple ZigBee wireless personal area networks (WPANs) are in close proximity, contention and collisions in transmission lead to increased packet delays. However, no existing study examines how delay performance would be affected in a crowded real-life environment where each person walking down a busy street wears a ZigBee WPAN. This letter studies the use of ZigBee WPANs in such an environment for real-time heartbeat monitoring. To be pragmatic, we derived a mobility pattern from the analysis of a real-life video trace. We then estimated the delay performance from the video trace by combining it with data collected from ZigBee experiments. The results show that the 300 ms packet delay requirement fails to be met only 11% of the time, and when failure occurs, it lasts for an average duration of 1.4 s.
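
The evaluation reduces to two statistics over the trace: the fraction of samples exceeding the 300 ms bound, and the mean length of consecutive violation bursts. A sketch of that computation follows; the sampling period and input data are illustrative assumptions, not the letter's setup.

```python
# Hedged sketch of the delay-violation statistics.
import numpy as np

def violation_stats(delays_ms, bound_ms=300.0, sample_period_s=0.1):
    violated = np.asarray(delays_ms) > bound_ms
    fraction = violated.mean()            # e.g. 0.11 in the letter's result
    runs, run = [], 0
    for v in violated:                    # collect consecutive-violation runs
        if v:
            run += 1
        elif run:
            runs.append(run)
            run = 0
    if run:
        runs.append(run)
    mean_burst_s = float(np.mean(runs)) * sample_period_s if runs else 0.0
    return fraction, mean_burst_s
```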

멀티쓰레드와 SIMD 명령어를 이용한 실시간 H.264/AVC High 4:4:4 Predictive 디코더의 구현 (Real-time H.264/AVC High 4:4:4 Predictive Decoder Using Multi-Thread and SIMD Instructions)

  • 김용환;김재우;최병호;이석필;백준기
    • 한국정보통신설비학회:학술대회논문집, 한국정보통신설비학회 2007년도 학술대회, pp. 350-353, 2007
  • This paper presents a real-time implementation of an H.264/AVC High 4:4:4 Predictive profile decoder on general-purpose processors, exploiting multi-threading and Single Instruction Multiple Data (SIMD) instructions without any quality degradation. We analyze the differences between the existing High profile and the High 4:4:4 Predictive profile decoder, and present various optimization techniques for decoding high-fidelity, high-definition (HD) video in real time. Simulation results show that the proposed decoder plays high-fidelity HD video at an average of 40 frames per second (fps) for the IBBrBP bitstream and about 50 fps for the intra-only bitstream.
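
The two optimization axes combine thread-level parallelism across picture regions with data-level parallelism inside each region. The sketch below only illustrates that structure: numpy vectorization stands in for the SSE-style SIMD intrinsics a real C/C++ decoder would use, and `idct_dequant_rows` is a hypothetical stage, not the decoder's actual kernel.

```python
# Hedged sketch: threads over regions, vectorized math within a region.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def idct_dequant_rows(coeff_rows, qp_scale):
    # One vectorized expression touches many coefficients at once,
    # analogous to one SIMD instruction handling multiple samples.
    return np.clip(coeff_rows * qp_scale, -2048, 2047)

def decode_frame(coeffs, qp_scale, num_threads=4):
    bands = np.array_split(coeffs, num_threads, axis=0)
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        parts = pool.map(lambda b: idct_dequant_rows(b, qp_scale), bands)
    return np.vstack(list(parts))
```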

대규모 점군 및 폴리곤 모델의 GLSL 기반 실시간 렌더링 알고리즘 (A Real-Time Rendering Algorithm of Large-Scale Point Clouds or Polygon Meshes Using GLSL)

  • 박상근
    • 한국CDE학회논문집, Vol. 19, No. 3, pp. 294-304, 2014
  • This paper presents a real-time rendering algorithm for large-scale geometric data using GLSL (OpenGL Shading Language). It details the VAO (vertex array object) and VBO (vertex buffer object) used to upload large-scale point clouds and polygon meshes to graphics video memory, and describes the shader program, composed of a vertex shader and a fragment shader, which manipulates those data for rendering on the GPU. In addition, we explain the overall rendering procedure that creates and runs the shader program with the VAO and VBO. Finally, rendering performance is measured on application examples, demonstrating that the proposed algorithm enables real-time rendering of amounts of geometric data that were almost impossible to handle with previous techniques.
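
The VAO/VBO upload step the abstract refers to looks roughly like the PyOpenGL sketch below; it assumes an OpenGL context already exists (e.g. created via GLFW) and a shader with the vertex position at attribute location 0.

```python
# Hedged sketch: one-time upload of a point cloud to GPU memory.
from OpenGL.GL import (glGenVertexArrays, glBindVertexArray, glGenBuffers,
                       glBindBuffer, glBufferData, glVertexAttribPointer,
                       glEnableVertexAttribArray,
                       GL_ARRAY_BUFFER, GL_STATIC_DRAW, GL_FLOAT, GL_FALSE)

def upload_points(points):
    """points: (N, 3) float32 array. After this one-time transfer the
    shader program renders every frame without further CPU uploads."""
    vao = glGenVertexArrays(1)
    glBindVertexArray(vao)
    vbo = glGenBuffers(1)
    glBindBuffer(GL_ARRAY_BUFFER, vbo)
    glBufferData(GL_ARRAY_BUFFER, points.nbytes, points, GL_STATIC_DRAW)
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, None)  # location 0
    glEnableVertexAttribArray(0)
    glBindVertexArray(0)
    return vao
```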

REPRESENTATION OF NAVIGATION INFORMATION FOR VISUAL CAR NAVIGATION SYSTEM

  • Joo, In-Hak;Lee, Seung-Yong;Cho, Seong-Ik
    • 대한원격탐사학회:학술대회논문집, Proceedings of ISRS 2007, pp. 508-511, 2007
  • The car navigation system is one of the most important applications in telematics. The newest trend in car navigation is to use real video captured by a camera mounted on the vehicle, because video can bridge the semantic gap between the map and the real world. In this paper, we suggest a visual car navigation system that visually represents navigation information for route guidance. It improves drivers' understanding of the real world by capturing real-time video and displaying navigation information overlaid on it. The main services of the system are graphical turn guidance and lane change guidance. We suggest a system architecture that implements these services by integrating conventional route finding and guidance, computer vision functions, and augmented reality display functions. The core part of the system is the visual navigation controller, which controls the other modules and dynamically determines how navigation information is visually represented, according to a determination rule based on the current location and driving circumstances. We briefly show the implementation of the system.
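
The overlay step can be pictured with a toy OpenCV sketch: draw a turn-guidance arrow onto the live frame. In the paper the screen position would come from the visual navigation controller; the fixed anchor point here is purely an assumption.

```python
# Hedged sketch of the augmented-reality turn-guidance overlay.
import cv2

def overlay_turn_arrow(frame, direction='left'):
    h, w = frame.shape[:2]
    cx, cy = w // 2, int(h * 0.75)        # assumed anchor on the road ahead
    dx = -80 if direction == 'left' else 80
    cv2.arrowedLine(frame, (cx, cy), (cx + dx, cy - 40),
                    color=(0, 255, 0), thickness=6, tipLength=0.4)
    return frame
```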

임베디드 시스템에서의 다중 표준 영상 코덱 (Multi-standard Video Codec on Embedded System)

  • 김기철;김민
    • 전자공학회논문지CI, Vol. 40, No. 4, pp. 214-221, 2003
  • In this paper, a video codec that supports both the H.261 and H.263 standards is implemented on an embedded system. For efficient real-time processing, the codec is partitioned into hardware and software modules that are co-designed on the embedded system. The software module runs on a RISC processor under a real-time operating system and, in cooperation with the hardware module, compresses and decompresses video in real time. AMBA AHB is used as the system bus, and the hardware module acts as both an AMBA AHB master and slave. To process video compression in real time, the encoder's hardware module is designed as a pipeline. Conforming to the H.261 and H.263 standards, the implemented codec simultaneously compresses and decompresses 15 CIF frames per second at an operating frequency of 33 MHz.
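
The stated throughput pins down a hard cycle budget, which a quick calculation (using only the figures in the abstract) makes clear and which motivates the pipelined hardware encoder:

```python
# Cycle budget implied by 15 CIF frames/s at a 33 MHz clock.
clock_hz = 33_000_000
fps = 15
macroblocks_per_cif = (352 // 16) * (288 // 16)   # 22 x 18 = 396

cycles_per_frame = clock_hz / fps                  # 2.2 M cycles
cycles_per_mb = cycles_per_frame / macroblocks_per_cif
print(f"{cycles_per_mb:.0f} cycles per macroblock")  # ~5,556 for encode+decode
# Far too tight for a software-only loop on the RISC core, hence the
# pipelined hardware datapath described above.
```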