• Title/Summary/Keyword: real-time video

Search Results: 1,668

Removal of Complexity Management in H.263 Codec for A/V Delivery Systems

  • Jalal, Ahmad;Kim, Sang-Wook
    • Proceedings of the Korean HCI Society Conference (한국HCI학회 학술대회논문집)
    • /
    • 2006.02a
    • /
    • pp.931-936
    • /
    • 2006
  • This paper examines issues in real-time compression algorithms for distributed environments without compromising video quality. The aim of the research is to manage the critical processing stages (speed, information loss, redundancy, distortion) while achieving a better encoding ratio, without fluctuation of the quantization scale, by using an IP configuration. Techniques such as a distortion measure with a search method address block artifacts in the motion estimation process, while a passing technique and floating-point measurement are configured with the discrete cosine transform (DCT) to reduce the computational complexity of the video codec. The delay of bits on the encoder buffer side, especially in the real-time state, is controlled to produce high-quality video while maintaining a low buffering delay. Our results show accuracy gains and encouraging improvements in all of the above processes. (A block-matching sketch follows this entry.)

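The abstract above refers to a distortion measure combined with a search method for motion estimation. As a point of reference only, and not the authors' implementation, the following sketch shows plain full-search block matching with the sum of absolute differences (SAD) as the distortion measure; the block size, search range, and test frames are illustrative assumptions.

```python
import numpy as np

def sad(block_a: np.ndarray, block_b: np.ndarray) -> int:
    """Sum of absolute differences: a common block-distortion measure."""
    return int(np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).sum())

def full_search_motion_vector(ref, cur, top, left, block=16, search=8):
    """Motion vector for one block of `cur`, found by exhaustively searching a
    (2*search+1)^2 window of candidate positions in the reference frame `ref`."""
    target = cur[top:top + block, left:left + block]
    best_mv, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue  # candidate block would fall outside the reference frame
            cost = sad(target, ref[y:y + block, x:x + block])
            if cost < best_cost:
                best_mv, best_cost = (dy, dx), cost
    return best_mv, best_cost

# Toy usage: random 8-bit "luma planes" with a known global shift between frames.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))
print(full_search_motion_vector(ref, cur, top=16, left=16))  # expects ((-2, 3), 0)
```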

Real-time Camera and Video Streaming Through Optimized Settings of Ethernet AVB in Vehicle Network System

  • An, Byoungman;Kim, Youngseop
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.8
    • /
    • pp.3025-3047
    • /
    • 2021
  • This paper presents the latest in-vehicle network Ethernet standardization and future trends in automotive Ethernet technology. The proposed system provides design and optimization algorithms for automotive networking related to Audio Video Bridging (AVB). We present a design of the in-vehicle network system as well as an AVB optimization for automotive use. The proposed Reduced Latency of Machine to Machine (RLMM) scheme plays a key role in reducing latency among devices; in real-world experimental cases, RLMM reduces latency by around 41.2%. A setup optimized for the automotive network environment is expected to significantly reduce time in the development and design process. The image-transmission latency results are reliable because average values were collected over a long period. Analyzing the latency between multimedia devices within a limited time will be of considerable benefit to the industry. Furthermore, the proposed reliable camera and video streaming through optimized AVB device settings would strongly support real-time comprehension and analysis of images with artificial intelligence (AI) algorithms in autonomous driving. (A latency-measurement sketch follows this entry.)
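
The entry above reports an average latency reduction of about 41.2% computed from long-running measurements. Purely to illustrate that kind of bookkeeping, and not the RLMM mechanism itself, the sketch below derives per-frame latencies from matched send/receive timestamps and reports the mean reduction between two runs; all numbers are toy values chosen only to land in the same ballpark.

```python
import statistics

def latencies(send_times, recv_times):
    """Per-frame latency samples (seconds) from matched send/receive timestamps."""
    return [rx - tx for tx, rx in zip(send_times, recv_times)]

def reduction_percent(baseline, optimized):
    """Percentage reduction of mean latency relative to the baseline run."""
    base, opt = statistics.mean(baseline), statistics.mean(optimized)
    return 100.0 * (base - opt) / base

# Toy timestamps for four frames sent at ~30 fps (not measured data).
send = [0.000, 0.033, 0.066, 0.100]
recv_default   = [t + d for t, d in zip(send, [0.0102, 0.0098, 0.0105, 0.0101])]
recv_optimized = [t + d for t, d in zip(send, [0.0060, 0.0058, 0.0061, 0.0059])]

base, opt = latencies(send, recv_default), latencies(send, recv_optimized)
print(f"mean default  : {statistics.mean(base) * 1e3:.2f} ms")
print(f"mean optimized: {statistics.mean(opt) * 1e3:.2f} ms")
print(f"reduction     : {reduction_percent(base, opt):.1f} %")
```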

An Analysis of Big Video Data with Cloud Computing in Ubiquitous City (클라우드 컴퓨팅을 이용한 유시티 비디오 빅데이터 분석)

  • Lee, Hak Geon;Yun, Chang Ho;Park, Jong Won;Lee, Yong Woo
    • Journal of Internet Computing and Services
    • /
    • v.15 no.3
    • /
    • pp.45-52
    • /
    • 2014
  • The Ubiquitous City (U-City) is a smart, intelligent city that satisfies people's desire to enjoy IT services with any device, anytime, anywhere. It is a future city model based on the Internet of Everything or Things (IoE/IoT) and includes a large number of networked video cameras. These cameras support many U-City services, providing one of the main input data streams together with sensors, and they continuously generate a huge amount of video information, truly big data for the U-City. The U-City is usually required to manipulate this big data in real time, which is far from easy. Often the accumulated video data must also be analyzed to detect an event or find a person, which requires substantial computational power and time. Current research tries to reduce the processing time of such big video data, and cloud computing is a good way to address the problem. Among the many applicable cloud-computing methodologies, MapReduce is an interesting and attractive one: it has many advantages and is gaining popularity in many areas. Because camera resolution improves sharply year by year, the data produced by networked video cameras grows exponentially, so handling video from high-quality cameras means coping with real big data. Video surveillance systems were of limited use before cloud computing, but they are now spreading widely in U-Cities thanks to such methodologies. Video data are unstructured, so good research results on analyzing them with MapReduce are hard to find. This paper presents an analysis system for video surveillance, a cloud-computing-based video data management system that is easy to deploy, flexible, and reliable. It consists of the video manager, the video monitors, the storage for video images, the storage client, and the streaming-IN component. The video monitor consists of a video translator and a protocol manager, and the storage contains the MapReduce analyzer. All components were designed according to the functional requirements of a video surveillance system. The streaming-IN component receives video data from the networked cameras, delivers it to the storage client, and manages network bottlenecks to smooth the data stream. The storage client receives the video data from the streaming-IN component, stores it, and helps other components access the storage. The video monitor transfers video data by smooth streaming and manages the protocols: the video translator sub-component lets users manage the resolution, codec, and frame rate of the video, while the protocol sub-component manages the Real Time Streaming Protocol (RTSP) and the Real Time Messaging Protocol (RTMP). We use the Hadoop Distributed File System (HDFS) as the cloud storage; Hadoop stores the data in HDFS and provides a platform that can process it with the simple MapReduce programming model. We also suggest our own methodology for analyzing video images with MapReduce: the workflow of the video analysis is presented and explained in detail. The performance evaluation was conducted experimentally, the proposed system worked well, and the results are presented with analysis. On our cluster system we used compressed 1920×1080 (FHD) video, the H.264 codec, and HDFS as the video storage, and we measured the processing time according to the number of frames per mapper. By tracing the optimal split size of the input data and the processing time according to the number of nodes, we found that the system performance scales linearly. (A toy MapReduce-style sketch follows this entry.)
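
The entry above analyzes surveillance video with the MapReduce programming model on Hadoop/HDFS, but its exact workflow is not reproduced here. The sketch below only illustrates the map/shuffle/reduce pattern in plain Python, counting hypothetical detections per camera across frame records; the record layout and the "detector" output are stand-ins, not the paper's pipeline.

```python
from collections import defaultdict
from itertools import chain

# Stand-in records: (camera_id, frame_index, detected_event_or_None).
frames = [
    ("cam-01", 0, None), ("cam-01", 1, "person"), ("cam-01", 2, "person"),
    ("cam-02", 0, None), ("cam-02", 1, "vehicle"),
]

def mapper(record):
    """Map phase: emit (key, 1) for every frame that contains a detection."""
    camera_id, _frame_index, event = record
    if event is not None:
        yield (f"{camera_id}:{event}", 1)

def reducer(key, values):
    """Reduce phase: sum the counts collected for one key."""
    return key, sum(values)

# Shuffle phase: group mapper output by key (Hadoop performs this step itself).
grouped = defaultdict(list)
for key, value in chain.from_iterable(mapper(r) for r in frames):
    grouped[key].append(value)

print(dict(reducer(k, v) for k, v in grouped.items()))
# {'cam-01:person': 2, 'cam-02:vehicle': 1}
```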

Accurate Prediction of Real-Time MPEG-4 Variable Bit Rate Video Traffic

  • Lee, Kang-Yong;Kim, Moon-Seong;Jang, Hee-Seon;Cho, Kee-Seong
    • ETRI Journal
    • /
    • v.29 no.6
    • /
    • pp.823-825
    • /
    • 2007
  • In this letter, we propose a novel algorithm to predict MPEG-coded real-time variable bit rate (VBR) video traffic. From frame-size measurements, the algorithm extracts the statistical properties of the video traffic and uses them to predict the size of the next I-, P-, or B-frame. Simulation results with real-world MPEG-4 VBR video traces show that the proposed algorithm provides more accurate predictions than existing methods in the literature. (A generic baseline predictor is sketched after this entry.)

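The letter above does not state its prediction formula, so the sketch below is not the proposed algorithm. It shows only a generic per-frame-type baseline, an exponentially weighted moving average of past frame sizes kept separately for I, P, and B frames, of the kind such predictors are usually compared against; the smoothing factor and trace values are arbitrary.

```python
class PerTypeEWMAPredictor:
    """Predict the next MPEG frame size per type (I, P, B) with an
    exponentially weighted moving average of previously observed sizes."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha      # smoothing factor (assumed, not from the letter)
        self.estimate = {}      # frame type -> current size estimate (bits)

    def predict(self, frame_type: str) -> float:
        """Current estimate for this frame type; 0.0 until one is observed."""
        return self.estimate.get(frame_type, 0.0)

    def update(self, frame_type: str, observed_size: float) -> None:
        """Blend the newly observed frame size into the running estimate."""
        prev = self.estimate.get(frame_type, observed_size)
        self.estimate[frame_type] = self.alpha * observed_size + (1 - self.alpha) * prev

# Toy trace of (frame type, size in bits) roughly following an IBBP pattern.
trace = [("I", 120_000), ("B", 20_000), ("B", 22_000), ("P", 45_000), ("I", 118_000)]
predictor = PerTypeEWMAPredictor()
for ftype, size in trace:
    print(ftype, "predicted:", round(predictor.predict(ftype)), "actual:", size)
    predictor.update(ftype, size)
```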

A Study on the Implementation of Picture Segmentation for a Real-Time Automatic Video Tracker System (실시간 자동영상 추적기를 위한 영상영역화의 구현에 관한 연구)

  • 문종환;김경수;김재희
    • Proceedings of the Korean Institute of Communication Sciences Conference
    • /
    • 1986.10a
    • /
    • pp.186-190
    • /
    • 1986
  • This paper describes a way of implementing the segmentation of 128×128-pixel images used as inputs to a real-time automatic video tracker. The suggested method uses the lowest valley value of a computed 16-level intensity histogram. This method improves smoothing effects and also significantly reduces hardware requirements. The entire segmentation process is carried out in 10 ms, making real-time application possible. (A histogram-valley thresholding sketch follows this entry.)

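The abstract above segments an image by taking the lowest valley of a 16-level intensity histogram as the threshold. The sketch below is one plausible reading of that idea, not the authors' hardware implementation: build a 16-bin histogram, take the interior bin that is a local minimum with the lowest count, and binarize the image at that bin boundary. The test image is synthetic.

```python
import numpy as np

def valley_threshold_16(image: np.ndarray) -> int:
    """Threshold at the lowest interior valley of a 16-bin histogram
    of an 8-bit grayscale image."""
    hist, edges = np.histogram(image, bins=16, range=(0, 256))
    valleys = [i for i in range(1, 15)
               if hist[i] <= hist[i - 1] and hist[i] <= hist[i + 1]]
    if not valleys:                # no interior valley: fall back to mid-range
        return 128
    lowest = min(valleys, key=lambda i: hist[i])
    return int(edges[lowest + 1])  # upper edge of the valley bin

def segment(image: np.ndarray) -> np.ndarray:
    """Binarize at the valley threshold: 1 = bright object, 0 = background."""
    return (image >= valley_threshold_16(image)).astype(np.uint8)

# Toy 128x128 test image: a dim background gradient with a brighter square target.
img = np.tile(np.linspace(30, 70, 128).astype(np.uint8), (128, 1))
img[48:80, 48:80] = np.linspace(180, 220, 32).astype(np.uint8)
print(valley_threshold_16(img), int(segment(img).sum()))  # threshold 96, 1024 object pixels
```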

The Efficacy of Zoom Technology as an Educational Tool for English Reading Comprehension Achievement in EFL Classroom

  • Kim, HyeJeong
    • International Journal of Advanced Culture Technology
    • /
    • v.8 no.3
    • /
    • pp.198-205
    • /
    • 2020
  • The purpose of this study is to investigate the effect of real-time remote video instruction using Zoom on learners' English reading achievement. The study also sought to gauge the efficiency of Zoom video lectures and to identify how to supplement them by surveying learners' opinions of and satisfaction with them. To this end, control and experimental groups were set up, and two achievement tests and a questionnaire were administered. The results demonstrated that Zoom video lectures have a positive effect on learners' English reading achievement. The questionnaire found that learners are satisfied with Zoom video lectures for the following reasons: 'increased interest in and motivation towards learning', 'self-directed learning', 'active interaction', 'ease of access', and 'ease of information retrieval'. At the same time, some learners are dissatisfied with Zoom video lectures due to 'mechanical errors or defects', 'poor audio quality', and 'the need to add customized functions for efficient classes'. In practice, Zoom video lectures must be supplemented with automatic attendance processing, convenient data upload and download, and more efficient video screen management. Given the recent increase in online classes, we, as instructors, must develop teaching activities and strategies for video lectures that encourage active participation by learners.

The Implementation of DSP-Based Real-Time Video Transmission System using In-Vehicle Multimedia Network (차량 내 멀티미디어 네트워크를 이용한 DSP 기반 실시간 영상 전송 시스템의 구현)

    • Jeon, Young-Joon;Kim, Jin-Il
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.14 no.1
    • /
    • pp.62-69
    • /
    • 2013
  • This paper proposes a real-time video transmission system for car-mounted cameras based on the MOST network. Existing vehicles transmit video from car-mounted cameras over analog connections, but the growing number of cameras calls for a network to connect them. In this paper, a DSP performs MPEG-2 encoding/decoding for real-time video transmission within a short period of time, and MediaLB is employed to transfer the data stream between the DSP and the MOST network controller. Because the DSP cannot receive the data stream directly from MediaLB, an FPGA is used to deliver the MediaLB data stream to the DSP. MediaLB is designed to streamline hardware/software application development for the MOST network and to support all MOST network data transport methods. The test results verify that real-time video transmission using the proposed system operates normally. (A data-path sketch follows this entry.)
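
The entry above describes a data path in which an FPGA bridges the MediaLB stream between the DSP encoder and the MOST network controller. The sketch below merely models that hand-off as a software pipeline of queues; the component functions, the 188-byte chunk size, and the dummy frame are illustrative assumptions, not the actual hardware design.

```python
from queue import Queue

def dsp_encode(raw_frame: bytes) -> bytes:
    """Stand-in for MPEG-2 encoding on the DSP (here just a marker header + payload)."""
    return b"MPEG2" + raw_frame

def fpga_bridge(encoded: bytes, chunk: int = 188):
    """Stand-in for the FPGA splitting the encoded stream into MediaLB-sized chunks."""
    return [encoded[i:i + chunk] for i in range(0, len(encoded), chunk)]

def most_controller_send(chunks, network: Queue) -> None:
    """Stand-in for the MOST network controller pushing chunks onto the bus."""
    for c in chunks:
        network.put(c)

# Drive one dummy camera frame through the modeled path: camera -> DSP -> FPGA -> MOST.
network = Queue()
raw_frame = bytes(600)
most_controller_send(fpga_bridge(dsp_encode(raw_frame)), network)
print("chunks queued for the MOST bus:", network.qsize())  # 605 bytes -> 4 chunks
```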

An Advanced Coding for Video Streaming System: Hardware and Software Video Coding

  • Le, Tuan Thanh;Ryu, Eun-Seok
    • Journal of Internet Computing and Services
    • /
    • v.21 no.4
    • /
    • pp.51-57
    • /
    • 2020
  • High Efficiency Video Coding (HEVC) has become the most promising video coding technology. However, its adoption in video streaming systems is restricted by factors such as cost, design complexity, and compatibility with existing systems; while HEVC deployment across different systems is still being worked out, H.264/AVC remains one of the best choices for current video streaming systems. This paper presents an adaptive method for handling video streams using video coding on an integrated circuit (IC) designed with a private network processor. The proposed system transfers multimedia data from cameras or other video sources to a client. A series of video and audio packets from the video source is forwarded to the designed IC, referred to as the Tx transmitter, via an HDMI cable. The Tx turns the input data into a real-time stream using its own protocol, following the Real-time Transport Protocol for both video and audio, and then transmits the output packets to the video client through the Internet. The client includes hardware or software video/audio decoders to decode the received packets. The Tx encodes video with H.264/AVC or HEVC, and its audio coding is PCM. By handling the message exchange between the Tx and the client, the transmission session can be set up quickly. Results show that a throughput of about 50 Mbps can be achieved with approximately 80 ms of latency. (An RTP-style packetization sketch follows this entry.)
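
The entry above mentions a proprietary protocol that follows the Real-time Transport Protocol (RTP) for packetizing video and audio. As context only, the sketch below packs the standard 12-byte RTP fixed header (RFC 3550) in front of an encoded payload; the payload type, SSRC, and chunk sizes are arbitrary assumptions, not the paper's settings.

```python
import struct

def rtp_packet(payload: bytes, seq: int, timestamp: int,
               ssrc: int = 0x1234ABCD, payload_type: int = 96,
               marker: bool = False) -> bytes:
    """Prepend the 12-byte RTP fixed header (RFC 3550) to an encoded payload.
    Fixed fields here: version=2, no padding, no extension, no CSRC entries."""
    byte0 = 2 << 6                                      # V=2, P=0, X=0, CC=0
    byte1 = (int(marker) << 7) | (payload_type & 0x7F)  # marker flag + payload type
    header = struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                         timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)
    return header + payload

# Packetize three dummy "encoded video" chunks using a 90 kHz timestamp clock.
chunks = [b"\x00" * 1200, b"\x01" * 1200, b"\x02" * 800]
packets = [rtp_packet(c, seq=i, timestamp=i * 3000, marker=(i == len(chunks) - 1))
           for i, c in enumerate(chunks)]
print([len(p) for p in packets])  # each payload plus its 12-byte header
```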

Implementation of Optimized 3D Input & Output Systems for Web-based Real-time 3D Video Communication (웹 기반의 입체 동영상 통신을 위한 3차원 입출력 시스템의 최적화 구현)

  • Ko, Jung-Hwan;Lee, Jung-Suk;An, Young-Hwan
    • Journal of the Institute of Electronics Engineers of Korea - IE (전자공학회논문지 IE)
    • /
    • v.43 no.4
    • /
    • pp.105-114
    • /
    • 2006
  • In this paper, 3D input and output systems for a web-based real-time 3D video communication system are proposed, using IEEE 1394 digital cameras, an Intel Xeon server system, and the Microsoft DirectShow library. Conditions for optimizing the operation of the stereo camera, the 3D display, and the signal processing system are analyzed. Input and output systems that satisfy the required optimization conditions are carefully selected, and the final 3D video communication system is implemented using the three optimized devices. The overall control system is developed with Microsoft Visual C++ .NET and the Microsoft DirectX 9.1 SDK. Experimental results show that an observer perceives natural presence from the server system's multi-view (4-view) 3D video in real time, as well as from the client system's 3D video, suggesting that the proposed web-based real-time 3D video communication is applicable in real settings. (A stereo-composition sketch follows this entry.)
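
The entry above covers stereo capture and multi-view 3D display but does not detail the processing, so the sketch below shows only a generic step found in such pipelines: composing left/right camera frames into a side-by-side stereo frame (and a simple red-cyan anaglyph) with NumPy. It illustrates stereo frame packing in general, not the paper's DirectShow implementation; the frame sizes and colors are dummies.

```python
import numpy as np

def side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Pack two HxWx3 frames into one Hx(2W)x3 side-by-side stereo frame."""
    assert left.shape == right.shape
    return np.concatenate([left, right], axis=1)

def anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Red-cyan anaglyph: red channel from the left eye, green/blue from the right."""
    out = right.copy()
    out[..., 0] = left[..., 0]
    return out

# Dummy 480x640 RGB frames standing in for the two IEEE 1394 camera captures.
left = np.zeros((480, 640, 3), dtype=np.uint8);  left[..., 0] = 255   # red-ish view
right = np.zeros((480, 640, 3), dtype=np.uint8); right[..., 2] = 255  # blue-ish view
print(side_by_side(left, right).shape, anaglyph(left, right).shape)
```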

Real-Time Image-Based Relighting for Tangible Video Teleconference (실감화상통신을 위한 실시간 재조명 기술)

    • Ryu, Sae-Woon;Park, Jong-Il
    • Journal of Broadcast Engineering
    • /
    • v.14 no.6
    • /
    • pp.807-810
    • /
    • 2009
  • This paper deals with a real-time image-based relighting system for tangible video teleconferencing. The proposed system renders the extracted human object using virtual environment images, so it can virtually homogenize the lighting environments of remote users in a teleconference or render the participants as if they were in virtual places. To realize this, the paper obtains 3D object models of the users in real time using a controlled lighting system with a single color camera and two synchronized directional flash lights. The proposed system generates pure shading images by subtracting the flash-off image from the flash-on image. From each pure shading reflectance map, a directional normal map is generated by multiplying the reflectance map with a basic normal vector map; each directional basic normal map is obtained from the inner product of the incident light vector and the camera viewing vector, where the basic normal vector is a basis component of the real surface normal vector. The proposed system lets users feel immersed in the video teleconference as if they were in the virtual environment. (A Lambertian relighting sketch follows this entry.)
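
The entry above recovers surface normal information from flash-on/flash-off image pairs and then relights the extracted person. Without reproducing that capture step, the sketch below shows the standard relighting operation such a pipeline feeds into: given a per-pixel normal map and albedo, shade the image under a new directional light with the Lambertian model I = albedo * max(0, n · l). The normal map and albedo here are synthetic stand-ins, not data from the paper.

```python
import numpy as np

def relight(albedo: np.ndarray, normals: np.ndarray, light_dir: np.ndarray) -> np.ndarray:
    """Lambertian relighting: per-pixel intensity = albedo * max(0, n . l)."""
    l = light_dir / np.linalg.norm(light_dir)               # unit light direction
    n_dot_l = np.clip(np.einsum("hwc,c->hw", normals, l), 0.0, None)
    return albedo * n_dot_l

# Synthetic 64x64 example: a patch facing the camera, with its right half tilted toward +x.
h, w = 64, 64
normals = np.zeros((h, w, 3)); normals[..., 2] = 1.0        # facing the camera
normals[:, w // 2:] = [0.5, 0.0, 0.8660254]                 # tilted half (unit length)
albedo = np.full((h, w), 0.8)

frontal = relight(albedo, normals, np.array([0.0, 0.0, 1.0]))
side    = relight(albedo, normals, np.array([1.0, 0.0, 1.0]))
print(frontal.mean().round(3), side.mean().round(3))         # shading changes with the light
```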