• Title/Summary/Keyword: performance video


Design and Analysis of 3D Scalable Video Codec (3차원 스케일러블 비디오 코덱 설계 및 성능 분석)

  • Lee, Jae-Yung;Kim, Jae-Gon;Han, Jong-Ki
    • Journal of Broadcast Engineering / v.21 no.2 / pp.219-236 / 2016
  • In this paper, we design and implement a 3D scalable video codec by combining Scalable HEVC (SHVC) and 3D-HEVC, the extension standards of High Efficiency Video Coding (HEVC). The proposed 3D scalable video codec supports view and spatial scalability, which are the properties of 3D-HEVC and SHVC, respectively, and its high-level syntax is designed to support these multiple scalabilities. In the computer simulations, we confirm the conformance of the proposed codec and analyze its performance.
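
As a rough illustration of the kind of multi-scalability layer signaling the abstract describes, the sketch below organizes layers by a view dimension and a spatial dimension with explicit reference layers. The field names are simplified stand-ins, not the actual SHVC/3D-HEVC high-level syntax elements.

```python
# Illustrative sketch of multi-scalability layer signaling; the field names are
# simplified stand-ins for the actual SHVC/3D-HEVC high-level syntax elements.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LayerConfig:
    layer_id: int            # nuh_layer_id-like identifier
    view_id: int             # view scalability dimension (3D-HEVC)
    spatial_id: int          # spatial scalability dimension (SHVC)
    ref_layer_ids: List[int] = field(default_factory=list)  # direct reference layers

def decoding_order(layers: List[LayerConfig]) -> List[int]:
    """Return layer ids ordered so that every layer follows its reference layers."""
    done, order = set(), []
    pending = {l.layer_id: l for l in layers}
    while pending:
        ready = [lid for lid, cfg in pending.items()
                 if all(r in done for r in cfg.ref_layer_ids)]
        if not ready:
            raise ValueError("circular or missing layer dependency")
        for lid in ready:
            order.append(lid)
            done.add(lid)
            del pending[lid]
    return order

# Example: base view/base resolution, a spatial enhancement, and a second view.
layers = [
    LayerConfig(0, view_id=0, spatial_id=0),
    LayerConfig(1, view_id=0, spatial_id=1, ref_layer_ids=[0]),  # SHVC-style spatial layer
    LayerConfig(2, view_id=1, spatial_id=0, ref_layer_ids=[0]),  # 3D-HEVC-style view layer
]
print(decoding_order(layers))  # [0, 1, 2]
```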

Design and Evaluation of Data Input/output for Video Conference System (화상회의 시스템에서의 데이터 입출력 설계 및 평가)

  • 김현기
    • Journal of Korea Society of Industrial Information Systems / v.8 no.2 / pp.38-44 / 2003
  • In this paper, we analyze the architecture and input/output model of a video conference system and propose a method in which multimedia data is transferred from the network interface card to the main memory and the multimedia processor simultaneously, relieving the system-bus bottleneck. The proposed method can reduce the number of system bus accesses, the bus cycles, the data transmission time, and the compression ratio of video data in the video conference system. We compared the performance of the proposed method with conventional methods in multi-party video conference systems, and the simulation results show that the proposed method reduces the transmission time of multimedia data compared with the conventional method.
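
A back-of-the-envelope sketch of the claimed saving: in the conventional path the data crosses the system bus twice (NIC to main memory, then main memory to the multimedia processor), while a simultaneous transfer crosses it once. The frame size and bus width below are illustrative assumptions, not the paper's figures.

```python
# Compare system-bus traffic for the conventional path (data crosses the bus twice)
# versus a simultaneous transfer to both destinations (data crosses it once).
# The frame size and bus width are illustrative assumptions, not the paper's numbers.

FRAME_BYTES = 352 * 288 * 3 // 2   # one CIF frame, YUV 4:2:0 (assumed)
BUS_WIDTH_BYTES = 8                # 64-bit system bus (assumed)

def bus_cycles(bytes_moved: int) -> int:
    """Number of bus cycles needed to move `bytes_moved` over the bus."""
    return -(-bytes_moved // BUS_WIDTH_BYTES)   # ceiling division

conventional = 2 * bus_cycles(FRAME_BYTES)      # NIC -> memory, then memory -> processor
proposed = 1 * bus_cycles(FRAME_BYTES)          # single simultaneous transfer

print(f"conventional: {conventional} bus cycles per frame")
print(f"proposed    : {proposed} bus cycles per frame "
      f"({100 * (conventional - proposed) / conventional:.0f}% fewer)")
```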


Implementation and Performance Measurement of Personal Media Gateway for Applications over BcN Networks (BcN용 미디어 프로세서형 단말(PMG)의 구현 및 성능시험)

  • Jang, Seong-Hwan;Yang, Soo-Kyung;Cha, Young;Choi, Woo-Suk;Son, Seok-Bae;Kim, Jung-Joon
    • 한국정보통신설비학회:학술대회논문집 / 2005.08a / pp.329-332 / 2005
  • In this paper, we describe the implementation of a personal media gateway (PMG) for applications over BcN networks. The PMG is a TV-based set-top terminal that enables transmission of Full D1 high-quality video and audio at a maximum rate of 2 Mbps, and it supports the SIP protocol and QoS for BcN networks. The hardware of the PMG consists of a host module, an audio/video codec processing module, a DTMF module, and a remote-control I/O module. H.263 and MPEG-4 are implemented in software on a DSP as the codecs for bi-directional communication and streaming, respectively, and G.711 and Ogg Vorbis are implemented as the audio codecs. We examined the video quality using the video quality test equipment developed by KT Convergence Lab. The experimental results show a video quality of MOS 4.1 and an audio quality of MOS 4.3. We expect that the PMG will lead to promising business models and create new customer value.


Video Surveillance System Design and Realization with Interframe Probability Distribution Analyzation (인터프레임 확률분포분석에 의한 비디오 감시 시스템 설계 구현)

  • Ryu, Kwang-Ryol;Kim, Ja-Hwan
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.6 / pp.1064-1069 / 2008
  • A design and realization of a video surveillance system based on interframe probability distribution analysis is presented in this paper. The system is built on a high-performance DSP processor. Video surveillance is implemented by analyzing the interframe probability distribution, modeled with a trivariate normal distribution (weight, mean, variance), to scan for objects in a restricted area, and the video analysis algorithm forms a difference image from the probability distribution of several frames compressed with standard JPEG. The processing time for a D1 (720×480) image is 85 ms per frame, which enables the system to run at 12 frames per second. Objects in the restricted area defined by the rules are detected with 100% accuracy unless the object moves too fast.
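
A minimal sketch of a per-pixel (weight, mean, variance) background model in the spirit of the interframe probability-distribution analysis described above; the learning rate and deviation threshold are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

# Minimal per-pixel background model keeping a (weight, mean, variance) triple.
# The learning rate and threshold are illustrative assumptions.

ALPHA = 0.05          # learning rate for the model updates
K = 2.5               # deviation threshold in standard deviations

def init_model(frame: np.ndarray):
    gray = frame.astype(np.float32)
    return {"weight": np.ones_like(gray), "mean": gray.copy(),
            "var": np.full_like(gray, 15.0 ** 2)}

def update_and_detect(model, frame: np.ndarray) -> np.ndarray:
    gray = frame.astype(np.float32)
    dist = np.abs(gray - model["mean"])
    foreground = dist > K * np.sqrt(model["var"])      # pixels unlikely under the model
    match = ~foreground
    # Update the statistics only where the pixel still matches the background.
    model["weight"] = (1 - ALPHA) * model["weight"] + ALPHA * match
    model["mean"][match] += ALPHA * (gray[match] - model["mean"][match])
    model["var"][match] += ALPHA * (dist[match] ** 2 - model["var"][match])
    return foreground

# Usage with synthetic frames (stand-ins for decoded JPEG frames):
frames = [np.random.randint(0, 256, (480, 720), np.uint8) for _ in range(5)]
model = init_model(frames[0])
for f in frames[1:]:
    mask = update_and_detect(model, f)
    print("foreground pixels:", int(mask.sum()))
```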

A Video Expression Recognition Method Based on Multi-mode Convolution Neural Network and Multiplicative Feature Fusion

  • Ren, Qun
    • Journal of Information Processing Systems / v.17 no.3 / pp.556-570 / 2021
  • Existing video expression recognition methods mainly focus on the spatial feature extraction of video expression images but tend to ignore the dynamic features of video sequences. To solve this problem, a multi-mode convolution neural network method is proposed to effectively improve the performance of facial expression recognition in video. First, OpenFace 2.0 is used to detect face images in the video, and two deep convolution neural networks are used to extract spatiotemporal expression features: a spatial convolution neural network extracts the spatial information features of each static expression image, and a temporal convolution neural network extracts dynamic information features from the optical flow of multiple expression images. Then, the spatiotemporal features learned by the two deep convolution neural networks are fused by multiplication. Finally, the fused features are input into a support vector machine to perform the facial expression classification. Experimental results show that the recognition accuracy of the proposed method reaches 64.57% and 60.89% on the RML and BAUM-1s datasets, respectively, which is better than that of the other compared methods.
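
The fusion-and-classification stage can be sketched as follows: element-wise multiplication of the spatial and temporal feature vectors followed by an SVM. Random vectors stand in for the CNN outputs, which are not specified here, so this is only a structural sketch of the pipeline.

```python
import numpy as np
from sklearn.svm import SVC

# Sketch of the fusion-and-classification stage: spatial and temporal (optical-flow)
# CNN features are fused by element-wise multiplication and fed to an SVM.
# Random vectors stand in for the CNN outputs, which are not reproduced here.

rng = np.random.default_rng(0)
n_samples, feat_dim, n_classes = 200, 128, 6

spatial_feats = rng.normal(size=(n_samples, feat_dim))    # spatial CNN output (placeholder)
temporal_feats = rng.normal(size=(n_samples, feat_dim))   # temporal CNN output (placeholder)
labels = rng.integers(0, n_classes, size=n_samples)

fused = spatial_feats * temporal_feats                    # multiplicative feature fusion

clf = SVC(kernel="rbf", C=1.0)
clf.fit(fused[:150], labels[:150])                        # simple train/test split
print("toy accuracy:", clf.score(fused[150:], labels[150:]))
```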

Effect of Video Education on the Moment of Hand Hygiene among Nursing Students in Clinical Practicum (임상실습 중인 간호학생의 손 위생 시점 동영상 실습교육의 효과)

  • Choi, Hye-Kyung;Ju, Youn-sook
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.6 / pp.526-535 / 2018
  • Purpose: This study conducted a non-equivalent pre-post-test quasi-experimental design that applied a video program about hand hygiene to nursing students to verify its effects on knowledge and performance of hand hygiene in clinical practicum. Methods: Data were collected from 92 students in three nursing colleges (44 in the experimental group and 48 in the control group) from 2 March to 30 June, 2017. Results: The results of the hypothesis tests were as follows. 1) Before and after the hand hygiene video program education, hand hygiene knowledge differed between the experimental group and the control group (t=6.30, p<0.001). 2) After the hand hygiene moment video program education, hand hygiene knowledge was higher in the experimental group than in the control group (t=6.34, p<0.001). 3) After the hand hygiene moment video program education, hand hygiene performance was higher in the experimental group than in the control group (t=3.82, p<0.001). 4) Knowledge and performance of the hand hygiene moments are correlated (r=0.458, p<0.001). Conclusion: The hand hygiene moment education program may enhance hand hygiene knowledge and hand hygiene performance.

Removal of Complexity Management in H.263 Codec for A/V Delivery Systems

  • Jalal, Ahmad;Kim, Sang-Wook
    • 한국HCI학회:학술대회논문집 / 2006.02a / pp.931-936 / 2006
  • This paper discusses issues in real-time compression algorithms for A/V delivery in a distributed environment without compromising video quality. The theme of this research is to manage the critical processing stages (speed, information loss, redundancy, distortion) so as to achieve a better encoding ratio without fluctuation of the quantization scale, using an IP configuration. Techniques such as a distortion measure combined with a search method address the blocking phenomenon in the motion estimation process, while a passing technique and floating measurement are configured through the discrete cosine transform (DCT) to reduce the computational complexity of the video codec. The bit delay on the encoder buffer side, especially in the real-time state, is controlled to produce high-quality video while maintaining a low buffering delay. Our results show an accuracy gain and encouraging performance in all of the above processes.
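
One concrete instance of the distortion-measure-plus-search step the abstract alludes to is full-search block matching with a sum-of-absolute-differences (SAD) cost; the block size and search range below are illustrative, not the paper's settings.

```python
import numpy as np

# Full-search block matching with a SAD distortion measure over a small search
# window. Block and window sizes are illustrative, not the paper's settings.

BLOCK, SEARCH = 16, 7   # 16x16 macroblocks, +/-7 pixel search range

def sad(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.abs(a.astype(np.int32) - b.astype(np.int32)).sum())

def motion_vector(ref: np.ndarray, cur: np.ndarray, by: int, bx: int):
    """Full-search motion vector for the block at (by, bx) in the current frame."""
    block = cur[by:by + BLOCK, bx:bx + BLOCK]
    best = (0, 0, sad(block, ref[by:by + BLOCK, bx:bx + BLOCK]))
    for dy in range(-SEARCH, SEARCH + 1):
        for dx in range(-SEARCH, SEARCH + 1):
            y, x = by + dy, bx + dx
            if 0 <= y <= ref.shape[0] - BLOCK and 0 <= x <= ref.shape[1] - BLOCK:
                cost = sad(block, ref[y:y + BLOCK, x:x + BLOCK])
                if cost < best[2]:
                    best = (dy, dx, cost)
    return best

ref = np.random.randint(0, 256, (64, 64), np.uint8)
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))      # synthetic global motion
print(motion_vector(ref, cur, 16, 16))              # expect (-2, 3, 0)
```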


Computation Controllable Mode Decision and Motion Estimation for Scalable Video Coding

  • Zheng, Liang-Wei;Li, Gwo-Long;Chen, Mei-Juan;Yeh, Chia-Hung;Tai, Kuang-Han;Wu, Jian-Sheng
    • ETRI Journal / v.35 no.3 / pp.469-479 / 2013
  • This paper proposes an efficient computation-aware mode decision and search point (SP) allocation algorithm for spatial and quality scalabilities in Scalable Video Coding. In our proposal, a linear model is derived to allocate the computation for macroblocks in the enhancement layers by using the rate distortion costs of the base layer. In addition, an adaptive SP decision algorithm is proposed to decide the number of SPs for motion estimation under the constraint of the allocated computation. Experimental results demonstrate that the proposed algorithm allocates the computation resource efficiently and outperforms other works in rate distortion performance under the same computational availability constraint.
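
A minimal sketch of the allocation idea, assuming a simple proportional linear model: each enhancement-layer macroblock gets a share of the total search-point budget proportional to the rate distortion cost of its co-located base-layer macroblock, clipped to per-macroblock bounds. The budget, bounds, and costs below are illustrative.

```python
import numpy as np

# Sketch of computation-aware search-point (SP) allocation: each enhancement-layer
# macroblock receives a share of the total SP budget proportional to the rate
# distortion cost of its co-located base-layer macroblock, via a simple linear
# model. The budget, bounds, and RD costs below are illustrative assumptions.

TOTAL_SP_BUDGET = 4096      # SPs available for one enhancement-layer frame
MIN_SP, MAX_SP = 4, 64      # per-macroblock bounds on search points

def allocate_sps(base_layer_rd_costs: np.ndarray) -> np.ndarray:
    """Linear allocation: SP_i = budget * J_i / sum(J), clipped to [MIN_SP, MAX_SP]."""
    weights = base_layer_rd_costs / base_layer_rd_costs.sum()
    sps = np.round(TOTAL_SP_BUDGET * weights).astype(int)
    return np.clip(sps, MIN_SP, MAX_SP)

rd_costs = np.random.default_rng(1).uniform(100, 5000, size=120)  # 120 macroblocks
sps = allocate_sps(rd_costs)
print("allocated SPs:", sps[:10], "... total used:", sps.sum())
```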

Content-Based Video Retrieval System Using Color and Motion Features (색상과 움직임 정보를 이용한 내용기반 동영상 검색 시스템)

  • 김소희;김형준;정연구;김회율
    • Proceedings of the IEEK Conference / 2001.06c / pp.133-136 / 2001
  • Numerous attempts have been made to retrieve video by its content. Recently, MPEG-7 standardized a set of visual descriptors for the purpose of searching and retrieving multimedia data. Among them, color and motion descriptors are employed to develop a content-based video retrieval system that searches for videos with similar characteristics in terms of the color and motion features of the video sequence. In this paper, the performance of the proposed system is analyzed and evaluated. Experimental results indicate that the processing time required for retrieval using the MPEG-7 descriptors is relatively short, at the expense of some retrieval accuracy.
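
A minimal sketch of color-plus-motion retrieval: each clip is summarized by a color histogram and a frame-difference (motion) histogram, and clips are ranked by descriptor distance. These plain histograms are stand-ins, not the actual MPEG-7 descriptors used by the system.

```python
import numpy as np

# Sketch of content-based retrieval with a color feature and a motion feature.
# The plain histograms below are simple stand-ins, not the actual MPEG-7
# color/motion descriptors used by the system described above.

def video_descriptor(frames: list) -> np.ndarray:
    """Concatenate an average color histogram and a frame-difference (motion) histogram."""
    color_hist = np.zeros(32)
    motion_hist = np.zeros(16)
    for prev, cur in zip(frames, frames[1:]):
        color_hist += np.histogram(cur, bins=32, range=(0, 256))[0]
        diff = np.abs(cur.astype(np.int16) - prev.astype(np.int16))
        motion_hist += np.histogram(diff, bins=16, range=(0, 256))[0]
    desc = np.concatenate([color_hist, motion_hist]).astype(np.float64)
    return desc / desc.sum()

def rank(query: np.ndarray, database: list) -> list:
    """Return database indices sorted by L1 distance to the query descriptor."""
    dists = [np.abs(query - d).sum() for d in database]
    return sorted(range(len(database)), key=lambda i: dists[i])

# Toy usage with random grayscale clips standing in for real videos.
rng = np.random.default_rng(2)
clips = [[rng.integers(0, 256, (120, 160), np.uint8) for _ in range(8)] for _ in range(4)]
db = [video_descriptor(c) for c in clips]
print("ranking for clip 0 as query:", rank(db[0], db))
```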


Robust Real-time Detection of Abandoned Objects using a Dual Background Model

  • Park, Hyeseung;Park, Seungchul;Joo, Youngbok
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.2 / pp.771-788 / 2020
  • Detection of abandoned objects for smart video surveillance should be robust and accurate in various situations with low computational costs. This paper presents a new algorithm for abandoned object detection based on a dual background model. Through the template registration of candidate stationary objects and the presence authentication methods presented in this paper, we can handle complex cases such as occlusions, illumination changes, long-term abandonment, and the owner's re-attendance, as well as the general detection of abandoned objects. The proposed algorithm also analyzes video frames at specific intervals rather than consecutive video frames to reduce the computational overhead. For performance evaluation, we experimented with the algorithm on the well-known PETS2006 and ABODA datasets and on our own video dataset in a live streaming environment, which shows that the proposed algorithm works well in various situations.
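
A minimal sketch of the dual background idea, assuming two running-average backgrounds with different learning rates: a region that is still foreground against the slowly adapting long-term background but has already been absorbed into the fast short-term background is flagged as a candidate stationary object. Learning rates and threshold are illustrative assumptions.

```python
import numpy as np

# Dual background sketch for abandoned-object candidates: foreground against the
# slow long-term model but already absorbed into the fast short-term model.
# Learning rates and threshold are illustrative assumptions.

ALPHA_SHORT, ALPHA_LONG, THRESH = 0.10, 0.002, 30

def update(bg: np.ndarray, frame: np.ndarray, alpha: float) -> np.ndarray:
    return (1 - alpha) * bg + alpha * frame

def candidate_mask(bg_short, bg_long, frame):
    fg_long = np.abs(frame - bg_long) > THRESH      # still differs from long-term model
    fg_short = np.abs(frame - bg_short) > THRESH    # already absorbed if False
    return fg_long & ~fg_short                      # stationary-object candidates

# Toy loop over synthetic frames sampled at intervals (not every consecutive frame).
rng = np.random.default_rng(3)
frame = rng.integers(0, 256, (240, 320)).astype(np.float32)
bg_short, bg_long = frame.copy(), frame.copy()
for t in range(50):
    frame = frame.copy()
    if t == 10:
        frame[100:140, 150:200] = 200.0             # an object is left in the scene
    bg_short = update(bg_short, frame, ALPHA_SHORT)
    bg_long = update(bg_long, frame, ALPHA_LONG)
print("candidate pixels:", int(candidate_mask(bg_short, bg_long, frame).sum()))
```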