• Title/Summary/Keyword: Video enhancement


Adaptive Enhancement of Low-light Video Images Algorithm Based on Visual Perception (시각 감지 기반의 저조도 영상 이미지 적응 보상 증진 알고리즘)

  • Li Yuan;Byung-Won Min
    • Journal of Internet of Things and Convergence
    • /
    • v.10 no.2
    • /
    • pp.51-60
    • /
    • 2024
  • Addressing the low contrast and poor recognizability of video images captured in low-light environments, we propose an adaptive contrast compensation enhancement algorithm based on human visual perception. First, characteristic factors of the low-light video image are extracted: the average luminance (AL) and the average bandwidth factor (ABWF). A mathematical model of human visual contrast resolution compensation (CRC) is then established according to the grayscale/chromaticity differences of the original image, and the proportions of the three true-color primaries are each compensated by integration. When the degree of compensation falls below the just-distinguishable difference of photopic vision, a compensation threshold is set to linearly compensate that range up to the full bandwidth. Finally, an automatic optimization model for the compensation ratio coefficient is established by combining subjective image quality evaluation with the image characteristic factors. Experimental results show that the adaptive enhancement algorithm achieves a good enhancement effect and good real-time performance, effectively recovers dark-region information, and can be applied in a wide range of scenes.
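
As a rough illustration of the per-channel compensation idea described above, the sketch below stretches each color channel according to its occupied bandwidth and falls back to a linear full-range stretch when the compensation stays below a just-noticeable difference. The definitions of AL and ABWF, the gain rule, and the `jnd` threshold are assumptions for illustration, not the paper's exact CRC model.

```python
# Hypothetical sketch of per-channel low-light compensation (not the paper's exact CRC model).
import numpy as np

def enhance_low_light(frame_rgb, jnd=0.02, target_bandwidth=255.0):
    """Adaptively stretch a low-light RGB frame with values in [0, 255]."""
    img = frame_rgb.astype(np.float64)
    # Characteristic factors (assumed definitions): average luminance per channel
    # and the occupied grey-level bandwidth relative to the full range.
    al = img.mean(axis=(0, 1))                          # average luminance (AL) per channel
    bandwidth = img.max(axis=(0, 1)) - img.min(axis=(0, 1))
    abwf = bandwidth / target_bandwidth                 # average bandwidth factor (ABWF)

    out = np.empty_like(img)
    for c in range(3):
        # Gain proportional to how far the channel falls short of the full bandwidth.
        gain = 1.0 + (1.0 - abwf[c])
        compensated = (img[..., c] - img[..., c].min()) * gain
        # If the compensation stays below a just-noticeable difference, linearly
        # stretch the channel to the full bandwidth instead.
        if (compensated.mean() - al[c]) / target_bandwidth < jnd:
            rng = max(bandwidth[c], 1.0)
            compensated = (img[..., c] - img[..., c].min()) * (target_bandwidth / rng)
        out[..., c] = compensated
    return np.clip(out, 0, 255).astype(np.uint8)
```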

Methods for Video Caption Extraction and Extracted Caption Image Enhancement (영화 비디오 자막 추출 및 추출된 자막 이미지 향상 방법)

  • Kim, So-Myung;Kwak, Sang-Shin;Choi, Yeong-Woo;Chung, Kyu-Sik
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.4
    • /
    • pp.235-247
    • /
    • 2002
  • Efficient indexing and retrieval of digital video data require research on video caption extraction and recognition. This paper proposes methods for extracting artificial captions from video data and enhancing their image quality for accurate Hangul and English character recognition. In the proposed methods, we first find the beginning and ending frames that carry the same caption contents and combine the multiple frames in each group by a logical operation to remove background noise. During this process, an evaluation is performed to detect integrated results that mix different caption images. After the multiple video frames are integrated, four image enhancement techniques are applied: resolution enhancement, contrast enhancement, stroke-based binarization, and morphological smoothing. Applying these operations to the video frames improves the image quality even for characters with complex strokes. Finding the beginning and ending frames of the same caption contents can also be used effectively for digital video indexing and browsing. We tested the proposed methods on video caption images from movies containing both Hangul and English characters and obtained improved character recognition results.
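
A minimal sketch of the multi-frame integration and enhancement pipeline is given below, assuming bright caption text over a changing background; Otsu thresholding and a closing operation stand in for the paper's stroke-based binarization and morphological smoothing, so the details are illustrative rather than the authors' exact method.

```python
# Illustrative caption-enhancement pipeline: multi-frame integration,
# upscaling, contrast enhancement, binarization, morphological smoothing.
import cv2
import numpy as np

def enhance_caption(frames_gray, scale=2):
    """frames_gray: list of uint8 grayscale caption regions from the same caption group."""
    # 1. Integrate frames of the same caption: the pixel-wise minimum keeps the
    #    consistently bright strokes while darkening the varying background.
    stack = np.stack(frames_gray).astype(np.uint8)
    integrated = stack.min(axis=0)

    # 2. Resolution enhancement (simple bicubic upscaling).
    up = cv2.resize(integrated, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)

    # 3. Contrast enhancement.
    up = cv2.equalizeHist(up)

    # 4. Binarization (Otsu used here for illustration).
    _, binary = cv2.threshold(up, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 5. Morphological smoothing of character strokes.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    return cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
```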

EFFICIENT MULTIVIEW VIDEO CODING BY OBJECT SEGMENTATION

  • Boonthep, Narasak;Chiracharit, Werapon;Chamnongthai, Kosin;Ho, Yo-Sung
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.294-297
    • /
    • 2009
  • Multi-view video consists of a set of video sequences captured from multiple viewpoints or view directions of the same scene. It contains an extremely large amount of data plus extra information to be stored or transmitted to the user. This paper exploits inter-view correlations among video objects and the background to reduce prediction complexity while achieving high coding efficiency in multi-view video coding. Our proposed algorithm is based on an object-based segmentation scheme that uses video object information obtained from the coded base view. This information helps us predict disparity vectors and motion vectors in the enhancement views through object registration, which leads to a high-compression, low-complexity coding scheme for the enhancement views. Experimental results show a PSNR gain of 2.5 to 3 dB compared to simulcast coding.
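
To make the object-registration idea concrete, the hypothetical sketch below matches an object region segmented in the base view against a search window in an enhancement view and uses the resulting displacement as a disparity-vector predictor; the bounding-box interface and template-matching step are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: derive a per-object disparity-vector predictor for an
# enhancement view by registering an object region taken from the coded base view.
import cv2
import numpy as np

def predict_disparity(base_view, enh_view, obj_bbox, search_margin=32):
    """base_view, enh_view: uint8 grayscale images; obj_bbox = (x, y, w, h) in the base view."""
    x, y, w, h = obj_bbox
    template = base_view[y:y + h, x:x + w]

    # Restrict the search to a window around the object's base-view position.
    x0 = max(x - search_margin, 0)
    y0 = max(y - search_margin, 0)
    x1 = min(x + w + search_margin, enh_view.shape[1])
    y1 = min(y + h + search_margin, enh_view.shape[0])
    window = enh_view[y0:y1, x0:x1]

    # Object registration by normalized cross-correlation.
    res = cv2.matchTemplate(window, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)
    best_x, best_y = max_loc[0] + x0, max_loc[1] + y0

    # The displacement serves as the disparity-vector predictor for blocks
    # covered by this object in the enhancement view.
    return (best_x - x, best_y - y)
```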


Efficient Transmission of Scalable Video Streams Using Dual-Channel Structure (듀얼 채널 구조를 이용한 Scalable 비디오(SVC)의 전송 성능 향상)

  • Yoo, Homin;Lee, Jaemyoun;Park, Juyoung;Han, Sanghwa;Kang, Kyungtae
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.2 no.9
    • /
    • pp.381-392
    • /
    • 2013
  • During the last decade, advances in terminal computers, the introduction of mobile hand-held devices, and the deployment of high-speed networks have led to a surge of interest in Quality of Service (QoS) for video applications. The main difficulty is that mobile devices experience disparate channel conditions, which results in different rates and patterns of packet loss. One way to make more efficient use of network resources when serving video over wireless channels with heterogeneous characteristics to heterogeneous types of mobile device is to use scalable video coding (SVC). SVC divides a video stream into a base layer and one or more enhancement layers. We have to ensure that the base layer of the video stream is successfully received and decoded by the subscribers, because it provides the basis for the subsequent decoding of the enhancement layer(s). At the same time, the system should be designed so that the enhancement layer(s) can be decoded by as many users as possible, so that the average QoS is as high as possible. To meet these requirements, we propose an efficient transmission scheme that incorporates SVC-aware dual-channel repetition to improve the perceived quality of service. We repeat the base-layer data over two channels with different characteristics to exploit transmission diversity, while both channels are also used to increase the data rate of the enhancement-layer data. This arrangement reduces service disruption under poor channel conditions by protecting the data that matters most to video decoding. Simulations show that our scheme safeguards the important packets and improves perceived video quality at a mobile device.
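
A minimal scheduling sketch of the dual-channel idea follows: base-layer packets are duplicated on both channels for transmission diversity, while enhancement-layer packets alternate between the channels to raise their data rate. The packet fields and the round-robin split are assumptions for illustration, not the paper's protocol.

```python
# SVC-aware dual-channel scheduling sketch (illustrative, not the paper's scheme).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Packet:
    seq: int
    layer: str          # "base" or "enh"
    payload: bytes = b""

def schedule(packets: List[Packet]) -> Tuple[List[Packet], List[Packet]]:
    channel_a: List[Packet] = []
    channel_b: List[Packet] = []
    toggle = False
    for pkt in packets:
        if pkt.layer == "base":
            # Duplicate the base layer on both channels so it survives a bad channel.
            channel_a.append(pkt)
            channel_b.append(pkt)
        else:
            # Split enhancement-layer packets across channels to raise their rate.
            (channel_a if toggle else channel_b).append(pkt)
            toggle = not toggle
    return channel_a, channel_b
```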

Robust Scalable Video Transmission using Adaptive Multiple Reference Motion Compensated Prediction (적응 다중 참조 이동 보상을 이용한 에러에 강인한 스케일러블 동영상 전송 기법)

  • 김용관;김승환;이상욱
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.3C
    • /
    • pp.408-418
    • /
    • 2004
  • In this paper, we propose a novel scalable video coding algorithm based on an adaptively weighted multiple-reference-frame method. To improve coding efficiency in the enhancement layer, the enhancement frame is predicted as an adaptively weighted sum of two motion-compensated frames from the enhancement layer and the current frame from the base layer, with the weights chosen according to the input video characteristics. By employing an adaptive reference selection scheme at the decoder, the proposed method significantly reduces the drift problem. Experimental results show that the proposed algorithm achieves more than 1.0 dB of PSNR improvement over conventional scalable H.263+ under various packet-loss channel conditions.
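
The sketch below illustrates, under an assumed candidate weight set, how an encoder could pick the adaptively weighted combination of two motion-compensated enhancement-layer references and the co-located base-layer frame that best matches the frame being coded; it is not the paper's exact weighting or signalling scheme.

```python
# Illustrative adaptive weighted prediction: exhaustively try candidate weight
# triples (summing to 1) and keep the combination with the smallest SAD.
import numpy as np
from itertools import product

def adaptive_weighted_prediction(mc_ref1, mc_ref2, base_frame, target):
    """All inputs are float arrays of the same shape; target is the frame being coded."""
    candidates = [w for w in product((0.0, 0.25, 0.5, 0.75, 1.0), repeat=3)
                  if abs(sum(w) - 1.0) < 1e-9]
    best_pred, best_sad, best_w = None, float("inf"), None
    for w1, w2, w3 in candidates:
        pred = w1 * mc_ref1 + w2 * mc_ref2 + w3 * base_frame
        sad = np.abs(pred - target).sum()
        if sad < best_sad:
            best_pred, best_sad, best_w = pred, sad, (w1, w2, w3)
    # The chosen weights (or an index into the candidate set) would be signalled
    # to the decoder so it can apply the same reference selection.
    return best_pred, best_w
```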

A Practical RTP Packetization Scheme for SVC Video Transport over IP Networks

  • Seo, Kwang-Deok;Kim, Jin-Soo;Jung, Soon-Heung;Yoo, Jeong-Ju
    • ETRI Journal
    • /
    • v.32 no.2
    • /
    • pp.281-291
    • /
    • 2010
  • Scalable video coding (SVC) has been standardized as an extension of the H.264/AVC standard. This paper proposes a practical real-time transport protocol (RTP) packetization scheme for transporting SVC video over IP networks. In the combined scalability of SVC, a coded picture of the base layer or of a scalable enhancement layer is produced as one or more video layers consisting of network abstraction layer (NAL) units. The SVC NAL unit header contains a (DID, TID, QID) field that identifies the association of each SVC NAL unit with its scalable enhancement layer without parsing the payload part of the SVC NAL unit. In this paper, we utilize the (DID, TID, QID) information to derive the hierarchical spatio-temporal relationship of the SVC NAL units. Based on this derivation, we propose a practical RTP packetization scheme for generating single RTP sessions in unicast and multicast transport of SVC video. The experimental results indicate that the proposed packetization scheme can be efficiently applied to transport SVC video over IP networks with little induced delay, jitter, or computational load.
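
For reference, a small sketch of extracting the (DID, QID, TID) triplet from the 3-byte SVC NAL unit header extension (NAL unit types 14 and 20, following the layout of H.264 Annex G / RFC 6190) is shown below; how the triplet is then mapped onto RTP packets is omitted, since that is the paper's own contribution.

```python
# Parse DID/QID/TID from the SVC NAL unit header extension (types 14 and 20).
def parse_svc_nal_header(nal: bytes):
    """nal: raw NAL unit bytes (1-byte header followed by the 3-byte SVC extension)."""
    if len(nal) < 4:
        return None
    nal_unit_type = nal[0] & 0x1F
    if nal_unit_type not in (14, 20):
        return None  # not a prefix NAL unit or coded slice extension
    b1, b2, b3 = nal[1], nal[2], nal[3]
    priority_id = b1 & 0x3F
    dependency_id = (b2 >> 4) & 0x07       # DID: spatial/CGS layer
    quality_id = b2 & 0x0F                 # QID: quality (MGS) layer
    temporal_id = (b3 >> 5) & 0x07         # TID: temporal layer
    return dependency_id, temporal_id, quality_id, priority_id
```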

SUPER RESOLUTION RECONSTRUCTION FROM IMAGE SEQUENCE

  • Park Jae-Min;Kim Byung-Guk
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.197-200
    • /
    • 2005
  • Super-resolution image reconstruction refers to image processing algorithms that produce a high-resolution (HR) image from several observed low-resolution (LR) images of the same scene. The method has proved useful in many practical cases where multiple frames of the same scene can be obtained, such as satellite imaging, video surveillance, video enhancement and restoration, digital mosaicking, and medical imaging. In this paper, we apply a spatial-domain super-resolution reconstruction method to video sequences. The test images are adjacently sampled frames from continuous video sequences with a high overlap rate. We construct the observation model between the HR image and the LR images and apply Maximum A Posteriori (MAP) reconstruction, one of the major methods for super-resolution reconstruction. Based on this method, we reconstruct high-resolution images from low-resolution images and compare the results with those from other well-known interpolation methods.
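
A compact gradient-descent sketch of MAP super-resolution is given below, assuming pre-registered LR frames, a Gaussian blur plus decimation observation model, and a Laplacian smoothness prior; the paper's exact observation model, registration step, and prior are not reproduced.

```python
# Gradient-descent MAP super-resolution sketch with an assumed observation model.
import numpy as np
from scipy.ndimage import gaussian_filter, laplace, zoom

def map_super_resolution(lr_frames, scale=2, lam=0.05, step=0.1, iters=50):
    """lr_frames: list of registered low-resolution frames (2-D float arrays)."""
    # Initialize the HR estimate by upscaling the first LR frame.
    hr = zoom(lr_frames[0].astype(np.float64), scale, order=3)
    for _ in range(iters):
        grad = np.zeros_like(hr)
        for lr in lr_frames:
            # Forward model: blur the HR estimate, then decimate to the LR grid.
            sim = gaussian_filter(hr, sigma=1.0)[::scale, ::scale]
            residual = sim - lr
            # Adjoint of the forward model: zero-fill back to the HR grid, then blur.
            up = np.zeros_like(hr)
            up[::scale, ::scale] = residual
            grad += gaussian_filter(up, sigma=1.0)
        # Prior term: penalize high-frequency content via the Laplacian.
        grad += lam * laplace(laplace(hr))
        hr -= step * grad
    return hr
```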


ViVa: Mobile Video Quality Enhancement System Based on Cloud Offloading (ViVa: 클라우드 오프로딩 기반의 모바일 영상 품질 향상)

  • Jo, Bokyun;Suh, Doug Young
    • Journal of Broadcast Engineering
    • /
    • v.24 no.2
    • /
    • pp.292-298
    • /
    • 2019
  • In this paper, we show how to provide a high-quality video service using a cloud server and a video quality enhancement algorithm. Based on the concept of ViVa (Video Value Addition) proposed in this paper, the system improves on existing streaming services by delivering high-quality video at the transmission bit rate and computational cost otherwise required to serve low-quality video.

Selection of Scalable Video Coding Layer Considering the Required Peak Signal to Noise Ratio and Amount of Received Video Data in Wireless Networks (무선 네트워크에서 요구되는 평균 최대 신호 대 잡음비와 수신 비디오 데이터양을 고려하는 스케일러블 비디오 코딩 계층 선택)

  • Lee, Hyun-No;Kim, Dong-Hoi
    • Journal of Digital Contents Society
    • /
    • v.17 no.2
    • /
    • pp.89-96
    • /
    • 2016
  • Scalable Video Coding (SVC), one of the video encoding technologies, produces video streams with various frame rates, resolutions, and quality levels by combining three scalability dimensions: temporal, spatial, and quality scalability. An SVC-encoded video stream consists of one base layer and several enhancement layers, and a wireless AP (Access Point) selects and sends a suitable layer according to the received power of the receiving terminals in a changing wireless network environment, so SVC-capable terminals can receive video with a resolution and quality appropriate to their received power. In this paper, after analyzing the received power, packet loss rate, PSNR (Peak Signal-to-Noise Ratio), video quality level, and amount of received video data as a function of the number of SVC layers, we propose an efficient method for selecting the number of SVC layers that satisfies the required PSNR while minimizing the amount of received video data.
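
The selection rule described above can be sketched as follows: given a cumulative per-layer table of estimated PSNR and data volume (an illustrative structure, not the paper's), pick the fewest layers whose PSNR meets the requirement.

```python
# Illustrative SVC layer selection: fewest layers that satisfy the required PSNR,
# which also minimizes the amount of received video data.
def select_layer_count(layers, required_psnr_db):
    """layers[i] describes cumulative reception of layers 0..i, e.g.
    [{"psnr": 30.1, "kbytes": 120}, {"psnr": 33.4, "kbytes": 210}, ...]"""
    for count, info in enumerate(layers, start=1):
        if info["psnr"] >= required_psnr_db:
            return count, info          # fewest layers satisfying the required PSNR
    return len(layers), layers[-1]      # requirement not reachable: use all layers
```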

Applied Video Statistics

  • Beek, W.H.M. Van;Cordes, C.N.;Raman, N.
    • Korean Information Display Society: Conference Proceedings
    • /
    • 2005.07b
    • /
    • pp.1584-1587
    • /
    • 2005
  • Although the picture quality of today's displays is already very good, continuous improvement is desirable because new, larger display sizes increase the visibility of artifacts. One contributing factor to picture quality enhancement through smart video processing and algorithm design is the information gathered from video statistics. Parameters of interest gathered from video statistics include, for example, the image and display load, the usage of the color gamut, the estimated power consumption, and the occurrence of static image parts. Examples of applications that can benefit from video statistics are power calculations, color gamut mapping algorithms, dynamic backlight control for LCD panels, and LED backlights for LCD panels.
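
As a rough illustration, the sketch below computes a few per-frame statistics of the kind mentioned above (average picture level as a proxy for image load, a luminance histogram, and a static-region fraction from frame differencing); the exact definitions used by the authors are not given in the abstract, so these are assumptions.

```python
# Illustrative per-frame video statistics (assumed definitions).
import numpy as np

def frame_statistics(frame, prev_frame=None, static_threshold=2.0):
    """frame, prev_frame: 2-D luminance arrays with values in [0, 255]."""
    stats = {
        "average_picture_level": float(frame.mean()) / 255.0,   # proxy for image load
        "peak_level": float(frame.max()) / 255.0,
        "histogram": np.histogram(frame, bins=16, range=(0, 255))[0],
    }
    if prev_frame is not None:
        # Fraction of pixels that barely changed: a crude static-image-part estimate.
        diff = np.abs(frame.astype(np.float64) - prev_frame.astype(np.float64))
        stats["static_fraction"] = float((diff < static_threshold).mean())
    return stats
```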
