• Title/Summary/Keyword: Illumination Compensation

Comparative Study on Illumination Compensation Performance of Retinex model and Illumination-Reflectance model (레티넥스 모델과 조명-반사율 모델의 조명 보상 성능 비교 연구)

  • Chung, Jin-Yun; Yang, Hyun-Seung
    • Journal of KIISE: Software and Applications / v.33 no.11 / pp.936-941 / 2006
  • To apply object recognition techniques in real environments, an effective illumination compensation method is needed. We focus on two such models, the Retinex model and the illumination-reflectance model, implement both, and compare their performance. The Retinex model is implemented as Single Scale Retinex, Multi-Scale Retinex, and their neural-network counterparts, the Retinex Neural Network and the Multi-Scale Retinex Neural Network. The illumination-reflectance model computes a reflectance image by first estimating an illumination image through low-frequency filtering in the Discrete Cosine Transform and Wavelet Transform domains, as well as by Gaussian blurring. We compare the illumination compensation performance of the models on facial images under nine illumination directions, with and without post-processing by Principal Component Analysis (PCA). The illumination-reflectance model showed better performance, and overall performance improved further when the compensated images were post-processed with PCA.
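
A minimal illustrative sketch of the two decompositions compared above, assuming grayscale input and a Gaussian low-pass filter for the illumination estimate (function names, sigma values, and the random test image are placeholders, not the authors' implementation):

```python
# Sketch: single-scale Retinex vs. Gaussian-blur illumination-reflectance decomposition.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img, sigma=80.0, eps=1e-6):
    """R(x, y) = log I(x, y) - log[G_sigma * I](x, y)."""
    img = img.astype(np.float64) + eps
    blurred = gaussian_filter(img, sigma=sigma) + eps
    return np.log(img) - np.log(blurred)

def illumination_reflectance(img, sigma=30.0, eps=1e-6):
    """Estimate illumination by low-pass (Gaussian) filtering; reflectance = I / L."""
    img = img.astype(np.float64) + eps
    illumination = gaussian_filter(img, sigma=sigma) + eps
    reflectance = img / illumination
    return illumination, reflectance

if __name__ == "__main__":
    face = np.random.rand(112, 92) * 255          # stand-in for a face image
    r_ssr = single_scale_retinex(face)
    _, r_ir = illumination_reflectance(face)
    print(r_ssr.shape, r_ir.shape)
```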

Color and Illumination Compensation Algorithm for 360 VR Panorama Image (360 VR 기반 파노라마 영상 구성을 위한 칼라 및 밝기 보상 알고리즘)

  • Nam, Da-yoon; Han, Jong-Ki
    • Journal of Broadcast Engineering / v.24 no.1 / pp.3-24 / 2019
  • Techniques for 360 VR services have been developed to improve the quality of stitched images and video, and illumination compensation is one of the important tools. Among conventional illumination compensation algorithms, gain-based and block-gain-based compensation have shown outstanding performance when constructing panorama pictures. However, they are less effective for 360 VR, because the illumination disparity among the multiple pictures in 360 VR is much larger than in conventional panorama generation, and the number of pictures to be stitched is also larger. We therefore propose a preprocessing tool that strengthens the illumination compensation algorithm and reduces degradation in the stitched picture of 360 VR systems. The proposed algorithm consists of a 'color compensation' step and an 'illumination compensation' step. Simulation results show that the proposed technique improves on the conventional techniques without additional complexity.
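
For context, a minimal sketch of the general gain-based idea referenced above: the source image is scaled so that its mean over an assumed overlap region matches the reference. This is a simplified stand-in, not the paper's color/illumination preprocessing; the mask, image values, and function names are illustrative only.

```python
# Sketch: simple gain-based illumination compensation over an overlap region.
import numpy as np

def gain_compensate(img_ref, img_src, overlap_mask):
    """Scale img_src so its mean over the overlap matches img_ref's mean."""
    ref_mean = img_ref[overlap_mask].mean()
    src_mean = img_src[overlap_mask].mean()
    gain = ref_mean / max(src_mean, 1e-6)
    return np.clip(img_src * gain, 0, 255), gain

if __name__ == "__main__":
    ref = np.full((100, 100), 120.0)
    src = np.full((100, 100), 90.0)               # darker neighbouring view
    mask = np.zeros((100, 100), dtype=bool)
    mask[:, 80:] = True                           # assumed 20-pixel overlap band
    compensated, g = gain_compensate(ref, src, mask)
    print(f"gain = {g:.3f}")
```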

Distributed Video Coding for Illumination Compensation of Multi-view Video

  • Park, Sean-Ae; Sim, Dong-Gyu; Jeon, Byeung-Woo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.4 no.6 / pp.1222-1236 / 2010
  • In this paper, we propose an improved distributed multi-view video coding method that is robust to illumination changes among views. Exploiting view dependency is often ineffective for multi-view video because each view has different intrinsic and extrinsic camera parameters. We present a modified distributed multi-view coding method that applies illumination compensation when generating side information. The proposed encoder entropy-codes the DC values of the discrete cosine transform (DCT) coefficients separately, so the decoder can generate more accurate side information by using the transmitted DC coefficients to compensate for illumination changes. The AC coefficients are coded with conventional entropy or channel coders depending on the frequency band. The proposed algorithm is about 0.1~0.5 dB better than conventional algorithms.
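
A rough sketch of how a decoder might use transmitted DC levels to correct the illumination of interpolated side information, under the assumption that each transmitted DC value is expressed as a block mean (the block size, array shapes, and names are illustrative, not the paper's codec):

```python
# Sketch: shift each side-information block so its mean matches the transmitted DC level.
import numpy as np

BLOCK = 8  # assumed DCT block size

def compensate_side_info(side_info, dc_means):
    """Shift each BLOCKxBLOCK block by the difference between the transmitted
    DC mean and the side-information block mean."""
    out = side_info.astype(np.float64).copy()
    h, w = side_info.shape
    for by in range(0, h, BLOCK):
        for bx in range(0, w, BLOCK):
            block = out[by:by + BLOCK, bx:bx + BLOCK]
            target_mean = dc_means[by // BLOCK, bx // BLOCK]
            block += target_mean - block.mean()
    return np.clip(out, 0, 255)

if __name__ == "__main__":
    si = np.random.randint(0, 256, (64, 64)).astype(np.float64)
    dc = np.full((8, 8), 128.0)                   # assumed transmitted DC means
    corrected = compensate_side_info(si, dc)
    print(corrected[:8, :8].mean())               # ~128 after compensation
```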

A Deblocking Filtering Method for Illumination Compensation in Multiview Video Coding (다시점 비디오 코딩에서 휘도 보상 방법에 적합한 디블록킹 필터링 방법)

  • Park, Min-Woo; Park, Gwang-Hoon
    • Journal of Broadcast Engineering / v.13 no.3 / pp.401-410 / 2008
  • Multiview Video Coding (MVC) contains a macroblock-based illumination compensation tool that compensates for illumination variation across views and along the temporal direction, which improves coding efficiency. However, because the tool compensates mean values on a macroblock basis, it also introduces visible blocking artifacts. The deblocking filter of MVC, which is identical to that of H.264/AVC, does not consider the illumination difference between illumination-compensated blocks and therefore cannot remove these artifacts effectively. This paper analyzes the blocking artifacts caused by illumination compensation and proposes a method that removes them with minimal changes to the H.264 deblocking filter. In the simulation results, the blocking artifacts are clearly eliminated in subjective comparisons, and the average bit-rate reduction reaches up to 1.44%.
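
A hedged sketch of the kind of rule such a filter modification could use, assuming the IC mean offsets of adjacent macroblocks are available when deciding the boundary strength (the threshold, values, and function are hypothetical, not the paper's method or the H.264/AVC specification):

```python
# Sketch: strengthen deblocking when adjacent blocks carry different IC mean offsets.
def boundary_strength(base_bs, ic_offset_p, ic_offset_q, threshold=2):
    """base_bs: strength from the ordinary H.264-style rules (0..4).
    ic_offset_p / ic_offset_q: illumination-compensation mean offsets of the
    two blocks that share the edge."""
    if abs(ic_offset_p - ic_offset_q) >= threshold:
        return min(4, max(base_bs, 2))   # force at least moderate filtering
    return base_bs

if __name__ == "__main__":
    print(boundary_strength(0, ic_offset_p=5, ic_offset_q=-3))  # -> 2
    print(boundary_strength(1, ic_offset_p=0, ic_offset_q=1))   # -> 1
```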

Illumination Compensation Algorithm based on Segmentation with Depth Information for Multi-view Image (깊이 정보를 이용한 영역분할 기반의 다시점 영상 조명보상 기법)

  • Kang, Keunho; Ko, Min Soo; Yoo, Jisang
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.4 / pp.935-944 / 2013
  • In this paper, a new illumination compensation algorithm based on segmentation with depth information is proposed to improve the coding efficiency of multi-view images. The reference image is first segmented into several layers, each composed of objects with similar depth values. Objects within the same layer are then separated by labeling each connected region in the layered image. The labeled reference depth image is warped to the position of the distorted view using a 3D warping algorithm, and an illumination compensation algorithm is applied to each pair of matched regions in the warped reference view and the distorted view. Occlusion regions produced by 3D warping are compensated with a global compensation method. Experimental results confirm that the proposed algorithm improves coding efficiency.
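
A simplified sketch of the layer-and-label idea described above, assuming depth quantization into a fixed number of layers and per-region mean matching; the 3D warping and occlusion handling steps are omitted, and all names and parameters are placeholders:

```python
# Sketch: depth-layer segmentation, connected-region labeling, per-region mean matching.
import numpy as np
from scipy.ndimage import label

def depth_layers(depth, n_layers=4):
    """Quantise depth into n_layers and label connected regions within each layer."""
    edges = np.linspace(depth.min(), depth.max(), n_layers + 1)[1:-1]
    bins = np.digitize(depth, edges)
    labels = np.zeros_like(bins)
    next_id = 1
    for layer in np.unique(bins):
        lab, n = label(bins == layer)
        labels[lab > 0] = lab[lab > 0] + next_id - 1
        next_id += n
    return labels

def compensate_regions(reference, distorted, labels):
    """Match the mean brightness of each labeled region to the reference view."""
    out = distorted.astype(np.float64).copy()
    for region_id in np.unique(labels):
        mask = labels == region_id
        out[mask] += reference[mask].mean() - distorted[mask].mean()
    return np.clip(out, 0, 255)

if __name__ == "__main__":
    depth = np.random.rand(64, 64)
    ref = np.random.randint(0, 256, (64, 64)).astype(float)
    dist = ref - 20                                # globally darker view
    regions = depth_layers(depth)
    print(np.abs(compensate_regions(ref, dist, regions) - ref).mean())
```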

Distributed Multi-view Video Coding Based on Illumination Compensation (조명보상 기반 분산 다시점 비디오 코딩)

  • Park, Sea-Nae; Sim, Dong-Gyu; Jeon, Byeung-Woo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.6 / pp.17-26 / 2008
  • In this paper, we propose a distributed multi-view video coding method that employs illumination compensation. Distributed multi-view video coding (DMVC) methods can be classified as temporal or inter-view interpolation-based, according to how the side information is generated. Inter-view interpolation exploits the characteristics of multi-view video to improve coding efficiency, but mismatched camera parameters and illumination changes between views can lead to inaccurate side information. We present a modified DMVC method that applies illumination compensation when generating the side information: in addition to parity bits for the AC coefficients, the encoder transmits the DC coefficients to the decoder, which uses them to compensate illumination changes and thus generate more accurate side information. The proposed algorithm is 0.1~0.2 dB better than the conventional algorithm that does not use illumination compensation.
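
A minimal sketch of inter-view interpolation followed by a DC-level alignment of the side information, under the assumption of a simple pixel-wise average of neighbouring views and a single global mean per frame (both are simplifications, not the authors' method):

```python
# Sketch: side information by inter-view interpolation, then DC-level alignment.
import numpy as np

def interpolate_side_info(left_view, right_view):
    """Simple inter-view interpolation: pixel-wise average of the neighbouring views."""
    return 0.5 * (left_view.astype(np.float64) + right_view.astype(np.float64))

def illumination_align(side_info, signalled_mean):
    """Global mean-level (DC) alignment of the side information."""
    return np.clip(side_info + (signalled_mean - side_info.mean()), 0, 255)

if __name__ == "__main__":
    left = np.full((48, 48), 100.0)
    right = np.full((48, 48), 140.0)
    si = interpolate_side_info(left, right)       # mean 120
    si = illumination_align(si, signalled_mean=110.0)
    print(si.mean())                              # ~110
```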

Omni-directional Image Generation Algorithm with Parametric Image Compensation (변수화된 영상 보정을 통한 전방향 영상 생성 방법)

  • Kim, Yu-Na; Sim, Dong-Gyu
    • Journal of Broadcast Engineering / v.11 no.4 s.33 / pp.396-406 / 2006
  • This paper proposes an omni-directional image generation algorithm with parametric image compensation. The algorithm generates an omni-directional image by transforming each planar image onto a spherical image in spherical coordinates. A parametric compensation method is presented to correct the vignetting and illumination distortions caused by the camera system and the lighting conditions. The proposed algorithm generates realistic, seamless omni-directional video and can synthesize any viewpoint from the stitched omni-directional image on the sphere. Experimental results show that the proposed system with vignetting and illumination compensation is approximately 1~4 dB better than the system that ignores these effects.
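
A sketch of a generic parametric compensation of this kind, assuming a polynomial radial vignetting gain and a global exposure match; the model form, coefficients, and function names are assumptions rather than the paper's parameters:

```python
# Sketch: radial vignetting correction followed by a global exposure match.
import numpy as np

def devignette(img, a=0.3, b=0.1):
    """gain(r) = 1 + a*r^2 + b*r^4, with r^2 normalised to [0, 1] at the corners."""
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = (((xx - w / 2) / (w / 2)) ** 2 + ((yy - h / 2) / (h / 2)) ** 2) / 2.0
    gain = 1.0 + a * r2 + b * r2 ** 2
    return np.clip(img * gain, 0, 255)

def exposure_match(img, target_mean):
    """Scale the whole image so its mean matches the stitching reference."""
    return np.clip(img * (target_mean / max(img.mean(), 1e-6)), 0, 255)

if __name__ == "__main__":
    frame = np.full((120, 160), 100.0)
    frame = devignette(frame)                     # brightens the corners
    frame = exposure_match(frame, target_mean=110.0)
    print(frame.mean())
```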

Real-time and Reconfigurable Hardware Filter for Face Recognition (얼굴 인식을 위한 실시간 재구성형 하드웨어 필터)

  • 송민규; 송승민; 동성수; 이종호; 이필규
    • Proceedings of the IEEK Conference / 2003.07c / pp.2645-2648 / 2003
  • In this paper, a real-time and reconfigurable hardware filter for face recognition is proposed and implemented on an FPGA chip using Verilog HDL. Face recognition is difficult in general because it is affected by noise and by variations in illumination. Commonly used filters such as the histogram equalization filter and the contrast stretching filter for image enhancement, together with an illumination compensation filter, are combined to achieve more effective illumination compensation. The proposed filter was designed and verified by debugging and simulation in hardware. Experimental results show that the proposed system can generate a selective set of real-time reconfigurable hardware filters suitable for face recognition in various situations.
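
Software reference versions of the two image-enhancement filters named above (histogram equalization and contrast stretching) may help clarify what the hardware filters compute; this is a NumPy sketch, not the Verilog design, and the percentile limits are arbitrary choices:

```python
# Sketch: histogram equalization and linear contrast stretching for 8-bit grayscale.
import numpy as np

def histogram_equalize(img):
    """Map intensities through the normalised cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255.0
    return cdf[img].astype(np.uint8)

def contrast_stretch(img, low_pct=2, high_pct=98):
    """Linearly stretch the [low_pct, high_pct] percentile range to [0, 255]."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    stretched = (img.astype(np.float64) - lo) / max(hi - lo, 1e-6) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    face = np.random.randint(60, 120, (112, 92), dtype=np.uint8)  # low-contrast input
    print(histogram_equalize(face).std(), contrast_stretch(face).std())
```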

Improved Motion Compensation Using Adjacent Pixels (인접 화소를 이용한 개선된 움직임 보상)

  • Seo, Jeong-Hoon; Kim, Jeong-Pil; Lee, Yung-Lyul
    • Journal of Broadcast Engineering / v.15 no.2 / pp.280-289 / 2010
  • The H.264/AVC standard uses efficient inter prediction to improve coding efficiency by reducing temporal redundancy between images. However, H.264/AVC does not encode video sequences with local illumination changes efficiently, so its coding efficiency drops when such changes occur. In this paper, we propose an improved motion compensation method that uses adjacent pixels and motion vector refinement to encode local illumination changes efficiently. The proposed method consistently improves the BD-PSNR (Bjøntegaard Delta Peak Signal-to-Noise Ratio) by 0.01~0.21 dB compared with H.264/AVC.
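
A sketch of the general idea of deriving a brightness model from adjacent pixels, assuming a least-squares scale/offset fit over template pixels; the model form and names are assumptions, and motion vector refinement is not shown:

```python
# Sketch: fit a scale/offset from adjacent pixels and apply it to the prediction block.
import numpy as np

def fit_scale_offset(cur_neighbors, ref_neighbors):
    """Least-squares fit cur ~ a * ref + b over the adjacent (template) pixels."""
    a, b = np.polyfit(ref_neighbors.ravel(), cur_neighbors.ravel(), 1)
    return a, b

def compensate_prediction(pred_block, a, b):
    """Apply the fitted illumination model to the motion-compensated prediction."""
    return np.clip(a * pred_block + b, 0, 255)

if __name__ == "__main__":
    ref_nb = np.array([100, 110, 120, 130], dtype=float)
    cur_nb = 0.9 * ref_nb + 12                     # local brightness change
    a, b = fit_scale_offset(cur_nb, ref_nb)
    pred = np.full((8, 8), 115.0)
    print(a, b, compensate_prediction(pred, a, b).mean())
```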

Low-complexity Local Illuminance Compensation for Bi-prediction mode (양방향 예측 모드를 위한 저복잡도 LIC 방법 연구)

  • Choi, Han Sol; Byeon, Joo Hyung; Bang, Gun; Sim, Dong Gyu
    • Journal of Broadcast Engineering / v.24 no.3 / pp.463-471 / 2019
  • This paper proposes a method for reducing the complexity of LIC (Local Illumination Compensation) in bi-directional inter prediction. LIC improves inter-prediction accuracy by performing local illumination compensation using the neighboring reconstructed samples of the current block and the reference block. Because the required weight and offset are derived from reconstructed samples at both the encoder and the decoder, coding efficiency improves without signaling any additional information; however, deriving them in both the encoding prediction step and the decoding step increases encoder and decoder complexity. This paper proposes two low-complexity LIC methods: the first applies illumination compensation with an offset only in bi-directional prediction, and the second applies LIC after the weighted-average step of the reference blocks obtained by bi-directional prediction. To evaluate the proposed method, BD-rate is compared with BMS-2.0.1 on classes B, C, and D of the MPEG standard test sequences under the RA (Random Access) condition. The results show average BD-rate losses of 0.29%, 0.23%, and 0.04% for Y, U, and V compared to BMS-2.0.1, with almost identical encoding/decoding time. Although some BD-rate is lost, the computational complexity of LIC is greatly reduced: the multiplications are removed and the additions are halved in the LIC parameter derivation.
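
A sketch of the two simplifications described above in their simplest form, assuming LIC parameters are derived from template (neighbouring reconstructed) samples: an offset-only correction applied after the bi-prediction average. Names and values are illustrative, not BMS-2.0.1 code:

```python
# Sketch: offset-only LIC applied after the weighted average of bi-prediction.
import numpy as np

def lic_offset_only(cur_template, ref_template):
    """Offset-only LIC: b = mean(current template) - mean(reference template),
    which needs no multiplications in the parameter derivation."""
    return float(cur_template.mean() - ref_template.mean())

def bi_predict_with_offset(pred0, pred1, offset):
    """Apply the offset once, after the weighted average of the two predictions."""
    avg = 0.5 * (pred0.astype(np.float64) + pred1.astype(np.float64))
    return np.clip(avg + offset, 0, 255)

if __name__ == "__main__":
    cur_t = np.array([90, 95, 100, 105], dtype=float)
    ref_t = np.array([100, 105, 110, 115], dtype=float)
    off = lic_offset_only(cur_t, ref_t)            # -10
    p0 = np.full((8, 8), 112.0)
    p1 = np.full((8, 8), 108.0)
    print(off, bi_predict_with_offset(p0, p1, off).mean())  # -> 100.0
```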