• Title/Summary/Keyword: Illumination Invariant

74 search results

An Improved Saliency Detection for Different Light Conditions

  • Ren, Yongfeng;Zhou, Jingbo;Wang, Zhijian;Yan, Yunyang
    • KSII Transactions on Internet and Information Systems (TIIS), v.9 no.3, pp.1155-1172, 2015
  • In this paper, we propose a novel saliency detection framework based on illumination-invariant features to improve the accuracy of saliency detection under different light conditions. The proposed algorithm is divided into three steps. First, we extract illumination-invariant features based on locality-sensitive histograms to reduce the effect of illumination. Second, a preliminary saliency map is obtained in the CIE Lab color space. Last, we use a region-growing method to fuse the illumination-invariant features and the preliminary saliency map into a new framework. In addition, we integrate spatial distinctness information, since salient objects are usually compact. Experiments on the benchmark dataset show that the proposed saliency detection framework outperforms state-of-the-art algorithms under different illuminants in the images.
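
The locality-sensitive histograms this abstract builds on (per-pixel histograms with exponentially decaying spatial weights) admit a linear-time, two-sweep computation. A minimal 1D sketch, assuming a simple uniform binning; the function name and parameters are illustrative, not the paper's code:

```python
def locality_sensitive_histograms(intensities, num_bins=8, alpha=0.9):
    """Per-pixel histograms with exponentially decaying spatial weights,
    computed in O(n * num_bins) via one left and one right recursive sweep."""
    n = len(intensities)
    bins = [min(v * num_bins // 256, num_bins - 1) for v in intensities]

    left = [[0.0] * num_bins for _ in range(n)]
    right = [[0.0] * num_bins for _ in range(n)]
    for p in range(n):                      # left-to-right sweep
        if p > 0:
            left[p] = [alpha * h for h in left[p - 1]]
        left[p][bins[p]] += 1.0
    for p in range(n - 1, -1, -1):          # right-to-left sweep
        if p < n - 1:
            right[p] = [alpha * h for h in right[p + 1]]
        right[p][bins[p]] += 1.0

    hists = []
    for p in range(n):
        h = [l + r for l, r in zip(left[p], right[p])]
        h[bins[p]] -= 1.0                   # own bin is counted in both sweeps
        hists.append(h)
    return hists
```

Extending to 2D images applies the same recurrences along rows and then columns.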

A Robust Hybrid Method for Face Recognition Under Illumination Variation (조명 변이에 강인한 하이브리드 얼굴 인식 방법)

  • Choi, Sang-Il
    • Journal of the Institute of Electronics and Information Engineers, v.52 no.10, pp.129-136, 2015
  • We propose a hybrid face recognition method to deal with illumination variation. For this, we extract discriminant features by using different illumination-invariant feature extraction methods. In order to utilize the advantages of each method, we evaluate the discriminant power of each feature by using the discriminant distance and then construct a composite feature from only the features that contain a large amount of discriminative information. The experimental results for the Multi-PIE, Yale B, AR and Yale databases show that the proposed method outperforms the individual illumination-invariant feature extraction methods on all the databases.

Shadow and Highlight Invariant Color Models

  • Lee, Ja-Yong;Kang, Hoon
    • Proceedings of the IEEK Conference, 2005.11a, pp.557-560, 2005
  • The color of objects varies with changes in illuminant color and viewing conditions. As a consequence, color boundaries are influenced by a large variety of imaging variables such as shadows, highlights, illumination, and material changes. Invariant color models are therefore useful for a large number of applications such as object recognition, detection, and segmentation. In this paper, we propose invariant color models that are independent of object geometry, object pose, and illumination. From these color models, color-invariant edges are derived. To show the validity of the proposed invariant color models, some examples are given.
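
The abstract does not spell out its models, but the classic c1c2c3 representation is one well-known color model that is invariant to shading and shadows (uniform intensity scaling cancels in each component); a sketch of that baseline, not necessarily the paper's proposal:

```python
import math

def c1c2c3(r, g, b):
    """Classic shadow/shading-invariant color model: each channel is
    measured against the maximum of the other two, so a common intensity
    scale factor cancels out."""
    return (math.atan2(r, max(g, b)),
            math.atan2(g, max(r, b)),
            math.atan2(b, max(r, g)))
```

Edges computed on these components respond to material changes rather than to shading boundaries.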
Robust HDR Video Synthesis Using Illumination Invariant Descriptor (밝기 변화에 강인한 특징 기술자를 이용한 고품질 HDR 동영상 합성)

  • Vo Van, Tu;Lee, Chul
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2017.06a, pp.83-84, 2017
  • We propose a novel high dynamic range (HDR) video synthesis algorithm for alternately exposed low dynamic range (LDR) videos. We first estimate correspondences between input frames using an illumination-invariant descriptor. Then, we synthesize an HDR frame with weights computed to maximize detail preservation in the output HDR frame. Experimental results demonstrate that the proposed algorithm provides high-quality HDR videos without noticeable artifacts.
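
The weighted merge of aligned exposures can be illustrated with the classic hat-weighting scheme; this is the standard Debevec-style merge, not the paper's detail-preservation weights:

```python
def hat_weight(z, z_min=0, z_max=255):
    """Triangle weighting: trust mid-range pixels, downweight values that
    are nearly under- or over-exposed."""
    mid = 0.5 * (z_min + z_max)
    return (z - z_min) if z <= mid else (z_max - z)

def merge_hdr(pixel_exposures):
    """Merge aligned LDR samples (value, exposure_time) of one pixel into a
    radiance estimate, weighting well-exposed samples more heavily."""
    num = den = 0.0
    for z, t in pixel_exposures:
        w = hat_weight(z)
        num += w * (z / t)       # linearized radiance ~ value / exposure time
        den += w
    return num / den if den else 0.0
```

In a real pipeline the samples come from the descriptor-based correspondences, so each pixel merges values of the same scene point across frames.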
Face detection in compressed domain using color balancing for various illumination conditions (다양한 조명 환경에서의 실시간 사용자 검출을 위한 압축 영역에서의 색상 조절을 사용한 얼굴 검출 방법)

  • Min, Hyun-Seok;Lee, Young-Bok;Shin, Ho-Chul;Lim, Eul-Gyoon;Ro, Yong-Man
    • Proceedings of the Korean HCI Society Conference, 2009.02a, pp.140-145, 2009
  • Significant attention has recently been drawn to human-robot interaction systems that use face detection technology. Most conventional face detection methods operate in the pixel domain. These pixel-based methods require high computational power, so they do not suit a robot environment in which computing resources and storage space are limited. Compensating for illumination variation is also necessary for reliable face detection. In this paper, we propose an illumination-invariant face detection method that is performed in the compressed domain. The proposed method uses a color balancing module to compensate for illumination variation. Experiments show that the proposed face detection method effectively increases the face detection rate under varying illumination.
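
A color balancing module of the kind described can be approximated in the compressed domain by applying gray-world gains to the per-block DC coefficients only, avoiding a full decode; a hypothetical sketch (the paper's exact balancing rule is not specified in the abstract):

```python
def gray_world_balance(dc_r, dc_g, dc_b):
    """Gray-world balancing over per-block DC coefficients: scale each
    channel so its mean matches the overall gray mean."""
    means = [sum(c) / len(c) for c in (dc_r, dc_g, dc_b)]
    gray = sum(means) / 3.0
    gains = [gray / m if m else 1.0 for m in means]
    return [[v * g for v in chan]
            for chan, g in zip((dc_r, dc_g, dc_b), gains)]
```
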
Illumination and Rotation Invariant Object Recognition (조명 영향 및 회전에 강인한 물체 인식)

  • Kim, Kye-Kyung;Kim, Jae-Hong;Lee, Jae-Yun
    • The Journal of the Korea Contents Association, v.12 no.11, pp.1-8, 2012
  • The application of object recognition technology has increased with the growing need to introduce automated systems in industry. However, objects distorted by noise and shadows caused by illumination pose a challenging problem for object detection and recognition. In this paper, an illumination-invariant object detection method using a DoG filter and adaptive thresholding is proposed that reduces the effects of noise and shadows while preserving the geometric features of objects. In addition, a rotation-invariant object recognition method is proposed, trained with a neural network using classes categorized by object type and rotation angle. Simulations to evaluate the feasibility of the proposed method show an accuracy of 99.86% and a matching speed of 0.03 seconds on the ETRI database, which contains 16,848 object images obtained under various lighting environments.
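
The DoG-plus-adaptive-threshold detection step can be sketched in 1D; the kernel sizes and the local-mean threshold rule below are illustrative choices, not the paper's exact parameters:

```python
import math

def gaussian1d(signal, sigma):
    """Naive 1D Gaussian smoothing with clamped borders."""
    radius = max(1, int(3 * sigma))
    kern = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    norm = sum(kern)
    kern = [k / norm for k in kern]
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for offset, k in enumerate(kern, start=-radius):
            acc += k * signal[min(max(i + offset, 0), n - 1)]
        out.append(acc)
    return out

def dog_edges(signal, sigma1=1.0, sigma2=2.0, local=5):
    """Difference-of-Gaussians band-pass response, thresholded against the
    local mean magnitude instead of a single global constant, so slowly
    varying shadows do not trigger detections."""
    dog = [a - b for a, b in zip(gaussian1d(signal, sigma1),
                                 gaussian1d(signal, sigma2))]
    edges = []
    for i, v in enumerate(dog):
        lo, hi = max(0, i - local), min(len(dog), i + local + 1)
        thresh = sum(abs(x) for x in dog[lo:hi]) / (hi - lo)
        edges.append(abs(v) > thresh)
    return edges
```

The band-pass nature of DoG suppresses both fine noise (high frequencies) and broad shading gradients (low frequencies), which is why it pairs well with illumination-robust detection.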

Illumination Invariant Face Tracking on Smart Phones using Skin Locus based CAMSHIFT

  • Bui, Hoang Nam;Kim, SooHyung;Na, In Seop
    • Smart Media Journal, v.2 no.4, pp.9-19, 2013
  • This paper reviews three illumination issues of face tracking on smart phones: dark scenes, sudden lighting changes, and the backlit effect. First, we propose a fast and robust face tracking method utilizing the continuously adaptive mean shift algorithm (CAMSHIFT) and a CbCr skin locus. Initially, the skin locus is obtained from training video data. After that, a modified CAMSHIFT version based on the skin locus is provided. Second, we suggest an enhancement method to increase the chance of detecting faces, an important initialization step for face tracking, under dark illumination. The proposed method performs comparably with traditional CAMSHIFT or a particle filter, and outperforms these methods when dealing with our public video data exhibiting the three illumination issues mentioned above.
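
The two ingredients, a CbCr skin locus and the mean-shift window update at the core of CAMSHIFT, can be sketched as follows; the rectangular locus below is a common textbook range, whereas the paper learns its locus from training video:

```python
def skin_mask(cb, cr, locus=((77, 127), (133, 173))):
    """Binary skin mask from a rectangular CbCr locus (illustrative box;
    a trained locus would replace these bounds)."""
    (cb_lo, cb_hi), (cr_lo, cr_hi) = locus
    return [[1 if cb_lo <= b <= cb_hi and cr_lo <= r <= cr_hi else 0
             for b, r in zip(row_b, row_r)]
            for row_b, row_r in zip(cb, cr)]

def mean_shift_step(mask, x, y, half=2):
    """One mean-shift iteration: move the window centre to the centroid of
    skin pixels inside it (the core CAMSHIFT update; CAMSHIFT additionally
    adapts the window size each frame)."""
    pts = [(i, j)
           for j in range(max(0, y - half), min(len(mask), y + half + 1))
           for i in range(max(0, x - half), min(len(mask[0]), x + half + 1))
           if mask[j][i]]
    if not pts:
        return x, y
    return (round(sum(p[0] for p in pts) / len(pts)),
            round(sum(p[1] for p in pts) / len(pts)))
```

Iterating `mean_shift_step` until the centre stops moving gives the per-frame tracking update.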
Invariant-Feature Based Object Tracking Using Discrete Dynamic Swarm Optimization

  • Kang, Kyuchang;Bae, Changseok;Moon, Jinyoung;Park, Jongyoul;Chung, Yuk Ying;Sha, Feng;Zhao, Ximeng
    • ETRI Journal, v.39 no.2, pp.151-162, 2017
  • With the remarkable growth of rich media in recent years, people are increasingly exposed to visual information from the environment. Visual information continues to play a vital role in rich media because people's real interests lie in dynamic information. This paper proposes a novel discrete dynamic swarm optimization (DDSO) algorithm for video object tracking using invariant features. The proposed approach is designed to track objects more robustly than other traditional algorithms with respect to illumination changes, background noise, and occlusions. DDSO is integrated with a matching procedure to geometrically eliminate inappropriate feature points. The proposed novel fitness function helps exclude the influence of noisy, mismatched feature points. The test results showed that our approach can handle changes in illumination, background noise, and occlusions more effectively than other traditional methods, including color-tracking and invariant-feature-tracking methods.
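
A generic discrete swarm search for the best window position conveys the flavor of this family of trackers; the paper's dynamic update rules and feature-point fitness are not reproduced, and the `score` function here is a placeholder for a frame-similarity measure:

```python
import random

def swarm_track(score, bounds, n_particles=20, iters=30, seed=0):
    """Discrete swarm search: particles jump a random fraction of the way
    toward the best position found so far, plus integer jitter, and the
    best-scoring grid position is kept."""
    rng = random.Random(seed)
    w, h = bounds
    parts = [(rng.randrange(w), rng.randrange(h)) for _ in range(n_particles)]
    best = max(parts, key=score)
    for _ in range(iters):
        new = []
        for x, y in parts:
            nx = x + (best[0] - x) * rng.random() + rng.randint(-2, 2)
            ny = y + (best[1] - y) * rng.random() + rng.randint(-2, 2)
            p = (min(max(int(nx), 0), w - 1), min(max(int(ny), 0), h - 1))
            new.append(p)
            if score(p) > score(best):
                best = p
        parts = new
    return best
```
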

Extended SURF Algorithm with Color Invariant Feature and Global Feature (컬러 불변 특징과 광역 특징을 갖는 확장 SURF(Speeded Up Robust Features) 알고리즘)

  • Yoon, Hyun-Sup;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP, v.46 no.6, pp.58-67, 2009
  • Correspondence matching is one of the important tasks in computer vision, and it is not easy to find corresponding points in variable environments where the scale, rotation, viewpoint, and illumination change. The SURF (Speeded Up Robust Features) algorithm has been widely used to solve the correspondence matching problem because it is faster than SIFT (Scale Invariant Feature Transform) while closely maintaining the matching performance. However, because SURF considers only gray images and local geometric information, it is difficult to match corresponding points in images where similar local patterns are scattered. In order to solve this problem, this paper proposes an extended SURF algorithm that uses invariant color and global geometric information. The proposed algorithm improves the matching performance since the color information and global geometric information are used to discriminate similar patterns. The superiority of the proposed algorithm is demonstrated by experiments comparing it with conventional methods on images where the illumination and viewpoint change and similar patterns exist.
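
The core idea, appending global geometric and color-invariant information to the local descriptor so that repeated local patterns become distinguishable, can be sketched as follows; helper names are hypothetical, and the ratio-test matcher is the standard one rather than the paper's:

```python
def extend_descriptor(desc, x, y, w, h, color_inv):
    """Concatenate a local descriptor with normalised image coordinates
    (global geometry) and colour-invariant values, so identical local
    patterns at different places or colours stop matching each other."""
    return list(desc) + [x / w, y / h] + list(color_inv)

def match(query, candidates, ratio=0.8):
    """Nearest-neighbour matching with a ratio test on the extended vectors:
    reject a match when the second-best candidate is nearly as close."""
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    ranked = sorted(range(len(candidates)),
                    key=lambda i: dist(query, candidates[i]))
    if (len(ranked) > 1 and
            dist(query, candidates[ranked[0]]) >
            ratio * dist(query, candidates[ranked[1]])):
        return None                      # ambiguous match rejected
    return ranked[0]
```
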

An Illumination Invariant Traffic Sign Recognition in the Driving Environment for Intelligence Vehicles (지능형 자동차를 위한 조명 변화에 강인한 도로표지판 검출 및 인식)

  • Lee, Taewoo;Lim, Kwangyong;Bae, Guntae;Byun, Hyeran;Choi, Yeongwoo
    • Journal of KIISE, v.42 no.2, pp.203-212, 2015
  • This paper proposes a traffic sign recognition method for real road environments. The video stream in driving environments has two characteristics that distinguish it from a general object video stream. First, the number of traffic sign types is limited and their shapes are mostly simple. Second, the camera cannot take clear pictures of road scenes, since there are many illumination changes and the weather conditions change continuously. In this paper, we improve the modified census transform (MCT) to extract features effectively from road scenes with many illumination changes. The extracted features are collected into histograms and transformed by dense descriptors into very high-dimensional vectors. Then, the high-dimensional descriptors are encoded into a low-dimensional feature vector by Fisher-vector coding with a Gaussian mixture model. The proposed method shows illumination-invariant detection and recognition, and its performance is sufficient to detect and recognize traffic signs in real time with high accuracy.
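
The baseline MCT that this paper improves upon compares each pixel of a 3x3 neighbourhood against the neighbourhood mean, yielding a 9-bit code that is unchanged by affine brightness changes (gain and offset); a sketch of that baseline, not the paper's refinement:

```python
def mct(image, x, y):
    """Modified census transform at (x, y): each pixel of the 3x3
    neighbourhood is compared against the neighbourhood mean, producing a
    9-bit code invariant to affine brightness changes."""
    patch = [image[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    mean = sum(patch) / 9.0
    code = 0
    for v in patch:
        code = (code << 1) | (1 if v > mean else 0)
    return code
```

Histogramming these codes over image cells gives the illumination-robust features that are then densely described and Fisher-vector encoded.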