• Title/Summary/Keyword: Illumination variance


Study on The Confidence Level of PCA-based Face Recognition Under Variable Illumination Conditions (조명 변화 환경에서 PCA 기반 얼굴인식 알고리즘의 신뢰도에 대한 연구)

  • Cho, Hyun-Jong; Kang, Min-Koo; Moon, Seung-Bin
    • Journal of the Institute of Electronics Engineers of Korea CI, v.46 no.2, pp.19-26, 2009
  • This paper studies how the recognition rate of PCA (Principal Component Analysis)-based face recognition changes with illumination variance, and assesses the confidence level of the algorithm by measuring the cumulative match score of the CMC (Cumulative Match Characteristic) curve. We examined the confidence level under illumination changes and the selection of training images, both by testing multiple illumination-varied training images per person against a single training image, and by changing the illumination conditions of the test images. The experiments show that the recognition rate drops when multiple training images are used compared to the single-training-image case. Nevertheless, we confirmed the confidence level of the algorithm under illumination variance by the fact that the training image matching the identity of the test image remains near the top of the similarity list regardless of illumination changes and the number of training images.
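
As a rough illustration of the kind of evaluation this abstract describes (not the authors' implementation), the sketch below projects face vectors with scikit-learn's PCA and computes a cumulative match characteristic from the rank of the correct identity. The nearest-neighbour matching by Euclidean distance and the stand-in data are assumptions for illustration only.

```python
# Minimal sketch: PCA-based matching and a CMC curve (illustrative only).
import numpy as np
from sklearn.decomposition import PCA

def cmc_curve(train_X, train_ids, test_X, test_ids, n_components=50):
    """Project faces with PCA and return cumulative match scores per rank."""
    pca = PCA(n_components=n_components).fit(train_X)
    gallery = pca.transform(train_X)            # enrolled (training) images
    probes = pca.transform(test_X)              # test images
    ranks = []
    for probe, true_id in zip(probes, test_ids):
        dists = np.linalg.norm(gallery - probe, axis=1)    # similarity by Euclidean distance
        order = train_ids[np.argsort(dists)]               # identities sorted best-first
        ranks.append(np.where(order == true_id)[0][0])     # rank of the correct identity
    ranks = np.asarray(ranks)
    # CMC value at rank k: fraction of probes whose correct identity is in the top-k list
    return np.array([(ranks < k).mean() for k in range(1, len(train_ids) + 1)])

# Usage with random stand-in data (replace with flattened face images):
rng = np.random.default_rng(0)
train_X = rng.normal(size=(100, 64 * 64)); train_ids = np.arange(100)
test_X = train_X + rng.normal(scale=0.1, size=train_X.shape); test_ids = train_ids.copy()
print(cmc_curve(train_X, train_ids, test_X, test_ids)[:5])
```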

A Study on Application of Illumination Models for Color Constancy of Objects (객체의 색상 항등성을 위한 조명 모델 응용에 관한 연구)

  • Park, Changmin
    • Journal of Korea Society of Digital Industry and Information Management, v.13 no.1, pp.125-133, 2017
  • Color in an image is determined by the illuminant and the surface reflectance, so recovering the intrinsic color of an object requires an accurate estimate of the illuminant. This study reviews illumination models proposed for achieving object color constancy, focusing on physical illumination models grounded in physical phenomena. Their characteristics and limits of application are presented, and the need for an extended illumination model is argued in order to recover more accurate object colors. The extended model should contain an additional term for ambient light to account for the spatial variance of illumination across object images; its necessity is verified through an experiment under a simple lighting environment. Finally, a reconstruction method that recovers input images under standard white-light illumination is evaluated, and a practical method for computing object color reflectivity, derived from a combination of the existing illumination models, is proposed and tested.
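
To make the role of the ambient term concrete, here is a minimal numerical sketch (my own simplification, not the paper's model): pixel values are formed as reflectance times a direct illuminant plus an ambient contribution, and the reflectance is recovered by inverting that model. The specific illuminant values are made up for illustration.

```python
# Minimal sketch of an illumination model with an ambient term (illustrative values).
import numpy as np

def render(reflectance, direct, ambient):
    """Simple image-formation model: I = R * (L_direct + L_ambient), per RGB channel."""
    return reflectance * (direct + ambient)

def recover_reflectance(image, direct_est, ambient_est):
    """Invert the model with estimated illuminants to approximate surface reflectance."""
    return image / (direct_est + ambient_est + 1e-8)

# A toy 2x2 RGB "object" with known reflectance.
reflectance = np.array([[[0.8, 0.2, 0.2], [0.2, 0.8, 0.2]],
                        [[0.2, 0.2, 0.8], [0.5, 0.5, 0.5]]])
direct = np.array([1.0, 0.9, 0.7])     # assumed warm direct light
ambient = np.array([0.1, 0.1, 0.2])    # assumed bluish ambient light

observed = render(reflectance, direct, ambient)
recovered = recover_reflectance(observed, direct, ambient)
print(np.allclose(recovered, reflectance, atol=1e-6))   # True when illuminant estimates are exact

# Re-rendering the recovered reflectance under a standard white illuminant:
white = np.array([1.0, 1.0, 1.0])
canonical = render(recovered, white, np.zeros(3))
```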

Image Contrast Enhancement by Illumination Change Detection (조명 변화 감지에 의한 영상 콘트라스트 개선)

  • Odgerel, Bayanmunkh; Lee, Chang Hoon
    • Journal of the Korean Institute of Intelligent Systems, v.24 no.2, pp.155-160, 2014
  • Many image-processing algorithms and applications fail when an illumination change occurs; the change therefore has to be detected, and the affected images enhanced, so that the downstream algorithm keeps working in practice. This paper introduces a new method for detecting illumination changes efficiently in real time using local region information and fuzzy logic. To detect illumination changes in a lit region and at the edge of that region, the method analyzes the mean and variance of each region's histogram and tracks how they change relative to the previous frame; these changes are used as the fuzzy inputs. The mean and variance form distinct patterns when an illumination change occurs, and fuzzy rules defined on these patterns detect the change. The proposed method was tested on several datasets using standard evaluation metrics and showed high specificity, recall, and precision. In addition, an automatic parameter-selection method for contrast-limited adaptive histogram equalization (CLAHE) was proposed, which chooses the parameter from the image entropy through an adaptive neuro-fuzzy inference system. The results show that image contrast can be enhanced. The proposed algorithm is robust in detecting global illumination changes and is computationally efficient in real applications.
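
A hedged sketch of the two stages named above: per-region histogram mean/variance changes as an illumination cue, followed by OpenCV's CLAHE. The fuzzy rules and the ANFIS-based parameter selection from the paper are not reproduced; the thresholds and the entropy-to-clip-limit mapping below are placeholders of my own.

```python
# Sketch: per-region mean/variance change as an illumination cue, then CLAHE.
# (Illustrative only; the paper uses fuzzy rules and an ANFIS to pick the CLAHE parameter.)
import cv2
import numpy as np

def region_stats(gray, grid=4):
    """Mean and variance of each cell in a grid x grid partition of the frame."""
    h, w = gray.shape
    cells = [gray[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
             for i in range(grid) for j in range(grid)]
    return np.array([(c.mean(), c.var()) for c in cells])

def illumination_changed(prev_gray, cur_gray, mean_thr=20.0, var_thr=400.0):
    """Crude detector: large jumps in cell means/variances between frames (placeholder thresholds)."""
    d = np.abs(region_stats(cur_gray) - region_stats(prev_gray))
    return (d[:, 0].mean() > mean_thr) or (d[:, 1].mean() > var_thr)

def enhance(gray):
    """CLAHE with a clip limit scaled by image entropy (heuristic stand-in for the ANFIS)."""
    hist = np.bincount(gray.ravel(), minlength=256) / gray.size
    entropy = -np.sum(hist[hist > 0] * np.log2(hist[hist > 0]))
    clip = 1.0 + entropy / 2.0                      # placeholder mapping, not from the paper
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(8, 8))
    return clahe.apply(gray)

# Usage on two consecutive uint8 grayscale frames prev, cur:
# if illumination_changed(prev, cur):
#     cur = enhance(cur)
```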

A Study on Analysis of Variant Factors of Recognition Performance for Lip-reading at Dynamic Environment (동적 환경에서의 립리딩 인식성능저하 요인분석에 대한 연구)

  • 신도성; 김진영; 이주헌
    • The Journal of the Acoustical Society of Korea, v.21 no.5, pp.471-477, 2002
  • Recently, lip-reading has been studied actively as an auxiliary method for automatic speech recognition (ASR) in noisy environments. However, most results have been obtained on databases constructed under indoor conditions, so it is unclear how robust existing lip-reading algorithms are to dynamic image variation. We have developed a lip-reading system based on an image-transform algorithm; it recognizes 22 words and achieves a word recognition rate of up to 53.54%. This paper examines how stable the system is under environmental variance and which factors are mainly responsible for the drop in word-recognition performance. To study robustness, we consider spatial variance (translation, rotation, scaling) and illumination variance, using two kinds of test data: a simulated lip-image database and a real dynamic database captured in a car environment. The experiments show that spatial variance degrades lip-reading performance, but it is not the dominant factor: illumination variance reduces recognition rates severely, by as much as 70%. We conclude that lip-reading algorithms robust to illumination variance must be developed before lip reading can serve as a complementary method for ASR.
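
One simple way to reproduce the kind of perturbations studied here (my own sketch, not the authors' test protocol) is to apply affine transforms and global brightness/contrast changes to lip images and re-run the recognizer on the degraded copies.

```python
# Sketch: simulating spatial variance (translation, rotation, scaling) and illumination
# variance on lip images for robustness testing (illustrative only).
import cv2
import numpy as np

def spatial_variant(img, dx=0, dy=0, angle=0.0, scale=1.0):
    """Translate, rotate, and scale the image about its centre."""
    h, w = img.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    M[:, 2] += (dx, dy)
    return cv2.warpAffine(img, M, (w, h))

def illumination_variant(img, gain=1.0, bias=0):
    """Global brightness/contrast change: I' = gain * I + bias."""
    return cv2.convertScaleAbs(img, alpha=gain, beta=bias)

# Example degradation set for one uint8 lip image `lip`:
# variants = [spatial_variant(lip, dx=4), spatial_variant(lip, angle=5),
#             spatial_variant(lip, scale=0.9), illumination_variant(lip, gain=0.5, bias=-20)]
# Each variant would then be fed to the word recognizer to measure the accuracy drop.
```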

Visual Voice Activity Detection and Adaptive Threshold Estimation for Speech Recognition (음성인식기 성능 향상을 위한 영상기반 음성구간 검출 및 적응적 문턱값 추정)

  • Song, Taeyup; Lee, Kyungsun; Kim, Sung Soo; Lee, Jae-Won; Ko, Hanseok
    • The Journal of the Acoustical Society of Korea, v.34 no.4, pp.321-327, 2015
  • In this paper, we propose an algorithm for robust Visual Voice Activity Detection (VVAD) to enhance speech recognition. Conventional VVAD algorithms detect visual speech frames by measuring the motion of the lip region with optical flow or chaos-inspired measures. Optical-flow-based VVAD is hard to adopt in driving scenarios because of its computational complexity, and although chaos-theory-based VVAD is invariant to illumination changes, it is sensitive to the motion translations caused by the driver's head movements. The proposed Local Variance Histogram (LVH) is robust to pixel intensity changes arising from both illumination change and translation. For better performance under environmental changes, we additionally propose a threshold estimation based on total variance change. Experimental results show that the proposed VVAD algorithm remains robust in various driving situations.
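
The abstract does not spell out how the Local Variance Histogram is built, so the sketch below is one plausible reading, stated as an assumption: histogram the variances of small patches in the lip region and compare histograms between frames to score lip motion.

```python
# Sketch of one possible Local Variance Histogram (LVH) reading for visual VAD.
# (An assumption; the abstract does not give the exact construction.)
import numpy as np

def local_variance_histogram(roi, patch=8, bins=32, max_var=2000.0):
    """Histogram of per-patch intensity variances over the lip ROI."""
    h, w = roi.shape
    variances = [roi[y:y+patch, x:x+patch].var()
                 for y in range(0, h - patch + 1, patch)
                 for x in range(0, w - patch + 1, patch)]
    hist, _ = np.histogram(variances, bins=bins, range=(0.0, max_var))
    return hist / max(hist.sum(), 1)

def frame_activity(prev_roi, cur_roi):
    """L1 distance between consecutive LVHs; larger values suggest lip motion (speech)."""
    return np.abs(local_variance_histogram(cur_roi) -
                  local_variance_histogram(prev_roi)).sum()

# A threshold on frame_activity, adapted over time (the paper estimates it from total
# variance change), would then mark speech vs. non-speech frames.
```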

Hand Raising Pose Detection in the Images of a Single Camera for Mobile Robot (주행 로봇을 위한 단일 카메라 영상에서 손든 자세 검출 알고리즘)

  • Kwon, Gi-Il
    • The Journal of Korea Robotics Society, v.10 no.4, pp.223-229, 2015
  • This paper proposes a novel method for detecting hand-raising poses in images acquired from a single camera attached to a mobile robot navigating unknown dynamic environments. Because of unconstrained illumination, high variance in human appearance, and unpredictable backgrounds, detecting hand-raising gestures in such images is very challenging. The proposed method first detects faces to determine the region of interest (ROI), then detects hands within this ROI with a HOG-based hand detector and evaluates each candidate hand region using the color distribution of the face region. To handle failures in face detection, a HOG-based hand-raising pose detector is also used. Unlike other hand-raising pose detection systems, we evaluate the algorithm both on images acquired from the robot's camera and on images from the Internet with unknown backgrounds, unconstrained illumination, and highly varied hand-raising poses. Experimental results show that the proposed method robustly detects hand-raising poses in complex backgrounds and unknown lighting conditions.
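
A skeleton of that pipeline is sketched below under stated assumptions: faces come from OpenCV's stock Haar cascade, candidate regions beside and above each face are proposed heuristically, and HOG descriptors are computed for a classifier that is not included here (the paper's trained hand and pose detectors are not publicly reproduced in this sketch).

```python
# Pipeline sketch: face-based ROI, then HOG features on hand candidates (illustrative).
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
hog = cv2.HOGDescriptor()   # default 64x128 window; a real hand detector would be retrained

def hand_search_rois(gray):
    """Regions beside/above each detected face where a raised hand is plausible."""
    rois = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        top = max(y - 2 * h, 0)
        rois.append((max(x - 2 * w, 0), top, x, y + h))                   # left of the face
        rois.append((x + w, top, min(x + 3 * w, gray.shape[1]), y + h))   # right of the face
    return rois

def hog_feature(gray_patch):
    """HOG descriptor of a candidate patch, resized to the detector window."""
    patch = cv2.resize(gray_patch, (64, 128))
    return hog.compute(patch).ravel()

# A trained classifier (e.g., a linear SVM over hog_feature outputs, plus the skin-colour
# check described in the abstract) would score each ROI; that model is not included here.
```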

Face Detection based on Pupil Color Distribution Maps with the Frequency under the Illumination Variance (빈도수를 고려한 눈동자색 분포맵에 기반한 조명 변화에 강건한 얼굴 검출 방법)

  • Cho, Han-Soo
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.9 no.5, pp.225-232, 2009
  • In this paper, a new face detection method is proposed that is robust to illumination variance, based on pupil color distribution maps built from occurrence frequencies. Face-like regions are first extracted by applying skin color distribution maps to a color image and are then narrowed down using the standard deviation of the chrominance components. To search for eye candidates effectively, the method extracts eye-like regions from the face-like regions using the pupil color distribution maps. It detects eyes reliably by segmenting the eye-like regions with a lighting compensation technique and a segmentation algorithm, even when face regions become dark under varying illumination. Eye candidates are then detected by template matching, and face regions are finally determined from the evaluation scores of two eye candidates and a mouth. Experimental results show that the proposed method achieves high performance.
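
Only the first stage is sketched here, and with a substitution: instead of the paper's learned skin/pupil color distribution maps, a common heuristic Cr/Cb skin range in YCrCb is used to produce face-like candidate regions.

```python
# Sketch of the first stage only: skin-colour candidate regions in YCrCb.
# (Thresholds are common heuristics, not the paper's learned distribution maps.)
import cv2
import numpy as np

def skin_candidates(bgr):
    """Binary mask of likely skin pixels plus connected face-like boxes."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))     # heuristic Cr/Cb skin range
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = [tuple(stats[i, :4]) for i in range(1, n)
             if stats[i, cv2.CC_STAT_AREA] > 500]                # drop tiny blobs
    return mask, boxes

# The paper then narrows these candidates with the chrominance standard deviation,
# searches them with pupil-colour distribution maps, applies lighting compensation,
# and confirms eye/mouth candidates by template matching.
```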


Normalized Region Extraction of Facial Features by Using Hue-Based Attention Operator (색상기반 주목연산자를 이용한 정규화된 얼굴요소영역 추출)

  • 정의정; 김종화; 전준형; 최흥문
    • The Journal of Korean Institute of Communications and Information Sciences, v.29 no.6C, pp.815-823, 2004
  • A hue-based attention operator and a combinational integral projection function (CIPF) are proposed to extract normalized regions of the face and its facial features robustly against illumination variation. Face candidate regions are efficiently detected with a skin color filter, and the eyes are located accurately and robustly under illumination variation by applying the proposed hue- and symmetry-based attention operator to the candidate regions. The faces are then confirmed by verifying the eyes with a color-based eye variance filter. The proposed CIPF, which combines weighted hue and intensity, is applied to detect the exact vertical locations of the eyebrows and the mouth under illumination variation and in the presence of a mustache. The whole face and its local feature regions are located and normalized from this geometric information. Experimental results on the AR face database [8] show that the proposed eye detection method yields a detection rate about 39.3% higher than the conventional gray GST-based method. As a result, normalized facial features can be extracted robustly and consistently from the exact eye locations under illumination variation.
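
The exact weighting in the CIPF is not given in this abstract, so the sketch below shows a generic vertical integral projection over a weighted hue-plus-intensity image; the combination weight is an assumption.

```python
# Sketch of a vertical integral projection over a weighted hue/intensity image.
# (The combination weight is an assumption; the paper defines its own CIPF.)
import cv2
import numpy as np

def combined_projection(bgr_face, w_hue=0.5):
    """Row-wise mean of a weighted hue+intensity image; extrema along the rows suggest
    the vertical positions of dark horizontal features such as eyebrows and mouth."""
    hsv = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0].astype(np.float32) / 179.0        # OpenCV hue range is 0..179
    intensity = cv2.cvtColor(bgr_face, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    combined = w_hue * hue + (1.0 - w_hue) * intensity
    return combined.mean(axis=1)                          # one value per image row

# Local minima of combined_projection(face) would be taken as candidate vertical
# locations of the eyebrows and the mouth.
```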

A study on Robust Feature Image for Texture Classification and Detection (텍스쳐 분류 및 검출을 위한 강인한 특징이미지에 관한 연구)

  • Kim, Young-Sub; Ahn, Jong-Young; Kim, Sang-Bum; Hur, Kang-In
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.10 no.5, pp.133-138, 2010
  • In this paper, we construct a feature image that captures both spatial and statistical properties of an image and form covariance matrices from region variance magnitudes. Using these descriptors for texture classification, we propose a classification method that is robust to illumination, noise, and rotation. We also reduce the running time of texture classification by using an integral image as an intermediate representation for fast computation of region sums. To evaluate the proposed method, we use Brodatz texture images, to which we add noise, apply histogram specification, and create rotated versions; the experiments achieve a classification performance above 96%.
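
As a concrete (and hedged) illustration of the two building blocks mentioned above, the sketch below assembles a per-pixel feature image, computes a region covariance descriptor from it, and shows the integral-image trick for constant-time region sums. The particular feature channels are my assumption, not necessarily the paper's.

```python
# Sketch: a per-pixel feature image, its region covariance descriptor, and an
# integral image for O(1) region sums (feature choice is an assumption).
import cv2
import numpy as np

def feature_image(gray):
    """Stack per-pixel features: x, y, intensity, |dI/dx|, |dI/dy|."""
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    gx = np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0))
    gy = np.abs(cv2.Sobel(gray, cv2.CV_32F, 0, 1))
    return np.stack([xs, ys, gray.astype(np.float32), gx, gy], axis=-1)

def region_covariance(feat, x, y, w, h):
    """Covariance matrix of the feature vectors inside a rectangular region."""
    block = feat[y:y+h, x:x+w].reshape(-1, feat.shape[-1])
    return np.cov(block, rowvar=False)

def region_sum(integral, x, y, w, h):
    """Sum of values in a rectangle using a (H+1)x(W+1) integral image."""
    return (integral[y+h, x+w] - integral[y, x+w]
            - integral[y+h, x] + integral[y, x])

# Usage sketch:
# gray = cv2.imread("texture.png", cv2.IMREAD_GRAYSCALE)
# C = region_covariance(feature_image(gray), 0, 0, 32, 32)   # 5x5 descriptor per region
# ii = cv2.integral(gray)                                     # (H+1)x(W+1) integral image
# s = region_sum(ii, 0, 0, 32, 32)
```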

Feature Variance and Adaptive classifier for Efficient Face Recognition (효과적인 얼굴 인식을 위한 특징 분포 및 적응적 인식기)

  • Dawadi, Pankaj Raj; Nam, Mi Young; Rhee, Phill Kyu
    • Proceedings of the Korea Information Processing Society Conference, 2007.11a, pp.34-37, 2007
  • Face recognition remains a challenging problem in pattern recognition, affected by factors such as facial expression, illumination, and pose. Facial features such as the eyes, nose, and mouth constitute a complete face, and the mouth in particular suffers from the undesirable effects of facial expression, which contribute to low performance. We propose a new approach to face recognition under facial expression that applies two cascaded classifiers to improve the recognition rate. All facial-expression images are first handled by a general-purpose classifier; images rejected at this stage (by thresholding) are then used for adaptation with a genetic algorithm (GA) to improve the recognition rate. We use a Gabor wavelet classifier as the general classifier and a Gabor wavelet classifier adapted by the GA to handle expression variance. We designed, implemented, and demonstrated the proposed approach, using the FERET face image dataset for training and testing, and achieved good results.
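
For reference, a minimal Gabor filter-bank feature extractor of the kind used as the first-stage classifier is sketched below; the filter parameters are illustrative, and the cascaded classification and GA adaptation stages from the paper are not reproduced.

```python
# Sketch: Gabor filter-bank features for a face image (parameters are illustrative;
# the paper's cascaded classifier and GA adaptation are not reproduced here).
import cv2
import numpy as np

def gabor_features(gray, scales=(7, 11, 15), orientations=8):
    """Mean and variance of Gabor responses over several scales and orientations."""
    feats = []
    for ksize in scales:
        for k in range(orientations):
            theta = np.pi * k / orientations
            # Arguments: (ksize, sigma, theta, lambda, gamma, psi)
            kernel = cv2.getGaborKernel((ksize, ksize), ksize / 4.0, theta,
                                        ksize / 2.0, 0.5, 0.0)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kernel)
            feats.extend([resp.mean(), resp.var()])
    return np.asarray(feats)   # 3 scales * 8 orientations * 2 stats = 48 values

# A nearest-neighbour or SVM comparison of such vectors would play the role of the
# first-stage classifier; samples it rejects would then be handled by the adapted stage.
```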
