• Title/Summary/Keyword: eye region extraction (눈 영역 추출)

Gender Classification using Non-Negative Matrix Analysis with Sparse Logistic Regression (Sparse Logistic Regression 기반 비음수 행렬 분석을 통한 성별 인식)

  • Hur, Dong-Cheol;Wallraven, Christian;Lee, Seong-Whan
    • Proceedings of the Korean Information Science Society Conference / 2011.06c / pp.373-376 / 2011
  • In a face image, the presence of facial components (eyebrows, eyes, nose, mouth, etc.) strongly affects how accurately a human observer recognizes the face. This indicates that the face-processing pathway in the human brain considers not only the whole face region but also the features of individual facial components. Non-negative Matrix Factorization (NMF) has been shown to be effective at finding basis images that represent such local features of the face region well, but it provides no measure of each basis image's importance. In this paper, we apply Sparse Logistic Regression (SLR) to the encoding coefficients of the NMF basis images in order to identify the local regions that are important for gender classification. In experiments, a comparison with Principal Component Analysis (PCA) shows that basis-image and feature-vector extraction with NMF performs well, and a comparison with the Support Vector Machine (SVM), a representative binary classifier, shows that feature selection with SLR performs better. Furthermore, the weights that SLR assigns to the basis images reveal which facial regions are important in the recognition process.
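The NMF step the abstract describes, factoring a non-negative image matrix into parts-based basis images W and encodings H, can be sketched with the standard Lee-Seung multiplicative updates. This is a generic illustration on toy data, not the authors' code; the SLR stage would then fit an L1-penalized logistic regression on the columns of H to weight each basis image.

```python
import numpy as np

def nmf(V, r, iters=200, seed=0):
    """Factor a non-negative matrix V (pixels x images) into basis images W
    and encodings H with Lee-Seung multiplicative updates, so V ~= W @ H."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r)) + 1e-4
    H = rng.random((r, n)) + 1e-4
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)  # update encodings
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)  # update basis images
    return W, H

# toy "face" data: 64-pixel images, 20 samples
rng = np.random.default_rng(1)
V = rng.random((64, 20))
W, H = nmf(V, r=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(round(err, 3))  # relative reconstruction error
```

The multiplicative updates keep W and H non-negative throughout, which is what yields the parts-based (rather than holistic) basis images the abstract contrasts with PCA.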

Adult Image Detection Using Skin Color and Multiple Features (피부색상과 복합 특징을 이용한 유해영상 인식)

  • Jang, Seok-Woo;Choi, Hyung-Il;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.15 no.12 / pp.27-35 / 2010
  • Extracting skin color is important in adult image detection, but conventional methods still have fundamental problems. Human skin colors are not uniform because of individual differences and different races, and skin regions within an image may not have identical color due to makeup, the camera used, and so on. For these reasons, most existing methods rely on predefined skin color models. To resolve these problems, in this paper we propose a new adult image detection method that robustly segments skin areas with a skin color distribution model adapted to the input image, and verifies whether the segmented skin regions contain naked bodies by fusing several representative features through a neural network. Experimental results show that our method outperforms others in various experiments. We expect the suggested method to be useful in many applications such as face detection and objectionable image filtering.
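As a rough illustration of the skin-segmentation idea (a fixed-range YCbCr mask, not the paper's adaptive model, which re-estimates the skin distribution from each input image), a baseline skin mask can be written as:

```python
import numpy as np

def skin_mask(rgb, cr_range=(133, 173), cb_range=(77, 127)):
    """Rough skin segmentation in YCbCr space. The Cr/Cb ranges are
    commonly used illustrative defaults, not values from the paper."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return ((cr_range[0] <= cr) & (cr <= cr_range[1]) &
            (cb_range[0] <= cb) & (cb <= cb_range[1]))

# a 2x2 toy image: one skin-like pixel, three non-skin pixels
img = np.array([[[200, 150, 120], [0, 0, 255]],
                [[0, 255, 0], [10, 10, 10]]], dtype=float)
print(skin_mask(img).astype(int))
```

A fixed chrominance box like this is exactly the kind of predefined model whose shortcomings (varying skin tones, lighting, cameras) motivate the input-adapted distribution the paper proposes.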

Estimation of 3D Rotation Information of Animation Character Face (애니메이션 캐릭터 얼굴의 3차원 회전정보 측정)

  • Jang, Seok-Woo;Weon, Sun-Hee;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.16 no.8 / pp.49-56 / 2011
  • Recently, animation content has become widely available along with the growth of the cultural content industry. In this paper, we propose a method to analyze the face of an animation character and extract 3D rotation information of the face. The suggested method first generates a dominant color model of the face by learning face images of the animation character. Our system then detects the face and its components with the model, and establishes two coordinate systems: a base coordinate system and a target coordinate system. It estimates the three-dimensional rotation information of the character's face using the geometric relationship between the two coordinate systems. Finally, in order to visualize the extracted 3D information, a 3D face model reflecting the rotation information is displayed. In experiments, we show that our method can extract the 3D rotation information of a character's face reasonably well.

Back-Propagation Neural Network Based Face Detection and Pose Estimation (오류-역전파 신경망 기반의 얼굴 검출 및 포즈 추정)

  • Lee, Jae-Hoon;Jun, In-Ja;Lee, Jung-Hoon;Rhee, Phill-Kyu
    • The KIPS Transactions:PartB / v.9B no.6 / pp.853-862 / 2002
  • Face detection can be defined as follows: given a digitized arbitrary image or image sequence, the goal is to determine whether or not there is any human face in the image and, if present, to return its location, direction, size, and so on. This technique underlies many applications such as face recognition, facial expression analysis, and head gesture recognition, and is one of their important quality factors. However, detecting a face in a given image is considerably difficult because facial expression, pose, facial size, lighting conditions, and so on change the overall appearance of faces, making it hard to detect them rapidly and exactly. This paper therefore proposes a fast and exact face detection method that overcomes these restrictions by using a neural network. The proposed system can detect faces rapidly regardless of facial expression, background, and pose. Detection is performed by a neural network, and the response time is shortened by reducing the search region and decreasing the computation time of the network. The search region is reduced using skin color segmentation and frame differencing, and the computation time is decreased by reducing the size of the network's input vector with Principal Component Analysis (PCA), which reduces the dimension of the data. The system also estimates the pose of the extracted facial image and locates the eye region, which provides further information about the face. The experiments measured success rate and processing time using the squared Mahalanobis distance. Both still images and image sequences were tested; for skin color segmentation, the success rate differed depending on the camera setting. Pose estimation experiments were carried out under the same conditions, and the presence or absence of glasses produced different results in eye region detection. The results show a satisfactory detection rate and processing time for a real-time system.
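The squared Mahalanobis distance the abstract uses for evaluation can be sketched as follows; the face class model here (mean and covariance of some PCA-reduced vectors) is illustrative toy data, not the paper's:

```python
import numpy as np

def squared_mahalanobis(x, mean, cov):
    """Squared Mahalanobis distance of a feature vector x from a class
    model given by its mean vector and covariance matrix."""
    d = x - mean
    return float(d @ np.linalg.inv(cov) @ d)

# toy PCA-reduced face vectors standing in for a learned face class
rng = np.random.default_rng(0)
faces = rng.normal(size=(100, 3))
mean, cov = faces.mean(axis=0), np.cov(faces, rowvar=False)
print(squared_mahalanobis(mean, mean, cov))  # → 0.0
```

Unlike plain Euclidean distance, this metric whitens each dimension by the class covariance, so directions in which the face class naturally varies contribute less to the distance.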

ID Face Detection Robust to Color Degradation and Partial Veiling (색열화 및 부분 은폐에 강인한 ID얼굴 검지)

  • Kim Dae Sung;Kim Nam Chul
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.1 / pp.1-12 / 2004
  • In this paper, we present an identifiable face (ID face) detection method robust to color degradation and partial veiling. The method is composed of three parts: segmentation of face candidate regions, extraction of face candidate windows, and decision of veiling. In the segmentation stage, face candidate regions are detected by finding skin color regions and facial components such as the eyes, nose, and mouth, which may have degraded colors, in an input image. In the extraction stage, face candidate windows with a high likelihood of containing faces are extracted from the face candidate regions. In the veiling decision stage, using an eigenface method, the face candidate window whose similarity with the eigenfaces is maximal is selected, and whether its facial components are veiled is determined in a similar way. Experimental results show that on a test DB containing color-degraded and veiled faces, the proposed method improves the detection rate by about 11.4% over a conventional method that does not consider color degradation and partial veiling.

Personal Identification Using One Dimension Iris Signals (일차원 홍채 신호를 이용한 개인 식별)

  • Park, Yeong-Gyu;No, Seung-In;Yun, Hun-Ju;Kim, Jae-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.1 / pp.70-76 / 2002
  • In this paper, we propose a personal identification algorithm using the iris region, which has discriminative features. First, we acquire an eye image with a black-and-white CCD camera and extract the iris region using a circular edge detector that minimizes the search space for the true center and radius of the iris. We then partition the iris region into several circles and extract features by filtering the signals on the perimeters of the circles with a one-dimensional Gabor filter. We identify a person by comparing the correlation values of the input signals with the registered signals, and we determine the threshold value that minimizes the average of the FRR (Type I) and FAR (Type II) error rates. Experimental results show that the proposed algorithm has an average error rate of less than 5.2%.
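The 1-D Gabor filtering of circular iris signals can be sketched as below; the kernel length, frequency, and sigma are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def gabor_1d(length, freq, sigma):
    """Complex 1-D Gabor kernel: a Gaussian-windowed complex exponential."""
    t = np.arange(length) - length // 2
    return np.exp(-t**2 / (2 * sigma**2)) * np.exp(2j * np.pi * freq * t)

def iris_signature(ring, freq=0.05, sigma=8.0):
    """Filter one circular intensity profile and binarize the response,
    similar in spirit to the paper's 1-D Gabor feature extraction."""
    k = gabor_1d(33, freq, sigma)
    resp = np.convolve(ring, k, mode='same')
    return resp.real > 0  # one bit per sample from the sign of the response

ring = np.sin(np.linspace(0, 8 * np.pi, 256))  # toy perimeter signal
sig = iris_signature(ring)
```

Two such binary signatures can then be compared by correlation (or normalized agreement) against the registered templates, as the abstract describes.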

Real Time Lip Reading System Implementation in Embedded Environment (임베디드 환경에서의 실시간 립리딩 시스템 구현)

  • Kim, Young-Un;Kang, Sun-Kyung;Jung, Sung-Tae
    • The KIPS Transactions:PartB / v.17B no.3 / pp.227-232 / 2010
  • This paper proposes a real-time lip reading method for the embedded environment. An embedded environment has limited resources compared to the conventional PC environment, so it is hard to run an existing PC-based lip reading system on it in real time. To solve this problem, this paper presents lip region detection, lip feature extraction, and spoken-word recognition methods suitable for the embedded environment. First, the face region is detected using face color information to narrow down the lip region, and then the exact lip region is detected by finding the positions of both eyes in the detected face region and using their geometric relations. To extract features robust to lighting variation in changing surroundings, histogram matching, lip folding, and a RASTA filter were applied, and the features extracted by principal component analysis (PCA) were used for recognition. Tests showed a processing speed between 1.15 and 2.35 sec. depending on the utterance, in an embedded environment with an 806 MHz CPU and 128 MB RAM, and a recognition rate of 77%, with 139 of 180 words recognized.
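The PCA feature-extraction step can be sketched as below (a generic SVD-based projection on synthetic data, not the system's embedded implementation):

```python
import numpy as np

def pca_features(X, k):
    """Project feature vectors (rows of X) onto the top-k principal
    components, found via SVD of the mean-centered data."""
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k], mu

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))      # 50 lip frames, 20-dim raw features
Z, comps, mu = pca_features(X, k=5)
print(Z.shape)  # → (50, 5)
```

Reducing each frame to a handful of principal components is what makes the per-frame computation cheap enough for the limited CPU and memory budget the abstract quotes.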

An Extracting Text Area Using Adaptive Edge Enhanced MSER in Real World Image (실세계 영상에서 적응적 에지 강화 기반의 MSER을 이용한 글자 영역 추출 기법)

  • Park, Youngmok;Park, Sunhwa;Seo, Yeong Geon
    • Journal of Digital Contents Society / v.17 no.4 / pp.219-226 / 2016
  • In everyday life, the information we recognize and use with our eyes is diverse and massive, yet even current technologies improved by artificial intelligence fall far short of human visual processing ability. Nevertheless, many researchers are trying to extract information from everyday scenes, concentrating in particular on recognizing information consisting of text. Within text recognition, extracting text from regular documents is already used in some information processing fields, but extracting and recognizing text from real-world images remains far less developed, because real images vary widely in properties such as color, size, and orientation. In this paper, we apply an adaptive edge-enhanced MSER (Maximally Stable Extremal Regions) method to extract text areas and scene text in these diverse environments, and show experimentally that the proposed method compares favorably with existing ones.

Eyelid Detection Algorithm Based on Parabolic Hough Transform for Iris Recognition (홍채 인식을 위한 포물 허프 변환 기반 눈꺼풀 영역 검출 알고리즘)

  • Jang, Young-Kyoon;Kang, Byung-Jun;Park, Kang-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP / v.44 no.1 / pp.94-104 / 2007
  • Iris recognition is a biometric technology that identifies a person using the unique iris pattern of the user. In iris images captured by a conventional iris recognition camera, the eyelid frequently occludes the iris and covers iris information. The eyelids are unnecessary information that degrades recognition performance, so this paper proposes a robust algorithm for detecting them. This research has the following three advantages over previous works. First, we remove detected eyelashes and specular reflections by linear interpolation, because they act as noise when locating the eyelid. Second, we detect candidate eyelid points with a mask applied in a limited eyelid search area, which is determined by finding the crossing positions of the eyelid and the outer iris boundary; the proposed algorithm then detects the eyelid by a parabolic Hough transform based on the detected candidate points. Third, although there have been many studies on eyelid detection, they did not consider the rotation of the eyelid in the iris image, whereas we include a rotation factor in the parabolic Hough transform to overcome this problem. We tested our algorithm on the CASIA database. The detection accuracies were 90.82% and 96.47% for the upper and lower eyelid, respectively.
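The parabolic Hough voting at the core of the algorithm can be sketched as a brute-force search over a small parameter grid. The grid values and tolerance here are illustrative, and the authors' version additionally folds a rotation factor into the parameterization:

```python
import numpy as np

def parabolic_hough(points, a_vals, b_vals, c_vals, tol=1.5):
    """Vote candidate eyelid points into a parabola parameter grid
    y = a*(x-b)**2 + c and return the best (a, b, c) with its votes."""
    best, best_votes = None, -1
    for a in a_vals:
        for b in b_vals:
            for c in c_vals:
                pred = a * (points[:, 0] - b) ** 2 + c
                votes = int(np.sum(np.abs(points[:, 1] - pred) < tol))
                if votes > best_votes:
                    best, best_votes = (a, b, c), votes
    return best, best_votes

# synthetic eyelid boundary points on y = 0.02*(x-50)**2 + 10
x = np.arange(0, 100, 2.0)
pts = np.stack([x, 0.02 * (x - 50) ** 2 + 10], axis=1)
(a, b, c), votes = parabolic_hough(
    pts, a_vals=[0.01, 0.02, 0.03], b_vals=[40, 50, 60], c_vals=[5, 10, 15])
print(a, b, c)  # → 0.02 50 10
```

Restricting the vote to candidate points inside a limited search area, as the paper does, keeps this grid search tractable even at iris-image resolution.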

3D Facial Synthesis and Animation for Facial Motion Estimation (얼굴의 움직임 추적에 따른 3차원 얼굴 합성 및 애니메이션)

  • Park, Do-Young;Shim, Youn-Sook;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications / v.27 no.6 / pp.618-631 / 2000
  • In this paper, we suggest a method of synthesizing a 3D face from the motion of 2D facial images, using an optical flow-based method for motion estimation. We extract parameterized motion vectors using optical flow between adjacent frames of an image sequence in order to estimate the facial features and facial motion in 2D, and then combine the parameters of these motion vectors to estimate the facial motion information. Parameterized vector models are defined per facial feature: an eye-area model, a lip-eyebrow-area model, and a whole-face model. Combining the 2D facial motion information with the action units of a 3D face model, we synthesize the animated 3D facial model.
