• Title/Summary/Keyword: image-based lighting


Skin Segmentation Using YUV and RGB Color Spaces

  • Al-Tairi, Zaher Hamid;Rahmat, Rahmita Wirza;Saripan, M. Iqbal;Sulaiman, Puteri Suhaiza
    • Journal of Information Processing Systems
    • /
    • v.10 no.2
    • /
    • pp.283-299
    • /
    • 2014
  • Skin detection is used in many applications, such as face recognition, hand tracking, and human-computer interaction. Many skin color detection algorithms extract human skin regions with a thresholding technique because it is simple and fast to compute. The efficiency of each color space depends on its robustness to changes in lighting and its ability to distinguish skin color pixels in images with a complex background. For more accurate skin detection, we propose a new threshold based on the RGB and YUV color spaces. The proposed approach starts by converting the RGB color space to the YUV color model. It then separates the Y channel, which represents the intensity of the color model, from the U and V channels to eliminate the effects of luminance. After that, the threshold values are selected by testing the boundaries of skin colors with the help of the color histogram. Finally, the threshold is applied to the input image to extract the skin parts. The detected skin regions were quantitatively compared to the actual skin parts in the input images to measure accuracy, and the results of our threshold were compared to those of other thresholds to demonstrate the efficiency of our approach. The experimental results show that the proposed threshold is more robust to complex backgrounds and lighting conditions than the others.
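
A minimal sketch of the kind of YUV/RGB thresholding described above, assuming OpenCV and NumPy; the chrominance bounds and the extra RGB rule are illustrative placeholders, not the thresholds reported in the paper.

```python
import cv2
import numpy as np

def segment_skin(bgr_image):
    # Convert to YUV and drop the luminance channel Y so that the decision
    # depends mainly on the chrominance channels (U, V).
    yuv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YUV)
    _, u, v = cv2.split(yuv)

    # Chrominance bounds chosen from a histogram of labeled skin pixels
    # (the values here are assumptions for illustration).
    u_min, u_max = 80, 130
    v_min, v_max = 136, 200
    chroma_mask = (u >= u_min) & (u <= u_max) & (v >= v_min) & (v <= v_max)

    # An additional RGB rule (e.g., R > G > B) can be combined with the
    # chrominance test to reject non-skin pixels with similar U/V values.
    b, g, r = cv2.split(bgr_image)
    rgb_rule = (r > g) & (g > b)

    return (chroma_mask & rgb_rule).astype(np.uint8) * 255

# skin_mask = segment_skin(cv2.imread("input.jpg"))
```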

JND based Illumination and Color Restoration Using Edge-preserving Filter (JND와 경계 보호 평탄화 필터를 이용한 휘도 및 색상 복원)

  • Han, Hee-Chul;Sohn, Kwan-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.6
    • /
    • pp.132-145
    • /
    • 2009
  • We present a framework for JND-based illumination and color restoration using an edge-preserving filter to restore distorted images taken under arbitrary lighting conditions. The proposed method provides appropriate illumination compensation, vivid color restoration, artifact suppression, automatic parameter estimation, and a low computational cost suitable for hardware implementation. We show the efficiency of the mean shift filter and the sigma filter for illumination compensation with a small spread parameter, considering processing time and suppressing artifacts such as halos and noise amplification. The suggested CRF (color restoration filter) restores natural color and corrects color distortion artifacts more perceptually than current solutions. For automatic processing, an image statistics analysis finds suitable parameters using JND, and all constants are pre-defined. We also introduce ROI-based parameter estimation that handles a small shadow area against a spacious, well-exposed background in an image for touch-screen cameras. An objective evaluation is performed with CMC, CIEDE2000, PSNR, SSIM, and the 3D CIELAB gamut against state-of-the-art research and existing commercial solutions.
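
A rough sketch of Retinex-style illumination compensation with an edge-preserving filter; a bilateral filter and fixed parameters stand in here for the paper's mean shift / sigma filtering and JND-driven parameter selection, so this only approximates the idea.

```python
import cv2
import numpy as np

def compensate_illumination(bgr_image, gain=0.6):
    img = bgr_image.astype(np.float32) + 1.0
    # Simple luminance estimate (channel average) for this sketch.
    luminance = img.mean(axis=2)

    # Edge-preserving smoothing approximates the illumination component
    # while limiting halo artifacts along strong edges.
    illumination = cv2.bilateralFilter(luminance, 9, 40, 9)

    # Compress the illumination component (gain < 1 lifts dark regions)
    # and keep the reflectance detail luminance / illumination.
    illum_norm = illumination / 255.0
    new_luminance = 255.0 * (illum_norm ** gain) * (luminance / illumination)

    # Reapply the adjusted luminance to each channel, preserving color ratios.
    ratio = (new_luminance / luminance)[:, :, None]
    return np.clip(img * ratio, 0, 255).astype(np.uint8)
```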

Deep Learning-based Rice Seed Segmentation for Phenotyping (표현체 연구를 위한 심화학습 기반 벼 종자 분할)

  • Jeong, Yu Seok;Lee, Hong Ro;Baek, Jeong Ho;Kim, Kyung Hwan;Chung, Young Suk;Lee, Chang Woo
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.25 no.5
    • /
    • pp.23-29
    • /
    • 2020
  • The National Institute of Agricultural Sciences of the Rural Development Administration (NAS, RDA) conducts various studies on crops, such as monitoring the cultivation environment and analyzing harvested seeds, for high-throughput phenotyping. In this paper, we propose a deep learning-based rice seed segmentation method to analyze the seeds of various crops held by the NAS. Using the Mask-RCNN deep learning model, we segment rice seeds from manually taken images captured in a controlled environment (constant lighting, white background) to analyze seed characteristics. For this purpose, we tune the parameters of the Mask-RCNN model. With the proposed method, seed object detection reached an accuracy of 82% for rice stem images and 97% for rice grain images. As future work, we plan to study more reliable seed extraction from cluttered seed images with a deep learning-based approach, as well as the selection of high-throughput phenotypes through precise analysis of measurements such as length, width, and thickness of the detected seed objects.
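
A hedged sketch of Mask R-CNN inference for seed instance segmentation using torchvision; the pretrained COCO weights and the 0.5 score threshold are assumptions for illustration, not the paper's tuned configuration.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained COCO weights stand in for the paper's fine-tuned seed model.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_seeds(image_path, score_threshold=0.5):
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        output = model([image])[0]
    keep = output["scores"] > score_threshold
    # Instance masks (N, 1, H, W) and boxes for the kept detections; seed
    # length/width/thickness statistics could then be measured per mask.
    return output["masks"][keep], output["boxes"][keep]
```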

Divide and Conquer Strategy for CNN Model in Facial Emotion Recognition based on Thermal Images (얼굴 열화상 기반 감정인식을 위한 CNN 학습전략)

  • Lee, Donghwan;Yoo, Jang-Hee
    • Journal of Software Assessment and Valuation
    • /
    • v.17 no.2
    • /
    • pp.1-10
    • /
    • 2021
  • The ability to recognize human emotions with computer vision is an important task with many potential applications, so the demand for emotion recognition using not only RGB images but also thermal images is increasing. Compared to RGB images, thermal images have the advantage of being less affected by lighting conditions, but their low resolution requires a more sophisticated recognition method. In this paper, we propose a divide-and-conquer CNN training strategy to improve the performance of facial thermal image-based emotion recognition. The proposed method first trains a model that assigns similar, easily confused emotion classes to the same class group, identified through confusion matrix analysis, and then divides the problem so that each class group is classified again into the actual emotions. In experiments, the proposed method achieved higher accuracy in all tests than recognizing all the presented emotions with a single CNN model.
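
A minimal sketch of the two-stage divide-and-conquer inference described above: a coarse CNN assigns the thermal face image to a group of easily confused emotions, and a group-specific CNN then resolves the actual emotion. The grouping and model objects are hypothetical placeholders.

```python
import torch

def predict_emotion(image, coarse_model, group_models, group_to_emotions):
    # Stage 1: classify into a coarse group derived from confusion-matrix
    # analysis (e.g., {"happy"} vs. {"sad", "fear"}).
    with torch.no_grad():
        group_id = coarse_model(image.unsqueeze(0)).argmax(dim=1).item()

    emotions = group_to_emotions[group_id]
    if len(emotions) == 1:        # unambiguous group, no second stage needed
        return emotions[0]

    # Stage 2: a CNN trained only on this group's emotions makes the final call.
    with torch.no_grad():
        fine_id = group_models[group_id](image.unsqueeze(0)).argmax(dim=1).item()
    return emotions[fine_id]
```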

A Study on Multi-modal Near-IR Face and Iris Recognition on Mobile Phones (휴대폰 환경에서의 근적외선 얼굴 및 홍채 다중 인식 연구)

  • Park, Kang-Ryoung;Han, Song-Yi;Kang, Byung-Jun;Park, So-Young
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.2
    • /
    • pp.1-9
    • /
    • 2008
  • As the security requirements of mobile phones increase, there has been extensive research using a single biometric feature (e.g., an iris, a fingerprint, or a face image) for authentication. Due to the limitations of uni-modal biometrics, we propose a method that combines face and iris images to improve accuracy in mobile environments. This paper presents four advantages and contributions over previous research. First, to capture both face and iris images quickly and simultaneously, we use a built-in conventional megapixel camera in the mobile phone, revised to capture NIR (Near-InfraRed) face and iris images. Second, to increase the authentication accuracy of face and iris, we propose a score-level fusion method based on an SVM (Support Vector Machine). Third, to reduce the classification complexity of the SVM and the intra-class variation of the face and iris data, we normalize the input face and iris data, respectively. For the face, an NIR illuminator and an NIR-pass filter on the camera reduce the illumination variance caused by environmental visible lighting, and the region of the face saturated by the NIR illuminator is normalized by a low-complexity logarithmic algorithm suited to the mobile phone. For the iris, a transform into polar coordinates and iris code shifting are used to obtain robust identification accuracy irrespective of the image capturing conditions. Fourth, to increase the processing speed on the mobile phone, we use integer-based face and iris authentication algorithms. Experiments were performed with face and iris images captured by a megapixel mobile phone camera. They showed that the authentication accuracy using the SVM was better than that of the uni-modal (face or iris), SUM, MAX, MIN, and weighted SUM rules.
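
A small sketch of SVM-based score-level fusion of face and iris matching scores, assuming scikit-learn; the toy scores and RBF kernel settings are illustrative and do not reproduce the paper's integer-based mobile implementation.

```python
import numpy as np
from sklearn.svm import SVC

# Each training sample is a pair of normalized matching scores
# [face_score, iris_score] with label 1 = genuine, 0 = impostor.
train_scores = np.array([[0.91, 0.88], [0.85, 0.95], [0.30, 0.42], [0.55, 0.20]])
train_labels = np.array([1, 1, 0, 0])

fusion_svm = SVC(kernel="rbf", probability=True).fit(train_scores, train_labels)

def authenticate(face_score, iris_score, threshold=0.5):
    # The fused decision uses the SVM's genuine-class probability.
    genuine_prob = fusion_svm.predict_proba([[face_score, iris_score]])[0, 1]
    return genuine_prob >= threshold
```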

Fast Light Source Estimation Technique for Effective Synthesis of Mixed Reality Scene (효과적인 혼합현실 장면 생성을 위한 고속의 광원 추정 기법)

  • Shin, Seungmi;Seo, Woong;Ihm, Insung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.3
    • /
    • pp.89-99
    • /
    • 2016
  • One of the fundamental elements in developing mixed reality applications is to effectively analyze the environmental lighting information and apply it to image synthesis. In particular, interactive applications need to process dynamically varying light sources in real time and reflect them properly in the rendering results. Previous related works are often not appropriate for this because they are usually designed to synthesize photorealistic images, generating too many, often exponentially increasing, light sources, or because their computational complexity is too high. In this paper, we present a fast light source estimation technique that searches on the fly for primary light sources in a sequence of video images taken by a camera equipped with a fisheye lens. In contrast to previous methods, our technique can adjust the number of detected light sources to approximately the count that a user specifies. Thus, it can be used effectively in Phong-illumination-model-based direct illumination or in soft shadow generation through light sampling over area lights.
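
A hedged sketch of one way to estimate a user-specified number of primary lights from a camera frame of the environment: bright pixels are selected by a luminance threshold and clustered with k-means into K representative lights. This follows the stated goal, not the paper's exact algorithm.

```python
import cv2
import numpy as np

def estimate_lights(bgr_frame, num_lights=4, percentile=98):
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    threshold = np.percentile(gray, percentile)
    ys, xs = np.where(gray >= threshold)
    if len(xs) < num_lights:
        return []

    # Cluster the bright pixel coordinates; each cluster center becomes one
    # estimated light position in image (fisheye) space.
    samples = np.stack([xs, ys], axis=1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(samples, num_lights, None, criteria,
                                    3, cv2.KMEANS_PP_CENTERS)

    lights = []
    flat_labels = labels.ravel()
    for k, (cx, cy) in enumerate(centers):
        member = flat_labels == k
        intensity = float(gray[ys[member], xs[member]].mean())
        lights.append({"position": (float(cx), float(cy)), "intensity": intensity})
    return lights
```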

Real-time Video Based Relighting Technology for Moving Object (움직이는 오브젝트를 위한 실시간 비디오기반 재조명 기술 -비주얼 헐 오브젝트를 이용한 실시간 영상기반 재조명 기술)

  • Ryu, Sae-Woon;Lee, Sang-Hwa;Park, Jong-Il
    • 한국HCI학회:학술대회논문집
    • /
    • 2008.02a
    • /
    • pp.433-438
    • /
    • 2008
  • This paper proposes a real-time image-based lighting technique for moving objects using visual hull objects. Focusing in particular on matching the lighting environments of two different spaces, it introduces three key components of real-time video-based relighting for moving objects. First, normal vectors are extracted at high speed from the visual hull data by approximating the formulas of the conventional cross-product-based method, which reduces the amount of computation. Second, the lighting environment around the user is sampled effectively so that the number of point light sources used for lighting is reduced. Third, the computational load is distributed between the CPU and the GPU to enable efficient high-speed parallel computation. Whereas previous image-based lighting studies used static environment map images or relit static objects, this paper presents methods for fast lighting computation to achieve relighting in real time. The results of this study are expected to enable practical and broad application of image-based lighting research and to contribute to the production of high-quality content.
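
For reference, a sketch of the baseline cross-product normal computation that the first contribution accelerates: per-triangle normals of a (visual hull) mesh computed from vector cross products. The mesh layout is a generic triangle list, and the paper's faster approximation is not reproduced here.

```python
import numpy as np

def face_normals(vertices, triangles):
    # vertices: (V, 3) float array; triangles: (T, 3) int array of vertex indices.
    v0 = vertices[triangles[:, 0]]
    v1 = vertices[triangles[:, 1]]
    v2 = vertices[triangles[:, 2]]
    normals = np.cross(v1 - v0, v2 - v0)          # one cross product per face
    lengths = np.linalg.norm(normals, axis=1, keepdims=True)
    return normals / np.maximum(lengths, 1e-8)    # unit-length normals
```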


Design and Application of Vision Box Based on Embedded System (Embedded System 기반 Vision Box 설계와 적용)

  • Lee, Jong-Hyeok
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.13 no.8
    • /
    • pp.1601-1607
    • /
    • 2009
  • A vision system is an object recognition system that analyzes image information captured through a camera. Vision systems can be applied to various fields, and automobile type recognition is one of them. There has been much research on automobile type recognition algorithms, but they involve complex computation and therefore need long processing times. In this paper, we designed a vision box based on an embedded system and propose an automobile type recognition system using the vision box. In pretesting, the system achieved a 100% recognition rate under optimal conditions. When the conditions are changed by lighting and angle, recognition is still possible but the pattern score is lowered. It was also observed that the proposed system satisfies the criteria for processing time and recognition rate in the industrial field.

A Study on LED Control System for Object Detecting based on Zigbee Network in BEMS (BEMS용 Zigbee 네트워크 기반 객체감지형 LED 조명 제어 시스템에 관한연구)

  • Ko, Kwangseok;Lee, JungHoon;Cha, Jaesang
    • Journal of Satellite, Information and Communications
    • /
    • v.8 no.2
    • /
    • pp.17-21
    • /
    • 2013
  • Interest in building energy saving has been increasing worldwide, and research continues on IT technology for the efficient management of BEMS. Recently, the development of LED lighting technology has made it possible to control LEDs and maximize energy savings. We propose a security image processing system to improve efficiency and implement a real-time status monitoring system to observe objects within the building energy management system. In this paper, we propose an LED control system that uses a Zigbee network to connect to the server. Users can control and monitor the LED lighting from a desktop. We implemented LED lighting control software based on real-time monitoring and LED control that also detects human body movement.
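
A hypothetical sketch of the object-detection-to-LED-control loop: simple frame differencing detects movement and a command is sent toward the LED controller. The send_led_command() transport (Zigbee gateway, message format) is entirely assumed and is not the paper's implementation.

```python
import cv2

def send_led_command(turn_on):
    # Placeholder for the Zigbee/server message the real system would send.
    print("LED ON" if turn_on else "LED OFF")

def monitor(camera_index=0, diff_threshold=25, min_changed_pixels=5000):
    cap = cv2.VideoCapture(camera_index)
    ok, prev = cap.read()
    if not ok:
        return
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Count pixels that changed noticeably since the previous frame.
        diff = cv2.absdiff(gray, prev_gray)
        changed = int((diff > diff_threshold).sum())
        send_led_command(changed > min_changed_pixels)  # movement -> lights on
        prev_gray = gray
    cap.release()
```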

Design and Implementation of Vision Box Based on Embedded Platform (Embedded Platform 기반 Vision Box 설계 및 구현)

  • Kim, Pan-Kyu;Lee, Jong-Hyeok
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.11 no.1
    • /
    • pp.191-197
    • /
    • 2007
  • A vision system is an object recognition system that analyzes image information captured through a camera. Vision systems can be applied to various fields, and vehicle recognition is one of them. There have been many proposed algorithms for vehicle recognition, but they involve complex computation, so they need long processing times and sometimes cause problems. In this research, we propose a vehicle type recognition system using a vision box based on an embedded platform. In testing, the system achieved a 100% recognition rate under optimal conditions. When the conditions are changed by lighting, noise, and angle, the recognition rate decreases as the pattern score is lowered and the recognition speed slows down.