• Title/Summary/Keyword: Illumination systems

Effect of Color and Color Temperature on the Attention in the Residential Space by the Analysis of EEG and ECG (뇌파와 심전도 분석을 통한 색채와 색온도가 주거공간에서의 집중도에 미치는 영향)

  • Kim, Young Jung;Ji, Doo Hwan;Ryu, Young Jae;Kim, Sung Hyun;Seo, Sang Hyeok;Kwak, Seung Hyun;Kang, Jin Kyu;Min, Byung Chan
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.40 no.1
    • /
    • pp.124-130
    • /
    • 2017
  • This study aims to determine whether the illumination and color of an interior space produce physiological changes in the human body, and to specify the effects of color and color-temperature conditions on attention. White and Green were selected as the color conditions, and 4,000 K, 5,000 K, and 6,000 K as the color-temperature conditions, under which attention was measured. The results show that attention improved as color temperature increased (p < 0.05). In the EEG data, the α wave decreased while the attention task was performed (p < 0.01); the β wave decreased more under Green than under White in the color condition, and increased more at 4,000 K than at 5,000 K or 6,000 K (p < 0.05) in the color-temperature condition. In summary, the color condition contributed little to attention, whereas a color temperature of 6,000 K is judged to have helped improve it. Relaxation appears to have contributed to this improvement: the release of tension, reflected in the reduced β wave and reduced sympathetic nervous activity at 6,000 K (p < 0.05), the high-color-temperature condition, contributed to better concentration. Further research will test subjects of different ages and examine the correlation between color-temperature and color stimulation, and their influence on the human body, under more finely subdivided test conditions.

Planetary Long-Range Deep 2D Global Localization Using Generative Adversarial Network (생성적 적대 신경망을 이용한 행성의 장거리 2차원 깊이 광역 위치 추정 방법)

  • Ahmed, M.Naguib;Nguyen, Tuan Anh;Islam, Naeem Ul;Kim, Jaewoong;Lee, Sukhan
    • The Journal of Korea Robotics Society
    • /
    • v.13 no.1
    • /
    • pp.26-30
    • /
    • 2018
  • Planetary global localization is necessary for long-range rover missions in which communication with the command-center operator is throttled by the long distance. A number of studies have addressed this problem by matching the rover's surroundings against global digital elevation maps (DEM). Matching with conventional methods, however, is challenging due to artifacts in the DEM-rendered images and/or the rover's 2D images, caused by the low resolution of the DEM, illumination variations in the rover images, and small terrain features. In this work, we train a CNN discriminator to match the rover's 2D image with DEM-rendered images using a conditional Generative Adversarial Network (cGAN) architecture. We then use this discriminator to search within the uncertainty bound given by the visual odometry (VO) error bound to estimate the rover's optimal location and orientation. We demonstrate the network's ability to learn to translate a rover image into a DEM-simulated image and match them using the Devon Island dataset. The experimental results show that the proposed approach achieves ~74% mean average precision.
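
The discriminator-guided pose search described above can be sketched as a grid search over the VO uncertainty bound. Every interface here (`render_at`, `score`, the step sizes) is a hypothetical stand-in, not the paper's implementation:

```python
import numpy as np

def localize(rover_img, render_at, score, search_bound, step=1.0):
    """Grid-search the VO uncertainty bound for the pose whose DEM
    rendering best matches the rover image (hypothetical interfaces).

    rover_img   : 2D array, the rover camera image
    render_at   : (x, y, yaw) -> 2D array, renders the DEM at a pose
    score       : (img_a, img_b) -> float, discriminator match score
    search_bound: ((x_min, x_max), (y_min, y_max)) from the VO error bound
    """
    (x0, x1), (y0, y1) = search_bound
    best_pose, best_score = None, -np.inf
    for x in np.arange(x0, x1 + step, step):
        for y in np.arange(y0, y1 + step, step):
            for yaw in np.arange(0.0, 360.0, 45.0):  # coarse orientation sweep
                s = score(rover_img, render_at(x, y, yaw))
                if s > best_score:
                    best_pose, best_score = (x, y, yaw), s
    return best_pose, best_score
```

In the paper the scoring function is the trained cGAN discriminator; here it is simply any callable returning a match score.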

Gastric Cancer Extraction of Electronic Endoscopic Images using IHb and HSI Color Information (IHb와 HSI 색상 정보를 이용한 전자 내시경의 위암 추출)

  • Kim, Kwang-Baek;Lim, Eun-Kyung;Kim, Gwang-Ha
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.2
    • /
    • pp.265-269
    • /
    • 2007
  • In this paper, we propose an automatic method for extracting gastric cancer regions from electronic endoscopic images. We use the brightness and saturation channels of the HSI model to remove noise caused by illumination and shadows caused by surface crookedness during endoscopy. Using IHb, we partition the image into areas with similar hemoglobin pigmentation. Candidate areas for gastric cancer are defined as those with high hemoglobin pigment and a high value in every RGB channel. The morphological characteristics of gastric cancer are then used to decide the target region. In experiments, our method is sufficiently accurate in that it correctly identifies most cases (18 out of 20) in real electronic endoscopic images obtained by expert endoscopists.
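
The IHb-based candidate selection can be illustrated with the commonly used hemoglobin index IHb = 32·log2(R/G); the thresholds below are illustrative, not the paper's values:

```python
import numpy as np

def ihb_index(rgb):
    """Hemoglobin index per pixel, IHb = 32*log2(R/G); a small epsilon
    guards against division by zero on dark pixels."""
    r = rgb[..., 0].astype(float) + 1e-6
    g = rgb[..., 1].astype(float) + 1e-6
    return 32.0 * np.log2(r / g)

def candidate_mask(rgb, ihb_thresh=20.0, rgb_thresh=100):
    """Candidate cancer regions: high hemoglobin pigment AND a high
    value in every RGB channel (both thresholds are illustrative)."""
    high_ihb = ihb_index(rgb) > ihb_thresh
    bright = np.all(rgb > rgb_thresh, axis=-1)
    return high_ihb & bright
```

The morphological filtering of the resulting mask, which the paper uses to pick the final target region, is omitted here.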

Camera-based Music Score Recognition Using Inverse Filter

  • Nguyen, Tam;Kim, SooHyung;Yang, HyungJeong;Lee, GueeSang
    • International Journal of Contents
    • /
    • v.10 no.4
    • /
    • pp.11-17
    • /
    • 2014
  • The influence of the acquisition environment on music score images captured by a camera has not yet been seriously examined. Existing Optical Music Recognition (OMR) systems attempt to recognize music score images captured by a scanner under ideal conditions, so when such systems process images affected by distortion, different viewpoints, or suboptimal illumination, their recognition accuracy and processing time are unacceptable for practical deployment. In this paper, a novel, lightweight but effective approach for dealing with the issues of camera-based music scores is proposed. Based on staff line information, musical rules, run-length coding, and projection, all regions of interest are determined. Templates created from an inverse filter are then used to recognize the music symbols, so fragmentation and deformation problems, as well as missed recognitions, can be overcome. The system was evaluated on a dataset of real images captured by a smartphone; the achieved recognition rate and processing time were competitive with state-of-the-art works. In addition, the system was designed to be lightweight compared with other approaches, which mostly adopt machine learning algorithms, to allow deployment on portable devices with limited computing resources.
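
The run-length step for locating staff lines can be sketched with a standard OMR heuristic: the most common black run approximates line thickness, and the most common black+white run approximates line spacing. This is a generic sketch, not the paper's exact procedure:

```python
import numpy as np
from collections import Counter

def run_lengths(column):
    """Run-length encode a binary pixel column (1 = ink).
    Returns a list of (value, length) pairs."""
    runs, prev, count = [], column[0], 0
    for v in column:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

def staff_line_stats(binary_img):
    """Estimate staff line thickness and spacing as the most common
    black run and black+white run over all columns."""
    thick, space = Counter(), Counter()
    for col in binary_img.T:
        runs = run_lengths(col)
        for i, (v, n) in enumerate(runs):
            if v == 1:
                thick[n] += 1
                if i + 1 < len(runs):
                    space[n + runs[i + 1][1]] += 1
    return thick.most_common(1)[0][0], space.most_common(1)[0][0]
```

These two statistics then parameterize the staff removal and the symbol templates downstream.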

Model-based Curved Lane Detection using Geometric Relation between Camera and Road Plane (카메라와 도로평면의 기하관계를 이용한 모델 기반 곡선 차선 검출)

  • Jang, Ho-Jin;Baek, Seung-Hae;Park, Soon-Yong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.2
    • /
    • pp.130-136
    • /
    • 2015
  • In this paper, we propose a robust curved lane marking detection method. Several lane detection methods have been proposed, but most of them consider only straight lanes, and curved-lane detection has received far less attention. This paper proposes a new curved lane detection and tracking method that is robust to various illumination conditions. First, the proposed method detects straight lanes using a robust road-feature image. Using the geometric relation between the vehicle camera and the road plane, several circle models are generated and projected as curved lane models onto the camera images. On top of the detected straight lanes, the curved lane models are superimposed to match the road-feature image, and each curve model receives votes based on the distribution of road features. Finally, the curve model with the most votes is selected as the true curve model. The performance and efficiency of the proposed algorithm are shown in experimental results.
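
The voting step over projected curve models might look like the following sketch, which scores each candidate curve by its road-feature support; the projection of circle models into the image via the camera-road geometry is assumed done upstream:

```python
import numpy as np

def vote_curves(feature_img, curves, tol=1.0):
    """Score each candidate curve by the number of road-feature pixels
    lying within `tol` pixels of it, and return the winning curve index.
    `curves` is a list of (N, 2) arrays of projected (row, col) points."""
    ys, xs = np.nonzero(feature_img)
    feats = np.stack([ys, xs], axis=1).astype(float)
    votes = []
    for pts in curves:
        # distance from every feature pixel to its nearest curve point
        d = np.linalg.norm(feats[:, None, :] - pts[None, :, :], axis=2)
        votes.append(int(np.sum(d.min(axis=1) <= tol)))
    return int(np.argmax(votes)), votes
```

The curve with the highest vote count plays the role of the "true curve model" in the abstract.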

Auto Parts Visual Inspection in Severe Changes in the Lighting Environment (조명의 변화가 심한 환경에서 자동차 부품 유무 비전검사 방법)

  • Kim, Giseok;Park, Yo Han;Park, Jong-Seop;Cho, Jae-Soo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.12
    • /
    • pp.1109-1114
    • /
    • 2015
  • This paper presents an improved learning-based visual inspection method for auto parts inspection under severe lighting changes. Automobile sunroof frames are produced automatically by robots in most production lines. In the sunroof frame manufacturing process, a quality problem arises when parts such as bolts are missing. Instead of manual sampling inspection with mechanical jig instruments, a learning-based machine vision system was proposed in previous research [1]. When applied to the actual sunroof frame production process, however, the inspection accuracy of that vision system drops considerably because of severe illumination changes. To overcome this capricious environment, selective feature vectors and cascade classifiers are used for each auto part, and the inspection accuracy is further improved by re-learning on misclassified data. The effectiveness of the proposed visual inspection method is verified through extensive experiments on a real sunroof production line.
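
The re-learning concept for misclassified data can be sketched with a toy classifier; the nearest-centroid model below merely stands in for the paper's cascade classifiers, and the oversampling scheme is our simplification:

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Toy stand-in for the cascade classifier: one centroid per class."""
    classes = sorted(set(y))
    return {c: X[np.array(y) == c].mean(axis=0) for c in classes}

def predict(model, X):
    classes = list(model)
    cents = np.stack([model[c] for c in classes])
    d = np.linalg.norm(X[:, None, :] - cents[None, :, :], axis=2)
    return [classes[i] for i in d.argmin(axis=1)]

def relearn(X, y, rounds=3):
    """Re-learning loop: misclassified samples are duplicated into the
    training set so the next fit pays more attention to them."""
    Xt, yt = list(X), list(y)
    model = nearest_centroid_fit(np.array(Xt), yt)
    for _ in range(rounds):
        pred = predict(model, np.array(Xt))
        wrong = [i for i, (p, t) in enumerate(zip(pred, yt)) if p != t]
        if not wrong:
            break
        for i in wrong:                     # oversample the hard cases
            Xt.append(Xt[i]); yt.append(yt[i])
        model = nearest_centroid_fit(np.array(Xt), yt)
    return model
```

In the paper the misclassified frames come from the production line itself, and the retrained model replaces the deployed one.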

AdaBoost-based Real-Time Face Detection & Tracking System (AdaBoost 기반의 실시간 고속 얼굴검출 및 추적시스템의 개발)

  • Kim, Jeong-Hyun;Kim, Jin-Young;Hong, Young-Jin;Kwon, Jang-Woo;Kang, Dong-Joong;Lho, Tae-Jung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.11
    • /
    • pp.1074-1081
    • /
    • 2007
  • This paper presents a method for real-time face detection and tracking that combines the AdaBoost and CAMShift algorithms. AdaBoost selects important features, called weak classifiers, from many candidate image features by tuning the weight of each feature during learning. Although it extracts objects with excellent performance, its computing time is high because multi-scale windows must search the whole image region, so direct application is difficult for real-time tasks such as multi-task operating systems, robots, and mobile environments. CAMShift, an improvement of the Mean-shift algorithm for the video streaming environment, tracks the object of interest at high speed based on the hue of the target region, but its detection efficiency is poor under dynamic illumination. We therefore propose a combination of AdaBoost and CAMShift that improves computing speed while retaining good face detection performance. The method was verified on real image sequences containing one or more faces.
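
The AdaBoost/CAMShift combination can be sketched as a handoff scheduler that re-runs the expensive detector only when tracking confidence drops; `detect` and `track` below are placeholders for the real algorithms, and `min_conf` is an assumed parameter:

```python
def detect_and_track(frames, detect, track, min_conf=0.5):
    """Handoff logic combining a slow detector (e.g., an AdaBoost face
    detector) with a fast tracker (e.g., CAMShift): detect to
    (re)initialize, then track frame-to-frame, re-detecting only when
    the tracker's confidence falls below `min_conf`."""
    box, results = None, []
    for frame in frames:
        if box is None:
            box = detect(frame)            # expensive full-image search
        else:
            box, conf = track(frame, box)  # cheap local tracking update
            if conf < min_conf:
                box = detect(frame)        # tracker lost: re-detect
        results.append(box)
    return results
```

This is why the combination is fast: the multi-scale AdaBoost search runs only on initialization and tracking failures, not on every frame.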

2D Face Image Recognition and Authentication Based on Data Fusion (데이터 퓨전을 이용한 얼굴영상 인식 및 인증에 관한 연구)

  • 박성원;권지웅;최진영
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.11 no.4
    • /
    • pp.302-306
    • /
    • 2001
  • Because face images have many variations (expression, illumination, orientation of the face, etc.), no popular method achieves a high recognition rate. To overcome this difficulty, data fusion, which combines various sources of information, has been studied. Previous data-fusion research, however, fused additional biometric information (fingerprint, voice, etc.) with the face image. In this paper, results from several face image recognition modules are fused cooperatively without using additional biometric information. To fuse the results of the individual modules, we use a re-defined mass function based on Dempster-Shafer fusion theory. Experimental results from fusing several face recognition modules show that the proposed fusion model performs better than a single face recognition module, without using additional biometric information.
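
The fusion step rests on Dempster's rule of combination. A textbook version over two mass functions looks like the following; the paper's re-defined masses, built from the outputs of the face-recognition modules, are not reproduced here:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions given as
    dicts mapping frozenset hypotheses to masses."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb        # mass assigned to the empty set
    k = 1.0 - conflict                     # normalization factor
    if k <= 0:
        raise ValueError("total conflict: sources are incompatible")
    return {h: m / k for h, m in combined.items()}
```

For example, two modules both favoring identity A but disagreeing in degree yield a combined mass that favors A more strongly than either module alone.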

Three-dimensional Head Tracking Using Adaptive Local Binary Pattern in Depth Images

  • Kim, Joongrock;Yoon, Changyong
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.16 no.2
    • /
    • pp.131-139
    • /
    • 2016
  • Recognition of human motions has become a major area of computer vision due to its potential for human-computer interfaces (HCI) and surveillance. Among existing recognition techniques for human motions, head detection and tracking is the basis of all human motion recognition. Various approaches have tried to detect and trace the position of the human head precisely in two-dimensional (2D) images, but the problem remains challenging because human appearance changes greatly with pose and images are affected by illumination changes. To enhance head detection and tracking, real-time three-dimensional (3D) data acquisition sensors such as time-of-flight cameras and the Kinect depth sensor have recently been used. In this paper, we propose an effective feature extraction method, called the adaptive local binary pattern (ALBP), for depth-image-based applications. In contrast to the well-known conventional local binary pattern (LBP), the proposed ALBP not only extracts shape information without texture in depth images, but is also invariant to distance changes in range images. We apply the proposed ALBP to head detection and tracking in depth images to show its effectiveness and usefulness.
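
One plausible reading of the distance-invariance claim is an LBP whose comparison threshold scales with the center depth; the sketch below is our simplification to a 3x3 neighborhood and is not the paper's exact ALBP definition:

```python
import numpy as np

def adaptive_lbp(depth, k=0.05):
    """LBP over a depth image with an adaptive threshold: a neighbor
    bit is set only if the neighbor is deeper than the center by more
    than k * center, which makes the code roughly invariant to the
    object's distance from the sensor."""
    h, w = depth.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = depth[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if depth[i + di, j + dj] > c * (1.0 + k):
                    code |= 1 << bit
            out[i - 1, j - 1] = code
    return out
```

Because the threshold is relative rather than absolute, scaling the whole depth map (moving the head nearer or farther) leaves the codes unchanged.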

Solder Joint Inspection Using a Neural Network and Fuzzy Rule-Based Classification Method (신경회로망과 퍼지 규칙을 이용한 인쇄회로 기판상의 납땜 형상검사)

  • Ko, Kuk-Won;Cho, Hyung-Suck;Kim, Jong-Hyeong;Kim, Sung-Kwon
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.6 no.8
    • /
    • pp.710-718
    • /
    • 2000
  • In this paper we describe an approach to automating the visual inspection of solder joint defects of SMCs (Surface Mounted Components) on PCBs (Printed Circuit Boards) using a neural network and a fuzzy rule-based classification method. The surface of a solder joint is inherently curved, tiny, and specularly reflective, which makes it difficult to capture a good image of the joint. Moreover, the shape of a solder joint varies greatly with the soldering conditions, and shapes are not identical even among joints of the same soldering quality, which makes it difficult to classify solder joints by quality. A neural network and fuzzy rule-based classification method is proposed to efficiently build human-like classification criteria for solder joint shapes. The performance of the proposed approach is tested on numerous samples of commercial computer PCBs and compared with the performance of human inspectors and of a conventional Kohonen network.
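
The fuzzy rule-based part can be illustrated with triangular membership functions over a single shape feature; the feature, sets, and labels below are purely illustrative, and the paper combines several features with a neural network:

```python
def tri(x, a, b, c):
    """Triangular membership function: rises from a to a peak at b,
    falls back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify_joint(fillet_height):
    """Toy fuzzy rules on a single (hypothetical) normalized fillet
    height feature; the winning label is the one with maximum
    membership."""
    memberships = {
        "insufficient": tri(fillet_height, -0.1, 0.0, 0.4),
        "good":         tri(fillet_height,  0.2, 0.5, 0.8),
        "excess":       tri(fillet_height,  0.6, 1.0, 1.1),
    }
    return max(memberships, key=memberships.get), memberships
```

The overlap between adjacent sets is what lets the classifier express the gradual quality boundaries a human inspector uses.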
