• Title/Summary/Keyword: Image Based Lighting


Adaptive Model-based Multi-object Tracking Robust to Illumination Changes and Overlapping (조명변화와 겹침에 강건한 적응적 모델 기반 다중객체 추적)

  • Lee Kyoung-Mi;Lee Youn-Mi
    • Journal of KIISE: Software and Applications / v.32 no.5 / pp.449-460 / 2005
  • This paper proposes a method to track persons robustly under illumination changes and partial occlusions in color video frames acquired from a fixed camera. To cope with appearance changes caused by varying illumination, a time-independent intrinsic image is used to remove illumination-induced noise from each frame and is adaptively updated frame by frame. A hierarchical human model that includes body color information is used to track persons through occlusions. The tracked human model is kept in a persons' list for some time after the corresponding person leaves the scene and is recovered from the list when the person re-enters. The proposed method was evaluated in several indoor and outdoor scenarios. The experiments demonstrate the effectiveness of the adaptive model-based method, which corrects person color information distorted by lighting changes and successfully tracks persons that overlap within a frame.
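A minimal sketch of how a time-independent intrinsic image could be maintained and used to normalize illumination frame by frame, assuming an exponential-smoothing update; the function names and the smoothing factor `alpha` are illustrative, not the paper's exact formulation.

```python
import numpy as np

def update_illumination(illum, frame, alpha=0.05):
    """Exponentially smooth a per-pixel illumination estimate (hypothetical update rule)."""
    frame = frame.astype(np.float32)
    if illum is None:
        return frame
    return (1.0 - alpha) * illum + alpha * frame

def intrinsic_image(frame, illum, eps=1e-6):
    """Divide out the slowly varying illumination to obtain an approximately
    time-independent, reflectance-like image for matching the human model."""
    return frame.astype(np.float32) / (illum + eps)

# Synthetic stand-in for video frames from a fixed camera.
video_frames = [np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8) for _ in range(5)]

illum = None
for frame in video_frames:
    illum = update_illumination(illum, frame)
    normalized = intrinsic_image(frame, illum)
    # ...match the hierarchical color model against `normalized` here...
```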

Design of Three-dimensional Face Recognition System Using Optimized PRBFNNs and PCA : Comparative Analysis of Evolutionary Algorithms (최적화된 PRBFNNs 패턴분류기와 PCA알고리즘을 이용한 3차원 얼굴인식 알고리즘 설계 : 진화 알고리즘의 비교 해석)

  • Oh, Sung-Kwun;Oh, Seung-Hun;Kim, Hyun-Ki
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.6 / pp.539-544 / 2013
  • In this paper, we design a three-dimensional face recognition algorithm using polynomial-based RBFNNs and propose a method to evaluate its recognition performance. In two-dimensional face recognition, performance degrades under external conditions such as facial pose and lighting. To compensate for these shortcomings, face recognition is performed on three-dimensional images. A face image is acquired with a three-dimensional scanner before recognition, and the frontal facial form is obtained through pose compensation. The depth values of the face are then extracted using the Point Signature method. Because the extracted data are high-dimensional, they can cause problems during training and recognition, so their dimensionality is reduced with the PCA algorithm. For effective training, the classifier parameters are optimized with evolutionary algorithms, and the recognition performance is compared across the PSO, DE, and GA algorithms.
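A brief sketch of the PCA dimension-reduction step before classification. An RBF-kernel SVM stands in for the polynomial-based RBFNN classifier, and the synthetic feature matrix stands in for the Point Signature depth features; the PSO/DE/GA parameter optimization is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC  # stand-in classifier; the paper uses polynomial-based RBFNNs
from sklearn.model_selection import train_test_split

# Synthetic stand-in for high-dimensional depth (Point Signature) features.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1000))   # 200 face samples, 1000-dimensional features
y = rng.integers(0, 5, size=200)   # 5 identities

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# PCA reduces the dimensionality before training, as described in the abstract.
pca = PCA(n_components=50).fit(X_tr)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(pca.transform(X_tr), y_tr)
print("recognition rate:", clf.score(pca.transform(X_te), y_te))
```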

A Driver's Condition Warning System using Eye Aspect Ratio (눈 영상비를 이용한 운전자 상태 경고 시스템)

  • Shin, Moon-Chang;Lee, Won-Young
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.2 / pp.349-356 / 2020
  • This paper describes the implementation of a driver's condition warning system that uses the eye aspect ratio to help prevent car accidents. The proposed system consists of a camera that captures the driver's eyes, a Raspberry Pi that processes the eye information from the camera, and a buzzer and vibrator that warn the driver. To detect and recognize the driver's eyes, the histogram of oriented gradients and deep-learning-based face landmark estimation are used. The system first calculates the driver's eye aspect ratio from six coordinates around the eye and then measures the eye aspect ratio values with the eyes open and with the eyes closed. These two values are used to calculate the threshold needed to determine the eye state. Because the threshold is determined adaptively from the driver's own eye aspect ratio, the system can use an optimal threshold when assessing the driver's condition. In addition, the system synthesizes an input image from the gray-scale and LAB model images so that it can operate in low lighting conditions.
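A short sketch of the eye aspect ratio computation from the six eye landmarks and an adaptively chosen threshold. The EAR formula is the commonly used one, (|p2-p6| + |p3-p5|) / (2 |p1-p4|); the threshold rule and the `ratio` parameter are illustrative assumptions, since the abstract does not spell out the exact formula.

```python
import numpy as np

def eye_aspect_ratio(pts):
    """EAR from the six eye landmarks p1..p6 (ordered as in the common
    68-point face landmark scheme): (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    p1, p2, p3, p4, p5, p6 = np.asarray(pts, dtype=float)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def adaptive_threshold(ear_open, ear_closed, ratio=0.5):
    """Hypothetical per-driver threshold placed between the measured
    open-eye and closed-eye EAR values (the paper's exact rule may differ)."""
    return ear_closed + ratio * (ear_open - ear_closed)

# Example: calibration values measured once, then applied per frame.
thr = adaptive_threshold(ear_open=0.32, ear_closed=0.12)
ear = eye_aspect_ratio([(0, 3), (2, 1), (4, 1), (6, 3), (4, 5), (2, 5)])
print("eyes closed" if ear < thr else "eyes open")
```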

Development of Smart Tote Bags with Marquage Techniques Using Optical Fiber and LEDs (광섬유와 LED를 활용한 마카쥬(marquage) 기법의 스마트 토트백 개발)

  • Park, Jinhee;Kim, Sang Jin;Kim, Jooyong
    • Journal of Fashion Business / v.25 no.1 / pp.51-64 / 2021
  • The purpose of this study was to develop smart bags that combine fashion trends with smart information technologies such as light-emitting diodes (LEDs) and optical fibers, by grafting marquage techniques that have recently become popular as part of eco-fashion. E-textiles were applied by designing leather tote bags that show off LED luminescence. Two tote bags were produced, a white peacock design and a black paisley design; the LED light-emitting method was divided into two types, incremental lighting and random light emission, to suit each design, and the placement of the optical fibers was also reversed depending on the design. The circuits for the LEDs and optical fibers were produced according to the design, with flexible conductive fabric laser-cut and attached along the circuit lines instead of wire. A separate connector was 3D-modeled and connected to the high-luminosity LEDs and optical fiber bundles. The optical-fiber logo expressed a subtle image using a white LED without offsetting the LED's sharp luminous effect, suggesting that LEDs and fiber optics can be expressed together in harmony without appearing heterogeneous. Overall, the LEDs and optical-fiber fabric harmonized well in the marquage-style fashion bag, with no sense of it being a mechanical device. The circuit part was made of conductive fabric, an e-textile that feels like a thin, flexible fabric. The study confirmed that the bag was developed as a smart wearable product that can be used in everyday life.
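A toy simulation of the two light-emitting modes mentioned above, incremental lighting and random light emission; the LED count, timing, and console output are illustrative and are not the bags' actual firmware.

```python
import random
import time

NUM_LEDS = 8  # illustrative count; the bags' real LED layout is design-specific

def incremental_lighting(delay=0.1):
    """Turn LEDs on one after another (the 'incremental lighting' mode)."""
    state = [False] * NUM_LEDS
    for i in range(NUM_LEDS):
        state[i] = True
        print("".join("*" if on else "." for on in state))
        time.sleep(delay)

def random_light_emission(steps=8, delay=0.1):
    """Toggle randomly chosen LEDs (the 'random light emission' mode)."""
    state = [False] * NUM_LEDS
    for _ in range(steps):
        state[random.randrange(NUM_LEDS)] ^= True
        print("".join("*" if on else "." for on in state))
        time.sleep(delay)

incremental_lighting()
random_light_emission()
```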

A study on liquid crystal-based electrical polarization control technology for polarized image monitoring device (편광 영상감시 장치를 위한 액정 기반 전기적 편광 조절 기술 연구)

  • Ahn, Hyeon-Sik;Lim, Seong-Min;Jang, Eun-Jeong;Choi, Yoonseuk
    • Journal of IKEEE / v.26 no.3 / pp.416-421 / 2022
  • In this study, we present a fully automated system that combines camera technology with liquid crystal technology to create a polarization camera capable of detecting the partial linear polarization of light reflected from an object. Using twisted nematic (TN) liquid crystals, which electro-optically modulate the polarization plane of light, eliminates the need to mechanically rotate a polarizing filter in front of the camera lens. The images obtained with this technique are processed by computer software. In addition, while liquid crystal panels are conventionally produced in a square shape, many camera lenses are round and lighting or other driving units are installed around the lens, so space is optimized by applying a circular liquid crystal panel. Through this development, an electrically switchable and space-optimized liquid crystal polarizer is realized.
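For context, a minimal sketch of how partial linear polarization could be computed from frames captured at several effective analyzer angles, here the standard four-angle Stokes estimate; the paper's liquid-crystal switching scheme and number of states may differ.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Per-pixel linear Stokes parameters from four analyzer orientations.
    With a TN liquid-crystal cell, the effective analyzer angle is switched
    electrically instead of rotating a filter in front of the lens."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def dolp_aolp(s0, s1, s2, eps=1e-6):
    """Degree and angle of linear polarization from the Stokes parameters."""
    dolp = np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)
    aolp = 0.5 * np.arctan2(s2, s1)
    return dolp, aolp

# Synthetic example: four captures of the same scene at different analyzer states.
rng = np.random.default_rng(1)
frames = [rng.uniform(0.0, 1.0, (120, 160)) for _ in range(4)]
dolp, aolp = dolp_aolp(*linear_stokes(*frames))
print(dolp.mean(), aolp.mean())
```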

Improvement of Multiple-sensor based Frost Observation System (MFOS v2) (다중센서 기반 서리관측 시스템의 개선: MFOS v2)

  • Suhyun Kim;Seung-Jae Lee;Kyu Rang Kim
    • Korean Journal of Agricultural and Forest Meteorology / v.25 no.3 / pp.226-235 / 2023
  • This study aimed to address the shortcomings of the Multiple-sensor-based Frost Observation System (MFOS). The developed observation system improves on the existing one: based on the leaf wetness sensor (LWS), it not only detects frost but also serves to predict surface temperature, a major factor in frost occurrence. The existing system had three limitations: 1) ice (frost) formation on the LWS surface is difficult to observe in RGB camera images because the sensor surface reflects most visible light, 2) RGB images captured before and after sunrise are dark, and 3) the thermal infrared camera shows only relative high and low temperatures. To make ice (frost) on the LWS surface identifiable, an LWS painted black was installed together with three sheets of glass at the same height as an auxiliary means of checking for ice (frost). For RGB imaging before and after sunrise, synchronized LED lighting was installed so that its power turns on and off according to the camera shooting schedule. The thermal infrared camera, which previously indicated only relative temperature, was improved to extract a temperature value per pixel, and its accuracy was verified by comparison with the surface temperature sensor installed by the National Institute of Meteorological Sciences (NIMS). Installing and operating the MFOS v2 with these improvements demonstrated improved accuracy and efficiency of automatic frost observation and enhanced the usefulness of the data as input for the frost prediction model.
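A small sketch of the kind of per-pixel temperature verification described above: comparing a region of a radiometric thermal image against a reference surface-temperature reading. The ROI, metrics, and data are illustrative assumptions, not the study's procedure or results.

```python
import numpy as np

def compare_to_reference(thermal_degC, ref_degC, roi):
    """Compare per-pixel thermal-camera temperatures against a reference
    surface-temperature reading inside a region of interest.
    `thermal_degC` is an HxW array already converted to degrees Celsius;
    `roi` is (row_start, row_stop, col_start, col_stop)."""
    r0, r1, c0, c1 = roi
    patch = thermal_degC[r0:r1, c0:c1]
    bias = float(patch.mean() - ref_degC)
    rmse = float(np.sqrt(np.mean((patch - ref_degC) ** 2)))
    return bias, rmse

# Synthetic example: a frame near -2 degC compared with a -1.8 degC reference sensor.
frame = np.random.normal(-2.0, 0.3, size=(240, 320))
print(compare_to_reference(frame, ref_degC=-1.8, roi=(100, 140, 150, 190)))
```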

The I-MCTBoost Classifier for Real-time Face Detection in Depth Image (깊이영상에서 실시간 얼굴 검출을 위한 I-MCTBoost)

  • Joo, Sung-Il;Weon, Sun-Hee;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.19 no.3 / pp.25-35 / 2014
  • This paper proposes a boosting-based classification method for real-time face detection. The proposed method uses depth images to ensure robust face detection under changes in lighting and face size, and uses depth difference features for learning and recognition with the I-MCTBoost classifier. I-MCTBoost performs recognition by connecting strong classifiers that are built from weak classifiers. The weak classifiers are learned as follows: depth difference features are generated, eight of these features are combined to form a weak classifier, and each feature is expressed as a binary bit. Strong classifiers are learned by repeatedly selecting a specified number of weak classifiers, and they achieve strong classification through a learning process in which the weights of the training samples are updated and training data are added. This paper explains the depth difference features and proposes a learning method for the weak and strong classifiers of I-MCTBoost. Finally, the proposed classifier is compared with a classifier using the conventional MCT through qualitative and quantitative analyses to establish its feasibility and efficiency.
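A minimal sketch of combining eight binary depth-difference features into one 8-bit weak-classifier code; the exact feature definition, offsets, and threshold are illustrative assumptions rather than the paper's I-MCTBoost specification. A boosted strong classifier would then sum weighted votes from a set of such codes.

```python
import numpy as np

def depth_difference_code(patch_depth, center, offsets, threshold=0.0):
    """Combine eight binary depth-difference features into one 8-bit code.
    Each bit is 1 when the depth at an offset position exceeds the centre
    pixel's depth by more than `threshold` (illustrative definition)."""
    cy, cx = center
    code = 0
    for bit, (dy, dx) in enumerate(offsets):  # len(offsets) == 8
        if patch_depth[cy + dy, cx + dx] - patch_depth[cy, cx] > threshold:
            code |= 1 << bit
    return code

# Example: the 8 neighbours around the centre of a small depth patch.
offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
patch = np.random.uniform(0.5, 2.0, size=(5, 5))  # depth in metres
print(format(depth_difference_code(patch, (2, 2), offsets), "08b"))
```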

3D Pointing for Effective Hand Mouse in Depth Image (깊이영상에서 효율적인 핸드 마우스를 위한 3D 포인팅)

  • Joo, Sung-Il;Weon, Sun-Hee;Choi, Hyung-Il
    • Journal of the Korea Society of Computer and Information / v.19 no.8 / pp.35-44 / 2014
  • This paper proposes a 3D pointing interface designed for the efficient operation of a hand mouse. The proposed method uses depth images to obtain high-quality results even under changes in lighting and environmental conditions, and uses the normal vector of the palm to perform 3D pointing. First, the hand region is detected and tracked with an existing conventional method; based on this information, the palm region is predicted and a region of interest is obtained. The region of interest is then approximated by a plane equation and the normal vector is extracted. Next, to ensure stable control, interpolation is performed using the extracted normal vector and the intersection point is detected. For stability and efficiency, a dynamic weight based on the sigmoid function is applied to the detected intersection point, which is finally converted into 2D coordinates. The paper explains the methods for detecting the region of interest and the direction vector, and proposes an interpolation and dynamic-weighting method to stabilize control. Lastly, qualitative and quantitative analyses verify that the proposed 3D pointing method delivers stable control.
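A short sketch of two steps described above: estimating the palm's normal vector with a least-squares plane fit and stabilizing the pointer with a sigmoid-based dynamic weight. The weighting parameters and blending rule are illustrative assumptions.

```python
import numpy as np

def fit_plane_normal(points_xyz):
    """Least-squares plane fit to the palm's 3D points; returns the unit
    normal vector (direction of smallest variance of the centred points)."""
    pts = np.asarray(points_xyz, dtype=float)
    centred = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]
    return normal / np.linalg.norm(normal)

def dynamic_weight(distance, k=1.0, d0=0.0):
    """Sigmoid weight that damps small jitters while following large,
    intentional movements (k and d0 are illustrative tuning parameters)."""
    return 1.0 / (1.0 + np.exp(-k * (distance - d0)))

def smooth_point(prev_pt, new_pt):
    """Blend the previous and newly detected intersection points using the
    sigmoid weight so the on-screen pointer stays stable."""
    w = dynamic_weight(np.linalg.norm(new_pt - prev_pt))
    return (1.0 - w) * prev_pt + w * new_pt

# Example: noisy palm points lying roughly on the plane z = 1.
rng = np.random.default_rng(2)
palm = np.c_[rng.uniform(-0.1, 0.1, (200, 2)), 1.0 + rng.normal(0, 0.002, 200)]
print(fit_plane_normal(palm))
print(smooth_point(np.array([100.0, 100.0]), np.array([103.0, 98.0])))
```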

Optimal Ambient Illumination Study for Soft-Copy Ultrasound Images (소프트 카피 초음파 이미지를 보기 위한 최적의 주변광 조도 연구)

  • An, Hyun;Lee, Hyo-Yeong
    • Journal of the Korean Society of Radiology / v.13 no.2 / pp.209-216 / 2019
  • The purpose of this study was to suggest the optimal ambient illumination level for proper visualization when inspecting and reading images on the CRT and LCD monitors used for ultrasound examination and reading. Twenty ultra-sonographers served as evaluators, divided into four groups of five according to years of experience (1-5, 6-10, 11-15 years, and above). The evaluation consisted of 32 questions; for each, soft-copy ultrasound images were viewed for 30 seconds under ambient illumination of 10, 25, and 100 lux. Responses were given on a 6-point scale (1 = definitely no lesion (normal), 2 = possibly not a lesion, 3 = probably not a lesion, 4 = possibly a lesion, 5 = probably a lesion, 6 = definitely a lesion). ROC analysis of the soft-copy readings used for lesion detection showed the highest sensitivity, specificity, and AUC at 10 lux, indicating that an ambient illumination of 10 lux provides optimal detection of lesions in soft-copy ultrasound images. These results can serve as basic data for designing the ambient lighting of ultrasound imaging and reading rooms.
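For illustration, a minimal ROC analysis over 6-point confidence ratings of the kind described above, using scikit-learn; the data below are synthetic placeholders, not the study's results.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Illustrative data: truth is 1 when a lesion is actually present, 0 otherwise;
# ratings use the 6-point scale (1 = definitely no lesion ... 6 = definitely a lesion).
truth = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
ratings = np.array([6, 2, 5, 4, 1, 3, 6, 2, 5, 3])

auc = roc_auc_score(truth, ratings)
fpr, tpr, thresholds = roc_curve(truth, ratings)
print("AUC:", auc)
for t, se, sp in zip(thresholds, tpr, 1.0 - fpr):
    print(f"rating >= {t}: sensitivity {se:.2f}, specificity {sp:.2f}")
```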

Implementation of a walking-aid light with machine vision-based pedestrian signal detection (머신비전 기반 보행신호등 검출 기능을 갖는 보행등 구현)

  • Jihun Koo;Juseong Lee;Hongrae Cho;Ho-Myoung An
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.17 no.1 / pp.31-37 / 2024
  • In this study, we propose a machine vision-based pedestrian signal detection algorithm that operates efficiently even in computing-resource-constrained environments. The algorithm minimizes the impact of ambient lighting, including light glare, by sequentially applying HSV color-space image processing, binarization, morphological operations, and labeling. It is deliberately kept simple in structure so that it runs smoothly and reliably on embedded systems with limited computing resources. In addition to signal detection, the proposed pedestrian signal system incorporates IoT functionality, allowing wireless integration with a web server through which users can conveniently monitor and control the status of the signal system. A 50 W LED pedestrian signal was also successfully controlled with the implemented system. The proposed system aims to provide fast and efficient pedestrian signal detection and control in resource-constrained environments, with potential applicability to real-world road scenarios and a contribution toward safer and more intelligent traffic systems.
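A compact OpenCV sketch of the processing chain named above (HSV conversion, binarization, morphological operations, labeling); the HSV bounds, kernel size, and minimum blob area are illustrative values, not the deployed parameters.

```python
import cv2
import numpy as np

def detect_pedestrian_signal(bgr, lower_hsv=(40, 80, 80), upper_hsv=(90, 255, 255),
                             min_area=150):
    """Minimal HSV -> binarization -> morphology -> labeling pipeline.
    The HSV bounds roughly cover a green 'walk' signal and would be tuned on site."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove glare speckles
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    blobs = [(int(stats[i, cv2.CC_STAT_AREA]), tuple(centroids[i]))
             for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return blobs  # [(area, (cx, cy)), ...] candidate signal regions

# Usage: blobs = detect_pedestrian_signal(cv2.imread("frame.jpg"))
```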