• Title/Summary/Keyword: single camera


A Study on the Restoration of a Low-Resolution Iris Image into a High-Resolution One Based on Multiple Multi-Layered Perceptrons (다중 다층 퍼셉트론을 이용한 저해상도 홍채 영상의 고해상도 복원 연구)

  • Shin, Kwang-Yong;Kang, Byung-Jun;Park, Kang-Ryoung;Shin, Jae-Ho
    • Journal of Korea Multimedia Society / v.13 no.3 / pp.438-456 / 2010
  • Iris recognition identifies a person from the unique pattern of the iris. It has been reported that, for good recognition performance, the diameter of the iris region in the captured image should be greater than 200 pixels, so previous iris systems used a zoom-lens camera, which increases the size and cost of the system. To overcome these problems, we propose a new method for enhancing the accuracy of iris recognition on low-resolution iris images captured without a zoom lens. This research is novel in two ways compared with previous works. First, it is the first study to analyze the degradation of iris recognition performance with decreasing image resolution while excluding other factors such as image blurring and occlusion by the eyelid and eyelashes. Second, to restore a high-resolution iris image from a single low-resolution one, we propose a new method based on multiple multi-layered perceptrons (MLPs), each trained for a different edge direction of the iris patterns. As a result, recognition accuracy on the restored images was considerably enhanced. Experimental results showed that when iris images down-sampled by 6% compared with the original image were restored to high resolution with the proposed method, the EER of iris recognition was reduced by 0.133% (from 1.485% to 1.352%) compared with bilinear interpolation.
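
The restoration scheme summarized above can be illustrated with a short, hedged sketch in Python. It is not the authors' pipeline: it merely bins low-resolution patches by a crude dominant-edge-direction estimate and trains one scikit-learn MLP per bin to predict the corresponding high-resolution patch; the number of bins, the network size, and the patch handling are assumptions.

```python
# Rough sketch (not the paper's exact method): one MLP per edge-direction bin,
# each mapping a flattened low-resolution patch to its high-resolution counterpart.
import numpy as np
from sklearn.neural_network import MLPRegressor

N_DIRECTIONS = 4  # assumed bins, e.g. 0, 45, 90, 135 degrees

def edge_direction_bin(patch):
    """Assign a patch to a direction bin from its average gradient orientation."""
    gy, gx = np.gradient(patch.astype(float))
    angle = np.arctan2(gy.sum(), gx.sum()) % np.pi
    return int(angle / (np.pi / N_DIRECTIONS)) % N_DIRECTIONS

mlps = [MLPRegressor(hidden_layer_sizes=(64,), max_iter=500) for _ in range(N_DIRECTIONS)]

def train(low_patches, high_patches):
    """low_patches/high_patches: lists of corresponding 2D arrays."""
    bins = [edge_direction_bin(p) for p in low_patches]
    for d in range(N_DIRECTIONS):
        X = np.array([lp.ravel() for lp, b in zip(low_patches, bins) if b == d])
        y = np.array([hp.ravel() for hp, b in zip(high_patches, bins) if b == d])
        if len(X):
            mlps[d].fit(X, y)

def restore(low_patch, high_shape):
    """Restore one patch with the MLP matching its edge direction (assumes it was trained)."""
    d = edge_direction_bin(low_patch)
    return mlps[d].predict(low_patch.ravel()[None, :])[0].reshape(high_shape)
```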

An Illumination-Robust Driver Monitoring System Based on Eyelid Movement Measurement (조명에 강인한 눈꺼풀 움직임 측정기반 운전자 감시 시스템)

  • Park, Il-Kwon;Kim, Kwang-Soo;Park, Sangcheol;Byun, Hye-Ran
    • Journal of KIISE: Software and Applications / v.34 no.3 / pp.255-265 / 2007
  • In this paper, we propose a new illumination-robust drowsy-driver monitoring system for intelligent vehicles that uses a single CCD (charge-coupled device) camera, in both day and night conditions. For such a system, which monitors the driver's eyes while driving, eye detection and the measurement of eyelid movement are important preprocessing steps. We therefore propose an efficient illumination-compensation algorithm to improve eye-detection performance, together with an eyelid-movement measurement method for reliable drowsiness detection under various illumination conditions. For real-time operation, a cascaded SVM (support vector machine) is applied as an efficient eye-verification method. To evaluate the proposed algorithm, we collected video data of drivers under various day and night illumination conditions. On these data we achieved an average eye-detection rate of over 98%, and PERCLOS (the percentage of eye-closed time during a period) is reported as the drowsiness-detection result of the proposed system for the collected video data.
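
The PERCLOS measure mentioned at the end of the abstract is easy to state in code. The sketch below assumes a per-frame eyelid-openness value in [0, 1] produced by the eye-detection stage; the window length and closure threshold are illustrative, not the paper's settings.

```python
# Minimal PERCLOS sketch: percentage of eye-closed frames in a sliding window.
from collections import deque

class Perclos:
    def __init__(self, window_frames=900):          # e.g. 30 s at 30 fps (assumed)
        self.window = deque(maxlen=window_frames)

    def update(self, eyelid_openness, closed_threshold=0.2):
        """eyelid_openness: 0.0 (fully closed) .. 1.0 (fully open) for the current frame."""
        self.window.append(eyelid_openness < closed_threshold)
        return 100.0 * sum(self.window) / len(self.window)

# Usage: raise a drowsiness alarm when the returned PERCLOS value exceeds a tuned threshold.
```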

IGRINS First Light Instrumental Performance

  • Park, Chan;Yuk, In-Soo;Chun, Moo-Young;Pak, Soojong;Kim, Kang-Min;Pavel, Michael;Lee, Hanshin;Oh, Heeyoung;Jeong, Ueejeong;Sim, Chae Kyung;Lee, Hye-In;Le, Huynh Anh Nguyen;Strubhar, Joseph;Gully-Santiago, Michael;Oh, Jae Sok;Cha, Sang-Mok;Moon, Bongkon;Park, Kwijong;Brooks, Cynthia;Ko, Kyeongyeon;Han, Jeong-Yeol;Nah, Jakyuong;Hill, Peter C.;Lee, Sungho;Barnes, Stuart;Park, Byeong-Gon;T., Daniel
    • The Bulletin of The Korean Astronomical Society / v.39 no.1 / pp.52.2-52.2 / 2014
  • The Immersion Grating Infrared Spectrometer (IGRINS) is an exceptionally compact infrared cross-dispersed echelle spectrograph with high-resolution, high-sensitivity optical performance. It is the first instrument in this field to feature a silicon immersion grating. IGRINS covers the entire portion of the wavelength range between 1.45 and 2.45 μm that is accessible from the ground in a single exposure, with a spectral resolution of 40,000. Individual volume phase holographic (VPH) gratings serve as cross-dispersing elements for the separate spectrograph arms covering the H and K bands. On the 2.7 m Harlan J. Smith telescope at the McDonald Observatory, the slit size is 1″ × 15″. IGRINS has a plate scale of 0.27″ per pixel on a 2048 × 2048 pixel Teledyne Scientific & Imaging HAWAII-2RG detector with a SIDECAR ASIC cryogenic controller. The instrument comprises four subsystems: a calibration unit, an input relay optics module, a slit-viewing camera, and nearly identical H and K spectrograph modules. The use of a silicon immersion grating and a compact white-pupil design keeps the spectrograph's collimated beam size at 25 mm, which allows the entire cryogenic system to be contained in a moderately sized rectangular vacuum chamber. The fabrication and assembly of the optical and mechanical hardware components were completed in 2013. In this presentation, we describe the major design characteristics of the instrument and the early performance estimated from first-light commissioning at the McDonald Observatory.
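
As a small arithmetic aside, the quoted resolving power R = 40,000 translates into the following resolution element and velocity resolution; the 2.2 μm reference wavelength is an assumed K-band midpoint, not a number from the abstract.

```python
# Resolution element and velocity resolution implied by R = 40,000 (from the abstract).
C_KM_S = 299_792.458                 # speed of light in km/s
R = 40_000
lam_um = 2.2                         # assumed K-band reference wavelength
delta_lam_nm = lam_um * 1e3 / R      # ~0.055 nm per resolution element
delta_v_km_s = C_KM_S / R            # ~7.5 km/s velocity resolution
print(f"d(lambda) = {delta_lam_nm:.3f} nm, dv = {delta_v_km_s:.1f} km/s")
```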


A Study on Controlling IPTV Interface Based on Tracking of Face and Eye Positions (얼굴 및 눈 위치 추적을 통한 IPTV 화면 인터페이스 제어에 관한 연구)

  • Lee, Won-Oh;Lee, Eui-Chul;Park, Kang-Ryoung;Lee, Hee-Kyung;Park, Min-Sik;Lee, Han-Kyu;Hong, Jin-Woo
    • The Journal of Korean Institute of Communications and Information Sciences / v.35 no.6B / pp.930-939 / 2010
  • Recently, much research on more convenient input devices based on gaze detection has been actively conducted in human-computer interaction. However, these previous methods are difficult to use in an IPTV environment because they require additional wearable devices or do not work at a distance. To overcome these problems, we propose a new way of controlling the IPTV interface using face and eye positions detected with a single static camera. Moreover, even when the face or eyes are not successfully detected by the Adaboost algorithm, the IPTV interface can still be controlled using motion vectors calculated by a pyramidal KLT (Kanade-Lucas-Tomasi) feature tracker. These are the two novelties of our research compared with previous works. The proposed approach has the following advantages. Unlike previous research, it can be used at a distance of about 2 m. Because it does not require the user to wear additional equipment, there is no restriction on face movement and it is highly convenient. Experimental results showed that the proposed method operates in real time at 15 frames per second. We confirmed that the previous input device could be adequately replaced by the proposed method.
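
The detect-then-track idea described above can be sketched with standard OpenCV building blocks. This is not the authors' implementation: a bundled Haar cascade stands in for the AdaBoost-based face detector, and pyramidal KLT optical flow propagates the last detection when the detector fails; all parameters are illustrative.

```python
# Sketch: face detection with a Haar (AdaBoost) cascade, falling back to pyramidal
# KLT tracking of the last detected face when detection fails on the current frame.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
prev_gray, prev_pts = None, None

def locate_face(frame_bgr):
    """Return a detected face box (x, y, w, h), a tracked centroid (cx, cy), or None."""
    global prev_gray, prev_pts
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces):
        x, y, w, h = faces[0]
        # Re-seed KLT features inside the freshly detected face region.
        pts = cv2.goodFeaturesToTrack(gray[y:y + h, x:x + w], maxCorners=50,
                                      qualityLevel=0.01, minDistance=5)
        prev_pts = None if pts is None else pts + np.array([x, y], dtype=np.float32)
        prev_gray = gray
        return (x, y, w, h)
    if prev_gray is not None and prev_pts is not None and len(prev_pts):
        # Detection failed: propagate the last known points with pyramidal KLT.
        next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
        good = next_pts[status.ravel() == 1]
        prev_gray, prev_pts = gray, good.reshape(-1, 1, 2)
        if len(good):
            cx, cy = good.reshape(-1, 2).mean(axis=0)
            return (int(cx), int(cy))   # centroid only; enough to steer an interface cursor
    return None
```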

Face and Hand Tracking using MAWUPC algorithm in Complex background (복잡한 배경에서 MAWUPC 알고리즘을 이용한 얼굴과 손의 추적)

  • Lee, Sang-Hwan;An, Sang-Cheol;Kim, Hyeong-Gon;Kim, Jae-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.2 / pp.39-49 / 2002
  • This paper proposes the MAWUPC (Motion Adaptive Weighted Unmatched Pixel Count) algorithm to track multiple objects of similar color. The MAWUPC algorithm combines color and motion information in a new and effective way. We apply it to face and hand tracking against a complex background in image sequences captured with a single camera. MAWUPC is an improvement of the previously proposed AWUPC (Adaptive Weighted Unmatched Pixel Count) algorithm, which is based on the concept of Moving Color, the effective combination of color and motion information. The proposed algorithm incorporates a color transform for enhancing a specific color, the UPC (Unmatched Pixel Count) operation for detecting motion, and a discrete Kalman filter for reflecting motion. It has the advantage of reducing the adverse effect of occlusion among target objects while at the same time rejecting static background objects whose color is similar to that of the tracked objects. We show the efficiency of the proposed MAWUPC algorithm through face- and hand-tracking experiments on several image sequences containing complex backgrounds, face-hand occlusion, and crossing hands.
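
A minimal sketch of the weighted UPC (Unmatched Pixel Count) operation is given below. The color-enhancing transform is reduced to a simple hue-based weight, the Kalman-filter motion prediction of the full MAWUPC algorithm is omitted, and the thresholds are assumptions.

```python
# Sketch: count pixels that changed between frames, weighted by closeness to a target color.
import numpy as np

def color_weight(frame_hsv, target_hue, hue_tol=10):
    """Soft mask emphasising pixels near the target hue (OpenCV hue range 0-179 assumed)."""
    diff = np.abs(frame_hsv[..., 0].astype(int) - target_hue)
    diff = np.minimum(diff, 180 - diff)          # hue wraps around
    return np.clip(1.0 - diff / hue_tol, 0.0, 1.0)

def weighted_upc(prev_gray, curr_gray, weight, pixel_thresh=15):
    """Weighted count of pixels whose intensity changed by more than pixel_thresh."""
    unmatched = np.abs(curr_gray.astype(int) - prev_gray.astype(int)) > pixel_thresh
    return float((unmatched * weight).sum())
```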

A Euclidean Reconstruction of 3D Face Data Using a One-Shot Absolutely Coded Pattern (단일 투사 절대 코드 패턴을 이용한 3차원 얼굴 데이터의 유클리디안 복원)

  • Kim, Byoung-Woo;Yu, Sun-Jin;Lee, Sang-Youn
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.6 / pp.133-140 / 2005
  • This paper presents a rapid face-shape acquisition system composed of two cameras and one projector. The technique works by projecting a pattern onto the object and capturing two images with the two cameras. We use a 'one-shot' system that produces 3D data from a single image per camera, which suits our goal of rapid data acquisition. We use an 'absolutely coded pattern' based on the hue and saturation of the pattern lines, in which every pattern line has an absolute identification number. We solve the correspondence problem between the two images using epipolar geometry and these absolute identification numbers. Compared with a 'relatively coded pattern', which uses relative identification numbers, the absolutely coded pattern enables rapid acquisition of 3D data through one-to-one point matching along an epipolar line. Because we use two cameras, the two captured images have similar hue and saturation; this means the same absolute identification numbers appear in both images, so the absolutely coded pattern can be used to solve the correspondence problem. The proposed technique is applied to face data, and the total time for shape acquisition is estimated.
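
The correspondence step based on absolute identification numbers can be illustrated as follows. The sketch assumes rectified images (so epipolar lines are image rows) and that each detected pattern point already carries its decoded absolute ID; the depth step uses the generic rectified-stereo relation Z = f·B/d rather than the authors' calibration.

```python
# Sketch: one-to-one matching of pattern points by (epipolar row, absolute ID),
# followed by the standard rectified-stereo depth relation.
import numpy as np

def match_by_id(left_pts, right_pts):
    """left_pts/right_pts: iterables of (row, col, pattern_id) from rectified images."""
    right_index = {(row, pid): col for row, col, pid in right_pts}
    matches = []
    for row, col_l, pid in left_pts:
        col_r = right_index.get((row, pid))
        if col_r is not None:
            matches.append((row, col_l, col_r))
    return matches

def depth_from_disparity(matches, focal_px, baseline_m):
    """Z = f * B / d for each match (assumes rectified, calibrated cameras)."""
    return [(row, col_l, focal_px * baseline_m / (col_l - col_r))
            for row, col_l, col_r in matches if col_l != col_r]
```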

AN IN-VITRO EVALUATION OF SEALER PLACEMENT METHODS IN SIMULATED ROOT CANAL EXTENSIONS (근관 내 불규칙 확장부에서 sealer 적용방법에 따른 충전 효과 평가)

  • Kim, Sung-Young;Lee, Mi-Jeong;Moon, Jang-Won;Lee, Se-Joon;Yu, Mi-Kyung
    • Restorative Dentistry and Endodontics / v.30 no.1 / pp.31-37 / 2005
  • The aim of this study was to evaluate the effectiveness of sealer placement methods in simulated root canal extensions. Forty resin blocks were obtained from Endo-Training Blocs. In each block, a simulated root canal was prepared with a #20, .08-taper GT file. After each block was split longitudinally into two halves, a standardized groove was prepared on one canal wall of the two halves to simulate canal extensions with various irregularities. The two halves of each block were reassembled, and all simulated root canals were obturated by the single-cone method with AH26 sealer. Four different methods of sealer placement were used: group A, #20 K-file; group B, ultrasonic file; group C, lentulo spiral; group D, EZ-Fill bi-directional spiral. All obturated blocks were stored in 100% humidity at 37°C for 1 week. Using a low-speed saw, each block was sectioned horizontally. Images of the sections were taken with a stereomicroscope at ×30 magnification and a digital camera. The amount of sealer in the groove was evaluated using a scoring system, with a higher score indicating better sealing effectiveness. The data were statistically analyzed with Fisher's exact test. The sealing score was lowest in group A, especially at the middle area of the canal extensions, and the difference from the other groups was statistically significant. In conclusion, the ultrasonic file, lentulo spiral, and EZ-Fill bi-directional spiral were effective methods of sealer placement in simulated canal extensions, whereas the K-file was the least effective, especially at the middle area of the canal extensions.
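
For reference, the kind of group comparison mentioned above can be run with Fisher's exact test as in the sketch below; the 2x2 table of adequate/inadequate coverage counts is entirely hypothetical and is not the study's data, which used an ordinal scoring system.

```python
# Illustration only: Fisher's exact test on a hypothetical 2x2 table of sealer coverage.
from scipy.stats import fisher_exact

table = [[8, 2],   # hypothetical group D: adequate vs. inadequate coverage
         [3, 7]]   # hypothetical group A: adequate vs. inadequate coverage
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```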

Design and Implementation of CW Radar-based Human Activity Recognition System (CW 레이다 기반 사람 행동 인식 시스템 설계 및 구현)

  • Nam, Jeonghee;Kang, Chaeyoung;Kook, Jeongyeon;Jung, Yunho
    • Journal of Advanced Navigation Technology / v.25 no.5 / pp.426-432 / 2021
  • Continuous-wave (CW) Doppler radar obtains signals in a non-contact manner and, unlike a camera, avoids privacy concerns. This paper therefore proposes a human activity recognition (HAR) system based on CW Doppler radar and presents the hardware design and implementation results for its acceleration. The CW Doppler radar measures signals from continuous human motion. To obtain a spectrogram of a single motion from the continuous signal, an algorithm for counting the number of movements is proposed. In addition, to minimize computational complexity and memory usage, a binarized neural network (BNN) is used to classify human motions, achieving an accuracy of 94%. To accelerate the complex operations of the BNN, an FPGA-based BNN accelerator was designed and implemented. The proposed HAR system was implemented using 7,673 logic elements, 12,105 registers, 10,211 combinational ALUTs, and 18.7 Kb of block memory. Performance evaluation showed that the operation speed was improved by 99.97% compared with the software implementation.
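
The signal front end described above, which segments the continuous radar return into single-motion spectrograms, might look roughly like the following; the energy-threshold heuristic and STFT parameters are assumptions, and the BNN classifier and its FPGA accelerator are not shown.

```python
# Rough sketch: split a continuous Doppler baseband signal into motion segments by
# short-time energy, then compute one spectrogram per segment for classification.
import numpy as np
from scipy.signal import spectrogram

def motion_segments(baseband, frame_len=256, energy_thresh=None):
    """Return (start, end) sample indices of segments whose energy exceeds a threshold."""
    usable = len(baseband) // frame_len * frame_len
    frames = np.abs(baseband[:usable]).reshape(-1, frame_len)
    energy = (frames ** 2).sum(axis=1)
    if energy_thresh is None:
        energy_thresh = 3 * np.median(energy)       # assumed heuristic
    active = energy > energy_thresh
    segments, start = [], None
    for i, is_active in enumerate(active):
        if is_active and start is None:
            start = i * frame_len
        elif not is_active and start is not None:
            segments.append((start, i * frame_len))
            start = None
    if start is not None:
        segments.append((start, usable))
    return segments

def segment_spectrogram(baseband, fs, seg):
    """One single-motion spectrogram (two-sided, so complex I/Q input is also accepted)."""
    _, _, Sxx = spectrogram(baseband[seg[0]:seg[1]], fs=fs,
                            nperseg=128, noverlap=64, return_onesided=False)
    return Sxx
```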

Red fluorescence of oral bacteria interacting with Porphyromonas gingivalis (Porphyromonas gingivalis가 일부 구강미생물의 형광 발현에 미치는 영향)

  • Kim, Se-Yeon;Woo, Dong-Hyeob;Lee, Min-Ah;Kim, Ji-Soo;Lee, Jung-Ha;Jeong, Seung-Hwa
    • Journal of Korean Academy of Oral Health / v.41 no.1 / pp.22-27 / 2017
  • Objectives: Dental plaque is composed of about 700 bacterial species. Some oral microorganisms are known to produce porphyrin and therefore emit red fluorescence when illuminated with blue light at wavelengths below 410 nm. Porphyromonas gingivalis belongs to the genus Porphyromonas, which is characterized by porphyrin production. The aim of this study was to evaluate the red fluorescence emission of several oral microorganisms interacting with P. gingivalis. Methods: Five bacterial strains (P. gingivalis, Streptococcus mutans, Lactobacillus casei, Actinomyces naeslundii, and Fusobacterium nucleatum) were used. Tryptic soy agar supplemented with hemin, vitamin K3, and sheep blood was used as the growth medium. The fluorescence of bacterial colonies was evaluated under 405 nm blue light using a Quantitative Light-induced Fluorescence-Digital (QLF-D) camera system. Each bacterium was cultured alone and co-cultured in close proximity with P. gingivalis. The red/green (R/G) ratio of each fluorescence image was calculated, and differences in the R/G ratio among growth conditions were compared using the Mann-Whitney test (P<0.05). Results: Single-cultured S. mutans, L. casei, and A. naeslundii colonies emitted red fluorescence (R/G ratio = 2.15 ± 0.06, 4.31 ± 0.17, and 5.52 ± 1.29, respectively), whereas F. nucleatum colonies emitted green fluorescence (R/G ratio = 1.36 ± 0.06). The R/G ratios of A. naeslundii and F. nucleatum increased when P. gingivalis was co-cultured with each bacterium (P<0.05). In contrast, the R/G ratios of S. mutans and L. casei decreased when P. gingivalis was co-cultured with each bacterium (P=0.002 and 0.003). Conclusions: This study confirmed that P. gingivalis can affect the red fluorescence of other oral bacteria under 405 nm blue light. We conclude that P. gingivalis plays an important role in the red fluorescence emission of dental biofilm.
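
The R/G-ratio measurement and the group comparison can be sketched as follows; the colony mask, the image channel convention, and the numbers in the usage example are placeholders rather than the study's data.

```python
# Sketch: mean red over mean green inside a colony region, then a Mann-Whitney U comparison.
import numpy as np
from scipy.stats import mannwhitneyu

def rg_ratio(image_rgb, colony_mask):
    """image_rgb: HxWx3 array (R, G, B channel order assumed); colony_mask: boolean HxW."""
    r = image_rgb[..., 0][colony_mask].mean()
    g = image_rgb[..., 1][colony_mask].mean()
    return r / g

# Hypothetical R/G ratios of single-cultured vs. co-cultured colonies of one species.
single_culture = [2.10, 2.20, 2.15]
co_culture = [1.80, 1.90, 1.85]
stat, p = mannwhitneyu(single_culture, co_culture, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
```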

Human Skeleton Keypoints based Fall Detection using GRU (PoseNet과 GRU를 이용한 Skeleton Keypoints 기반 낙상 감지)

  • Kang, Yoon Kyu;Kang, Hee Yong;Weon, Dal Soo
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.2 / pp.127-133 / 2021
  • Recent studies of human falls have analyzed fall motions with recurrent neural networks (RNNs), a deep-learning approach that gives good results when 2D human poses are detected from a single color image. In this paper, we investigate a detection method that estimates the positions of the head and shoulder keypoints and the acceleration of their positional change, using skeletal keypoint information extracted with PoseNet from images obtained with a low-cost 2D RGB camera, thereby increasing the accuracy of fall judgments. In particular, we propose a fall detection method based on the characteristics of the post-fall posture within this fall motion-analysis framework. A public data set was used to extract human skeletal features, and in an experiment to find a feature-extraction method that achieves high classification accuracy, the proposed method detected falls with a 99.8% success rate, more effectively than a conventional method that uses raw skeletal data.
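
A rough PyTorch sketch of the pipeline described above is given below: head and shoulder keypoint positions plus their frame-to-frame velocity and acceleration are fed to a small GRU classifier. The keypoint indices, feature set, and network sizes are assumptions rather than the paper's exact configuration.

```python
# Sketch: keypoint position/velocity/acceleration features followed by a GRU classifier.
import torch
import torch.nn as nn

def fall_features(keypoints):                      # keypoints: (T, K, 2) tensor of (x, y)
    head_shoulders = keypoints[:, :3, :]           # assumed indices 0-2 = head, L/R shoulder
    pos = head_shoulders.flatten(1)                # (T, 6)
    vel = torch.diff(pos, dim=0, prepend=pos[:1])  # positional change per frame
    acc = torch.diff(vel, dim=0, prepend=vel[:1])  # acceleration of positional change
    return torch.cat([pos, vel, acc], dim=1)       # (T, 18)

class FallGRU(nn.Module):
    def __init__(self, in_dim=18, hidden=64):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)           # fall / no-fall logits
    def forward(self, x):                          # x: (B, T, in_dim)
        _, h = self.gru(x)
        return self.head(h[-1])
```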