• Title/Summary/Keyword: Infrared Communication (적외선 통신)

Search Result 319

The design of 4S-Van for GIS DB construction (GIS DB 구축을 위한 4S-VAN 설계)

  • Lee, Seung-Yong;Kim, Seong-Baek;Lee, Jong-Hun
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.10 no.3 s.21
    • /
    • pp.89-97
    • /
    • 2002
  • We have developed the 4S-Van system in order to maximize the interoperability of spatial data in 4S (GNSS, SIIS, GIS, ITS) by sharing and providing spatial data of remote sites. The 4S-Van system enables the acquisition and production of information for a GIS database, together with accurate position information, by combining and connecting GPS/IMU, laser, CCD (charge-coupled device) imaging, and wireless telecommunication technology. That is, the 4S-Van measures its position and attitude using an integrated GPS/IMU, takes two photographs of the front scene with two CCD cameras, computes the positions of objects by the space intersection method, and constructs a database that is compatible with existing vector database systems. Furthermore, infrared cameras and wireless communication techniques can be added to the 4S-Van for a variety of applications. In this paper, we discuss the design and functions of the 4S-Van equipped with GPS, CCD cameras, and an IMU.

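
The space intersection step described in the abstract (recovering a 3-D object position from its projections in the two CCD images) is standard linear triangulation. A minimal sketch, using hypothetical projection matrices rather than the 4S-Van's actual calibration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear space intersection (DLT): recover a 3-D point from its
    projections x1, x2 in two cameras with 3x4 projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3-D point is the right singular vector of A
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two hypothetical cameras: one at the origin, one shifted 0.5 m along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([1.0, 2.0, 10.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # ≈ [1. 2. 10.]
```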

Study on the line tracer robot applying the intellectual PID (지적 PID를 적용한 라인 트레이스 로봇에 관한 연구)

  • Lee, Dong-Heon;Kim, Min;Jeong, Jae-Hoon;Park, Won-Hyeon;Choi, Myoung-Hoon;Lim, Jae-Jun;Byun, Gi-Sik;Kim, Gwan-Hyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2016.05a
    • /
    • pp.731-733
    • /
    • 2016
  • The primary goal of a line tracer is to follow the guide line detected by its sensors quickly and accurately to the target position. Line tracers are used in various fields such as unmanned transport vehicles, laser cutting machines, and autonomous mobile robots capable of unmanned driving, and competitions are held annually at various universities, a field with great potential for progress depending on the application. However, large differences in running performance arise depending on the hardware design and the control scheme. In this paper, we design a PID controller to improve the characteristics of the line tracer and discuss ways of improving the properties of the system.

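
The PID control law at the heart of such a line tracer can be sketched as follows; the gains and the one-dimensional toy plant are illustrative assumptions, not values from the paper:

```python
class PID:
    """Minimal discrete PID controller; gains are illustrative, not from the paper."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy closed loop: the lateral offset from the line integrates the steering command.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
offset = 1.0  # initial lateral deviation from the line
for _ in range(5000):
    steer = pid.update(0.0 - offset)  # error = setpoint (0) - measured offset
    offset += steer * 0.01
print(round(offset, 6))  # driven to (near) zero
```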

Wearable Sensor-based Navigator Lookout Pattern Analysis Method (웨어러블 센서를 활용한 선박 항해사의 항해당직 패턴 분석 기법 연구)

  • Youn, Ik-Hyun;Kim, Sung-Cheol;Hwang, Tae Woong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2018.10a
    • /
    • pp.558-561
    • /
    • 2018
  • Human error is known to be the cause of the majority of maritime navigational accidents such as collision and grounding. Much of the relevant research has applied indirect methods such as surveys and interviews, which by their nature are limited in their ability to collect objective data regarding human error. Therefore, the purpose of this study is to overcome the limitations of measuring navigator human error by applying wearable sensors. Infrared sensors, housed using a 3-D printer to accommodate the special environment of a ship, were developed for the study. The results show a significant reliance on the Integrated Navigation System, including the Electronic Chart Display and Information System (ECDIS) and radar. The results are expected to motivate further research investigating the human errors of ship navigators in order to reduce maritime navigational accidents.


Real-time Eye Contact System Using a Kinect Depth Camera for Realistic Telepresence (Kinect 깊이 카메라를 이용한 실감 원격 영상회의의 시선 맞춤 시스템)

  • Lee, Sang-Beom;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.4C
    • /
    • pp.277-282
    • /
    • 2012
  • In this paper, we present a real-time eye contact system for realistic telepresence using a Kinect depth camera. To generate the eye contact image, we capture a pair of color and depth videos, and the single foreground user is separated from the background. Since the raw depth data include several types of noise, we apply a joint bilateral filtering method, and then apply a discontinuity-adaptive depth filter to the filtered depth map to reduce the disocclusion area. From the color image and the preprocessed depth map, we construct a user mesh model at the virtual viewpoint. The entire system is implemented through GPU-based parallel programming for real-time processing. Experimental results show that the proposed system is efficient in realizing eye contact, providing realistic telepresence.
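
The joint bilateral filtering step used to denoise the raw depth data can be sketched in one dimension: depth samples are averaged with weights drawn from the color guide image, so depth edges follow color edges. The signals and parameters below are toy assumptions, not Kinect data:

```python
import numpy as np

def joint_bilateral_1d(depth, guide, radius=2, sigma_s=2.0, sigma_r=10.0):
    """Toy 1-D joint bilateral filter: smooth `depth` using range weights
    computed from the `guide` signal, preserving edges present in the guide."""
    out = np.zeros_like(depth, dtype=float)
    n = len(depth)
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        j = np.arange(lo, hi)
        w = (np.exp(-((j - i) ** 2) / (2 * sigma_s ** 2))          # spatial weight
             * np.exp(-((guide[j] - guide[i]) ** 2) / (2 * sigma_r ** 2)))  # range weight
        out[i] = np.sum(w * depth[j]) / np.sum(w)
    return out

# A noisy depth step whose true edge is marked by the color guide.
guide = np.array([0, 0, 0, 0, 100, 100, 100, 100], dtype=float)
depth = np.array([10, 12, 9, 11, 50, 49, 52, 50], dtype=float)
print(joint_bilateral_1d(depth, guide))  # smoothed on each side, edge preserved
```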

Design Adaptive Acoustic Echo Canceller for AVRCP Bluetooth Hands Free (AVRCP 기반의 블루투스 핸즈프리를 위한 음향반향제거기 설계)

  • Jung, Yong-Gyu;Kim, Kyeong-Woong
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.9 no.4
    • /
    • pp.33-38
    • /
    • 2009
  • Recently, most A/V devices have adopted AVRCP and moved to mobile environments. AVRCP is designed to provide a standard interface for controlling A/V devices, hi-fi equipment, and the like, allowing a single remote control or other device to control all the A/V equipment to which a user has access. It may be used in concert with A2DP or VDP. AVRCP defines two roles, controller and target device. The controller translates a detected user action into an A/V control signal and transmits it to a remote Bluetooth-enabled device; controllers can be small, mixed-function mobile devices using Bluetooth technology. The functions available on a conventional infrared remote control can be realized in this protocol, and the remote control described in the protocol is designed specifically for A/V control. We designed a new DSP with an adaptive acoustic echo canceller for Bluetooth hands-free devices based on the AVRCP standard specification.

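
Adaptive acoustic echo cancellers of the kind the paper designs are commonly built on the normalized LMS (NLMS) algorithm. The sketch below assumes a short synthetic echo path, since the paper's DSP internals are not given in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical acoustic echo path from loudspeaker to microphone.
echo_path = np.array([0.5, -0.3, 0.2, 0.1])
N = len(echo_path)

far_end = rng.standard_normal(5000)            # far-end (loudspeaker) signal
echo = np.convolve(far_end, echo_path)[:5000]  # echo picked up by the mic

# NLMS adaptive filter: estimate the echo path and subtract the echo estimate.
w = np.zeros(N)
mu, eps = 0.5, 1e-8
for n in range(N, 5000):
    x = far_end[n - N + 1:n + 1][::-1]   # most recent N far-end samples
    e = echo[n] - w @ x                  # residual echo after cancellation
    w += mu * e * x / (x @ x + eps)      # normalized LMS update
print(np.round(w, 3))  # converges toward the echo path
```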

A Study on the Effective Management of Image Stage Gauge System (영상수위계 시스템의 효율적 운영에 관한 연구)

  • Kwon, Sung-Ill;Kim, Won;Lee, Chan-Joo;Kim, Dong-Gu
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2010.05a
    • /
    • pp.1916-1920
    • /
    • 2010
  • An image stage gauge is a device that automatically measures water level by photographing a staff gauge with a camera, processing the captured images, and converting them into water-level values. Unlike existing water-level instruments such as float, pressure, bubbler, ultrasonic, and radar types, it has the advantage that the water level can be verified directly by eye from the staff-gauge images. The measured level can thus be validated against the image data, improving measurement accuracy. Moreover, because the staff-gauge image and a full view of the observation site are captured simultaneously and transmitted in real time, the system can also be used to monitor river conditions during floods. The image stage gauge system consists largely of a main controller, a power unit, a server unit, and a camera unit. In the currently operating system, resetting the power supply requires resetting the entire system, and the whole system shuts off when mains power is cut; separate communication modules are used for inter-device communication; the camera unit contains additional equipment to control the lens and pan/tilt; and white LED lighting is switched on and off at every water-level recognition cycle at night. With such a power arrangement the system cannot be operated stably. Furthermore, although continuous level recognition is needed to minimize misrecognition, the white LED lighting and a video capture method of only two frames per second make continuous operation impractical. To operate the image stage gauge stably, a power control unit allowing per-device control was developed, an uninterruptible power supply (UPS) was installed to maintain power for at least 30 minutes after a mains outage, and the measurement data storage was changed from a hard-disk type to a flash SSD type. In addition, to allow continuous operation, the white LED lighting was replaced with infrared LED lighting, the recognition cycle was set to once per second, and the video capture method was improved to capture 25 frames per second. These improvements enable stable operation, reducing missing water-level data caused by system failures, and continuous operation is expected to minimize water-level misrecognition.

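
The improved capture timing (25 frames per second, with water-level recognition once per second) can be sketched as a simple frame scheduler; this is an illustration of the timing described above, not the system's actual code:

```python
FPS = 25             # improved capture rate: 25 frames per second
RECOG_PERIOD_S = 1   # recognize the water level once per second

def should_recognize(frame_index):
    """Trigger recognition on the first frame of each recognition window."""
    return frame_index % (FPS * RECOG_PERIOD_S) == 0

# Over 10 seconds of video, recognition fires exactly once per second.
recognized = [i for i in range(FPS * 10) if should_recognize(i)]
print(len(recognized))  # 10
```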

A Calibration Method for Multimodal dual Camera Environment (멀티모달 다중 카메라의 영상 보정방법)

  • Lim, Su-Chang;Kim, Do-Yeon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.19 no.9
    • /
    • pp.2138-2144
    • /
    • 2015
  • A multimodal dual camera system has a stereo-like configuration equipped with an infrared thermal camera and an optical camera. This paper presents stereo calibration methods for a multimodal dual camera system using a target board that can be recognized by both the thermal and the optical camera. While a typical stereo calibration method is usually performed with extracted intrinsic and extrinsic camera parameters, consecutive image-processing steps were applied in this paper as follows. First, corner points were detected in the two images, and the pixel error rate, the size difference, and the rotation angle between the two images were calculated using the pixel coordinates of the detected corner points. Second, calibration was performed with the calculated values via an affine transform. Last, the result image was reconstructed by mapping regions onto the calibrated image.
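
The affine-transform calibration step can be sketched as a least-squares fit between corresponding corner points. The point sets below are synthetic stand-ins for detected corners, related by an assumed small rotation, scale, and shift:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2-D affine transform (2x3 matrix) mapping src points to dst.
    Each correspondence contributes two linear equations in the six parameters."""
    n = len(src)
    A = np.zeros((2 * n, 6))
    b = dst.reshape(-1)          # interleaved [x0, y0, x1, y1, ...]
    A[0::2, 0:2] = src           # x-equations: a*sx + b*sy + c = dx
    A[0::2, 2] = 1
    A[1::2, 3:5] = src           # y-equations: d*sx + e*sy + f = dy
    A[1::2, 5] = 1
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)

# Synthetic corner points in one image and their matches in the other.
theta, s = np.deg2rad(3.0), 1.05
R = s * np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
t = np.array([4.0, -2.0])
src = np.array([[0, 0], [100, 0], [100, 80], [0, 80], [50, 40]], dtype=float)
dst = src @ R.T + t
M = estimate_affine(src, dst)
print(np.round(M, 3))  # recovers the rotation/scale block and the shift
```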

Active Facial Tracking for Fatigue Detection (피로 검출을 위한 능동적 얼굴 추적)

  • Kim, Tae-Woo;Kang, Yong-Seok
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.2 no.3
    • /
    • pp.53-60
    • /
    • 2009
  • The vision-based driver fatigue detection is one of the most prospective commercial applications of facial expression recognition technology, and facial feature tracking is the primary technical issue in it. Current facial tracking technology faces three challenges: (1) detection failure of some or all features due to a variety of lighting conditions and head motions; (2) multiple and non-rigid object tracking; and (3) feature occlusion when the head is at oblique angles. In this paper, we propose a new active approach. First, an active IR sensor is used to robustly detect pupils under variable lighting conditions. The detected pupils are then used to predict the head motion. Furthermore, face movement is assumed to be locally smooth so that a facial feature can be tracked with a Kalman filter. The simultaneous use of the pupil constraint and Kalman filtering greatly increases the prediction accuracy for each feature position. Feature detection is accomplished in the Gabor space in the vicinity of the predicted location. Local graphs consisting of identified features are extracted and used to capture the spatial relationships among detected features. Finally, graph-based reliability propagation is proposed to tackle the occlusion problem and verify the tracking results. The experimental results show the validity of our active approach to real-life facial tracking under variable lighting conditions, head orientations, and facial expressions.

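
The Kalman-filter tracking of a facial feature under the locally-smooth-motion assumption can be sketched with a standard constant-velocity model; the noise levels and the feature trajectory below are illustrative, not from the paper:

```python
import numpy as np

# Constant-velocity Kalman filter for one feature point, state = (x, y, vx, vy).
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)   # process noise (motion smoothness)
R = 4.0 * np.eye(2)    # measurement noise (pixels^2)

x = np.zeros(4)
P = 100.0 * np.eye(4)  # large initial uncertainty

def kalman_step(x, P, z):
    # Predict the feature position from the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measured feature position z.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Track a feature drifting 2 px/frame to the right with noisy detections.
rng = np.random.default_rng(1)
for k in range(50):
    z = np.array([2.0 * k, 0.0]) + rng.normal(0, 2, size=2)
    x, P = kalman_step(x, P, z)
print(np.round(x[:2], 1))  # estimate near the true final position (98, 0)
```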

Active Facial Tracking for Fatigue Detection (피로 검출을 위한 능동적 얼굴 추적)

  • 박호식;정연숙;손동주;나상동;배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2004.05b
    • /
    • pp.603-607
    • /
    • 2004
  • The vision-based driver fatigue detection is one of the most prospective commercial applications of facial expression recognition technology, and facial feature tracking is the primary technical issue in it. Current facial tracking technology faces three challenges: (1) detection failure of some or all features due to a variety of lighting conditions and head motions; (2) multiple and non-rigid object tracking; and (3) feature occlusion when the head is at oblique angles. In this paper, we propose a new active approach. First, an active IR sensor is used to robustly detect pupils under variable lighting conditions. The detected pupils are then used to predict the head motion. Furthermore, face movement is assumed to be locally smooth so that a facial feature can be tracked with a Kalman filter. The simultaneous use of the pupil constraint and Kalman filtering greatly increases the prediction accuracy for each feature position. Feature detection is accomplished in the Gabor space in the vicinity of the predicted location. Local graphs consisting of identified features are extracted and used to capture the spatial relationships among detected features. Finally, graph-based reliability propagation is proposed to tackle the occlusion problem and verify the tracking results. The experimental results show the validity of our active approach to real-life facial tracking under variable lighting conditions, head orientations, and facial expressions.


Real Time Eye and Gaze Tracking (실시간 눈과 시선 위치 추적)

  • 이영식;배철수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.2
    • /
    • pp.477-483
    • /
    • 2004
  • This paper describes preliminary results we have obtained in developing a computer vision system based on active IR illumination for real-time gaze tracking for interactive graphic display. Unlike most existing gaze tracking techniques, which often require a static head to work well and a cumbersome calibration process for each person, our gaze tracker can perform robust and accurate gaze estimation without calibration and under rather significant head movement. This is made possible by a new gaze calibration procedure that identifies the mapping from pupil parameters to screen coordinates using Generalized Regression Neural Networks (GRNN). With GRNN, the mapping does not have to be an analytical function, and head movement is explicitly accounted for by the gaze mapping function. Furthermore, the mapping function can generalize to individuals not used in training. The effectiveness of our gaze tracker is demonstrated by preliminary experiments involving gaze-contingent interactive graphic display.
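
The GRNN gaze mapping is, in essence, a kernel-weighted average of training targets (the Nadaraya-Watson form). The sketch below uses a synthetic pupil-to-screen mapping; the training data, dimensions, and bandwidth are illustrative assumptions, not the paper's:

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.2):
    """GRNN prediction: a Gaussian-kernel-weighted average of training targets,
    here mapping pupil parameters to screen coordinates."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w[:, None] * y_train).sum(axis=0) / w.sum()

# Synthetic training set: pupil (dx, dy) parameters to screen (u, v) coordinates,
# generated from a roughly linear mapping onto an 800x600 screen.
rng = np.random.default_rng(2)
X_train = rng.uniform(-1, 1, size=(200, 2))
y_train = X_train @ np.array([[800, 0], [0, 600]]) / 2 + [400, 300]

pred = grnn_predict(X_train, y_train, np.array([0.0, 0.0]))
print(np.round(pred))  # near the screen center
```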