• Title/Summary/Keyword: multi-camera system

Search Results: 477

Driver drowsiness recognition system based on camera image analysis (카메라 영상 분석 기반 운전자 졸음 인식 시스템)

  • Kim, Hyun-Suk; Choi, Min-Su; Bae, You-Suk
    • Annual Conference of KIPS / 2016.04a / pp.719-722 / 2016
  • Reduced driver attention accounts for a large share of traffic-accident causes. It can result from phone calls, device operation, or drowsiness, and most serious vehicle accidents are caused by drowsy driving, during which the driver's ability to control the vehicle and take defensive action deteriorates markedly. The proposed system receives and processes video data from a camera in real time to recognize the driver's drowsiness state, thereby providing a drowsiness-prevention function. Face and eye regions are detected with a Haar-like feature cascade classifier, eye blinking is recognized with an MLP (Multi-Layer Perceptron) trained on open-eye and closed-eye images, and drowsiness is determined with the PERCLOS (Percentage of Eye Closure) method. A recognition-rate test was conducted to verify the accuracy of the proposed method.
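
The detection-and-decision flow described in this abstract (Haar cascade face/eye detection, per-frame eye-state classification, PERCLOS over a sliding window) can be illustrated with OpenCV. The sketch below is not the authors' code: the MLP eye-state classifier is left as a placeholder, and the window length and drowsiness threshold are assumed values.

```python
# Minimal sketch of the described pipeline: Haar cascade face/eye detection,
# per-frame eye-state classification, and PERCLOS over a sliding window.
# The eye-state classifier and the thresholds are placeholders/assumptions.
from collections import deque
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

WINDOW = 90                # assumed ~3 s of frames at 30 fps
PERCLOS_THRESHOLD = 0.3    # assumed drowsiness threshold
closed_history = deque(maxlen=WINDOW)

def eye_is_closed(eye_roi):
    """Placeholder for the paper's MLP open/closed-eye classifier."""
    return False           # plug in any binary open/closed classifier here

def process_frame(frame):
    """Returns True when the PERCLOS value indicates drowsiness."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    closed = False
    for (x, y, w, h) in faces:
        face_roi = gray[y:y + h, x:x + w]
        eyes = eye_cascade.detectMultiScale(face_roi)
        if len(eyes) == 0:
            closed = True   # no visible eyes in the face region -> count as closed
        for (ex, ey, ew, eh) in eyes:
            closed = closed or eye_is_closed(face_roi[ey:ey + eh, ex:ex + ew])
    closed_history.append(1 if closed else 0)
    perclos = sum(closed_history) / len(closed_history)
    return perclos > PERCLOS_THRESHOLD
```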

Dual Autostereoscopic Display Platform for Multi-user Collaboration with Natural Interaction

  • Kim, Hye-Mi; Lee, Gun-A.; Yang, Ung-Yeon; Kwak, Tae-Jin; Kim, Ki-Hong
    • ETRI Journal / v.34 no.3 / pp.466-469 / 2012
  • In this letter, we propose a dual autostereoscopic display platform employing a natural interaction method, which is useful for sharing visual data among users. To provide 3D visualization of a model to users who collaborate with each other, a beamsplitter is used with a pair of autostereoscopic displays, creating the visual illusion of a floating 3D image. To interact with the virtual object, we track the user's hands with a depth camera. The gesture recognition technique we use operates without any initialization process, such as specific poses or gestures, and supports several commands for controlling virtual objects. Experimental results show that our system performs well in visualizing 3D models in real time and handling them under unconstrained conditions, such as complicated backgrounds or a user wearing short sleeves.
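
The hand tracking relies on a depth camera rather than markers or initialization poses. As a rough, hypothetical illustration of how a hand can be segmented from a depth image (not the ETRI implementation), one can threshold around the nearest surface and keep the largest blob; the 15 cm depth band used below is an assumption.

```python
# Illustrative sketch only: segment a hand from a depth image by keeping the
# region close to the nearest surface and taking the largest blob.
import numpy as np
import cv2

def segment_hand(depth_mm):
    """depth_mm: HxW uint16 depth image in millimetres from a depth camera."""
    valid = depth_mm > 0
    if not valid.any():
        return None
    nearest = depth_mm[valid].min()
    # Keep everything within ~15 cm of the nearest point (assumed hand depth band).
    mask = ((depth_mm >= nearest) & (depth_mm <= nearest + 150)).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)   # largest blob = hand candidate
    return cv2.boundingRect(hand)               # (x, y, w, h) for the gesture logic
```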

Development of KITSAT-3 camera and current status of the operation (우리별 3호 지구관측 카메라 개발 및 운용 현황)

  • 이준호; 유상근
    • Korean Journal of Optics and Photonics / v.12 no.5 / pp.382-388 / 2001
  • KITSAT-3, launched on May 26, 1999, carries an Earth-observation optical payload named MEIS (Multi-spectral Earth Imaging System). MEIS is a Mangin-mirror telescope with an aperture of 95 mm; from an altitude of 720 km it images the ground with a ground sampling distance of 13.8 m over a 48 km swath in three different observation bands. This paper first presents the optical design, then reports the results of manufacturing, integration, and testing of the optics. Finally, it briefly discusses the current status of MEIS operation.
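
As a back-of-the-envelope check (inferred from the quoted figures, not stated in the abstract), the swath and ground sampling distance imply the approximate cross-track pixel count and per-pixel angular resolution:

```latex
% Cross-track pixel count implied by the quoted swath and GSD (inferred):
N_{\text{pixels}} \approx \frac{\text{swath}}{\text{GSD}}
                  = \frac{48\,\text{km}}{13.8\,\text{m}} \approx 3.5\times 10^{3}
% Instantaneous field of view per pixel at 720 km altitude:
\text{IFOV} \approx \frac{\text{GSD}}{h}
            = \frac{13.8\,\text{m}}{720\,\text{km}} \approx 19\,\mu\text{rad}
```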


A 3D Object Tracking System Using a Multi-camera (다중 카메라를 이용한 3차원 개체 추적 시스템)

  • Lee, Sang-Geol; Koo, Kyung-Mo; Seo, Young-Wook; Cha, Eui-Young
    • Annual Conference of KIPS / 2004.05a / pp.781-784 / 2004
  • This system tracks the movement of a fish in an aquarium by simultaneously acquiring independent images from two cameras, processing them to obtain coordinates, and generating 3D coordinates. The proposed method consists of three main parts: simultaneous image acquisition from the two cameras, processing of the acquired images and object localization, and 3D coordinate generation. Images are captured from the two cameras simultaneously using a frame grabber, and the fish position in each image is detected using difference images over three consecutive frames and ART2 (Adaptive Resonance Theory). The detected coordinates from the two views are merged to generate a 3D coordinate, and the tracking result is rendered in 3D using OpenGL.
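
The per-camera motion detection and the merge into a 3D coordinate can be sketched as follows. This is an illustration only: the ART2 clustering step is omitted, and the merge assumes a hypothetical orthogonal arrangement of the two cameras, which the abstract does not specify.

```python
# Sketch of three-frame differencing per camera plus a naive merge of the two
# 2D detections into a 3D coordinate, assuming orthogonally mounted cameras
# (front view -> x,y; top view -> x,z). ART2 clustering is omitted.
import cv2
import numpy as np

def detect_moving_point(f_prev, f_curr, f_next):
    """Centroid of motion from three consecutive grayscale frames."""
    d1 = cv2.absdiff(f_curr, f_prev)
    d2 = cv2.absdiff(f_next, f_curr)
    motion = cv2.bitwise_and(d1, d2)
    _, mask = cv2.threshold(motion, 25, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # (u, v) in pixels

def merge_to_3d(front_uv, top_uv):
    """Assumed orthogonal setup: front camera gives (x, y), top camera (x, z)."""
    if front_uv is None or top_uv is None:
        return None
    x = (front_uv[0] + top_uv[0]) / 2.0   # x is seen by both views (averaged)
    y = front_uv[1]
    z = top_uv[1]
    return np.array([x, y, z])
```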


Human and Robot Tracking Using Histogram of Oriented Gradient Feature

  • Lee, Jeong-eom; Yi, Chong-ho; Kim, Dong-won
    • Journal of Platform Technology / v.6 no.4 / pp.18-25 / 2018
  • This paper describes a real-time human and robot tracking method in an Intelligent Space with multi-camera networks. The proposed method detects candidates for humans and robots using the histogram of oriented gradients (HOG) feature in an image. To classify humans and robots from these candidates in real time, we apply a cascaded structure in constructing a strong classifier from weak classifiers: a linear support vector machine (SVM) and a radial-basis-function (RBF) SVM. Using multiple-view geometry, the method estimates the 3D positions of humans and robots from their 2D coordinates in the image coordinate system and tracks their positions with a stochastic approach. To test the performance of the method, humans and robots were asked to move along given rectangular and circular paths. Experimental results show that the proposed method reduces the localization error and is well suited to practical human-centered services in the Intelligent Space.
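
The candidate-detection stage (HOG features with a linear SVM) can be illustrated with OpenCV's built-in people detector. This is only a stand-in: the paper's cascaded RBF-SVM stage, the robot class, and the multi-view 3D localization are not reproduced here.

```python
# Sketch of the HOG + linear-SVM candidate-detection stage using OpenCV's
# default people detector. The cascaded second stage (RBF SVM, human/robot
# labels) and the multi-view 3D localization are not shown.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_candidates(frame_bgr):
    """Return bounding boxes of human-like candidates in one camera image."""
    rects, weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8),
                                          padding=(8, 8), scale=1.05)
    # In the cascaded design, a stricter classifier would filter these
    # candidates and assign human/robot labels before 3D localization.
    return [tuple(r) for r in rects]
```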

High-precision Skeleton Extraction Method using Multi-view Camera System (다시점 카메라 시스템을 이용한 고정밀 스켈레톤 추출 기법)

  • Kim, Kyung-Jin; Park, Byung-Seo; Kim, Dong-Wook; Seo, Young-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.07a / pp.297-299 / 2020
  • This paper presents a method for extracting a high-precision skeleton from a photorealistic 3D model acquired with a multi-view camera system, without additional devices such as motion sensors. The 3D model generated by the multi-view camera system is projected into front, back, left, and right views using the corresponding mapping (projection) matrices, and a 2D skeleton is extracted from each projected image using deep learning. The 3D coordinates of the 2D skeletons are then computed by inverting the mapping matrices, and a high-precision skeleton is obtained through additional post-processing.
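
The inverse-mapping step, lifting 2D skeleton joints detected in the rendered views back to 3D, amounts to triangulation with the same projection matrices used for rendering. The sketch below is a generic illustration of that idea with placeholder names, not the authors' implementation.

```python
# Illustrative sketch: lift a 2D skeleton joint seen in two rendered views
# back to 3D by linear triangulation. P1, P2 are the 3x4 projection matrices
# used to render the views; the deep-learning 2D skeleton extractor is not shown.
import numpy as np
import cv2

def lift_joint_to_3d(P1, P2, uv1, uv2):
    """P1, P2: 3x4 projection matrices; uv1, uv2: the joint's 2D position in each view."""
    pts1 = np.array(uv1, dtype=np.float64).reshape(2, 1)
    pts2 = np.array(uv2, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4x1 homogeneous point
    X = (X_h[:3] / X_h[3]).ravel()                    # back to Euclidean xyz
    return X
```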


Design of FPGA Camera Module with AVB based Multi-viewer for Bus-safety (AVB 기반의 버스안전용 멀티뷰어의 FPGA 카메라모듈 설계)

  • Kim, Dong-jin; Shin, Wan-soo; Park, Jong-bae; Kang, Min-goo
    • Journal of Internet Computing and Services / v.17 no.4 / pp.11-17 / 2016
  • In this paper, we propose a multi-viewer system for bus safety that connects multiple HD cameras over AVB (Audio Video Bridging) Ethernet cabling with IP networking and an FPGA (Xilinx Zynq 702). This AVB (IEEE 802.1BA) system is designed for low latency on the FPGA and transmits HD video and audio signals in real time over the in-vehicle network. The proposed multi-viewer platform multiplexes H.264 video signals from four wide-angle HD cameras over existing 1 Gbps Ethernet and 2-wire 100 Mbps cables. A low-latency Zynq 702-based H.264 AVC codec design is also proposed to minimize the time delay of HD video transmission in the car-area network. The PSNR (peak signal-to-noise ratio) of the codec's encoding and decoding results was analyzed against the JM reference model, and the measured PSNR values from the Zynq 702-based multi-viewer with multiple cameras were confirmed to agree with the theoretical results. As a result, the proposed AVB multi-viewer platform with multiple cameras can be used for audio and video surveillance around a bus, owing to the low latency of the H.264 AVC codec design.
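
The PSNR figure used in the evaluation follows the standard definition; the snippet below computes it for a decoded frame against its reference (e.g. the JM reference decoder output) and is not tied to the authors' test harness.

```python
# Standard PSNR between a decoded frame and its reference frame, as used for
# the H.264 codec evaluation described above.
import numpy as np

def psnr(reference, decoded, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized 8-bit frames."""
    ref = reference.astype(np.float64)
    dec = decoded.astype(np.float64)
    mse = np.mean((ref - dec) ** 2)
    if mse == 0:
        return float("inf")     # identical frames
    return 10.0 * np.log10(max_value ** 2 / mse)
```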

Introduction to the Thermal Control System of the MSC (Multi-Spectral Camera) (MSC(Multi-Spectral Camera) 열제어 시스템 소개)

  • Kong, Jong-Pil; Heo, Haeng-Pal; Kim, Young-Sun; Park, Jong-Euk; Jang, Young-Jun
    • Aerospace Engineering and Technology / v.4 no.2 / pp.107-116 / 2005
  • As a unique payload of KOMPSAT-2, the MSC, comprising the EOS (Electro-Optical Sub-system), the PMU (Payload Management Unit), and the PDTS (Payload Data Transmission Sub-system), is designed to take one panchromatic and four multi-spectral images in the 450 nm~900 nm wavelength range and is currently undergoing final satellite I&T. It will perform Earth remote sensing for applications such as the acquisition of high-resolution images, surveillance of large-scale disasters and their countermeasures, and surveys of natural resources. Under the hostile influence of extreme space environmental conditions, due to deep space and direct solar flux, thermal design is of major importance in designing a payload: there are tight temperature-range restrictions for the electro-optical elements, while on the other hand there are low power-consumption requirements due to the limited energy source on the spacecraft. This paper describes the details of the thermal control system for the MSC.
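
For context, payload thermal control of this kind is typically sized from a steady-state radiative balance between internal dissipation, absorbed environmental flux, and the heat radiated to space. The relation below is a generic illustration; the actual MSC design parameters are not given in the abstract.

```latex
% Generic steady-state radiator balance used in spacecraft thermal design
% (illustrative only; not the MSC design values):
Q_{\text{internal}} + \alpha\, S\, A_{\text{abs}}
  = \varepsilon\, \sigma\, A_{\text{rad}}\, T^{4}
% Q_internal: dissipated electrical power, S: solar flux,
% alpha/epsilon: absorptivity/emissivity, T: radiator temperature.
```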


Real-Time Compressed Video Acquisition System for Stereo 360 VR (Stereo 360 VR을 위한 실시간 압축 영상 획득 시스템)

  • Choi, Minsu; Paik, Joonki
    • Journal of Broadcast Engineering / v.24 no.6 / pp.965-973 / 2019
  • In this paper, a stereo 4K@60fps 360 VR real-time video capture system consisting of video-stream capture, video-encoding, and stitching modules is designed. The system produces stereo 4K@60fps 360 VR video by stitching six 2K@60fps streams captured in real time from six cameras through the HDMI interface. In the capture phase, video is captured from each camera in real time using multiple threads. In the encoding phase, raw-frame memory transmission and parallel encoding are used to reduce the resource usage of data transmission between the capture and stitching modules. In the stitching phase, real-time stitching is achieved by preprocessing the stitching calibration.
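
The capture stage, with one thread per camera feeding the later encoding and stitching stages, can be sketched as below. Device indices and the frame-dropping policy are assumptions; the encoding and stitching modules are not shown.

```python
# Sketch of the multi-threaded capture stage: one thread per camera so that
# all six streams are read concurrently. Device indices are assumptions.
import threading
import queue
import cv2

NUM_CAMERAS = 6
frame_queues = [queue.Queue(maxsize=4) for _ in range(NUM_CAMERAS)]

def capture_loop(cam_index, out_queue, stop_event):
    cap = cv2.VideoCapture(cam_index)          # HDMI grabber exposed as a capture device
    while not stop_event.is_set():
        ok, frame = cap.read()
        if not ok:
            break
        if out_queue.full():
            try:
                out_queue.get_nowait()         # drop the oldest frame to stay real-time
            except queue.Empty:
                pass
        out_queue.put(frame)
    cap.release()

stop_event = threading.Event()
threads = [threading.Thread(target=capture_loop,
                            args=(i, frame_queues[i], stop_event), daemon=True)
           for i in range(NUM_CAMERAS)]
for t in threads:
    t.start()
# Downstream: encode each queue in parallel, then stitch into the 4K stereo 360 frame.
```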

KOMPSAT Data Processing System: An Overview and Preliminary Acceptance Test Results

  • Kim, Yong-Seung; Kim, Youn-Soo; Lim, Hyo-Suk; Lee, Dong-Han; Kang, Chi-Ho
    • Korean Journal of Remote Sensing / v.15 no.4 / pp.357-365 / 1999
  • The optical sensors of the Electro-Optical Camera (EOC) and the Ocean Scanning Multi-spectral Imager (OSMI) aboard the KOrea Multi-Purpose SATellite (KOMPSAT) will be placed in a sun-synchronous orbit in late 1999. The EOC and OSMI sensors are expected to produce land-mapping imagery of Korean territory and ocean-color imagery of the world's oceans, respectively. Utilization of the EOC and OSMI data would encompass various fields of science and technology such as land mapping, land use and development, flood monitoring, biological oceanography, fishery, and environmental monitoring. Readiness of data support for the user community is thus essential to the success of the KOMPSAT program. As part of testing such readiness prior to the KOMPSAT launch, we have performed a preliminary acceptance test of the KOMPSAT data processing system using simulated EOC and OSMI data sets. The purpose of this paper is to demonstrate the readiness of the KOMPSAT data processing system and to help data users understand how the KOMPSAT EOC and OSMI data are processed, archived, and provided. Test results demonstrate that all requirements described in the data processing specification have been met and that image integrity is maintained for all products. It is noted, however, that since product accuracy is limited by the simulated sensor data, quantitative assessment of the image products cannot be made until actual KOMPSAT images are acquired.