• Title/Summary/Keyword: monitoring camera

Search Results: 747

감시 대상의 위치 추정을 통한 감시 시스템의 에너지 효율적 운영 방법 (An Energy-Efficient Operating Scheme of Surveillance System by Predicting the Location of Targets)

  • 이가욱;이수빈;이호원;조동호
    • 한국통신학회논문지
    • /
    • Vol. 38C No. 2
    • /
    • pp.172-180
    • /
    • 2013
  • This paper presents an energy-efficient surveillance-camera operating scheme for a camera-based surveillance system that detects targets via DSRC (Dedicated Short Range Communication), achieving a higher event-reporting rate while saving resources such as storage space and operating power. The proposed scheme uses a model abstracting the road environment, including its entry/exit characteristics, and the cameras' fields of view, together with the vehicle velocity vector collected from the DSRC terminal attached to the target, to compute the number of cameras needed to fully capture that target; by switching those cameras on and off sequentially, it reduces the resources the surveillance system consumes. Simulations comparing the proposed scheme with a conventional surveillance-system operating policy show a reduction in operating cost.
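
The camera-selection idea in the abstract can be sketched for a one-dimensional road model: predict where the target will be from its DSRC speed vector, then activate only the cameras whose coverage overlaps that path. All names and the coverage geometry here are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch: pick which cameras along a 1-D road segment must be
# powered on to cover a target's predicted positions. The interval model,
# parameter names, and values are illustrative only.

def cameras_to_activate(pos, speed, horizon, cam_ranges):
    """pos: current position (m); speed: m/s from the DSRC speed vector;
    horizon: look-ahead time (s); cam_ranges: list of (start, end) coverage
    intervals per camera. Returns indices of cameras that must run."""
    end_pos = pos + speed * horizon
    lo, hi = min(pos, end_pos), max(pos, end_pos)
    active = []
    for i, (start, end) in enumerate(cam_ranges):
        if start <= hi and end >= lo:   # coverage interval overlaps the path
            active.append(i)
    return active

cams = [(0, 50), (40, 100), (90, 160), (150, 220)]
print(cameras_to_activate(pos=20, speed=15, horizon=4, cam_ranges=cams))
```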

비선형 변환의 비젼센서 데이터융합을 이용한 이동로봇 주행제어 (Control of Mobile Robot Navigation Using Vision Sensor Data Fusion by Nonlinear Transformation)

  • 진태석;이장명
    • 제어로봇시스템학회논문지
    • /
    • Vol. 11 No. 4
    • /
    • pp.304-313
    • /
    • 2005
  • The robots that will be needed in the near future are human-friendly robots able to coexist with humans and support them effectively. To realize this, a robot needs to recognize its position and orientation for intelligent performance in an unknown environment, and mobile robots may navigate by means of monitoring systems such as sonar or vision sensing. In conventional fusion schemes, the measurement depends only on the current data sets; therefore, more sensors are required to measure a given physical parameter or to improve measurement accuracy. In this research, instead of adding sensors to the system, the temporal sequence of data sets is stored and utilized for accurate measurement. As a general approach to sensor fusion, a UT-Based Sensor Fusion (UTSF) scheme using the Unscented Transformation (UT) is proposed for either joint or disjoint data structures and applied to landmark identification for mobile robot navigation. The theoretical basis is illustrated by examples, and the effectiveness is proved through simulations and experiments. The proposed UTSF scheme is applied to the navigation of a mobile robot in both structured and unstructured environments, and its performance is verified by computer simulation and experiment.
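
The Unscented Transformation at the core of the UTSF scheme can be illustrated in one dimension: a Gaussian is represented by a few deterministically chosen sigma points, which are pushed through the nonlinearity and re-averaged. This is a minimal textbook sketch, not the paper's full fusion filter; the scaling parameter `kappa` is a common default, not a value from the paper.

```python
import math

# Minimal 1-D Unscented Transformation: propagate a Gaussian (mean, var)
# through a nonlinearity f via sigma points rather than linearization.

def unscented_transform_1d(mean, var, f, kappa=2.0):
    n = 1
    spread = math.sqrt((n + kappa) * var)
    sigma = [mean, mean + spread, mean - spread]           # 2n+1 sigma points
    w = [kappa / (n + kappa)] + [1.0 / (2 * (n + kappa))] * 2
    y = [f(s) for s in sigma]
    my = sum(wi * yi for wi, yi in zip(w, y))              # transformed mean
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y))  # transformed var
    return my, vy

# For f(x) = x^2 with x ~ N(0, 1), the UT recovers the exact moments
# E[x^2] = 1 and Var[x^2] = 2, which linearization around 0 would miss.
m, v = unscented_transform_1d(0.0, 1.0, lambda x: x * x)
```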

Selection of Optimal Vegetation Indices and Regression Model for Estimation of Rice Growth Using UAV Aerial Images

  • Lee, Kyung-Do;Park, Chan-Won;So, Kyu-Ho;Na, Sang-Il
    • 한국토양비료학회지
    • /
    • Vol. 50 No. 5
    • /
    • pp.409-421
    • /
    • 2017
  • Recently, Unmanned Aerial Vehicle (UAV) technology has offered new opportunities for assessing crop growth condition using UAV imagery. The objective of this study was to select optimal vegetation indices and regression models for estimating rice growth from UAV images. The study was conducted using a fixed-wing UAV (model: eBee) with Canon S110 and Canon IXUS cameras during the 2016 farming season on the experimental field of the National Institute of Crop Science. Before the heading stage of rice, there were strong relationships between rice growth parameters (plant height, dry weight, and LAI (Leaf Area Index)) and NDVI (Normalized Difference Vegetation Index) using a natural exponential function (R ≥ 0.97). After the heading stage, there were strong relationships between rice dry weight and NDVI, gNDVI (green NDVI), RVI (Ratio Vegetation Index), and CI-G (Chlorophyll Index-Green) using a quadratic function (R ≤ -0.98). There were no apparent relationships between rice growth parameters and vegetation indices using only red-green-blue band images.
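
The vegetation indices compared in this abstract are simple per-pixel band ratios; their standard definitions can be written out directly. The reflectance values below are made-up inputs for illustration, not data from the study.

```python
# Standard definitions of vegetation indices named in the abstract,
# computed from band reflectances (illustrative values, not study data).

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def gndvi(nir, green):
    """Green NDVI: green band substituted for red."""
    return (nir - green) / (nir + green)

def rvi(nir, red):
    """Ratio Vegetation Index."""
    return nir / red

# Dense green canopy reflects strongly in NIR and weakly in red,
# so NDVI approaches 1.
print(round(ndvi(0.45, 0.09), 3))
```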

센서 구성을 고려한 비전 기반 차선 감지 시스템 개발 (Development of A Vision-based Lane Detection System with Considering Sensor Configuration Aspect)

  • 박재학;홍대건;허건수;박장현;조동일
    • 한국자동차공학회논문집
    • /
    • Vol. 13 No. 4
    • /
    • pp.97-104
    • /
    • 2005
  • Vision-based lane sensing systems require accurate and robust lane detection. In addition, there is a trade-off between computational burden and processor cost that must be considered when implementing such systems in passenger cars. In this paper, a stereo vision-based lane detection system is developed with sensor configuration aspects in mind. An inverse perspective mapping method is formulated from the relative correspondence between the left and right cameras so that the 3-dimensional road geometry can be reconstructed robustly. A new monitoring model for estimating the road geometry parameters is constructed to reduce the number of measured signals. The selection of the sensor configuration and specifications is investigated using the characteristics of standard highways. Based on the sensor configuration, it is shown that an appropriate sensing region on the camera image coordinates can be determined. The proposed system is implemented on a passenger car and verified experimentally.
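
The flat-road special case of inverse perspective mapping can be written in one line: with a pinhole camera at height h, an image row below the horizon maps to a ground distance. This is the single-camera textbook version, not the paper's stereo formulation, and all parameter values are illustrative assumptions.

```python
# Flat-road inverse perspective sketch: with a pinhole camera at height h (m),
# focal length f (px), and horizon row y0 (px), an image row y below the
# horizon maps to ground distance d = f * h / (y - y0). Values are
# illustrative, not those of the paper's stereo setup.

def row_to_ground_distance(y, f=800.0, h=1.3, y0=240.0):
    if y <= y0:
        raise ValueError("row must lie below the horizon")
    return f * h / (y - y0)

# A row 100 px below the horizon maps to 800 * 1.3 / 100 = 10.4 m ahead.
print(round(row_to_ground_distance(340.0), 2))
```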

디지털 영상 계측을 위한 이미지 최적화 연구 (A Study on the Image Optimization for Digital Vision Measurement)

  • 김광염;윤효관;김창용;임성빈;최창호;이승도
    • 터널과지하공간
    • /
    • Vol. 20 No. 6
    • /
    • pp.421-433
    • /
    • 2010
  • In digital vision measurement for rock-face evaluation, the acquired image information varies with the intensity and color of the illumination and with the camera's shooting conditions, and such distorted image information is a major obstacle to objective rock-mass assessment. In this study, experiments and analyses were performed to restore images acquired under various settings to their intrinsic appearance under natural light through color correction, and lighting conditions and camera settings for optimizing image information are finally proposed.
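
A common baseline for the kind of illumination compensation this study performs is gray-world color correction: scale each channel so its mean matches the overall mean, removing a global color cast. This is a generic stand-in sketch, not the paper's calibrated correction procedure.

```python
# Gray-world white balance: a simple color-cast correction baseline
# (illustrative stand-in for the paper's calibrated color correction).

def gray_world(pixels):
    """pixels: list of (r, g, b) floats. Scales each channel so its mean
    matches the overall mean, removing a global color cast."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    return [tuple(p[c] * gray / means[c] for c in range(3)) for p in pixels]

# A reddish cast (red mean twice the others) is neutralized:
balanced = gray_world([(200.0, 100.0, 100.0), (100.0, 50.0, 50.0)])
```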

Internet-based Real-time Obstacle Avoidance of a Mobile Robot

  • Ko Jae-Pyung;Lee Jang-Myung
    • Journal of Mechanical Science and Technology
    • /
    • Vol. 19 No. 6
    • /
    • pp.1290-1303
    • /
    • 2005
  • In this research, a remote control system has been developed and implemented that combines autonomous real-time obstacle avoidance with force-reflective tele-operation. A tele-operated mobile robot is controlled by a local two-degrees-of-freedom force-reflective joystick that a human operator holds while monitoring the screen. In the system, the force-reflective joystick conveys the relation between the mobile robot and its environment to the operator as a virtual force, which is generated in the form of a new collision vector and reflected to the operator. This reflected force makes the tele-operation of a mobile robot safe from collision in an uncertain and obstacle-cluttered remote environment. A mobile robot controlled by a local operator usually takes pictures of the remote environment and sends the images back to the operator over the Internet. Because of limited communication bandwidth and the narrow view angle of the camera, the operator frequently cannot observe shadowed regions and curved spaces. To overcome this problem, a new form of virtual force is generated along the collision vector according to both the distance and the approach velocity between an obstacle and the mobile robot, obtained from ultrasonic sensors. This virtual force is transferred back to the two-degrees-of-freedom master joystick over the Internet so that the human operator can feel the geometric relation between the mobile robot and the obstacle. Experiments demonstrate that this haptic reflection significantly improves the performance of a tele-operated mobile robot.
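
A virtual force that grows as the obstacle gets closer and as the approach velocity rises, as described above, can be sketched with a simple force law. The gains, safety range, and the specific form of the law are illustrative assumptions, not the paper's exact model.

```python
# Haptic virtual-force sketch along the collision vector: force magnitude
# increases with proximity and with closing speed. Gains and the force law
# are illustrative, not the paper's exact model.

def virtual_force(distance, approach_speed, d_max=2.0, k_d=1.0, k_v=0.5):
    """distance (m) and approach_speed (m/s, > 0 when closing) from sonar.
    Returns the force magnitude fed back to the 2-DOF joystick; zero when
    the obstacle is outside the d_max safety range."""
    if distance >= d_max or distance <= 0:
        return 0.0
    proximity = 1.0 / distance - 1.0 / d_max   # grows as distance shrinks
    closing = max(approach_speed, 0.0)         # ignore receding obstacles
    return k_d * proximity + k_v * closing

print(virtual_force(0.5, 1.0))   # close, closing fast: strong feedback
print(virtual_force(3.0, 1.0))   # outside safety range: no feedback
```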

Multi-class support vector machines for paint condition assessment on the Sydney Harbour Bridge using hyperspectral imaging

  • Huynh, Cong Phuoc;Mustapha, Samir;Runcie, Peter;Porikli, Fatih
    • Structural Monitoring and Maintenance
    • /
    • Vol. 2 No. 3
    • /
    • pp.181-197
    • /
    • 2015
  • Assessing the condition of paint on civil structures is an important but challenging and costly task, particularly for large and complex structures. Current visual-inspection practices are labour-intensive and time-consuming, and usually rely on the experience and subjective judgment of individual inspectors. In this study, hyperspectral imaging and classification techniques are proposed as a method to objectively assess the state of the paint on a civil or other structure. The ultimate objective of the work is to develop a technology that can provide precise and automatic grading of paint condition and assessment of degradation due to age or environmental factors. Toward this goal, we acquired hyperspectral images of steel surfaces located at long (mid-range) and short distances on the Sydney Harbour Bridge with an Acousto-Optic Tunable Filter (AOTF) hyperspectral camera covering 21 bands in the visible spectrum. We trained a multi-class Support Vector Machine (SVM) classifier to automatically grade the paint from hyperspectral signatures. Our results demonstrate that the classifier assesses the paint condition with high accuracy compared with the judgment of human experts.

A Mask Wearing Detection System Based on Deep Learning

  • Yang, Shilong;Xu, Huanhuan;Yang, Zi-Yuan;Wang, Changkun
    • Journal of Multimedia Information System
    • /
    • Vol. 8 No. 3
    • /
    • pp.159-166
    • /
    • 2021
  • COVID-19 has dramatically changed people's daily lives. Wearing masks is considered a simple but effective way to curb the spread of the epidemic; hence, a real-time and accurate mask-wearing detection system is important. In this paper, a deep learning-based mask-wearing detection system is developed to help people defend against the epidemic. The system provides three functions: image detection, video detection, and real-time detection. To keep a high detection rate, a deep learning-based method is adopted to detect masks. Because of the suddenness of the epidemic, mask-wearing datasets are scarce, so a mask-wearing dataset was collected for this paper. In addition, to reduce computational cost and runtime, a simple online and real-time tracking method is adopted for video detection and monitoring. Furthermore, a function is implemented that reads from the camera to perform mask-wearing detection in real time. The results show that the developed system performs well in the mask-wearing detection task: precision, recall, mAP, and F1 reach 86.6%, 96.7%, 96.2%, and 91.4%, respectively.
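
The precision, recall, and F1 figures reported above follow directly from detection counts. The sketch below shows the standard definitions on made-up counts, not the paper's confusion matrix.

```python
# Standard detection metrics from raw counts (counts below are made up,
# not the paper's actual results).

def detection_metrics(tp, fp, fn):
    """tp/fp/fn: true positives, false positives, false negatives."""
    precision = tp / (tp + fp)                          # of predictions, how many correct
    recall = tp / (tp + fn)                             # of real cases, how many found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

p, r, f1 = detection_metrics(tp=87, fp=13, fn=3)
print(round(p, 3), round(r, 3), round(f1, 3))
```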

딥 러닝 기반의 영상처리 기법을 이용한 겹침 돼지 분리 (Separation of Occluding Pigs using Deep Learning-based Image Processing Techniques)

  • 이한해솔;사재원;신현준;정용화;박대희;김학재
    • 한국멀티미디어학회논문지
    • /
    • Vol. 22 No. 2
    • /
    • pp.136-145
    • /
    • 2019
  • The crowded environment of a domestic pig farm is highly vulnerable to the spread of infectious diseases such as foot-and-mouth disease, and studies have been conducted to automatically analyze the behavior of pigs in a crowded pig farm through a camera-based video surveillance system. Although occluding pigs must be correctly separated in order to track each individual pig, extracting the boundaries of occluding pigs quickly and accurately is challenging because of complicated occlusion patterns such as X and T shapes. In this study, we propose a fast and accurate method to separate occluding pigs by exploiting the strength of YOLO (You Only Look Once, one of the fast deep learning-based object detectors) while overcoming its limitation as a bounding box-based detector through test-time data augmentation by rotation. Experimental results with two-pig occlusion patterns show that the proposed method provides better accuracy and processing speed than Mask R-CNN, one of the most widely used deep learning-based segmentation techniques (an improvement of about 11 times in the combined accuracy/processing-speed metric).
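
The rotation test-time augmentation mentioned above rests on simple geometry: detect an axis-aligned box on a rotated copy of the image, then map its corners back to the original frame by the inverse rotation. The sketch below shows only that geometric step, with no detector involved; coordinates are illustrative.

```python
import math

# Geometry behind rotation test-time augmentation: map a box detected on a
# rotated image back to the original frame by the inverse rotation.
# (Pure geometry sketch; no object detector here.)

def rotate_point(p, center, theta):
    x, y = p[0] - center[0], p[1] - center[1]
    c, s = math.cos(theta), math.sin(theta)
    return (center[0] + c * x - s * y, center[1] + s * x + c * y)

def box_corners_back(box, center, theta):
    """box: (x1, y1, x2, y2) detected on the image rotated by theta around
    center. Returns the four corners mapped back to the original frame."""
    x1, y1, x2, y2 = box
    corners = [(x1, y1), (x2, y1), (x2, y2), (x1, y2)]
    return [rotate_point(p, center, -theta) for p in corners]

# A box found on a copy rotated by 90 degrees, mapped back:
back = box_corners_back((10, 10, 30, 20), center=(20, 15), theta=math.pi / 2)
```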

Vision-based garbage dumping action detection for real-world surveillance platform

  • Yun, Kimin;Kwon, Yongjin;Oh, Sungchan;Moon, Jinyoung;Park, Jongyoul
    • ETRI Journal
    • /
    • Vol. 41 No. 4
    • /
    • pp.494-505
    • /
    • 2019
  • In this paper, we propose a new framework for detecting the unauthorized dumping of garbage in real-world surveillance camera footage. Although several action/behavior recognition methods have been investigated, these studies are hardly applicable to real-world scenarios because they focus mainly on well-refined datasets. Because real-world dumping actions take a variety of forms, building a new method to detect them, rather than exploiting previous approaches, is the better strategy. We detect the dumping action from the change in the relation between a person and the object they are holding. To find the person-held object, whose form is indefinite, we use a background subtraction algorithm and human joint estimation. The person-held object is then tracked, and a relation model between the joints and the object is built. Finally, the dumping action is detected through a voting-based decision module. In the experiments, we show the effectiveness of the proposed method on real-world videos containing various dumping actions. In addition, the proposed framework is implemented in a real-time monitoring system through a fast online algorithm.
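
The first stage above relies on background subtraction; its classic running-average form can be sketched per pixel on a tiny grayscale "frame". The learning rate, threshold, and frame values are illustrative assumptions, not the paper's configuration.

```python
# Running-average background subtraction: maintain a per-pixel background
# estimate and flag pixels that deviate from it. Frames here are flat lists
# of grayscale values; alpha and thresh are illustrative.

def update_and_segment(frames, alpha=0.5, thresh=30):
    """frames: list of equally sized grayscale frames (lists of ints).
    Returns (foreground mask of the last frame, final background model)."""
    bg = [float(v) for v in frames[0]]                     # initial background
    fg_mask = []
    for frame in frames[1:]:
        fg_mask = [abs(v - b) > thresh for v, b in zip(frame, bg)]
        bg = [(1 - alpha) * b + alpha * v for b, v in zip(bg, frame)]
    return fg_mask, bg

# A bright object appears at the middle pixel of the second frame:
mask, bg = update_and_segment([[10, 10, 10], [10, 200, 10]])
print(mask)
```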