• Title/Abstract/Keyword: camera monitoring

745 search results (processing time: 0.027 s)

Dense RGB-D Map-Based Human Tracking and Activity Recognition using Skin Joints Features and Self-Organizing Map

  • Farooq, Adnan;Jalal, Ahmad;Kamal, Shaharyar
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 9, No. 5
    • /
    • pp.1856-1869
    • /
    • 2015
  • This paper addresses 3D human activity detection, tracking, and recognition from RGB-D video sequences using a feature-structured framework. First, dense depth images are captured using a depth camera. To track human silhouettes, we consider spatial/temporal continuity and constraints on human motion information, and compute the centroid of each activity based on a chain-coding mechanism and centroid point extraction. For body skin-joint features, we estimate human skin color to identify body parts (i.e., head, hands, and feet) from which joint-point information is extracted. These joint points are further processed for feature extraction, including distance-position features and centroid-distance features. Lastly, self-organizing maps are used to recognize different activities. Experimental results demonstrate that the proposed method is reliable and efficient in recognizing human poses in different realistic scenes. The proposed system is applicable to consumer applications such as healthcare, video surveillance, and indoor monitoring systems that track and recognize the activities of multiple users.
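
The self-organizing map step in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the grid size, learning schedule, and feature layout are assumed.

```python
import numpy as np

def train_som(features, grid=(4, 4), epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    """Train a small self-organizing map on activity feature vectors."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h * w, features.shape[1]))
    # Grid coordinates of each neuron, used by the neighborhood function.
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-t / epochs)  # shrinking neighborhood
        for x in features:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best matching unit
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            nbhd = np.exp(-d2 / (2 * sigma ** 2))[:, None]
            weights += lr * nbhd * (x - weights)
    return weights

def classify(weights, labels_per_unit, x):
    """Label a new feature vector with the label of its best matching unit."""
    return labels_per_unit[np.argmin(np.linalg.norm(weights - x, axis=1))]
```

After training, each map unit is assigned an activity label (e.g., by majority vote of the training samples it wins), and new pose features are recognized by nearest-unit lookup.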

Feature Based Techniques for a Driver's Distraction Detection using Supervised Learning Algorithms based on Fixed Monocular Video Camera

  • Ali, Syed Farooq;Hassan, Malik Tahir
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 12, No. 8
    • /
    • pp.3820-3841
    • /
    • 2018
  • Most accidents occur due to drowsiness while driving, missed road signs, and driver distraction. Driver distraction depends on various factors, including talking with passengers while driving, mood disorders, nervousness, anger, over-excitement, anxiety, loud music, illness, fatigue, and head rotations due to changes in yaw, pitch, and roll angle. The contribution of this paper is two-fold. First, a data set is generated for conducting different experiments on driver distraction. Second, novel approaches are presented that use features based on facial points, especially features computed using motion vectors and interpolation, to detect a particular type of driver distraction: head rotation due to change in yaw angle. The facial points are detected by the Active Shape Model (ASM) and Boosted Regression with Markov Networks (BoRMaN). Various types of classifiers are trained and tested on different frames to decide whether the driver is distracted; the approaches are also scale invariant. The results show that the approach using the novel ideas of motion vectors and interpolation outperforms the other approaches in detecting head rotation, achieving an accuracy of 98.45% using a neural network.
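
The motion-vector feature behind the yaw-rotation detection can be illustrated with a toy score. This is a sketch under assumed names and thresholds; the paper's actual pipeline additionally uses interpolation and trained classifiers rather than a fixed cutoff.

```python
import numpy as np

def head_rotation_score(pts_prev, pts_curr, face_width):
    """Mean horizontal component of the facial-landmark motion vectors,
    normalized by face width so the score is scale invariant."""
    motion = pts_curr - pts_prev              # one motion vector per landmark
    return float(np.mean(motion[:, 0])) / face_width

def detect_yaw_rotation(pts_prev, pts_curr, face_width, thresh=0.05):
    """Flag a yaw head rotation when coherent horizontal motion dominates."""
    return bool(abs(head_rotation_score(pts_prev, pts_curr, face_width)) > thresh)
```

A yaw rotation moves all landmarks coherently in the horizontal image direction, so the mean horizontal motion stands out from the near-zero motion of an attentive driver.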

ERGONOMIC ANALYSIS OF A TELEMANIPULATION TECHNIQUE FOR A PYROPROCESS DEMONSTRATION FACILITY

  • Yu, Seungnam;Lee, Jongkwang;Park, Byungsuk;Kim, Kiho;Cho, Ilje
    • Nuclear Engineering and Technology
    • /
    • Vol. 46, No. 4
    • /
    • pp.489-500
    • /
    • 2014
  • In this study, remote handling strategies for a large-scale argon cell facility were considered, and the suggested strategies were evaluated through several types of field test. The teleoperation tasks were performed using a developed remote handling system, which enabled travel over the entire cell area using a bridge transport system. Each arm of the system had six DOFs (degrees of freedom), and the bridge transport system had four DOFs. However, despite the dexterous manipulators and redundant monitoring system, many operators, including professionals, experienced difficulties in operating the remote handling system. This was because of the lack of a strategy for handling the installed camera system and the difficulty of recognizing the gripper pose, which might fall outside the FOV (field of view) of the system during teleoperation. Hence, in this paper, several considerations for the remote handling tasks performed in the target facility are discussed, and the tasks are analyzed based on ergonomic factors such as workload. Toward the development of a successful operation strategy, several ergonomic issues are considered, such as active/passive views of the remote handling system, eye/hand alignment, and FOV. Furthermore, using a method for classifying remote handling tasks, several unit tasks are defined and evaluated.

Visual Servoing-Based Paired Structured Light Robot System for Estimation of 6-DOF Structural Displacement

  • 전해민;방유석;김한근;명현
    • Journal of Institute of Control, Robotics and Systems
    • /
    • Vol. 17, No. 10
    • /
    • pp.989-994
    • /
    • 2011
  • This study demonstrates the feasibility of a visual servoing-based paired structured light (SL) robot for estimating structural displacement under various external loads. The earlier paired SL robot, proposed in a previous study, was composed of two screens facing each other, each with one or two lasers and a camera. The paired SL robot could estimate translational and rotational displacement, each in 3-DOF, with high accuracy and low cost; however, its measurable range was fairly limited by the screen size. In this paper, therefore, a visual servoing-based 2-DOF manipulator that controls the pose of the lasers is introduced. By controlling the positions of the projected laser points so that they remain on the screen, the proposed robot can estimate displacement regardless of the screen size. We performed various simulations and experimental tests to verify the performance of the newly proposed robot. The results show that the proposed system overcomes the range limitation of the former system and can be used to accurately estimate structural displacement.
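
The idea of servoing the lasers to keep their projected points on the screen can be sketched as a proportional control loop. This is a rough illustration under an assumed linear screen model (pixels proportional to manipulator angles), not the paper's actual controller or calibration.

```python
import numpy as np

def servo_step(angles, point_px, target_px, gain=5e-4):
    """One proportional visual-servoing update: adjust the 2-DOF (pan/tilt)
    laser manipulator so the projected laser point moves toward the target
    pixel (e.g. the screen center)."""
    error = target_px - point_px          # image-space error in pixels
    return angles + gain * error          # proportional control law

def track(target_px, offset_px, px_per_rad=1000.0, steps=30):
    """Simulate servoing under an assumed linear screen model:
    point = offset + px_per_rad * angles."""
    angles = np.zeros(2)
    for _ in range(steps):
        point = offset_px + px_per_rad * angles
        angles = servo_step(angles, point, target_px)
    return offset_px + px_per_rad * angles
```

With `gain * px_per_rad < 1` the image error shrinks geometrically each step, so the laser point settles on the target regardless of where the screen sits.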

Rule-based Detection of Vehicles in Traffic Scenes

  • 박영태
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • Vol. 37, No. 3
    • /
    • pp.31-40
    • /
    • 2000
  • We present a robust technique for measuring the positions and number of vehicles in traffic scenes, a key component of image-based traffic control systems. The proposed method is based on evidential reasoning: bright and dark evidence regions of vehicles are extracted from the background-subtracted difference image using a locally optimal thresholding technique, and then merged using three rules derived from the geometric features of vehicles. The locally optimal thresholding guarantees separation of the bright and dark evidence regions even when vehicle shapes overlap or a vehicle's color is similar to the background. Applied to various traffic scenes, the method showed detection performance that is highly robust to changes in operating conditions such as camera distance, position, and weather; since no inter-frame motion information is used, it is also applicable when traffic flow is congested.
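
The evidence-region extraction can be sketched as follows. For brevity this sketch uses a single fixed margin in place of the paper's locally optimal thresholding, and omits the rule-based merging step.

```python
import numpy as np

def evidence_regions(frame, background, margin=30):
    """Split the background-subtracted difference image into bright and dark
    vehicle evidence masks (a fixed-margin simplification of the paper's
    locally optimal thresholding)."""
    diff = frame.astype(int) - background.astype(int)
    bright = diff > margin     # e.g. light bodywork, reflections
    dark = diff < -margin      # e.g. shadows, dark bodywork
    return bright, dark
```

The bright and dark masks would then be merged into vehicle regions using rules on vehicle geometry, which is what makes the approach work without inter-frame motion cues.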


Regularized Multichannel Blind Deconvolution Using Alternating Minimization

  • James, Soniya;Maik, Vivek;Karibassappa, K.;Paik, Joonki
    • IEIE Transactions on Smart Processing and Computing
    • /
    • Vol. 4, No. 6
    • /
    • pp.413-421
    • /
    • 2015
  • Regularized blind deconvolution recovers the original image from a blurred, degraded observation. Multichannel blind deconvolution is formulated as an optimization problem; each step is treated as a variable-splitting subproblem solved by an alternating minimization algorithm, with each split handled by the augmented Lagrangian method (ALM) / Bregman iterations. Regularization converts the ill-posed problem into a well-posed one. Two well-known regularizer families are the Tikhonov class and total variation (TV); TV can be isotropic (L2 norm) or anisotropic (L1 norm). Image deblurring can be solved using probabilistic models and Fourier transforms. In this paper, to improve performance, we use adaptive regularization filtering and an isotropic TV model with an Lp norm. Image deblurring is applicable in areas such as medical imaging, astrophotography, traffic-signal monitoring, remote sensing, forensic investigation, and images taken with digital or mobile cameras.
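
The alternating-minimization structure can be sketched with the simplest regularizer. The sketch below uses Tikhonov (L2) half-steps, each of which has a closed form in the Fourier domain; the paper's adaptive regularization filtering and TV term are omitted.

```python
import numpy as np

def am_deblur(blurred, kernel_init, iters=5, lam=1e-3):
    """Alternating minimization for blind deconvolution with Tikhonov (L2)
    regularization: alternately solve for the image given the kernel and
    for the kernel given the image, both in closed form via the FFT."""
    B = np.fft.fft2(blurred)
    K = np.fft.fft2(kernel_init, s=blurred.shape)
    for _ in range(iters):
        # image step: argmin_X ||K*X - B||^2 + lam*||X||^2
        X = np.conj(K) * B / (np.abs(K) ** 2 + lam)
        # kernel step: argmin_K ||K*X - B||^2 + lam*||K||^2
        K = np.conj(X) * B / (np.abs(X) ** 2 + lam)
    return np.real(np.fft.ifft2(X))
```

Swapping the L2 penalty for a TV term breaks the closed form, which is where the variable splitting and ALM/Bregman iterations of the paper come in.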

AVM Stop-line Detection based Longitudinal Position Correction Algorithm for Automated Driving on Urban Roads

  • 김종호;이현성;유진수;이경수
    • Journal of Auto-vehicle Safety Association
    • /
    • Vol. 12, No. 2
    • /
    • pp.33-39
    • /
    • 2020
  • This paper presents an Around View Monitoring (AVM) stop-line detection based longitudinal position correction algorithm for automated driving on urban roads. The poor positioning accuracy of low-cost GPS causes many problems for precise path tracking; this study therefore aims to improve the longitudinal positioning accuracy of low-cost GPS. The algorithm has three main processes. The first is stop-line detection: the stop-line is detected in the AVM camera image using the Hough transform. The second is map matching: to find the corrected vehicle position, the detected line is matched to the stop-line of the HD map using the Iterative Closest Point (ICP) method. Third, the longitudinal position from the low-cost GPS is updated with the corrected vehicle position using a Kalman filter. The proposed algorithm is implemented in the Robot Operating System (ROS) environment and verified on actual urban road driving data. Test results show that longitudinal localization performance is improved compared to using low-cost GPS alone.
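
The third step, fusing the stop-line fix into the GPS estimate, reduces to a one-dimensional Kalman update along the road. The sketch below is illustrative only; the noise variances are assumed, not taken from the paper.

```python
def kalman_update(x, P, z, R, Q=0.5):
    """1-D Kalman update of the longitudinal position estimate x (variance P)
    with a stop-line-derived position measurement z (variance R)."""
    P = P + Q                      # predict: GPS drift inflates uncertainty
    K = P / (P + R)                # Kalman gain
    x = x + K * (z - x)            # correct toward the stop-line fix
    P = (1.0 - K) * P              # updated (reduced) uncertainty
    return x, P
```

Because the stop-line measurement is far more precise than low-cost GPS (small R, large P), the gain is close to 1 and the estimate snaps to the map-matched position while its variance collapses.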

Post-outburst observation of HBC722 in Pelican nebula

  • 양윤아;박원기;성현일;이상각;윤태석;이정은;강원석;박근홍;조동환
    • The Bulletin of The Korean Astronomical Society
    • /
    • Vol. 38, No. 1
    • /
    • pp.59.1-59.1
    • /
    • 2013
  • HBC722 (also known as LkHa 188-G4 and PTF 10qpf; A. Miller et al., 2011) is one of the FU Orionis-like young stellar objects, which underwent an outburst in August 2010 (Semkov et al., 2010). We have been monitoring the post-outburst phase of this object since November 2010 with the Korea Astronomy and Space Science Institute Near Infrared Camera System (KASINICS) at Bohyunsan Optical Astronomy Observatory (BOAO). Four filters, the J, H, Ks, and H2 bands, were used for this observation. We performed aperture photometry to find photometric variation. The light curve shows a long-period brightness change. After the decrease in brightness reported at the KAS 2011 fall meeting, HBC722 is now brightening slowly. However, we cannot confirm the short-period variations previously reported by Green et al. (2013), owing to large scatter in the obtained light curve.
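
Aperture photometry as used above can be sketched in a few lines: sum the flux in a circular aperture around the star and subtract the sky level estimated in a surrounding annulus. The radii below are assumed for illustration; real reductions tune them to the seeing.

```python
import numpy as np

def aperture_photometry(image, cx, cy, r_ap=3.0, r_in=5.0, r_out=8.0):
    """Background-subtracted flux of a source at (cx, cy): sum inside a
    circular aperture minus the median sky (from an annulus) per pixel."""
    yy, xx = np.indices(image.shape)
    d = np.hypot(xx - cx, yy - cy)
    ap = d <= r_ap                            # source aperture
    sky = image[(d >= r_in) & (d <= r_out)]   # sky annulus pixels
    return image[ap].sum() - np.median(sky) * ap.sum()
```

Repeating this frame by frame and converting flux to magnitudes yields the light curve whose long-period trend and scatter the abstract describes.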


An Energy-Efficient Operating Scheme of Surveillance System by Predicting the Location of Targets

  • 이가욱;이수빈;이호원;조동호
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • Vol. 38C, No. 2
    • /
    • pp.172-180
    • /
    • 2013
  • This paper presents an energy-efficient surveillance-camera operating scheme that, in a camera-based surveillance system triggered by detection of targets via DSRC (Dedicated Short Range Communication), saves resources such as storage space and operating power while achieving a higher event-reporting rate. The proposed scheme computes the number of cameras needed to fully capture a target by considering a model that abstracts the road environment (including entry/exit characteristics) and the cameras' fields of view, together with the target's velocity vector collected from the DSRC terminal attached to it; cameras are then activated and deactivated sequentially, saving system resources. Simulations comparing the proposed scheme with the conventional operating method show a reduction in surveillance-system operating costs.
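
The core sizing computation, how many cameras to pre-activate ahead of a moving target, can be sketched under a deliberately simplified model: a straight road, equal per-camera coverage, and a fixed camera wake-up delay. All of these parameters are assumptions for illustration; the paper's model also abstracts road entry/exit characteristics and camera viewing angles.

```python
import math

def cameras_needed(target_speed, coverage_per_camera, wakeup_delay):
    """Number of cameras to keep active so a target moving at target_speed
    (m/s) is always inside an active camera's field of view, when each
    camera covers coverage_per_camera metres of road and needs
    wakeup_delay seconds to start."""
    lookahead = target_speed * wakeup_delay   # distance covered during wake-up
    # +1 keeps the camera currently covering the target active as well
    return max(1, math.ceil(lookahead / coverage_per_camera) + 1)
```

Activating only this many cameras, and shutting each down as the target leaves its segment, is what saves storage and power relative to running every camera continuously.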

Control of Mobile Robot Navigation Using Vision Sensor Data Fusion by Nonlinear Transformation

  • 진태석;이장명
    • Journal of Institute of Control, Robotics and Systems
    • /
    • Vol. 11, No. 4
    • /
    • pp.304-313
    • /
    • 2005
  • The robots that will be needed in the near future are human-friendly robots that can coexist with humans and support them effectively. To realize this, a robot needs to recognize its position and orientation for intelligent performance in an unknown environment, and mobile robots may navigate by means of a number of monitoring systems, such as sonar or visual sensing systems. Note that in conventional fusion schemes the measurement depends only on the current data sets, so more sensors are required to measure a given physical parameter or to improve measurement accuracy. In this research, however, instead of adding more sensors to the system, the temporal sequence of data sets is stored and utilized for accurate measurement. As a general approach to sensor fusion, a UT-based sensor fusion (UTSF) scheme using the Unscented Transformation (UT) is proposed for either joint or disjoint data structures and applied to landmark identification for mobile robot navigation. The theoretical basis is illustrated by examples, and the effectiveness is proved through simulations and experiments. The newly proposed UTSF scheme is applied to the navigation of a mobile robot in both structured and unstructured environments, and its performance is verified by computer simulation and experiment.
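
The unscented transformation at the heart of the UTSF scheme propagates a Gaussian through a nonlinear sensor model via deterministic sigma points rather than linearization. Below is a minimal sketch of the classic UT (the paper's fusion scheme builds on this but is not reproduced here; the weighting uses the simple kappa parameterization).

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f
    using 2n+1 sigma points."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)   # scaled matrix square root
    sigma = [mean] + [mean + L[:, i] for i in range(n)] \
                   + [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigma])         # push each point through f
    my = np.sum(w[:, None] * y, axis=0)
    cy = sum(wi * np.outer(yi - my, yi - my) for wi, yi in zip(w, y))
    return my, cy
```

For a linear f the transform is exact, which makes it easy to sanity-check; for nonlinear sensor models it captures the posterior mean and covariance to higher order than a first-order (EKF-style) linearization.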