• Title/Summary/Keyword: Multi-vision sensors


Crowd Activity Recognition using Optical Flow Orientation Distribution

  • Kim, Jinpyung;Jang, Gyujin;Kim, Gyujin;Kim, Moon-Hyun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.8
    • /
    • pp.2948-2963
    • /
    • 2015
  • In the field of computer vision, visual surveillance systems have recently become an important research topic. Growth in this area is driven both by the increasing availability of inexpensive computing devices and image sensors and by the general inefficiency of manual surveillance and monitoring. In particular, the ultimate goal of many visual surveillance systems is automatic activity recognition for events at a given site; this higher level of understanding requires certain lower-level computer vision tasks to be performed first. In this paper, we therefore propose an intelligent activity recognition model that combines a structure learning method with a classification method. The structure learning method is a K2 learning algorithm that generates Bayesian networks of causal relationships between sensors for a given activity. The statistical characteristics of the sensor values and the topological characteristics of the generated graphs are learned for each activity, and a neural network is then designed to classify the current activity according to the features extracted from the collected multi-sensor values. Finally, the proposed method is implemented and tested on the PETS2013 benchmark data.
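The K2 step described above can be illustrated with a minimal sketch: for one node, greedily add the candidate parent that most increases a Bayesian network score. This is not the paper's implementation; the Cooper–Herskovits log-score and the `k2_parents` helper are illustrative assumptions for discrete data.

```python
import numpy as np
from math import lgamma

def k2_score(data, child, parents, arity=2):
    """Log Cooper-Herskovits score of `child` given a parent set.

    data: (n_samples, n_nodes) integer array with values in [0, arity).
    """
    r = arity
    if parents:
        # Encode each sample's parent configuration as a single index.
        strides = r ** np.arange(len(parents))
        cfg = data[:, parents] @ strides
    else:
        cfg = np.zeros(data.shape[0], dtype=int)
    score = 0.0
    for j in np.unique(cfg):
        rows = data[cfg == j, child]
        score += lgamma(r) - lgamma(len(rows) + r)
        for k in range(r):
            score += lgamma(np.sum(rows == k) + 1)
    return score

def k2_parents(data, child, candidates, max_parents=2, arity=2):
    """Greedy K2 search: keep adding the parent that most improves the score."""
    parents, best = [], k2_score(data, child, [], arity)
    while len(parents) < max_parents:
        gains = [(k2_score(data, child, parents + [c], arity), c)
                 for c in candidates if c not in parents]
        if not gains:
            break
        top, c = max(gains)
        if top <= best:          # no candidate improves the score: stop
            break
        parents.append(c)
        best = top
    return parents
```

Running the search per node, for data segmented by activity, yields the per-activity network structures whose topological features can then feed a classifier.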

Automatic identification and analysis of multi-object cattle rumination based on computer vision

  • Yueming Wang;Tiantian Chen;Baoshan Li;Qi Li
    • Journal of Animal Science and Technology
    • /
    • v.65 no.3
    • /
    • pp.519-534
    • /
    • 2023
  • Rumination in cattle is closely related to their health, which makes the automatic monitoring of rumination an important part of smart pasture operations. However, manual monitoring of cattle rumination is laborious, and wearable sensors are often harmful to the animals. Thus, we propose a computer vision-based method to automatically identify multi-object cattle rumination and to calculate the rumination time and number of chews for each cow. The heads of the cattle in the video were initially tracked with a multi-object tracking algorithm, which combined the You Only Look Once (YOLO) algorithm with the kernelized correlation filter (KCF). Images of the head of each cow were saved at a fixed size and numbered. Then, a rumination recognition algorithm was constructed with parameters obtained using the frame difference method, and rumination time and number of chews were calculated. The rumination recognition algorithm was used to analyze the head image of each cow to automatically detect multi-object cattle rumination. To verify the feasibility of this method, the algorithm was tested on multi-object cattle rumination videos, and the results were compared with those produced by human observation. The experimental results showed that the average error in rumination time was 5.902% and the average error in the number of chews was 8.126%. The identification of rumination and the calculation of rumination information are performed automatically by computer with no manual intervention. The method provides a new contactless rumination identification approach for multiple cattle and technical support for smart pasture operations.
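The frame-difference step can be sketched as follows: threshold the inter-frame absolute difference of each cropped head image and count rising edges of motion bursts as chews. The thresholds and the `count_chews` helper are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def count_chews(frames, motion_thresh=25, min_active_pixels=40):
    """Count chews in a sequence of grayscale head-region crops.

    The inter-frame absolute difference is thresholded; each contiguous
    run of "moving" frames is treated as one chewing motion.
    """
    chews, moving = 0, False
    prev = frames[0].astype(np.int16)
    for frame in frames[1:]:
        cur = frame.astype(np.int16)
        active = np.sum(np.abs(cur - prev) > motion_thresh)
        prev = cur
        if active >= min_active_pixels and not moving:
            chews += 1      # rising edge: a new motion burst begins
            moving = True
        elif active < min_active_pixels:
            moving = False
    return chews
```

Rumination time then follows from the number of frames inside motion bursts divided by the video frame rate.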

Vision Sensor System for Abnormal Region Detection under Outdoor Environment (옥외 환경 하에서의 이상영역 검출을 위한 시각 감시 시스템의 구축)

  • Seo, Won-Chan
    • Journal of Sensor Science and Technology
    • /
    • v.9 no.1
    • /
    • pp.61-69
    • /
    • 2000
  • In this paper, an algorithm was developed to construct a vision sensor system that can detect abnormal regions under an ever-changing outdoor environment. The algorithm was implemented on a parallel network system consisting of multiple processors, partitioned according to its properties in order to extend its capabilities. Experiments using real scenes confirmed that the algorithm adapts to ever-changing outdoor environmental conditions and that the system is robust and effective.


Accurate Vehicle Positioning on a Numerical Map

  • Laneurit, Jean;Chapuis, Roland;Chausse, Frédéric
    • International Journal of Control, Automation, and Systems
    • /
    • v.3 no.1
    • /
    • pp.15-31
    • /
    • 2005
  • Road safety is nowadays an important research field, and one of its principal topics is vehicle localization in the road network. This article presents a multi-sensor fusion approach able to locate a vehicle with decimeter precision. The information used in this method comes from the following sensors: a low-cost GPS, a digital camera, an odometer, and a steering angle sensor. Taking into account a complete model of the errors on GPS data (position bias and non-white errors), together with the data provided by an original approach coupling a vision algorithm with a precise numerical map, allows us to reach this precision.
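The fusion idea can be sketched generically as inverse-covariance (information-form) weighting of independent position estimates, one per sensor; this is a stand-in for the paper's full filter with GPS bias states, and the `fuse_estimates` helper is a hypothetical interface.

```python
import numpy as np

def fuse_estimates(estimates):
    """Fuse independent position estimates by inverse-covariance weighting.

    estimates: list of (position_vector, covariance_matrix) pairs, e.g.
    one from GPS, one from vision/map matching, one from odometry.
    Returns the fused position and its covariance.
    """
    info = sum(np.linalg.inv(c) for _, c in estimates)          # information sum
    vec = sum(np.linalg.inv(c) @ np.asarray(p, float) for p, c in estimates)
    cov = np.linalg.inv(info)
    return cov @ vec, cov
```

A more precise sensor (smaller covariance) automatically dominates the fused estimate, which is how a precise map-matched vision fix can pull a biased GPS position toward decimeter accuracy.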

Activity Recognition based on Multi-modal Sensors using Dynamic Bayesian Networks (동적 베이지안 네트워크를 이용한 멀티모달센서기반 사용자 행동인식)

  • Yang, Sung-Ihk;Hong, Jin-Hyuk;Cho, Sung-Bae
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.1
    • /
    • pp.72-76
    • /
    • 2009
  • Recently, as interest in ubiquitous computing has increased, there has been much research on recognizing human activities to provide services in such environments. In mobile environments in particular, contrary to conventional vision-based recognition research, much of the work is sensor-based. In this paper, we propose to recognize the user's activity from multi-modal sensors using hierarchical dynamic Bayesian networks. The dynamic Bayesian networks are trained with the OVR (One-Versus-Rest) strategy. The inference stage reduces computation cost by selecting the activity with the higher score from the result of a simpler Bayesian network. For the experiment, we used an accelerometer and a physiological sensor to recognize eight kinds of activities, achieving 97.4% accuracy in recognizing the user's activity.
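The cost-saving inference described above can be sketched as a two-stage one-versus-rest selection: a cheap per-activity model prunes the candidate set, and the full model scores only the survivors. The dict-of-callables interface is an illustrative assumption, not the paper's DBN API.

```python
def ovr_two_stage(sample, cheap_models, full_models, top_k=2):
    """Two-stage one-versus-rest inference.

    cheap_models/full_models: dicts mapping activity name -> callable
    returning a likelihood-like score for the sample. The cheap stage
    keeps only the top_k candidates; the expensive stage runs on those.
    """
    ranked = sorted(cheap_models,
                    key=lambda a: cheap_models[a](sample), reverse=True)
    candidates = ranked[:top_k]
    # Only the shortlisted activities pay the cost of the full model.
    return max(candidates, key=lambda a: full_models[a](sample))
```

With N activities and top_k ≪ N, the expensive model is evaluated top_k times instead of N, which is where the reported computation saving comes from.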

MultiView-Based Hand Posture Recognition Method Based on Point Cloud

  • Xu, Wenkai;Lee, Ick-Soo;Lee, Suk-Kwan;Lu, Bo;Lee, Eung-Joo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.7
    • /
    • pp.2585-2598
    • /
    • 2015
  • Hand posture recognition has played a very important role in Human-Computer Interaction (HCI) and Computer Vision (CV) for many years. The challenge arises mainly from self-occlusions caused by the limited view of the camera. In this paper, a robust hand posture recognition approach based on 3D point clouds from two RGB-D sensors (Kinect) is proposed to make maximum use of the 3D information in the depth maps. Through noise reduction and registration of the two point sets obtained from the two designed views, a multi-view hand posture point cloud preserving most of the 3D information can be acquired. Moreover, we use the accurate reconstruction to classify each point cloud by directly matching the normalized point set with class templates from the dataset, which reduces training time and computation. Experimental results on a posture dataset captured by Kinect sensors (digits 1 to 10) demonstrate the effectiveness of the proposed method.
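The template-matching step can be sketched as follows: normalize each point cloud for position and scale, then pick the template with the smallest mean nearest-neighbour distance. The `normalize`/`classify_posture` helpers and the one-sided distance are illustrative assumptions, not the paper's exact matching procedure.

```python
import numpy as np

def normalize(points):
    """Center a point cloud and scale it to unit RMS radius."""
    p = points - points.mean(axis=0)
    scale = np.sqrt((p ** 2).sum(axis=1).mean())
    return p / (scale if scale > 0 else 1.0)

def classify_posture(cloud, templates):
    """Return the label of the template closest to the normalized cloud.

    Uses a one-sided mean nearest-neighbour distance (cloud -> template)
    as a cheap stand-in for full point-set registration.
    """
    q = normalize(cloud)
    best_label, best_d = None, np.inf
    for label, tpl in templates.items():
        t = normalize(tpl)
        d = np.mean([np.min(np.linalg.norm(t - p, axis=1)) for p in q])
        if d < best_d:
            best_label, best_d = label, d
    return best_label
```

Because matching is direct, adding a new posture class only requires storing its template cloud, with no retraining.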

Design and Realization of Stereo Vision Module For 3D Facial Expression Tracking (3차원 얼굴 표정 추적을 위한 스테레오 시각 모듈 설계 및 구현)

  • Lee, Mun-Hee;Kim, Kyong-Sok
    • Journal of Broadcast Engineering
    • /
    • v.11 no.4 s.33
    • /
    • pp.533-540
    • /
    • 2006
  • In this study, we propose a facial motion capture technique that tracks facial motions and expressions effectively by using a stereo vision module with two CMOS image sensors. The proposed tracking algorithm uses a center-point tracking technique and a correlation tracking technique based on neural networks. Experimental results show that the two tracking techniques using stereo vision motion capture can track general facial expressions at success rates of 95.6% and 99.6% for 15 frames and 30 frames, respectively. However, when the lips trembled, the success rates of the center-point tracking technique (82.7%, 99.1%) were far higher than those of the correlation tracking technique (78.7%, 92.7%).
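A stereo module of this kind recovers the depth of a tracked feature from its horizontal disparity between the two sensors. A minimal sketch under the standard rectified-camera assumption (the focal length and baseline values below are illustrative, not the module's calibration):

```python
def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth of a tracked feature from its horizontal disparity.

    Assumes rectified cameras: depth = focal_length * baseline / disparity,
    with the focal length in pixels and the baseline in metres.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("feature must have positive disparity")
    return focal_px * baseline_m / disparity
```

Applying this per tracked feature point turns the 2D tracking results from the two sensors into the 3D facial expression trajectory.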

Active Peg-in-hole of Chamferless Parts Using Multi-sensors (다중센서를 사용한 챔퍼가 없는 부품의 능동적인 삽입작업)

  • Jeon, Hun-Jong;Kim, Kab-Il;Kim, Dae-Won;Son, Yu-Seck
    • Proceedings of the KIEE Conference
    • /
    • 1993.07a
    • /
    • pp.410-413
    • /
    • 1993
  • The chamferless peg-in-hole process for cylindrical parts using a force/torque sensor and a vision sensor is analyzed and simulated in this paper. The peg-in-hole process is classified into the normal mode (position error only) and the tilted mode (position and orientation error). The tilted mode is sub-classified into the small and the big tilted mode according to the relative orientation error. Since the big tilted mode occurs very rarely, most papers have dealt only with the normal or the small tilted mode; however, most errors in the peg-in-hole process occur in the big tilted mode. This problem is analyzed and simulated here using the force/torque sensor and the vision sensor. In the normal mode, fuzzy logic is introduced to combine the data of the force/torque sensor and the vision sensor. The complete processing algorithms and simulations are also presented.


Research of soccer robot system strategies

  • Sugisaka, Masanori;Kiyomatsu, Toshiro;Hara, Masayoshi
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2002.10a
    • /
    • pp.92.4-92
    • /
    • 2002
  • In this paper, the multiple micro-robot soccer playing system is first introduced as an ideal test bed for studies on multi-agent systems. The construction of such an experimental system involves many kinds of challenges, such as sensor fusion, robot design, vision processing, motion control, and especially the cooperative planning of the robots. In this paper, we therefore emphasize how to evolve the system automatically based on a model of behavior-based learning in the multi-agent domain. We first present this model in general terms and then apply it to the real experimental system. Finally, we give some results showing that the proposed approach is feasible.


Intelligent Abnormal Situation Event Detections for Smart Home Users Using Lidar, Vision, and Audio Sensors (스마트 홈 사용자를 위한 라이다, 영상, 오디오 센서를 이용한 인공지능 이상징후 탐지 알고리즘)

  • Kim, Da-hyeon;Ahn, Jun-ho
    • Journal of Internet Computing and Services
    • /
    • v.22 no.3
    • /
    • pp.17-26
    • /
    • 2021
  • Recently, COVID-19 has spread and time spent at home has increased in accordance with government quarantine guidelines, such as recommendations to refrain from going out. As a result, the number of single-person households staying at home is also increasing, and single-person households are less likely than multi-person households to be able to notify the outside world in an emergency. This study collects various situations occurring in the home with lidar, image, and audio sensors and analyzes the data from each sensor through its own algorithm. Using this method, we analyzed abnormal patterns such as emergency situations and conducted research to detect abnormal signs in humans. Artificial intelligence algorithms that detect abnormalities in people were studied for each sensor, and the accuracy of anomaly detection was measured per sensor. Furthermore, this work proposes a fusion method that complements the strengths and weaknesses of the sensors by experimenting with their detectability in various situations.
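The proposed sensor fusion can be sketched as a weighted vote over per-sensor anomaly scores; the score interface, the 0.5 threshold, and the `fused_alert` helper are illustrative assumptions, not the paper's trained fusion.

```python
def fused_alert(detections, weights=None, threshold=0.5):
    """Weighted vote across per-sensor anomaly detectors.

    detections: dict like {"lidar": 0.9, "vision": 0.2, "audio": 0.7}
    mapping each sensor to its anomaly score in [0, 1]. Sensors known to
    be weak for a given situation can be down-weighted.
    Returns (alert_flag, fused_score).
    """
    weights = weights or {s: 1.0 for s in detections}
    score = (sum(weights[s] * v for s, v in detections.items())
             / sum(weights.values()))
    return score >= threshold, score
```

The complementary behaviour comes from the weights: a situation invisible to the camera (e.g. in darkness) can still trigger an alert through the lidar and audio scores.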