• Title/Summary/Keyword: multiple vision

455 search results (processing time: 0.025 seconds)

Deep Learning Machine Vision System with High Object Recognition Rate using Multiple-Exposure Image Sensing Method

  • Park, Min-Jun;Kim, Hyeon-June
    • Journal of Sensor Science and Technology / v.30 no.2 / pp.76-81 / 2021
  • In this study, we propose a machine vision system with a high object recognition rate. By utilizing a multiple-exposure image sensing technique, the proposed deep learning-based machine vision system can cover a wide light-intensity range without additional training over that range. If the system fails to recognize object features, it switches to a multiple-exposure sensing mode and detects target objects obscured in near-dark or overly bright regions. Short- and long-exposure images from this mode are then synthesized to obtain accurate object feature information, producing image data with a wide dynamic range. Even though the deep learning process was trained on object recognition resources covering a light-intensity range of only 23 dB, the prototype system with the multiple-exposure imaging method demonstrated object recognition over a light-intensity range of up to 96 dB.
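The short/long-exposure synthesis described above can be illustrated with a minimal weighted-blend sketch (hypothetical; the paper does not disclose its fusion formula, and the `sigma` weighting parameter is an assumption for this sketch):

```python
import numpy as np

def fuse_exposures(short_img, long_img, sigma=0.2):
    """Blend short- and long-exposure frames (pixel values in [0, 1]) by
    weighting each pixel by how well-exposed it is (closer to mid-gray)."""
    def well_exposedness(img):
        return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

    w_s = well_exposedness(short_img)
    w_l = well_exposedness(long_img)
    total = w_s + w_l + 1e-8          # avoid division by zero
    return (w_s * short_img + w_l * long_img) / total

# A region saturated in the long exposure is recovered mostly
# from the short exposure, which carries more usable detail there.
short = np.array([[0.55]])
long_ = np.array([[0.99]])
fused = fuse_exposures(short, long_)
```

The well-exposedness weight suppresses clipped pixels from either exposure, which is how the synthesized image covers a dynamic range neither single exposure can.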

Development of a Ubiquitous Vision System for Location-awareness of Multiple Targets by a Matching Technique for the Identity of a Target; A New Approach

  • Kim, Chi-Ho;You, Bum-Jae;Kim, Hag-Bae
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.68-73 / 2005
  • Various techniques have been proposed for detecting and tracking targets in order to develop real-world computer vision systems such as visual surveillance systems and intelligent transport systems (ITSs). In particular, a distributed vision system is required to realize these techniques over a wide-spread area. In this paper, we develop a ubiquitous vision system for location-awareness of multiple targets. Each vision sensor composing the system can perform exact segmentation of a target using color and motion information, as well as real-time visual tracking of multiple targets. We construct the ubiquitous vision system as a multi-agent system by regarding each vision sensor as an agent (a vision agent). The matching problem for the identity of a target during handover is then solved by a protocol-based approach, for which we propose the identified contract net (ICN) protocol. The ICN protocol is independent of the number of vision agents and requires no calibration between them, which improves the speed, scalability, and modularity of the system. We applied the ICN protocol in the ubiquitous vision system constructed for our experiments. The system produced reliable results, and the ICN protocol operated successfully throughout several experiments.
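The announce/bid/award cycle of a contract-net style handover can be sketched as follows (a hypothetical illustration; the `VisionAgent` class, scalar color-signature bids, and `threshold` are invented for this sketch and are not the ICN protocol's actual message format):

```python
class VisionAgent:
    """Hypothetical vision agent; bids on an announced target with its
    best color-signature similarity (signatures as scalars for brevity)."""
    def __init__(self, name, observed_colors):
        self.name = name
        self.observed_colors = observed_colors

    def bid(self, target_color):
        sims = [1.0 - abs(c - target_color) for c in self.observed_colors]
        return max(sims) if sims else 0.0

def handover(neighbors, target_color, threshold=0.5):
    """Contract-net style award: announce the lost target's signature,
    collect bids from neighboring agents, and award tracking to the
    highest bidder above a confidence threshold."""
    bids = [(agent.bid(target_color), agent) for agent in neighbors]
    best_bid, best_agent = max(bids, key=lambda b: b[0])
    return best_agent if best_bid >= threshold else None

# The agent that currently sees the closest color signature wins the task.
winner = handover([VisionAgent("cam2", [0.1, 0.3]),
                   VisionAgent("cam3", [0.72])], target_color=0.7)
```

Because the award depends only on bids, the scheme needs neither a fixed agent count nor inter-camera calibration, matching the scalability and modularity claims above.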


A Study on IMM-PDAF based Sensor Fusion Method for Compensating Lateral Errors of Detected Vehicles Using Radar and Vision Sensors (레이더와 비전 센서를 이용하여 선행차량의 횡방향 운동상태를 보정하기 위한 IMM-PDAF 기반 센서융합 기법 연구)

  • Jang, Sung-woo;Kang, Yeon-sik
    • Journal of Institute of Control, Robotics and Systems / v.22 no.8 / pp.633-642 / 2016
  • It is important for advanced active safety systems and autonomous driving cars to obtain accurate state estimates of nearby vehicles in order to increase their safety and performance. This paper proposes a sensor fusion method for radar and vision sensors to accurately estimate the state of preceding vehicles. In particular, we study compensating for the lateral state error of automotive radar sensors by using a vision sensor. The proposed method is based on the Interacting Multiple Model (IMM) algorithm, which stochastically integrates multiple Kalman filters with multiple models, here a lateral-compensation mode and a radar-single-sensor mode. In addition, a Probabilistic Data Association Filter (PDAF) is utilized as the data association method to improve the reliability of the estimates in a cluttered radar environment. A two-step correction method is used in the Kalman filter, which efficiently associates both the radar and vision measurements into single state estimates. Finally, the proposed method is validated through off-line simulations using measurements obtained from a field test in an actual road environment.
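The IMM mode-probability update and estimate combination used above follow standard formulas, sketched below (a generic illustration, not the paper's implementation; the two modes here stand in for the lateral-compensation and radar-single-sensor modes):

```python
import numpy as np

def imm_mode_update(mu, trans, likelihoods):
    """Update IMM mode probabilities: propagate through the Markov mode
    transition matrix, then reweight by each filter's likelihood."""
    mu_pred = trans.T @ mu            # predicted mode probabilities
    mu_new = likelihoods * mu_pred    # weight by measurement likelihood
    return mu_new / mu_new.sum()      # normalize

def imm_combine(mu, x_modes, P_modes):
    """Blend per-mode estimates into one output:
    x = sum_j mu_j * x_j; P adds the spread-of-means term."""
    x = sum(m * xj for m, xj in zip(mu, x_modes))
    P = sum(m * (Pj + np.outer(xj - x, xj - x))
            for m, xj, Pj in zip(mu, x_modes, P_modes))
    return x, P

# Two modes, symmetric transitions; the mode whose filter explains the
# measurement better (likelihood 0.8 vs 0.2) dominates the blend.
mu = imm_mode_update(np.array([0.5, 0.5]),
                     np.array([[0.9, 0.1], [0.1, 0.9]]),
                     np.array([0.2, 0.8]))
x, P = imm_combine(mu, [np.zeros(2), np.array([1.0, 0.0])],
                   [np.eye(2), np.eye(2)])
```

The soft blend is what lets the fused tracker switch smoothly between radar-only and vision-corrected behavior instead of making a hard mode decision.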

A Parallel Implementation of Multiple Non-overlapping Cameras for Robot Pose Estimation

  • Ragab, Mohammad Ehab;Elkabbany, Ghada Farouk
    • KSII Transactions on Internet and Information Systems (TIIS) / v.8 no.11 / pp.4103-4117 / 2014
  • Image processing and computer vision algorithms are attracting increasing attention in a variety of application areas such as robotics and man-machine interaction. Vision allows the development of flexible, intelligent, and less intrusive approaches than most other sensor systems. In this work, we determine the location and orientation of a mobile robot, which is crucial for performing its tasks. To operate in real time, the various vision routines need to be sped up. Therefore, we present and evaluate a method for introducing parallelism into the multiple non-overlapping camera pose estimation algorithm proposed in [1]. In that algorithm, the problem is solved in real time using multiple non-overlapping cameras and the Extended Kalman Filter (EKF). Four cameras arranged in two back-to-back pairs are mounted on the platform of a moving robot. An important benefit of using multiple cameras for robot pose estimation is the ability to resolve vision ambiguities such as the bas-relief ambiguity. The proposed method is based on algorithmic skeletons for low, medium, and high levels of parallelization. The analysis shows that the use of a multiprocessor system enhances performance by about 87%. In addition, the proposed design is scalable, which is necessary in this application where the number of features changes repeatedly.
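The EKF cycle that the paper parallelizes can be sketched generically (the motion model `f`, measurement model `h`, and their Jacobians `F`, `H` are placeholders for the robot motion and camera projection models, not the paper's actual models):

```python
import numpy as np

def ekf_step(x, P, z, f, F, h, H, Q, R):
    """One generic EKF predict/update cycle."""
    x_pred = f(x)                          # predicted state
    P_pred = F @ P @ F.T + Q               # predicted covariance
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Trivial 1D linear case: equal prior and measurement noise pulls the
# estimate halfway toward the measurement.
x, P = ekf_step(np.array([0.0]), np.eye(1), np.array([1.0]),
                lambda s: s, np.eye(1), lambda s: s, np.eye(1),
                np.zeros((1, 1)), np.eye(1))
```

Feature extraction and the innovation computation per camera are independent, which is what makes the low- and medium-level skeletons in the paper natural parallelization points.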

Mobile Robot Localization using Ubiquitous Vision System (시각기반 센서 네트워크를 이용한 이동로봇의 위치 추정)

  • Dao, Nguyen Xuan;Kim, Chi-Ho;You, Bum-Jae
    • Proceedings of the KIEE Conference / 2005.07d / pp.2780-2782 / 2005
  • In this paper, we present a mobile robot localization solution using a Ubiquitous Vision System (UVS). The collective information gathered by multiple strategically placed cameras has many advantages. For example, aggregating information from multiple viewpoints reduces the uncertainty about the robots' positions. We construct the UVS as a multi-agent system by regarding each vision sensor as one vision agent (VA). Each VA performs target segmentation using color and motion information, as well as visual tracking of multiple objects. Our modified identified contract net (ICN) protocol is used for communication between VAs to coordinate multiple tasks. This protocol improves the scalability and modularity of the system because it is independent of the number of VAs and requires no calibration; furthermore, handover between VAs using the ICN is seamless. Experimental results show the robustness of the solution over a widespread area, and its performance in indoor environments demonstrates the feasibility of the proposed solution in real time.


Development of Input Device for Positioning of Multiple DOFs (다자유도 위치설정을 위한 입력장치의 개발)

  • Kim, Dae-Sung;Kim, Jin-Oh
    • Journal of Institute of Control, Robotics and Systems / v.15 no.8 / pp.851-858 / 2009
  • In this study, we propose a new vision-based input device for positioning of multiple DOFs. The device is composed of multiple Tags on a transparent table and a vision camera below the table. The camera detects LEDs at the bottom of each Tag to derive its ID, position, and orientation, and this information is used to determine the position and orientation of the remote target DOFs. The developed approach is reliable and effective, especially when the corresponding DOFs come from many independent individuals. We show an application example with a SCARA robot to demonstrate its flexibility and extensibility.

Simultaneous Tracking of Multiple Construction Workers Using Stereo-Vision (다수의 건설인력 위치 추적을 위한 스테레오 비전의 활용)

  • Lee, Yong-Ju;Park, Man-Woo
    • Journal of KIBIM / v.7 no.1 / pp.45-53 / 2017
  • Continuous research efforts have been made on acquiring location data on construction sites. As a result, GPS and RFID are increasingly employed on site to track the location of equipment and materials. However, these systems are based on radio-frequency technologies, which require attaching tags to every target entity; implementing them incurs the time and cost of attaching, detaching, and managing the tags or sensors. For this reason, efforts are currently being made to track construction entities using only cameras. Vision-based 3D tracking was presented in a previous research work in which the locations of construction manpower, vehicles, and materials were successfully tracked. However, that system is still in its infancy and has yet to be implemented in practical applications for two reasons. First, it does not involve entity matching across the two views and thus cannot track multiple entities simultaneously. Second, the use of a checkerboard in the camera calibration process entails a focus-related problem when the baseline is long and the target entities are located far from the cameras. This paper proposes a vision-based method to track multiple workers simultaneously. An entity matching procedure is added to acquire matching pairs of the same entities across the two views, which is necessary for tracking multiple entities. The proposed method also simplifies the calibration process by avoiding the checkerboard, making it better suited to realistic deployment on construction sites.
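Once matched pairs across the two views are available, each worker's 3D position follows from standard linear (DLT) triangulation, sketched below (a generic illustration; the projection matrices here are toy assumptions, not the paper's calibration):

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation of one matched image point from two
    calibrated views; P1, P2 are 3x4 projection matrices."""
    A = np.vstack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector of A = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras with identity intrinsics, one unit apart on the x-axis;
# the point (1, 2, 5) projects to (0.2, 0.4) and (0.0, 0.4).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = triangulate(P1, P2, (0.2, 0.4), (0.0, 0.4))
```

Correct cross-view matching matters precisely because triangulating a mismatched pair still yields a 3D point, just of a location no worker occupies.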

Calibration of Structured Light Vision System using Multiple Vertical Planes

  • Ha, Jong Eun
    • Journal of Electrical Engineering and Technology / v.13 no.1 / pp.438-444 / 2018
  • Structured light vision systems have been widely used in 3D surface profiling. Typically, such a system is composed of a camera and a laser that projects a line on the target. Calibration is necessary to acquire 3D information with a structured light stripe vision system. Conventional calibration algorithms find the pose of the camera and the equation of the laser's stripe plane in the camera's coordinate system, so 3D reconstruction is only possible in the camera frame. In most cases this is sufficient for the given tasks, but these algorithms require multiple images acquired under different poses. In this paper, we propose a calibration algorithm that works from just one shot and enables 3D reconstruction in both the camera and laser frames. This is achieved with a newly designed calibration structure that has multiple vertical planes on the ground plane. The ability to reconstruct in both frames gives more flexibility for applications, and the proposed algorithm also improves the accuracy of 3D reconstruction.
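The core geometric step in any stripe-plane system, recovering a 3D point by intersecting a camera ray with the calibrated laser plane, can be sketched as follows (a generic illustration of stripe-plane reconstruction, not the proposed single-shot calibration itself):

```python
import numpy as np

def reconstruct_point(pixel_ray, plane_n, plane_d):
    """Intersect the camera ray through the origin with direction
    pixel_ray against the calibrated laser plane n . X = d."""
    t = plane_d / np.dot(plane_n, pixel_ray)   # ray parameter at the plane
    return t * pixel_ray

# Ray through a stripe pixel meets the laser plane z = 2.
pt = reconstruct_point(np.array([0.1, 0.2, 1.0]),
                       np.array([0.0, 0.0, 1.0]), 2.0)
```

Calibration supplies `plane_n` and `plane_d`; once known, every stripe pixel yields a 3D point from a single image, which is why stripe systems suit surface profiling.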

Real-time Water Quality Monitoring System Using Vision Camera and Multiple Objects Tracking Method (비젼 카메라와 다중 객체 추적 방법을 이용한 실시간 수질 감시 시스템)

  • Yang, Won-Keun;Lee, Jung-Ho;Cho, Ik-Hwan;Jin, Ju-Kyong;Jeong, Dong-Seok
    • The Journal of Korean Institute of Communications and Information Sciences / v.32 no.4C / pp.401-410 / 2007
  • In this paper, we propose a water quality monitoring system using a vision camera and a multiple-object tracking method. Unlike sensor-based monitoring systems, the proposed system analyzes each object individually using a vision camera. It consists of an individual object segmentation part and an object tracking part based on the interrelation between successive frames. For real-time processing, we build a background image using non-parametric estimation and extract objects against it; this greatly reduces computational complexity as well as extracting objects more effectively. The multiple-object tracking method predicts each object's next motion from its moving direction, velocity, and acceleration, and then performs tracking based on the predicted motion. We also apply exception-handling algorithms to improve tracking performance. Experimental results under various conditions show that the proposed system is suitable for real-time water quality monitoring, since it has a very short processing time and tracks multiple objects correctly.
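The motion-prediction step described above (direction, velocity, acceleration) corresponds to a constant-acceleration model, sketched here (a hypothetical illustration; the `dt` parameter and tuple interface are assumptions of this sketch):

```python
def predict_next(pos, vel, acc, dt=1.0):
    """Constant-acceleration prediction of an object's next position,
    p' = p + v*dt + 0.5*a*dt^2, used to center the tracker's search
    window for the next frame."""
    return tuple(p + v * dt + 0.5 * a * dt * dt
                 for p, v, a in zip(pos, vel, acc))

# An object at the origin moving at (1, 2) with acceleration (0, 2).
nxt = predict_next((0.0, 0.0), (1.0, 2.0), (0.0, 2.0))
```

Predicting a search window rather than scanning the whole frame is what keeps per-frame association cheap enough for real-time monitoring.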

A Study on the 3-dimensional feature measurement system for OMM using multiple-sensors (멀티센서 시스템을 이용한 3차원 형상의 기상측정에 관한 연구)

  • 권양훈;윤길상;조명우
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 2002.10a / pp.158-163 / 2002
  • This paper presents a multiple-sensor system for rapid, high-precision coordinate data acquisition in the OMM (on-machine measurement) process. In this research, three sensors (a touch probe, a laser sensor, and a vision sensor) are integrated to obtain more accurate measuring results. The touch-type probe has high accuracy but is time-consuming. The vision sensor can acquire many point data rapidly over a spatial range, but its accuracy is lower than that of the other sensors, and it cannot acquire data for invisible areas. The laser sensor offers intermediate accuracy and measuring speed, and can acquire data for sharp or rounded edges and for features with very small holes and/or grooves; however, its system structure imposes range constraints on its use. In this research, a new optimum sensor integration method for OMM is proposed, integrating the multiple sensors to accomplish more effective inspection planning. To verify the effectiveness of the proposed method, simulation and experimental works are performed, and the results are analyzed.
