• Title/Abstract/Keywords: multiple vision

Search results: 457 items (processing time: 0.107 s)

Deep Learning Machine Vision System with High Object Recognition Rate using Multiple-Exposure Image Sensing Method

  • Park, Min-Jun;Kim, Hyeon-June
    • Journal of Sensor Science and Technology
    • /
    • Vol. 30, No. 2
    • /
    • pp.76-81
    • /
    • 2021
  • In this study, we propose a machine vision system with a high object recognition rate. By utilizing a multiple-exposure image sensing technique, the proposed deep learning-based machine vision system can cover a wide light intensity range without additional training for each light intensity range. If the proposed machine vision system fails to recognize object features, it switches to a multiple-exposure sensing mode and detects target objects obscured in near-dark or overly bright regions. Furthermore, short- and long-exposure images from the multiple-exposure sensing mode are synthesized to obtain accurate object feature information, generating image information with a wide dynamic range. Even though the object recognition resources for the deep learning process covered a light intensity range of only 23 dB, the prototype machine vision system with the multiple-exposure imaging method demonstrated object recognition performance over a light intensity range of up to 96 dB.
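The short/long-exposure synthesis described above can be sketched as a per-pixel weighted fusion, where each pixel is weighted by how well-exposed it is, so the short exposure dominates in bright regions and the long exposure in dark regions. This is a minimal illustration under an assumed Gaussian weighting; the abstract does not specify the paper's actual synthesis method.

```python
import numpy as np

def fuse_exposures(short_img, long_img, midpoint=0.5, sigma=0.2):
    """Blend a short- and a long-exposure image (float arrays in [0, 1]).

    Each pixel is weighted by its closeness to mid-gray, which widens
    the usable dynamic range of the fused result.
    """
    w_short = np.exp(-((short_img - midpoint) ** 2) / (2 * sigma ** 2))
    w_long = np.exp(-((long_img - midpoint) ** 2) / (2 * sigma ** 2))
    total = w_short + w_long + 1e-8  # avoid division by zero
    return (w_short * short_img + w_long * long_img) / total
```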

Development of a Ubiquitous Vision System for Location-awareness of Multiple Targets by a Matching Technique for the Identity of a Target; a New Approach

  • Kim, Chi-Ho;You, Bum-Jae;Kim, Hag-Bae
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • ICCAS 2005, Institute of Control, Robotics and Systems
    • /
    • pp.68-73
    • /
    • 2005
  • Various techniques have been proposed for the detection and tracking of targets in order to develop real-world computer vision systems, e.g., visual surveillance systems, intelligent transport systems (ITSs), and so forth. In particular, a distributed vision system is required to realize these techniques over a widespread area. In this paper, we develop a ubiquitous vision system for location-awareness of multiple targets. Each vision sensor composing the system can perform exact segmentation of a target using color and motion information, and can visually track multiple targets in real time. We construct the ubiquitous vision system as a multi-agent system by regarding each vision sensor as an agent (a vision agent). We then solve the target-identity matching problem during handover with a protocol-based approach, for which we propose the identified contract net (ICN) protocol. The ICN protocol is independent of the number of vision agents and requires no calibration between them, which improves the speed, scalability, and modularity of the system. We applied the ICN protocol in the ubiquitous vision system we constructed for experiments. The system produced reliable results, and the ICN protocol operated successfully through several experiments.

  • PDF
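The protocol-based handover described in the abstract above can be illustrated, in a highly simplified form, as a contract-award step: the agent losing a target announces a contract and neighboring agents bid with an observation confidence. The message flow and confidence scoring here are assumptions for illustration, not the paper's ICN specification.

```python
def handover(target_id, bids):
    """Award a leaving target to the neighboring vision agent with the
    highest observation confidence.

    bids maps agent name -> confidence in [0, 1]; returns the winning
    agent, or None if no agent currently sees the target.
    """
    visible = {agent: c for agent, c in bids.items() if c > 0.0}
    if not visible:
        return None
    return max(visible, key=visible.get)

# Example: agent "VA2" reports the highest confidence and wins the contract.
winner = handover("person-7", {"VA1": 0.2, "VA2": 0.8, "VA3": 0.0})
```

Because the award depends only on the bids received, the step is independent of the number of agents and needs no inter-camera calibration, mirroring the scalability claim in the abstract.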

A Study on an IMM-PDAF-Based Sensor Fusion Method for Compensating Lateral Errors of Detected Vehicles Using Radar and Vision Sensors

  • 장성우;강연식
    • Journal of Institute of Control, Robotics and Systems
    • /
    • Vol. 22, No. 8
    • /
    • pp.633-642
    • /
    • 2016
  • It is important for advanced active safety systems and autonomous driving cars to obtain accurate estimates of nearby vehicles in order to increase their safety and performance. This paper proposes a sensor fusion method for radar and vision sensors to accurately estimate the state of preceding vehicles. In particular, we performed a study on compensating for the lateral state error of automotive radar sensors by using a vision sensor. The proposed method is based on the Interacting Multiple Model (IMM) algorithm, which stochastically integrates multiple Kalman filters with multiple models, here a lateral-compensation mode and a radar-single-sensor mode. In addition, a Probabilistic Data Association Filter (PDAF) is utilized as a data association method to improve the reliability of the estimates in a cluttered radar environment. A two-step correction method is used in the Kalman filter, which efficiently associates both the radar and vision measurements into single state estimates. Finally, the proposed method is validated through off-line simulations using measurements obtained from a field test in an actual road environment.
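The stochastic integration of mode-matched filters in an IMM can be sketched by its model-probability update: mode probabilities are first mixed through a Markov transition matrix, then reweighted by each filter's measurement likelihood. This is a generic IMM step under assumed two-mode settings (lateral-compensation vs. radar-only), not the paper's specific filter design.

```python
import numpy as np

def imm_mode_probabilities(mu, transition, likelihoods):
    """One IMM model-probability update (simplified sketch).

    mu:          prior mode probabilities, shape (M,)
    transition:  row-stochastic Markov mode-transition matrix, shape (M, M)
    likelihoods: measurement likelihood of each mode's filter, shape (M,)
    Returns the posterior mode probabilities (sums to 1).
    """
    predicted = transition.T @ mu          # mix: predicted mode probabilities
    posterior = predicted * likelihoods    # weight by each filter's fit
    return posterior / posterior.sum()     # normalize
```

With two modes, a high vision-measurement likelihood for the compensation-mode filter shifts probability mass toward that mode, so its state estimate dominates the combined output.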

A Parallel Implementation of Multiple Non-overlapping Cameras for Robot Pose Estimation

  • Ragab, Mohammad Ehab;Elkabbany, Ghada Farouk
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 8, No. 11
    • /
    • pp.4103-4117
    • /
    • 2014
  • Image processing and computer vision algorithms are receiving growing attention in a variety of application areas such as robotics and man-machine interaction. Vision allows the development of flexible, intelligent, and less intrusive approaches than most other sensor systems. In this work, we determine the location and orientation of a mobile robot, which is crucial for performing its tasks. In order to operate in real time, the various vision routines need to be sped up. Therefore, we present and evaluate a method for introducing parallelism into the multiple non-overlapping camera pose estimation algorithm proposed in [1]. In that algorithm, the problem is solved in real time using multiple non-overlapping cameras and the Extended Kalman Filter (EKF). Four cameras arranged in two back-to-back pairs are mounted on the platform of a moving robot. An important benefit of using multiple cameras for robot pose estimation is the capability of resolving vision ambiguities such as the bas-relief ambiguity. The proposed method is based on algorithmic skeletons for low, medium, and high levels of parallelization. The analysis shows that the use of a multiprocessor system enhances the system performance by about 87%. In addition, the proposed design is scalable, which is necessary in this application, where the number of features changes repeatedly.
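The low-level parallelism exploited above relies on per-feature measurement predictions being mutually independent, so they can be distributed across processors. The sketch below illustrates only that independence, using a placeholder pinhole projection and a thread pool as a stand-in for the paper's algorithmic skeletons; it is not the paper's EKF formulation.

```python
from concurrent.futures import ThreadPoolExecutor

def predict_feature(pose, feature):
    """Project one 3-D feature into a camera (placeholder pinhole model)."""
    x, y, z = (f - p for f, p in zip(feature, pose))
    return (x / z, y / z)  # normalized image coordinates

def predict_all(pose, features, workers=4):
    """Predict all feature projections concurrently, one task per feature.

    Each prediction depends only on the shared pose and its own feature,
    so the tasks can run in parallel without synchronization.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda f: predict_feature(pose, f), features))
```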

Mobile Robot Localization Using a Ubiquitous Vision System

  • Nguyen, Xuan Dao;Kim, Chi-Ho;You, Bum-Jae
    • The Korean Institute of Electrical Engineers: Conference Proceedings
    • /
    • Proceedings of the 2005 KIEE 36th Summer Conference, Part D
    • /
    • pp.2780-2782
    • /
    • 2005
  • In this paper, we present a mobile robot localization solution using a Ubiquitous Vision System (UVS). The collective information gathered by multiple strategically placed cameras has many advantages. For example, aggregating information from multiple viewpoints reduces the uncertainty about the robots' positions. We construct the UVS as a multi-agent system by regarding each vision sensor as one vision agent (VA). Each VA performs target segmentation by color and motion information as well as visual tracking of multiple objects. Our modified identified contract net (ICN) protocol is used for communication between VAs to coordinate multiple tasks. This protocol improves the scalability and modularity of the system because it is independent of the number of VAs and requires no calibration between them. Furthermore, handover between VAs using the ICN is seamless. Experimental results show the robustness of the solution over a widespread area. The performance in indoor environments shows the feasibility of the proposed solution in real time.

  • PDF

Development of an Input Device for Positioning of Multiple DOFs

  • 김대성;김진오
    • Journal of Institute of Control, Robotics and Systems
    • /
    • Vol. 15, No. 8
    • /
    • pp.851-858
    • /
    • 2009
  • In this study, we propose a new input device using vision technology for positioning of multiple DOFs. The input device is composed of multiple tags on a transparent table and a vision camera below the table. The vision camera detects LEDs at the bottom of each tag to derive its ID, position, and orientation. This information is used to determine the position and orientation of remote target DOFs. The developed approach is very reliable and effective, especially when the corresponding DOFs belong to many independent individuals. We show an application example with a SCARA robot to demonstrate its flexibility and extensibility.
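Recovering a tag's position and orientation from detected LED markers can be sketched as follows, assuming (hypothetically; the abstract does not describe the marker layout) that each tag carries a distinguishable front and back LED: the midpoint gives position and the front-to-back direction gives heading.

```python
import math

def tag_pose(led_front, led_back):
    """Recover a tag's 2-D position and heading from two detected LED
    centroids (front and back markers, in image coordinates).

    Position is the midpoint of the two LEDs; heading is the angle of
    the vector from the back LED to the front LED.
    """
    (fx, fy), (bx, by) = led_front, led_back
    x, y = (fx + bx) / 2.0, (fy + by) / 2.0
    theta = math.atan2(fy - by, fx - bx)
    return x, y, theta
```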

Simultaneous Tracking of Multiple Construction Workers Using Stereo Vision

  • 이용주;박만우
    • Journal of KIBIM (Korea Institute of Building Information Modeling)
    • /
    • Vol. 7, No. 1
    • /
    • pp.45-53
    • /
    • 2017
  • Continuous research efforts have been made on acquiring location data on construction sites. As a result, GPS and RFID are increasingly employed on site to track the location of equipment and materials. However, these systems are based on radio frequency technologies, which require attaching tags to every target entity. Implementing them incurs time and cost for attaching, detaching, and managing the tags or sensors. For this reason, efforts are currently being made to track construction entities using only cameras. Vision-based 3D tracking was presented in a previous research work in which the locations of construction manpower, vehicles, and materials were successfully tracked. However, that system is still in its infancy and is yet to be implemented in practical applications for two reasons. First, it does not involve entity matching across the two views, and thus cannot be used to track multiple entities simultaneously. Second, the use of a checkerboard in the camera calibration process entails a focus-related problem when the baseline is long and the target entities are located far from the cameras. This paper proposes a vision-based method to track multiple workers simultaneously. An entity matching procedure is added to acquire matching pairs of the same entities across the two views, which is necessary for tracking multiple entities. The proposed method also simplifies the calibration process by avoiding the use of a checkerboard, making it better suited to realistic deployment on construction sites.

Calibration of Structured Light Vision System using Multiple Vertical Planes

  • Ha, Jong Eun
    • Journal of Electrical Engineering and Technology
    • /
    • Vol. 13, No. 1
    • /
    • pp.438-444
    • /
    • 2018
  • Structured light vision systems have been widely used in 3D surface profiling. Usually, such a system is composed of a camera and a laser that projects a line on the target. Calibration is necessary to acquire 3D information using a structured light stripe vision system. Conventional calibration algorithms find the pose of the camera and the equation of the laser's stripe plane in the same coordinate system as the camera, so 3D reconstruction is only possible in the camera frame. In most cases, this is sufficient to fulfill given tasks, but these algorithms require multiple images acquired under different poses for calibration. In this paper, we propose a calibration algorithm that works from just one shot and gives 3D reconstruction in both the camera and laser frames. This is achieved by a newly designed calibration structure that has multiple vertical planes on the ground plane. The ability to reconstruct in both the camera and laser frames gives more flexibility for applications, and the proposed algorithm also improves the accuracy of 3D reconstruction.
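Once the stripe plane is calibrated in the camera frame, each stripe pixel yields a 3D point by intersecting its viewing ray with the laser plane. This is the standard reconstruction step for structured light stripe systems, sketched generically (not the paper's specific algorithm):

```python
def ray_plane_point(pixel_ray, plane):
    """Intersect a camera viewing ray with the laser stripe plane.

    pixel_ray: direction (dx, dy, dz) of the ray through the pixel,
               starting at the camera center (the origin).
    plane:     (a, b, c, d) with a*x + b*y + c*z + d = 0, expressed in
               the camera frame, as produced by the calibration.
    Returns the 3-D point on the stripe.
    """
    a, b, c, d = plane
    dx, dy, dz = pixel_ray
    denom = a * dx + b * dy + c * dz
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the laser plane")
    t = -d / denom  # scale along the ray at which it meets the plane
    return (t * dx, t * dy, t * dz)
```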

Real-time Water Quality Monitoring System Using a Vision Camera and a Multiple-Object Tracking Method

  • 양원근;이정호;조익환;진주경;정동석
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • Vol. 32, No. 4C
    • /
    • pp.401-410
    • /
    • 2007
  • In this paper, we propose a real-time water quality monitoring system using a vision camera and a multiple-object tracking method. Unlike conventional sensor-based monitoring systems, the proposed system analyzes individual objects using a vision camera. The vision-based system consists of a method for separating individual objects from the image and a method for tracking multiple objects based on the correlation between two consecutive frames. For real-time processing, a background image is generated using non-parametric estimation, and objects are extracted against it. Non-parametric estimation reduces the amount of computation while extracting objects relatively accurately. The multiple-object tracking method predicts the next movement of each object from its direction, velocity, and acceleration, and performs tracking based on this prediction. An exception-handling algorithm is also applied to improve the tracking success rate. Experiments in various environments confirmed that the proposed system has a short processing time and tracks multiple objects accurately, making it suitable for a real-time water quality monitoring system.
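The prediction step using direction, velocity, and acceleration can be sketched as constant-acceleration extrapolation from the last three observed positions. The exact motion model is an assumption; the abstract names only the quantities used.

```python
def predict_next(p0, p1, p2):
    """Predict the next position of a tracked object from its last three
    positions (oldest first), by constant-acceleration extrapolation:

        v = p2 - p1,  a = (p2 - p1) - (p1 - p0),  next = p2 + v + a
    """
    v = tuple(c2 - c1 for c2, c1 in zip(p2, p1))
    a = tuple(vc - (c1 - c0) for vc, c1, c0 in zip(v, p1, p0))
    return tuple(c + vc + ac for c, vc, ac in zip(p2, v, a))
```

The tracker would then associate each detection in the next frame with the nearest predicted position, falling back to exception handling when no detection lies close enough.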

A Study on a 3-Dimensional Feature Measurement System for OMM Using Multiple Sensors

  • 권양훈;윤길상;조명우
    • Korean Society of Machine Tool Engineers: Conference Proceedings
    • /
    • Proceedings of the 2002 KSMTE Fall Conference
    • /
    • pp.158-163
    • /
    • 2002
  • This paper presents a multiple-sensor system for rapid and high-precision coordinate data acquisition in the OMM (on-machine measurement) process. In this research, three sensors (touch probe, laser, and vision sensor) are integrated to obtain more accurate measuring results. The touch-type probe has high accuracy but is time-consuming. The vision sensor can rapidly acquire many point data over a spatial range, but its accuracy is lower than that of the other sensors, and it cannot acquire data for invisible areas. The laser sensor has medium accuracy and measuring speed among the three, and can acquire data for sharp or rounded edges and for features with very small holes and/or grooves; however, it has range constraints due to its system structure. In this research, a new optimum sensor integration method for OMM is proposed by integrating the multiple sensors to accomplish more effective inspection planning. To verify the effectiveness of the proposed method, simulations and experiments are performed, and the results are analyzed.

  • PDF