• Title/Summary/Keyword: Multi-vision sensors


The development of a micro robot system for robot soccer game (로봇 축구 대회를 위한 마이크로 로봇 시스템의 개발)

  • 이수호;김경훈;김주곤;조형석
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 1996.10b
    • /
    • pp.507-510
    • /
    • 1996
  • In this paper we present the multi-agent robot system developed for participating in a micro robot soccer tournament. The multi-agent robot system consists of micro robots, a vision system, a host computer and a communication module. The micro robots are equipped with two mini DC motors with encoders and gearboxes, an R/F receiver, a CPU and infrared sensors for obstacle detection. A vision system is used to recognize the position of the ball and the opponent robots, and the position and orientation of our robots. The vision system is composed of a color CCD camera and a vision processing unit. The host computer is a Pentium PC; it receives information from the vision system, generates commands for each robot using a robot management algorithm and transmits the commands to the robots through the R/F communication module. In order to achieve a given mission in the micro robot soccer game, cooperative behaviors by the robots are essential. Cooperative work between individual agents is achieved by the commands of the host computer.

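The abstract describes a centralized architecture in which the host PC closes the control loop over vision and R/F. The following is a minimal sketch of that loop; the `vision.read_frame()` and `rf.send()` interfaces, the proportional gains, and the wheel-base value are all hypothetical and only illustrate the vision-to-strategy-to-R/F data flow described above, not the authors' robot management algorithm.

```python
import math

K_ANG, K_LIN = 4.0, 1.5          # hypothetical proportional gains
WHEEL_BASE = 0.07                # assumed wheel separation [m]

def wheel_speeds(robot_pose, target_xy):
    """Simple turn-and-drive controller: (x, y, heading) -> (left, right) wheel speeds."""
    dx, dy = target_xy[0] - robot_pose[0], target_xy[1] - robot_pose[1]
    heading_err = math.atan2(dy, dx) - robot_pose[2]
    heading_err = math.atan2(math.sin(heading_err), math.cos(heading_err))  # wrap to [-pi, pi]
    v = K_LIN * math.hypot(dx, dy)           # forward speed toward the target
    w = K_ANG * heading_err                  # turn rate toward the target bearing
    return v - 0.5 * WHEEL_BASE * w, v + 0.5 * WHEEL_BASE * w

def host_loop(vision, rf, strategy):
    """One vision -> strategy -> R/F cycle per camera frame, as described in the abstract."""
    while True:
        ball_xy, our_poses, their_poses = vision.read_frame()   # from the vision system
        targets = strategy(ball_xy, our_poses, their_poses)     # robot management algorithm
        for robot_id, (pose, target) in enumerate(zip(our_poses, targets)):
            left, right = wheel_speeds(pose, target)
            rf.send(robot_id, left, right)                      # R/F communication module
```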

A Vision System for Robot Soccer Game (로봇 축구 대회를 위한 영상 처리 시스템)

  • 고국원;최재호;김창효;김경훈;김주곤;이수호;조형석
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1996.11a
    • /
    • pp.434-438
    • /
    • 1996
  • In this paper we present the multi-agent robot system and the vision system developed for participating in a micro robot soccer tournament. The multi-agent robot system consists of micro robots, a vision system, a host computer and a communication module. The micro robots are equipped with two mini DC motors with encoders and gearboxes, an R/F receiver, a CPU and infrared sensors for obstacle detection. A vision system is used to recognize the position of the ball and the opponent robots, and the position and orientation of our robots. The vision system is composed of a color CCD camera and a vision processing unit (AISI vision computer). The vision algorithm is based on a morphological method, and it takes about 90 msec to detect the ball, our three robots and the three opponent robots with reasonable accuracy.

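The abstract only states that the vision algorithm is "based on a morphological method"; the sketch below is a generic illustration of color thresholding followed by morphological clean-up and centroid extraction, using OpenCV. The HSV ranges, kernel size, and area threshold are placeholders, not values from the paper.

```python
import cv2
import numpy as np

def find_color_blobs(frame_bgr, hsv_lo, hsv_hi, min_area=20):
    """Threshold one team/ball color, clean the mask morphologically, return blob centroids."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)    # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # fill small holes
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]

# Example: an orange ball (the HSV range is a placeholder, not from the paper)
# ball_positions = find_color_blobs(frame, (5, 150, 150), (15, 255, 255))
```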

Appearance Based Object Identification for Mobile Robot Localization in Intelligent Space with Distributed Vision Sensors

  • Jin, TaeSeok;Morioka, Kazuyuki;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.4 no.2
    • /
    • pp.165-171
    • /
    • 2004
  • Robots will be able to coexist with humans and support humans effectively in the near future. One of the most important aspects in the development of human-friendly robots is cooperation between humans and robots. In this paper, we propose a method for multi-object identification in order to achieve such a human-centered system and robot localization in intelligent space. The intelligent space is a space where many intelligent devices, such as computers and sensors, are distributed. The Intelligent Space achieves human-centered services by accelerating the physical and psychological interaction between humans and intelligent devices. As an intelligent device of the Intelligent Space, a color CCD camera module, which includes processing and networking parts, has been chosen. The Intelligent Space requires functions for identifying and tracking multiple objects to provide appropriate services to users in a multi-camera environment. In order to achieve seamless tracking and location estimation, many camera modules are distributed, which causes errors in object identification among the different camera modules. This paper describes an appearance-based object representation for the distributed vision system in the Intelligent Space to achieve consistent labeling of all objects. We then discuss how to learn the object color appearance model and how to achieve multi-object tracking under occlusions.
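As a rough illustration of the consistent-labeling problem the abstract raises, the sketch below greedily matches objects seen by one camera module to previously learned appearance models by histogram distance. The Bhattacharyya distance, the greedy assignment, and the threshold are assumptions for illustration, not the paper's learned appearance model.

```python
import numpy as np

def bhattacharyya_dist(h1, h2):
    """Distance between two normalized color histograms (0 = identical)."""
    return float(np.sqrt(max(0.0, 1.0 - np.sum(np.sqrt(h1 * h2)))))

def label_detections(detections, models, max_dist=0.4):
    """Greedily assign each detected histogram the label of the closest appearance model.

    detections : list of normalized histograms seen by one camera module
    models     : dict {label: normalized histogram} shared by all camera modules
    """
    labels, used = [], set()
    for hist in detections:
        candidates = [(bhattacharyya_dist(hist, m), lab)
                      for lab, m in models.items() if lab not in used]
        if candidates and min(candidates)[0] <= max_dist:
            _, lab = min(candidates)
            used.add(lab)
            labels.append(lab)
        else:
            labels.append(None)   # unknown object; a new appearance model would be learned here
    return labels
```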

The Automated Measurement of Tool Wear using Computer Vision (컴퓨터 비젼에 의한 공구마모의 자동계측)

  • Song, Jun-Yeop;Lee, Jae-Jong;Park, Hwa-Yeong
    • 한국기계연구소 소보
    • /
    • s.19
    • /
    • pp.69-79
    • /
    • 1989
  • Cutting tool life monitoring is a critical element needed for designing unmanned machining systems. This paper describes a tool wear measurement system using computer vision which repeatedly measures the flank and crater wear of a single-point cutting tool. This direct tool wear measurement method is based on an interactive procedure utilizing an image processor and multi-vision sensors. The measurement software calculates 7 parameters to characterize flank and crater wear. Performance tests revealed that the computer vision technique provides precise, absolute tool-wear quantification and reduces human measurement errors.

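The abstract does not list the seven wear parameters; the sketch below only illustrates the kind of direct measurement involved, estimating the maximum and mean flank wear-land width from a thresholded tool image. The brightness threshold, image orientation, and pixel scale are hypothetical.

```python
import numpy as np

def flank_wear_widths(gray, threshold=180, mm_per_pixel=0.005):
    """Estimate flank wear-land widths (VBmax, VBmean) from a grayscale tool image.

    Assumes the bright, abraded wear land runs horizontally and image rows index
    the direction of wear growth; both assumptions are illustrative only.
    """
    wear = np.asarray(gray) >= threshold     # bright pixels treated as worn (assumed)
    widths = wear.sum(axis=0)                # wear-land height in each image column [px]
    widths = widths[widths > 0]
    if widths.size == 0:
        return 0.0, 0.0
    vb_max = widths.max() * mm_per_pixel
    vb_mean = widths.mean() * mm_per_pixel
    return vb_max, vb_mean
```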

Multi-Object Tracking using the Color-Based Particle Filter in ISpace with Distributed Sensor Network

  • Jin, Tae-Seok;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.5 no.1
    • /
    • pp.46-51
    • /
    • 2005
  • Intelligent Space (ISpace) is a space where many intelligent devices, such as computers and sensors, are distributed. For these intelligent devices to cooperate, it is very important that the system knows location information in order to offer useful services. To achieve this goal, we present a method for representing, tracking and following humans by fusing distributed multiple vision systems in ISpace, with application to pedestrian tracking in a crowd. The article presents the integration of color distributions into particle filtering. Particle filters provide a robust tracking framework under ambiguous conditions. We propose to track the moving objects by generating hypotheses not in the image plane but on the top-view reconstruction of the scene. Comparative results on real video sequences show the advantage of our method for multi-object tracking. Simulations are carried out to evaluate the performance of the proposed method. The method is also applied to the intelligent environment, and its performance is verified by experiments.
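A minimal sketch of the color-weighted particle filter idea in the abstract, with hypotheses generated on the top-view ground plane rather than in the image. The random-walk motion model, the motion noise, the `color_likelihood(x, y)` callback, and the resampling scheme are assumptions for illustration.

```python
import numpy as np

def particle_filter_step(particles, color_likelihood, motion_std=0.05,
                         rng=np.random.default_rng()):
    """One predict-weight-resample cycle on top-view (x, y) particles.

    particles        : (N, 2) array of ground-plane position hypotheses [m]
    color_likelihood : callable (x, y) -> likelihood of the target's color model there
    """
    # Predict: random-walk motion model on the ground plane
    particles = particles + rng.normal(0.0, motion_std, particles.shape)

    # Weight: evaluate the color observation model at each hypothesis
    weights = np.array([color_likelihood(x, y) for x, y in particles])
    weights = weights / (weights.sum() + 1e-12)

    # Estimate, then resample to concentrate particles on likely positions
    estimate = (weights[:, None] * particles).sum(axis=0)
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], estimate
```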

INS/Multi-Vision Integrated Navigation System Based on Landmark (다수의 비전 센서와 INS를 활용한 랜드마크 기반의 통합 항법시스템)

  • Kim, Jong-Myeong;Leeghim, Henzeh
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.45 no.8
    • /
    • pp.671-677
    • /
    • 2017
  • A new INS/vision integrated navigation system using multi-vision sensors is addressed in this paper. When the total number of landmarks measured by the vision sensor is smaller than the allowable number, there is a possibility that the navigation filter diverges. To prevent this problem, the multi-vision concept is applied to expand the field of view so that a reliable number of landmarks is always guaranteed. In this work, the cameras are installed at orientations of 0, 120, and -120 degrees with respect to the body frame to improve the observability. Finally, the proposed technique is verified by numerical simulation.
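The following sketch only illustrates the field-of-view argument in the abstract: with three cameras mounted at 0, +120, and -120 degrees about the body axis, a landmark bearing almost always falls inside at least one camera's field of view, so the navigation filter is fed enough measurements. The half-FOV value and the minimum-landmark check are assumptions, not the paper's configuration.

```python
import numpy as np

CAMERA_YAWS = np.deg2rad([0.0, 120.0, -120.0])   # camera mounting yaws from the abstract
HALF_FOV = np.deg2rad(30.0)                      # assumed half field of view per camera

def visible_landmarks(landmarks_body, min_required=3):
    """Return per-camera lists of landmark indices whose bearing lies inside that camera's FOV.

    landmarks_body : (N, 2) landmark positions in the body frame (x forward, y left)
    min_required   : assumed minimum landmark count before the filter update is trusted
    """
    bearings = np.arctan2(landmarks_body[:, 1], landmarks_body[:, 0])
    per_camera = []
    for yaw in CAMERA_YAWS:
        rel = np.arctan2(np.sin(bearings - yaw), np.cos(bearings - yaw))  # wrapped bearing offset
        per_camera.append(np.flatnonzero(np.abs(rel) <= HALF_FOV).tolist())
    total_seen = {i for cam in per_camera for i in cam}
    enough = len(total_seen) >= min_required      # guard against filter divergence
    return per_camera, enough
```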

THE DEVELOPMENT OF THE NARROW GAP MULTI-PASS WELDING SYSTEM USING LASER VISION SYSTEM

  • Park, Hee-Chang;Park, Young-Jo;Song, Keun-Ho;Lee, Jae-Woong;Jung, Yung-Hwa;Luc Didier
    • Proceedings of the KWS Conference
    • /
    • 2002.10a
    • /
    • pp.706-713
    • /
    • 2002
  • In the multi-pass welding of pressure vessels or ships, a mechanical touch sensor system is generally used together with a manipulator to measure the gap and depth of the narrow gap to perform seam tracking. Unfortunately, such mechanical touch sensors may commit measuring errors caused by the deterioration of the measuring device. An automation system for narrow gap multi-pass welding using a laser vision system, which can track the seam line of the narrow gap and control the welding power, has been developed. The joint profile of the narrow gap, with 250 mm depth and 28 mm width, can be captured by the laser vision camera. The image is then processed to define the tracking positions of the torch during welding. Then, the real-time correction of the lateral and vertical position of the torch can be done by the laser vision system. The adaptive control of welding conditions, such as welding currents and welding speeds, can also be performed by the laser vision system, which cannot be done by conventional mechanical touch systems. The developed automation system will be adopted to reduce the idle time of welders, which occurs frequently in conventional long welding processes, and to improve the reliability of the weld quality as well.

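A minimal sketch of the real-time torch correction step the abstract describes: from one laser-vision joint profile (depth sampled across the lateral axis), locate the groove center and current deposit level, and convert the offsets into lateral and vertical torch corrections. The profile format, groove-wall threshold, and stickout target are assumptions, not the developed system's algorithm.

```python
import numpy as np

def torch_correction(lateral_mm, depth_mm, torch_lateral_mm, torch_depth_mm,
                     target_stickout_mm=25.0, depth_frac=0.5):
    """Compute (lateral, vertical) torch corrections from one laser-vision profile.

    lateral_mm, depth_mm : sampled joint profile across the narrow gap (depth below the surface)
    torch_lateral_mm     : current lateral torch position
    torch_depth_mm       : current depth of the contact tip below the surface
    depth_frac           : fraction of max depth used to delimit the groove walls (assumed)
    """
    lateral_mm = np.asarray(lateral_mm, dtype=float)
    depth_mm = np.asarray(depth_mm, dtype=float)
    in_groove = depth_mm >= depth_frac * depth_mm.max()     # samples between the side walls
    groove_center = lateral_mm[in_groove].mean()            # seam line estimate
    layer_depth = depth_mm[in_groove].mean()                # current top of the weld deposit

    d_lateral = groove_center - torch_lateral_mm            # steer the torch onto the seam line
    d_vertical = (layer_depth - target_stickout_mm) - torch_depth_mm  # keep wire stickout constant
    return d_lateral, d_vertical
```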

The Development of the Narrow Gap Multi-Pass Welding System Using Laser Vision System

  • Park, H.C.;Park, Y.J.;Song, K.H.;Lee, J.W.;Jung, Y.H.;Didier, L.
    • International Journal of Korean Welding Society
    • /
    • v.2 no.1
    • /
    • pp.45-51
    • /
    • 2002
  • In the multi-pass welding of pressure vessels or ships, a mechanical touch sensor system is generally used together with a manipulator to measure the gap and depth of the narrow gap to perform seam tracking. Unfortunately, such mechanical touch sensors may commit measuring errors caused by the deterioration of the measuring device. An automation system for narrow gap multi-pass welding using a laser vision system, which can track the seam line of the narrow gap and control the welding power, has been developed. The joint profile of the narrow gap, with 250 mm depth and 28 mm width, can be captured by the laser vision camera. The image is then processed to define the tracking positions of the torch during welding. Then, the real-time correction of the lateral and vertical position of the torch can be done by the laser vision system. The adaptive control of welding conditions, such as welding currents and welding speeds, can also be performed by the laser vision system, which cannot be done by conventional mechanical touch systems. The developed automation system will be adopted to reduce the idle time of welders, which occurs frequently in conventional long welding processes, and to improve the reliability of the weld quality as well.


Real-Time Objects Tracking using Color Configuration in Intelligent Space with Distributed Multi-Vision (분산다중센서로 구현된 지능화공간의 색상정보를 이용한 실시간 물체추적)

  • Jin, Tae-Seok;Lee, Jang-Myung;Hashimoto, Hideki
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.9
    • /
    • pp.843-849
    • /
    • 2006
  • Intelligent Space defines an environment where many intelligent devices, such as computers and sensors, are distributed. As a result of the cooperation between smart devices, intelligence emerges from the environment. In such a scheme, a crucial task is to obtain the global location of every device in order to offer useful services. Some tracking systems prepare models of the objects in advance; it is difficult to adopt such a model-based solution as the tracking system when many kinds of objects exist. In this paper the location is achieved with no prior model, using color properties as the information source. Feature vectors of multiple objects based on color histograms and the tracking method are described. The proposed method is applied to the intelligent environment and its performance is verified by experiments.
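A brief sketch of the model-free color feature described in the abstract: a normalized hue-saturation histogram computed over the object's image region, compared between frames with the Bhattacharyya coefficient. The bin counts, the HSV color space, and the association rule in the trailing comment are assumptions for illustration.

```python
import cv2
import numpy as np

def color_feature(frame_bgr, box, h_bins=16, s_bins=16):
    """Normalized hue-saturation histogram of an object's bounding box (x, y, w, h)."""
    x, y, w, h = box
    roi = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi], [0, 1], None, [h_bins, s_bins], [0, 180, 0, 256])
    return (hist / (hist.sum() + 1e-12)).flatten()

def similarity(f1, f2):
    """Bhattacharyya coefficient: 1.0 for identical color distributions."""
    return float(np.sum(np.sqrt(f1 * f2)))

# Tracking by matching (assumed association rule): the detection whose feature is
# most similar to an object's previous feature keeps that object's identity.
```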

A Study on Adaptive Control to Fill Weld Groove by Using Multi-Torches in SAW (SAW 용접시 다중 토치를 이용한 용접부 적응제어에 관한 연구)

  • 문형순;정문영;배강열
    • Journal of Welding and Joining
    • /
    • v.17 no.6
    • /
    • pp.90-99
    • /
    • 1999
  • A significant portion of the total manufacturing time for a pipe fabrication process is spent on welding, following the primary machining and fit-up processes. To achieve a reliable weld bead appearance, automatic seam tracking and adaptive control to fill the groove are urgently needed. For seam tracking in welding processes, vision sensors have been successfully applied. However, adaptive filling control of a multi-torch system for an appropriate welded area has not yet been implemented for SAW (submerged arc welding). The term adaptive control is often used to describe recent advances in welding process control, but strictly it only applies to a system which is able to cope with dynamic changes in system performance. In welding applications, the term adaptive control may not imply the conventional control-theory definition, but may be used in the more descriptive sense to explain the need for the process to adapt to changing welding conditions. This paper proposes various methodologies for obtaining a good bead appearance based on a multi-torch welding system with a vision system in SAW. The methodologies for adaptive filling control use the welding current/voltage, the arc voltage/welding current/wire feed speed combination, and the welding speed, by using the vision sensor. It was shown that the algorithm based on the welding current/voltage combination and the welding speed produced a sound weld bead appearance compared with that of the voltage/current combination.

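As a rough numerical illustration of the adaptive filling idea (matching deposited metal to the groove cross-section measured by the vision sensor), the sketch below computes the travel speed needed for one torch to fill a target layer area. The wire diameter, feed speed, and deposition efficiency are hypothetical values, not the paper's welding conditions.

```python
import math

def travel_speed_for_fill(groove_area_mm2, wire_dia_mm=4.0,
                          wire_feed_mm_per_min=1500.0, efficiency=0.95):
    """Travel speed [mm/min] so the deposited area per unit length fills the measured layer.

    Volume balance: wire_area * wire_feed * efficiency = groove_area * travel_speed.
    """
    wire_area = math.pi * (wire_dia_mm / 2.0) ** 2                  # wire cross-section [mm^2]
    deposit_rate = wire_area * wire_feed_mm_per_min * efficiency    # deposited volume [mm^3/min]
    return deposit_rate / groove_area_mm2

# Example: a 60 mm^2 layer area measured by the vision sensor
# speed = travel_speed_for_fill(60.0)   # ~ 298 mm/min with the assumed values
```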