• Title/Summary/Keyword: camera vision

Search Results: 1,386

Intelligent System based on Command Fusion and Fuzzy Logic Approaches - Application to mobile robot navigation (명령융합과 퍼지기반의 지능형 시스템-이동로봇주행적용)

  • Jin, Taeseok;Kim, Hyun-Deok
    • Journal of the Korea Institute of Information and Communication Engineering / v.18 no.5 / pp.1034-1041 / 2014
  • This paper proposes a fuzzy inference model for obstacle avoidance of a mobile robot with an active camera that intelligently searches for the goal location in unknown environments, using command fusion based on situational commands from a vision sensor. Instead of a "physical sensor fusion" method, which generates the robot's trajectory from an environment model and sensory data, a "command fusion" method is used to govern the robot's motions. The navigation strategy is based on a combination of fuzzy rules tuned for both goal approach and obstacle avoidance. Experimental results obtained with the proposed method demonstrate successful navigation using real vision data. A brief illustrative sketch of the command-fusion idea follows this entry.
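The abstract does not spell out the rule base, so the following is only a minimal sketch of the command-fusion idea under assumed memberships: a goal-approach steering command and an obstacle-avoidance steering command are blended with weights taken from hypothetical "near"/"far" fuzzy sets over the obstacle distance. Function names, thresholds, and the weighted-average defuzzification are illustrative, not the authors' implementation.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    rise = (x - a) / (b - a) if b > a else 1.0
    fall = (c - x) / (c - b) if c > b else 1.0
    return max(0.0, min(rise, fall))

def fuse_commands(goal_steer, avoid_steer, obstacle_dist):
    """Blend goal-approach and obstacle-avoidance steering commands.

    goal_steer, avoid_steer: steering angles (rad) proposed by the two behaviours;
    obstacle_dist: distance to the nearest obstacle (m).
    """
    w_near = tri(obstacle_dist, 0.0, 0.0, 1.5)   # obstacle is near -> favour avoidance
    w_far  = tri(obstacle_dist, 0.5, 2.0, 2.0)   # obstacle is far  -> favour the goal
    total = w_near + w_far
    if total == 0.0:
        return goal_steer
    # Weighted-average defuzzification of the two behaviour commands
    return (w_near * avoid_steer + w_far * goal_steer) / total

# Example: obstacle 0.8 m away, goal to the left, obstacle forcing a right turn
print(fuse_commands(goal_steer=0.3, avoid_steer=-0.4, obstacle_dist=0.8))
```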

Unmanned Vehicle System Configuration using All Terrain Vehicle

  • Moon, Hee-Chang;Park, Eun-Young;Kim, Jung-Ha
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2004.08a / pp.1550-1554 / 2004
  • This paper deals with an unmanned vehicle system configuration using an all terrain vehicle. Many research institutes and universities study and develop unmanned vehicle systems and control algorithms, and nowadays they try to apply unmanned vehicles to military devices and to the exploration of space and the deep sea. These unmanned vehicles can help us with tasks that are difficult to perform or to reach. In our lab's previous research on unmanned vehicles, we used a 1/10 scale radio-controlled vehicle and composed the unmanned vehicle system using ultrasonic sensors, a CCD camera, and other sensors for motion control. We designed a lane detection algorithm using the vision system, and obstacle detection and avoidance algorithms using ultrasonic and infrared sensors. As the system grew, it became hard to fit it on the 1/10 scale RC car, so we had to choose a new vehicle bigger than a 1/10 scale RC car but smaller than a real vehicle. An ATV (all terrain vehicle) has a structure similar to a full-size vehicle but is smaller. In this research, we build an unmanned vehicle using an ATV and explain the control theory of each component.


Road Slide Detection Algorithm Using CCD Camera (CCD 카메라를 이용한 도로 붕괴 사태 검출 알고리즘)

  • Kwon, Young-Man;Shin, Se-Yeon;Park, Young-Jin;Kim, Eun-Soo
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.11 no.2 / pp.181-187 / 2011
  • In this paper, we propose an efficient vision-based algorithm for detecting road slides, such as the collapse of a road cut slope. The proposed algorithm divides the image into non-surveillance and surveillance regions, and further divides the surveillance region into road, boundary, and non-road regions. It then finds moving blocks, remembers their movement history using a TTL (Time To Live) table, and declares a road slide when moving blocks travel from the non-road region into the road region. Experiments confirmed that the proposed algorithm detects road slides effectively. A minimal sketch of this block-and-TTL mechanism appears below.
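The paper's block size, thresholds, and region map are not given in the abstract; the sketch below only illustrates the general mechanism of block-wise frame differencing with a TTL table that remembers recent motion, plus a check for motion appearing in both the non-road and road regions. The grid size, thresholds, and region labelling are hypothetical.

```python
import numpy as np

BLOCK = 16          # block size in pixels (assumed)
TTL_INIT = 10       # frames a moving block is remembered (assumed)
DIFF_THRESH = 25.0  # mean absolute difference per block that counts as motion (assumed)

def moving_blocks(prev_gray, cur_gray):
    """Return a boolean grid marking blocks whose content changed between frames."""
    h, w = cur_gray.shape
    gh, gw = h // BLOCK, w // BLOCK
    diff = np.abs(cur_gray.astype(np.float32) - prev_gray.astype(np.float32))
    grid = diff[:gh * BLOCK, :gw * BLOCK].reshape(gh, BLOCK, gw, BLOCK)
    return grid.mean(axis=(1, 3)) > DIFF_THRESH

def update_ttl(ttl, moving):
    """Refresh TTL where motion is seen; let it decay elsewhere."""
    return np.where(moving, TTL_INIT, np.maximum(ttl - 1, 0))

def road_slide_alarm(ttl, region):
    """Alarm when recently-moving blocks exist both in the non-road region
    (source of a slide) and in the road region (where debris arrives).
    region is a grid labelled 0 = non-road, 1 = boundary, 2 = road."""
    active = ttl > 0
    return bool(active[region == 0].any() and active[region == 2].any())
```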

Development of Vehicle Sealing Inspection System Using Geometry Matching Method (형상 매칭법을 이용한 비이클 실링 검사 시스템 개발)

  • Lee, Jung-Ho;Park, Chan-Hee;Seo, Young-Soo;Lee, Hyung-Soo;Kim, Han-Joo
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.22 no.1 / pp.150-155 / 2013
  • This work presents a new sealing inspection system for vehicles in which foam rubber materials are used to seal vehicle parts. The system comprises devices for non-contact, real-time scanning in the visual inspection of machine parts. We investigated the qualitative factors that influence the sealing of vehicle structures, which flexibly attenuate vibration by means of foam rubber materials with elastic properties. However, some steps of the performance inspection of rubber parts, especially cross-section inspection, still depend on an outdated technique (personal subjective judgment). With the newly developed inspection system, recently applied to the production line, we achieved a matching rate of about 80% in the sealing performance inspection, with repeated errors of 0.7% to 1.4%. These results come from non-contact measurement by a CCD camera and a vision program using a geometry matching method; an illustrative sketch of shape matching follows this entry. We expect that this system can be widely applied to the strict inspection of more diverse cross-sections in the future.
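The abstract names geometry matching without giving its formulation; as one plausible analogue, the sketch below compares a reference cross-section contour against a measured one using OpenCV's Hu-moment based cv2.matchShapes. The file names and the pass threshold are hypothetical, and this is not necessarily the matching method the authors used.

```python
import cv2

def largest_contour(path):
    """Load an image, binarise it with Otsu's threshold, and return its largest contour."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

ref = largest_contour("reference_section.png")   # golden cross-section (assumed file)
cur = largest_contour("measured_section.png")    # cross-section under test (assumed file)

# Lower scores mean more similar shapes; the 0.1 pass threshold is illustrative only.
score = cv2.matchShapes(ref, cur, cv2.CONTOURS_MATCH_I1, 0.0)
print("match score:", score, "PASS" if score < 0.1 else "FAIL")
```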

A Task Scheduling Strategy in a Multi-core Processor for Visual Object Tracking Systems (시각물체 추적 시스템을 위한 멀티코어 프로세서 기반 태스크 스케줄링 방법)

  • Lee, Minchae;Jang, Chulhoon;Sunwoo, Myoungho
    • Transactions of the Korean Society of Automotive Engineers / v.24 no.2 / pp.127-136 / 2016
  • Camera-based object detection systems must satisfy recognition performance as well as real-time constraints. Particularly in safety-critical systems such as Autonomous Emergency Braking (AEB), the real-time constraints significantly affect system performance. Recently, multi-core processors and system-on-chip technologies have been widely used to accelerate object detection algorithms by distributing computational loads. However, although the additional hardware improves real-time performance, it increases the complexity of the system architecture, which in turn makes it difficult to migrate existing algorithms and to develop new ones. In this paper, a task scheduling strategy for visual object tracking systems is proposed to improve real-time performance and reduce design complexity. The real-time performance of the vision algorithm is increased by applying pipelining to task scheduling on a multi-core processor; a minimal sketch of such a pipeline appears below. Finally, the proposed task scheduling algorithm is applied to a crosswalk detection and tracking system to prove the effectiveness of the proposed strategy.
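The scheduling strategy itself is not detailed in the abstract; the sketch below only illustrates what pipelining a vision workload across cores looks like. Three assumed stages (capture, detect, track) run as separate processes connected by bounded queues, so stage n of frame k overlaps stage n+1 of frame k-1; the stage bodies are placeholders.

```python
import multiprocessing as mp

def capture(out_q, n_frames=100):
    # Stage 1: grab frames (placeholder data instead of a real camera)
    for k in range(n_frames):
        out_q.put(("frame", k))
    out_q.put(None)                       # end-of-stream marker

def detect(in_q, out_q):
    # Stage 2: detect candidate objects in each frame
    while (item := in_q.get()) is not None:
        _, k = item
        out_q.put(("detections", k))
    out_q.put(None)

def track(in_q):
    # Stage 3: associate detections over time (placeholder)
    while (item := in_q.get()) is not None:
        pass                              # data-association / tracking logic would go here

if __name__ == "__main__":
    q1, q2 = mp.Queue(maxsize=4), mp.Queue(maxsize=4)   # bounded queues throttle the pipeline
    stages = [mp.Process(target=capture, args=(q1,)),
              mp.Process(target=detect, args=(q1, q2)),
              mp.Process(target=track, args=(q2,))]
    for p in stages:
        p.start()
    for p in stages:
        p.join()
```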

Strawberry Harvesting Robot for Bench-type Cultivation

  • Han, Kil-Su;Kim, Si-Chan;Lee, Young-Bum;Kim, Sang-Chul;Im, Dong-Hyuk;Choi, Hong-Ki;Hwang, Heon
    • Journal of Biosystems Engineering / v.37 no.1 / pp.65-74 / 2012
  • Purpose: An autonomous robot was developed for harvesting strawberries cultivated in bench-type systems. Methods: The harvest robot consisted of four main components: an autonomous vehicle, a manipulator with four degrees of freedom (DOF), an end effector with two DOFs, and a color computer vision system. Strawberry detection was performed based on 3D image and distance information obtained from a stereo CCD color camera and a laser device, respectively. Results: In this work, a Cartesian type manipulator system was designed, including an intermediate revolute axis and a double driven arm-based joint axis, so that it could generate collision-free motions during harvesting. A DC servomotor-driven end-effector, consisting of a gripper and a cutter, was designed for gripping and cutting the strawberry stem without damaging the strawberry itself. Real-time position tracking algorithms were developed to detect, recognize, trace, and approach strawberries under natural light conditions. Conclusion: The developed robot system could harvest a strawberry within 7 seconds without damage.

Gesture Recognition by Analyzing a Trajectory on Spatio-Temporal Space (시공간상의 궤적 분석에 의한 제스쳐 인식)

  • 민병우;윤호섭;소정;에지마 도시야끼
    • Journal of KIISE: Software and Applications / v.26 no.1 / pp.157-157 / 1999
  • Research on gesture recognition has become a very interesting topic in the computer vision area. Gesture recognition from visual images has a number of potential applications such as HCI (Human Computer Interaction), VR (Virtual Reality), and machine vision. To overcome the technical barriers in visual processing, conventional approaches have employed cumbersome devices such as data gloves or color-marked gloves. In this research, we capture gesture images without using external devices and generate a gesture trajectory composed of point-tokens. The trajectory is spotted using phase-based velocity constraints and recognized using a discrete left-right HMM. Input vectors to the HMM are obtained by applying the LBG clustering algorithm on a polar-coordinate space to which the point-tokens on the Cartesian space are converted; a minimal sketch of this front end follows this entry. The gesture vocabulary is composed of twenty-two dynamic hand gestures for editing drawing elements. In our experiment, one hundred samples per gesture were collected from twenty persons; fifty were used for training and another fifty for the recognition experiment. The result shows about a 95% recognition rate and also the possibility that these results can be applied to several potential systems operated by gestures. The developed system runs in real time for editing basic graphic primitives in a hardware environment of a Pentium Pro (200 MHz), a Matrox Meteor graphics board, and a CCD camera, with a Windows 95 and Visual C++ software environment.
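As a rough illustration of the quantization front end described above (not the authors' code), the sketch below converts Cartesian point-tokens to polar coordinates about the trajectory centroid and builds a small LBG-style codebook by splitting and Lloyd iterations; the resulting symbol sequence is what a discrete left-right HMM would consume. The codebook size, iteration counts, and splitting factor are assumptions.

```python
import numpy as np

def to_polar(points):
    """Convert (x, y) point-tokens, taken relative to the trajectory centroid, to (r, theta)."""
    centred = points - points.mean(axis=0)
    r = np.hypot(centred[:, 0], centred[:, 1])
    theta = np.arctan2(centred[:, 1], centred[:, 0])
    return np.column_stack([r, theta])

def lbg_codebook(vectors, size=8, iters=20, eps=1e-3):
    """Grow a codebook by repeated splitting followed by Lloyd iterations (LBG)."""
    codebook = vectors.mean(axis=0, keepdims=True)
    while len(codebook) < size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])  # split each codeword
        for _ in range(iters):
            labels = np.argmin(((vectors[:, None] - codebook[None]) ** 2).sum(-1), axis=1)
            for j in range(len(codebook)):
                if np.any(labels == j):
                    codebook[j] = vectors[labels == j].mean(axis=0)
    return codebook

def quantize(vectors, codebook):
    """Map each vector to the index of its nearest codeword (HMM observation symbol)."""
    return np.argmin(((vectors[:, None] - codebook[None]) ** 2).sum(-1), axis=1)

trajectory = np.random.rand(60, 2)               # placeholder for tracked hand point-tokens
polar = to_polar(trajectory)
symbols = quantize(polar, lbg_codebook(polar))   # observation sequence for the discrete HMM
```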

A study on development of automatic welding system for corrugated membranes of the LNG tank (LNG 탱크의 주름진 내벽박판용 자동용접시스템의 개발에 관한 연구)

  • 유제용;유원상;나석주;강계형;한용섭
    • Journal of Welding and Joining / v.14 no.1 / pp.99-106 / 1996
  • The development of an automatic TIG welding system incorporating a vision sensor and a torch control mechanism leads to improved welding quality and greater production efficiency. For the LNG (Liquefied Natural Gas) storage tank, the automatic welding system is strongly restricted in size and weight, and it must provide a unique torch rotating mechanism that keeps the torch tip in a constant position while the angle changes continuously so that the welding torch remains substantially perpendicular to the weld line. The developed system is driven by two translation axes (X, Z) and one rotational axis. A moving line window method is adopted for recognizing images of the corrugated membranes with specular reflection; this method identifies the original laser stripe pattern in images affected by multiple reflections, and a minimal sketch of this kind of stripe search appears below. A self-teaching algorithm, which guides the automatic welding machine using the information provided by the CCD camera without any prior learning of a reference trajectory, was developed for tracking the corrugated membrane of the LNG tank along the weld line.

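The moving line window method is only named in the abstract; the sketch below illustrates the general kind of operation such a seam tracker relies on, searching for the laser stripe in each image column only within a window around the stripe position found previously, which suppresses spurious peaks caused by specular multi-reflection. The window size and the centroid-based peak estimate are assumptions, not the authors' algorithm.

```python
import numpy as np

WINDOW = 40  # half-height of the search window around the previous stripe row (assumed)

def stripe_centerline(image, prev_rows):
    """Estimate the laser stripe row in each column of a grayscale image.

    The search in each column is limited to a window around the stripe position
    found in the previous frame, so bright reflections far from the expected
    stripe location are ignored.
    """
    h, w = image.shape
    rows = np.empty(w)
    for col in range(w):
        lo = max(0, int(prev_rows[col]) - WINDOW)
        hi = min(h, int(prev_rows[col]) + WINDOW)
        window = image[lo:hi, col].astype(np.float32)
        # Intensity-weighted centroid gives a sub-pixel stripe position
        weights = window / (window.sum() + 1e-6)
        rows[col] = lo + (weights * np.arange(len(window))).sum()
    return rows
```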

Constructing 3D Outlines of Objects based on Feature Points using Monocular Camera (단일카메라를 사용한 특징점 기반 물체 3차원 윤곽선 구성)

  • Park, Sang-Heon;Lee, Jeong-Oog;Baik, Doo-Kwon
    • The KIPS Transactions: Part B / v.17B no.6 / pp.429-436 / 2010
  • This paper presents a method to extract 3D outlines of objects in an image obtained from monocular vision. It detects the general outlines of an object with the MOPS (Multi-Scale Oriented Patches) algorithm and obtains their spatial coordinates. Simultaneously, it obtains the spatial coordinates of feature points lying inside the object's outlines through the SIFT (Scale Invariant Feature Transform) algorithm. The shape of the object is then captured by joining the spatial coordinates of the outlines with the SIFT feature points. Because the proposed method forms only the general outlines of objects, it enables rapid calculation, and it still collects detailed data because the SIFT feature points supply the interior information within the outlines. A minimal sketch of combining an outline with interior feature points follows this entry.
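OpenCV does not ship a MOPS implementation, so the sketch below substitutes a plain contour for the outline step and uses SIFT (available in recent OpenCV builds) for the interior feature points; it only illustrates combining a coarse outline with detailed interior features in 2D, not the paper's recovery of 3D coordinates. The input file name is assumed.

```python
import cv2
import numpy as np

img = cv2.imread("object.png")                        # assumed input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Coarse outline of the object (stand-in for the MOPS-based outline step)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contour = max(contours, key=cv2.contourArea)          # largest contour, shape (N, 1, 2)
outline = contour.reshape(-1, 2)                      # outline points as (x, y)

# Detailed interior features via SIFT keypoints
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(gray, None)
interior = np.array([kp.pt for kp in keypoints])

# Keep only the SIFT points lying inside the outline, mirroring the idea of
# enriching a coarse outline with interior feature points
inside = [p for p in interior
          if cv2.pointPolygonTest(contour, (float(p[0]), float(p[1])), False) >= 0]
print(len(outline), "outline points,", len(inside), "interior SIFT points")
```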

Point Pattern Matching Based Global Localization using Ceiling Vision (천장 조명을 이용한 점 패턴 매칭 기반의 광역적인 위치 추정)

  • Kang, Min-Tae;Sung, Chang-Hun;Roh, Hyun-Chul;Chung, Myung-Jin
    • Proceedings of the KIEE Conference / 2011.07a / pp.1934-1935 / 2011
  • In order for a service robot to perform its tasks, basic autonomous navigation techniques such as localization, mapping, and path planning are required. Localization (estimating the robot's pose) is a fundamental ability for a service robot to navigate autonomously. In this paper, we propose a new system for point pattern matching based visual global localization using spot lightings on the ceiling. The proposed algorithm is suitable for systems that demand high accuracy and a fast update rate, such as a guide robot in an exhibition. A single camera looking upward (called a ceiling vision system) is mounted on the head of the mobile robot, and image features such as lightings are detected and tracked through the image sequence. To detect more spot lightings, we chose a wide-FOV lens, which inevitably causes serious image distortion; however, by applying the distortion correction only to the positions of the spot lightings rather than to all image pixels, we reduce the processing time. Then, using point pattern matching and least squares estimation, we obtain the precise position and orientation of the mobile robot; a minimal sketch of these two steps appears below. Experimental results demonstrate the accuracy and update rate of the proposed algorithm in real environments.

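A minimal sketch of the two computations the abstract highlights, under assumed camera intrinsics: lens distortion is corrected only at the detected spot-light positions with cv2.undistortPoints (cheaper than undistorting the whole image), and the robot's planar pose is then recovered from matched point pairs with a least-squares rigid transform (Kabsch/SVD). The point-pattern matching step itself is omitted; correspondences are assumed to be given, and the intrinsics and distortion coefficients are placeholders.

```python
import cv2
import numpy as np

# Assumed intrinsics and distortion of the wide-FOV ceiling camera
K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.35, 0.12, 0.0, 0.0, 0.0])

def undistort_spots(pixel_spots):
    """Correct lens distortion only at the detected spot-light pixels."""
    pts = pixel_spots.reshape(-1, 1, 2).astype(np.float64)
    # Passing P=K keeps the result in pixel coordinates instead of normalized ones
    return cv2.undistortPoints(pts, K, dist, P=K).reshape(-1, 2)

def rigid_transform_2d(src, dst):
    """Least-squares rotation + translation mapping src points onto dst (Kabsch)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t                           # robot heading = atan2(R[1, 0], R[0, 0])
```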