• Title/Abstract/Keyword: Vision-based Control


Vision Sensor-Based Driving Algorithm for Indoor Automatic Guided Vehicles

  • Quan, Nguyen Van;Eum, Hyuk-Min;Lee, Jeisung;Hyun, Chang-Ho
    • International Journal of Fuzzy Logic and Intelligent Systems, Vol. 13, No. 2, pp. 140-146, 2013
  • In this paper, we describe a vision sensor-based driving algorithm for indoor automatic guided vehicles (AGVs) that facilitates path tracking using two mono cameras for navigation. One camera is mounted on the vehicle to observe the environment and detect markers ahead of it. The other camera is attached so that its view is perpendicular to the floor, which compensates for the distance between the wheels and the markers. The angle and distance from the center of the two wheels to the center of a marker are also obtained using these two cameras. We propose five movement patterns for AGVs to guarantee smooth performance during path tracking: starting, moving straight, pre-turning, left/right turning, and stopping. This driving algorithm based on two vision sensors gives AGVs greater flexibility, including easy layout changes, autonomy, and economy. The algorithm was validated in an experiment using a two-wheeled mobile robot.
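The five movement patterns could be organized as a small state selector driven by the marker angle and distance the two cameras provide. The sketch below is a minimal illustration, not the authors' implementation; the thresholds and state names are hypothetical.

```python
def select_pattern(state, angle_deg, dist_m):
    """Pick the next movement pattern from the marker angle/distance.

    Hypothetical thresholds: 0.05 m stop distance, 5/20 degree turn bands.
    """
    if state == "stopped":
        return "starting"
    if dist_m < 0.05:            # close enough to the final marker
        return "stopping"
    if abs(angle_deg) > 20.0:    # large heading error: commit to a turn
        return "left_turning" if angle_deg > 0 else "right_turning"
    if abs(angle_deg) > 5.0:     # moderate error: slow down before turning
        return "pre_turning"
    return "moving_straight"
```

A pattern selector like this keeps the per-pattern motion controllers simple, since each one only has to handle its own regime.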

Stereo Vision Based Balancing System Results

  • Tserendondog, Tengis;Amar, Batmunkh;Ragchaa, Byambajav
    • International Journal of Internet, Broadcasting and Communication, Vol. 8, No. 1, pp. 1-6, 2016
  • Keeping a system in a stable state is one of the important issues in control theory. The main goal of our basic research is the stability of an unmanned aerial vehicle (quadrotor). This type of system uses a variety of sensors to stabilize itself. In control theory and automatic control systems, stabilizing any system requires applying feedback control based on information from sensors. Our aim is to provide balance based on 3D spatial information in real time. We used the PID control method to stabilize a seesaw balancing system, and this article presents our experimental results. This paper demonstrates the possibility of balancing a seesaw system based on feedback information from a stereo vision system only.
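The PID feedback loop the abstract refers to can be sketched as follows; the gains and sample time are placeholders, and the error would come from the seesaw tilt measured by the stereo vision system.

```python
class PID:
    """Textbook discrete PID controller (not the authors' tuning)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, error):
        # integrate the error and differentiate it numerically
        self.integral += error * self.dt
        deriv = (error - self.prev_err) / self.dt
        self.prev_err = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv
```

In the seesaw setting, `error` would be the measured tilt angle minus the level set-point, and the output would drive the balancing actuator.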

Vision-Based Mobile Robot Navigation by Robust Path Line Tracking

  • 손민혁;도용태
    • Journal of Sensor Science and Technology, Vol. 20, No. 3, pp. 178-186, 2011
  • Line tracking is a well-defined method of mobile robot navigation. It is simple in concept, technically easy to implement, and already employed in many industrial sites. Among several line tracking methods, magnetic sensing is widely used in practice. In comparison, vision-based tracking is less popular, mainly due to its sensitivity to surrounding conditions such as brightness and floor characteristics, although vision is the most powerful robotic sensing capability. In this paper, a robust vision-based path line detection technique is proposed for the navigation of a mobile robot under uncontrollable surrounding conditions. The proposed technique has four processing steps: color space transformation, pixel-level line sensing, block-level line sensing, and robot navigation control. It uses hue and saturation values in line sensing so as to be insensitive to brightness variation. Block-level line finding not only makes the technique immune to pixel-level detection errors but also simplifies robot control. The proposed technique was tested on a real mobile robot and proved effective.
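The pixel-level and block-level sensing steps could look roughly like the sketch below: classify each pixel by hue and saturation only (ignoring brightness), then vote per block. The reference hue, tolerances, and vote ratio are hypothetical, not the paper's values.

```python
def classify_pixel(h, s, v, h_ref=30.0, h_tol=15.0, s_min=0.4):
    """Line-pixel test using hue/saturation only; v (brightness) is unused."""
    dh = min(abs(h - h_ref), 360.0 - abs(h - h_ref))  # circular hue distance
    return dh <= h_tol and s >= s_min

def block_has_line(block, min_ratio=0.3):
    """Block-level sensing: a block contains the line if enough pixels vote."""
    hits = sum(classify_pixel(*px) for px in block)
    return hits / len(block) >= min_ratio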

Vision-based Localization for AUVs using Weighted Template Matching in a Structured Environment

  • 김동훈;이동화;명현;최현택
    • Journal of Institute of Control, Robotics and Systems, Vol. 19, No. 8, pp. 667-675, 2013
  • This paper presents vision-based techniques for underwater landmark detection, map-based localization, and SLAM (Simultaneous Localization and Mapping) in structured underwater environments. A variety of underwater tasks require an underwater robot to perform autonomous navigation successfully, but the sensors available for accurate localization are limited. Among them, a vision sensor is very useful for short-range tasks, in spite of harsh underwater conditions including low visibility, noise, and large areas of featureless topography. To overcome these problems and to utilize a vision sensor for underwater localization, we propose a novel vision-based object detection technique applied to MCL (Monte Carlo Localization) and EKF (Extended Kalman Filter)-based SLAM algorithms. In the image processing step, a weighted correlation coefficient-based template matching and a color-based image segmentation method are proposed to improve on the conventional approach. In the localization step, to apply the landmark detection results to MCL and EKF-SLAM, dead-reckoning information and landmark detection results are used for the prediction and update phases, respectively. The performance of the proposed technique is evaluated in experiments with an underwater robot platform in an indoor water tank, and the results are discussed.
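A weighted correlation coefficient, the core of the template matching step, can be sketched as a weighted Pearson correlation between an image patch and the template. This is a generic formulation under the assumption that the paper's weights down-weight unreliable pixels; the exact weighting scheme is the authors' contribution and is not reproduced here.

```python
def weighted_ncc(patch, template, weights):
    """Weighted normalized correlation between two flattened pixel lists."""
    wsum = sum(weights)
    mp = sum(w * p for w, p in zip(weights, patch)) / wsum
    mt = sum(w * t for w, t in zip(weights, template)) / wsum
    num = sum(w * (p - mp) * (t - mt)
              for w, p, t in zip(weights, patch, template))
    dp = sum(w * (p - mp) ** 2 for w, p in zip(weights, patch)) ** 0.5
    dt = sum(w * (t - mt) ** 2 for w, t in zip(weights, template)) ** 0.5
    return num / (dp * dt)
```

The score lies in [-1, 1]; sliding the template over the image and taking the location with the highest score gives the landmark detection.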

Development of a Vision-based Blank Alignment Unit for Press Automation Process

  • 오종규;김대식;김수종
    • Journal of Institute of Control, Robotics and Systems, Vol. 21, No. 1, pp. 65-69, 2015
  • A vision-based blank alignment unit for a press automation line is introduced in this paper. A press is a machine tool that changes the shape of a blank by applying pressure and is widely used in industries requiring mass production. In traditional press automation lines, a mechanical centering unit, which consists of guides and ball bearings, is employed to align a blank before a robot inserts it into the press. However, it can only align blanks of limited sizes and shapes. Moreover, it cannot be applied to a process where two or more blanks are inserted simultaneously. To overcome these problems, we developed a press centering unit based on vision sensors for press automation lines. The specification of the vision system is determined by considering information about the blank and the required accuracy. Vision application software with pattern recognition, camera calibration, and monitoring functions is designed to successfully detect multiple blanks. Through real experiments with an industrial robot, we validated that the proposed system can align blanks of various sizes and shapes and successfully detect two or more blanks inserted simultaneously.
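One standard way to obtain a blank's position and orientation from a segmented camera image, which a unit like this needs before commanding the robot, is via image moments. The sketch below uses the classical second-central-moment orientation formula; it is a generic technique, not the paper's pattern-recognition software.

```python
import math

def blank_pose(pixels):
    """Centroid (cx, cy) and orientation (radians) of a segmented blank.

    `pixels` is a list of (x, y) coordinates belonging to the blank.
    Orientation comes from the second central moments (mu20, mu02, mu11).
    """
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    mu20 = sum((x - cx) ** 2 for x, _ in pixels) / n
    mu02 = sum((y - cy) ** 2 for _, y in pixels) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in pixels) / n
    theta = 0.5 * math.atan2(2.0 * mu11, mu20 - mu02)
    return cx, cy, theta
```

The difference between this measured pose and a taught reference pose gives the correction the robot applies before inserting the blank into the press.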

A Study on Adaptive Control to Fill Weld Groove by Using Multi-Torches in SAW

  • 문형순;정문영;배강열
    • Journal of Welding and Joining, Vol. 17, No. 6, pp. 90-99, 1999
  • A significant portion of the total manufacturing time for a pipe fabrication process is spent on welding, following the primary machining and fit-up processes. To achieve a reliable weld bead appearance, automatic seam tracking and adaptive control to fill the groove are urgently needed. Vision sensors have been successfully applied to seam tracking in welding processes. However, adaptive filling control with a multi-torch system for an appropriate welded area had not yet been implemented in SAW (submerged arc welding). The term adaptive control is often used to describe recent advances in welding process control, but strictly it only applies to a system which is able to cope with dynamic changes in system performance. In welding applications, the term may not carry its conventional control-theory definition but may be used in the more descriptive sense of the process adapting to changing welding conditions. This paper proposes various methodologies for obtaining a good bead appearance with a multi-torch welding system and a vision system in SAW. The methodologies for adaptive filling control use welding current/voltage, an arc voltage/welding current/wire feed speed combination, and welding speed measured by the vision sensor. It was shown that the algorithm based on the welding current/voltage combination and welding speed produced a sound weld bead appearance compared with that of the voltage/current combination.
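The idea of adapting welding speed to the measured groove can be illustrated with a simplified single-torch mass balance: the deposited cross-section area equals the wire cross-section times the ratio of wire feed speed to travel speed. This is a generic relation for illustration only, not the paper's multi-torch algorithm, and it ignores flux, losses, and reinforcement.

```python
import math

def travel_speed_for_fill(groove_area_mm2, wire_diam_mm, wire_feed_mm_s):
    """Travel speed that makes the deposited area match the measured groove.

    Mass balance: groove_area * travel_speed = wire_area * wire_feed_speed.
    """
    wire_area = math.pi * (wire_diam_mm / 2.0) ** 2
    return wire_area * wire_feed_mm_s / groove_area_mm2
```

With the groove area supplied per scan by the vision sensor, re-evaluating this relation continuously yields the kind of speed-based adaptive filling the abstract describes.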


The Road Speed Sign Board Recognition, Steering Angle and Speed Control Methodology based on Double Vision Sensors and Deep Learning

  • 김인성;서진우;하대완;고윤석
    • The Journal of the Korea Institute of Electronic Communication Sciences, Vol. 16, No. 4, pp. 699-708, 2021
  • In this paper, we present a speed control algorithm for an autonomous vehicle using two vision sensors and deep learning. A speed control algorithm is proposed in which the speed sign in the road sign images provided by vision sensor A is recognized using TensorFlow, a deep learning framework, and the vehicle then follows the recognized speed. At the same time, a steering angle control algorithm was developed that analyzes the road images transmitted from vision sensor B in real time to detect the lane, computes the steering angle, and controls the front axle via PWM control so that the vehicle tracks the lane. To verify the effectiveness of the proposed steering angle and speed control algorithms, a prototype car based on Python, a Raspberry Pi, and OpenCV was built. Its accuracy was then confirmed by validating steering and speed control scenarios on a purpose-built test track.
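The lane-offset-to-PWM mapping in the steering path can be sketched as a proportional law: the lane's pixel offset from image center becomes a clamped steering angle, which is then mapped to a servo duty cycle. All constants below (30-degree limit, 7.5% center duty, 2.5% span) are illustrative placeholders for a typical hobby servo, not values from the paper.

```python
def steering_pwm(lane_offset_px, img_width, max_angle_deg=30.0,
                 duty_center=7.5, duty_span=2.5):
    """Map a lane offset in pixels to a servo PWM duty cycle (percent)."""
    # proportional steering angle, clamped to the mechanical limit
    angle = max_angle_deg * 2.0 * lane_offset_px / img_width
    angle = max(-max_angle_deg, min(max_angle_deg, angle))
    # linear map: -max angle -> duty_center - span, +max -> duty_center + span
    return duty_center + duty_span * angle / max_angle_deg
```

On a Raspberry Pi this duty cycle would be fed to the GPIO PWM output driving the front-axle steering servo each frame.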

Controlling robot by image-based visual servoing with stereo cameras

  • Fan, Jun-Min;Won, Sang-Chul
    • Korea Information Technology Applications Society, Proceedings of the 6th 2005 International Conference on Computers, Communications and System, pp. 229-232, 2005
  • In this paper, an image-based "approach-align-grasp" visual servo control design is proposed for the problem of object grasping, based on a stand-alone binocular system. The basic idea is to consider the vision system as a sensor dedicated to a task and included in a servo control loop, and automatic grasping follows the classical approach of splitting the task into preparation and execution stages. During the execution stage, once the image-based control model is established, the control task can be performed automatically. The proposed visual servoing control scheme ensures the convergence of the image features to the desired trajectories using the Jacobian matrix, which is proved by Lyapunov stability theory. We also stress the importance of projective-invariant object/gripper alignment. The alignment between two solids in 3-D projective space can be represented in a view-invariant way; more precisely, it can easily be mapped into an image set-point without any knowledge of the camera parameters. The main feature of this method is that the accuracy of the task is not affected by discrepancies between the Euclidean setups at the preparation and execution stages. The set-point is computed from the projective alignment, and the robot gripper moves to the desired position under the image-based control law. In this paper we adopt a constant Jacobian online. The method described herein integrates vision, robotics, and automatic control to achieve its goal; it overcomes the disadvantages of discrepancies between different Euclidean setups and proposes a control law for the binocular stand-alone case. The experimental simulation shows that this image-based approach is effective in performing precise alignment between the robot end-effector and the object.
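The standard image-based visual servoing law drives the feature error to zero with a velocity command of the form v = -λ J⁻¹ e (or J⁺ e with a pseudo-inverse for non-square Jacobians). The sketch below shows the 2-feature / 2-DOF case with a constant Jacobian inverted in closed form; it is the generic IBVS law, not the binocular formulation of this paper.

```python
def ibvs_velocity(feature, target, J, lam=0.5):
    """Velocity command v = -lam * inverse(J) * e for a 2x2 image Jacobian."""
    e = [feature[0] - target[0], feature[1] - target[1]]  # feature error
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    Jinv = [[ J[1][1] / det, -J[0][1] / det],
            [-J[1][0] / det,  J[0][0] / det]]
    return [-lam * (Jinv[0][0] * e[0] + Jinv[0][1] * e[1]),
            -lam * (Jinv[1][0] * e[0] + Jinv[1][1] * e[1])]
```

Under this law the error decays exponentially (e_dot = -λ e), which is what the Lyapunov argument in such schemes establishes; keeping J constant, as the paper does, trades some transient accuracy for robustness to calibration errors.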


Development of the Driving Path Estimation Algorithm for Adaptive Cruise Control System and Advanced Emergency Braking System Using Multi-sensor Fusion

  • 이동우;이경수;이재완
    • Journal of Auto-vehicle Safety Association, Vol. 3, No. 2, pp. 28-33, 2011
  • This paper presents a driving path estimation algorithm for an adaptive cruise control system and an advanced emergency braking system using multi-sensor fusion. Through data collection, the characteristics of road curvature based on filtered yaw rate and of road curvature from the vision sensor are analyzed. The two curvature estimates are fused into one by weighting factors that take the characteristics of each curvature measurement into account. The proposed driving path estimation algorithm has been investigated via simulation performed with the vehicle dynamics package CarSim and Matlab/Simulink. The simulations show that the proposed algorithm improves the primary target detection rate.
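The two curvature sources and their fusion can be sketched in a few lines: yaw-rate-based curvature is yaw rate divided by speed, and the fusion is a convex combination. How the weight is chosen from each sensor's reliability is the paper's contribution; here it is simply a parameter.

```python
def yawrate_curvature(yaw_rate_rps, speed_mps):
    """Road curvature (1/m) implied by the vehicle's yaw rate and speed."""
    return yaw_rate_rps / speed_mps

def fuse_curvature(k_yaw, k_vision, w_vision):
    """Convex combination of yaw-rate and vision curvature estimates.

    w_vision in [0, 1] would be derived from each sensor's reliability,
    e.g. trusting vision more when lane markings are clearly detected.
    """
    return (1.0 - w_vision) * k_yaw + w_vision * k_vision
```

The fused curvature then defines the predicted driving path against which candidate targets are checked, which is where the detection-rate improvement comes from.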

Corridor Navigation of the Mobile Robot Using Image Based Control

  • Han, Kyu-Bum;Kim, Hae-Young;Baek, Yoon-Su
    • Journal of Mechanical Science and Technology
    • /
    • Vol. 15, No. 8, pp. 1097-1107, 2001
  • In this paper, a wall-following navigation algorithm for a mobile robot using a mono vision system is described. The key points of a mobile robot navigation system are effective acquisition of environmental information and fast recognition of the robot's position. From this information, the mobile robot should be appropriately controlled to follow a desired path. For recognition of the relative position and orientation of the robot with respect to the wall, features of the corridor structure are extracted using the mono vision system; the robot's relative pose to the wall, namely the offset distance and steering angle, is then derived for a simple corridor geometry. To alleviate the computational burden of image processing, a Kalman filter is used to reduce the search region in the image for line detection. The robot is then controlled with this information to follow the desired path. The wall-following PD control scheme is composed of two parts, approaching control and orientation control, each performed by the steering and forward-driving motion of the robot. To verify the effectiveness of the proposed algorithm, real-time navigation experiments were performed. The results verify the effectiveness and flexibility of the suggested algorithm in comparison with a purely encoder-guided mobile robot navigation system.
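The two-part control scheme can be sketched as a single steering command combining a PD term on the wall offset (approaching control) with a proportional term on the heading relative to the corridor (orientation control). The gains and sign conventions below are illustrative assumptions, not the paper's values.

```python
def wall_follow_steer(offset_m, offset_ref_m, heading_rad, d_offset_mps,
                      kp=1.2, kd=0.4, k_th=0.8):
    """Steering command from wall offset and heading (positive = toward wall).

    approaching control: PD on the lateral offset from the wall
    orientation control: proportional on heading relative to the corridor
    """
    approach = kp * (offset_ref_m - offset_m) - kd * d_offset_mps
    orient = -k_th * heading_rad
    return approach + orient
```

On the reference offset with zero heading error the command is zero; drifting away from the wall produces a corrective steer back toward it, with the derivative term damping the approach.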
