• Title/Summary/Keyword: camera vision


Lane Detection for Adaptive Control of Autonomous Vehicle (지능형 자동차의 적응형 제어를 위한 차선인식)

  • Kim, Hyeon-Koo;Ju, Yeonghwan;Lee, Jonghun;Park, Yongwan;Jeong, Ho-Yeol
    • IEMEK Journal of Embedded Systems and Applications / v.4 no.4 / pp.180-189 / 2009
  • Currently, most automobile companies are interested in research on intelligent autonomous vehicles, focusing mainly on intelligent driver assistance and driver replacement. Developing an autonomous vehicle requires both lateral and longitudinal control. This paper presents a lateral and longitudinal control system for an autonomous vehicle equipped with only a mono-vision camera. For lane detection, we present a new algorithm based on a clothoid parabolic road model, and compare it with three other methods (the virtual line, gradient, and Hough transform methods) in terms of lane detection ratio. For adaptive control, we apply vanishing-point estimation to fuzzy control. To improve the handling and stability of the vehicle, the modeling errors between the steering angle and the predicted vanishing point are minimized; to this end, we established fuzzy rules with membership functions over the inputs (the vanishing point and its derivative) and the output (the steering angle). For evaluation, we developed a 1/8-scale robot of an actual vehicle, equipped with the mono-vision system, and tested it on a 400-meter athletics track. The tests show that the proposed method achieves a detection rate above 98% under normal conditions, and that it also performs well in clear, foggy, and rainy weather compared with the virtual line, gradient, and Hough transform methods.

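The vanishing-point fuzzy controller described in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's actual rule base: the membership ranges and the steering-angle singletons are assumed values for a Sugeno-style inference over the two inputs (vanishing-point offset and its change).

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Membership sets for the vanishing-point offset e (pixels) and its change de.
SETS = {"neg": (-200, -100, 0), "zero": (-100, 0, 100), "pos": (0, 100, 200)}

# Rule table: (e-label, de-label) -> steering-angle singleton (degrees, assumed).
RULES = {("neg", "neg"): -20, ("neg", "zero"): -12, ("neg", "pos"): -5,
         ("zero", "neg"): -5, ("zero", "zero"): 0,  ("zero", "pos"): 5,
         ("pos", "neg"): 5,   ("pos", "zero"): 12,  ("pos", "pos"): 20}

def fuzzy_steering(e, de):
    """Sugeno-style inference: weighted average of the fired rule outputs."""
    num = den = 0.0
    for (le, lde), out in RULES.items():
        w = min(tri(e, *SETS[le]), tri(de, *SETS[lde]))
        num += w * out
        den += w
    return num / den if den else 0.0
```

With the vanishing point centered and not drifting the inferred steering angle is zero; an offset to one side fires the corresponding rules and steers back toward it.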

3-D vision sensor for arc welding industrial robot system with coordinated motion

  • Shigehiru, Yoshimitsu;Kasagami, Fumio;Ishimatsu, Takakazu
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings / 1992.10b / pp.382-387 / 1992
  • In order to obtain the desired arc-welding performance, we previously developed an arc-welding robot system based on coordinated motion of dual robot arms: one arm holds the welding target as a positioning device, while the other moves the welding torch. In such a dual-arm system, the positioning accuracy of the robots is an important problem, since conventional industrial robots do not have sufficient absolute positional accuracy. To cope with this, our system employs a teaching-playback method in which absolute errors are compensated through the operator's visual feedback; this makes possible an ideal arc weld that accounts for the posture of the welding target and the direction of gravity. One problem remains, however, even with our original teaching method for coordinated dual-arm motion: manual teaching is tedious, since it requires fine movements and intensive attention. We therefore developed a 3-dimensional vision-guided control method for our welding robot system. This paper presents the 3-dimensional vision sensor that guides the system. The compact sensing device is mounted on the tip of the arc-welding robot; it detects the 3-dimensional shape of the groove on the target work to be welded, and the robot is controlled to trace the groove accurately. The 3-dimensional measurement is based on the slit-ray projection method, realized with two laser slit-ray projectors and one CCD TV camera mounted compactly. Careful image processing enables 3-dimensional data processing without suffering from disturbance light. The 3-dimensional information on the target groove is combined with rough teaching data given in advance by the operator, so the teaching task is simplified.

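The slit-ray projection measurement above reduces, for one lit image point, to intersecting the camera ray with the known laser plane. A minimal sketch under an idealized pinhole geometry; the baseline, focal length, and slit angle are assumed calibration values, not the paper's:

```python
import math

def slit_depth(u, f, b, theta):
    """Depth of a point lit by the slit ray, by triangulation.

    u: image x-coordinate (pixels from the principal point)
    f: focal length (pixels), b: camera-projector baseline (mm)
    theta: slit-plane angle from the optical axis (radians)

    The camera ray x = z*u/f meets the laser plane x = b - z*tan(theta),
    giving z = b*f / (u + f*tan(theta)).
    """
    return b * f / (u + f * math.tan(theta))
```

Sweeping this over every row of the detected laser stripe yields the 3-D groove profile that the robot then traces.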

Development of Real-Time Image Processing Algorithm on the Positions of Multi-Object in an Image Plane (한 이미지 평면에서 다물체 위치의 실시간 화상처리 알고리즘 개발)

  • Jang, W.S.;Kim, K.S.;Lee, S.M.
    • Journal of the Korean Society for Nondestructive Testing / v.22 no.5 / pp.523-531 / 2002
  • This study concentrates on the development of a high-speed, real-time image processing algorithm for multiple objects. Recently, the use of vision systems has been increasing rapidly in inspection and robot position control. To apply a vision system, the physical coordinates of an object must be related to the image information acquired by a CCD camera; thus, using a vision system for inspection and robot position control in real time requires knowing the position of each object in the image plane. In particular, when a rigid body carries multiple cues that identify its shape, the position of each cue must be computed in the image plane at the same time. To solve these problems, an image processing algorithm for the positions of multiple cues is developed.
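Locating several cues in one frame is, at its core, connected-component labeling followed by a centroid computation per component. A pure-Python sketch on an already binarized image (thresholding and any camera calibration are assumed to happen upstream):

```python
def cue_centroids(img):
    """Centroids of all bright cues (4-connected blobs) in a binary image.

    img: list of rows of 0/1 values. Returns a list of (row, col) centroids,
    one per blob, in scan order.
    """
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    cents = []
    for r in range(h):
        for c in range(w):
            if img[r][c] and not seen[r][c]:
                # Flood-fill this blob and collect its pixels.
                stack, pix = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pix.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cents.append((sum(p[0] for p in pix) / len(pix),
                              sum(p[1] for p in pix) / len(pix)))
    return cents
```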

Tele-operating System of Field Robot for Cultivation Management - Vision based Tele-operating System of Robotic Smart Farming for Fruit Harvesting and Cultivation Management

  • Ryuh, Youngsun;Noh, Kwang Mo;Park, Joon Gul
    • Journal of Biosystems Engineering / v.39 no.2 / pp.134-141 / 2014
  • Purposes: This study validated the Robotic Smart Work System, which can provide better working conditions and high productivity in unstructured environments such as the bio-industry, based on a tele-operation system for fruit harvesting with a low-cost 3-D positioning system at the laboratory level. Methods: A vision-based tele-operating system and 3-D position information are key elements of the Robotic Smart Work System for fruit harvesting and cultivation management in agriculture. This study proposed Robotic Smart Farming, an agricultural version of the Robotic Smart Work System, and validated a 3-D position information system built from a low-cost omni camera and a laser marker system in the lab environment. Results: Tasks such as harvesting a fixed target and cultivation management were accomplished even with a short time delay (30 ms ~ 100 ms). Although automatic conveyor work requiring accurate timing and positioning yields high productivity, tele-operation guided by the user's intuition is more efficient in unstructured environments that require target selection and judgment. Conclusions: The system increased work efficiency and stability by combining ancillary intelligence with the user's experience and know-how. In addition, senior and female workers can operate the system easily, because it reduces labor and minimizes user fatigue.

Design of the Vision Based Head Tracker Using Area of Artificial Mark (인공표식의 면적을 이용하는 영상 기반 헤드 트랙커 설계)

  • 김종훈;이대우;조겸래
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.34 no.7 / pp.63-70 / 2006
  • This paper describes a vision-based head tracker system that uses the area of artificial marks. The head tracker detects translational and rotational motions with a web camera; the motions are estimated through image processing and a neural network. Because of the characteristics of the cockpit, a specific color on the helmet is tracked for translational motion, while rotational motion is tracked via the neural network, using the ratio of two differently colored areas on the helmet as the network input. The neural network algorithms used are back-propagation, which exploits feedback, and the RBFN (Radial Basis Function Network), which has a statistical character; both perform well in tracking a nonlinear system such as head motion. Finally, this paper analyzes and compares their tracking performance.
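The RBFN mentioned above maps the color-area-ratio features to a head angle through a weighted sum of Gaussian basis functions. A minimal forward-pass sketch; the centers, widths, and weights would come from training, which is omitted here, and the single-output form is an assumption:

```python
import math

def rbfn(x, centers, widths, weights):
    """Forward pass of a radial basis function network.

    x: input feature vector (e.g. [area_ratio, d_area_ratio])
    each hidden unit i has a center c_i, width s_i, and output weight w_i;
    output = sum_i w_i * exp(-||x - c_i||^2 / (2*s_i^2)).
    """
    out = 0.0
    for c, s, w in zip(centers, widths, weights):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        out += w * math.exp(-d2 / (2 * s * s))
    return out
```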

Machine Vision Applications in Automated Scrap-separating Research (머신비젼 시스템을 이용(利用)한 스크랩 자동선별(自動選別) 연구(硏究))

  • Kim, Chan-Wook;Kim, Hang-Goo
    • Resources Recycling / v.15 no.6 s.74 / pp.3-9 / 2006
  • In this study, a machine vision system using a color recognition method has been designed and developed to automatically sort specified materials out of a mixture, especially Cu and other non-ferrous metal scraps from a mixture with iron scraps. The system consists of a CCD camera, light sources, a frame grabber, conveying devices, and an air-nozzle ejector, and is controlled by an image processing algorithm; the ejector is operated through an I/O interface communicating with a hardware controller. In functional tests, the efficiency of separating Cu scraps from a mixture with Fe scraps reached 90% or more at a conveying speed of 15 m/min, so the system proved excellent in terms of separating efficiency. It is therefore expected that the system can be commercialized in the shredder industry once a high-speed automated sorting system is realized.
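The Cu-vs-Fe color decision above can be approximated by a dominant-red rule per pixel plus a fraction threshold per blob before firing the ejector. This is an illustrative stand-in, not the paper's algorithm; `red_margin` and `frac` are hypothetical tuning constants:

```python
def is_copper(r, g, b, red_margin=30):
    """A pixel is 'copper-like' if its red channel clearly dominates
    both green and blue (gray Fe scrap has roughly equal channels)."""
    return r - max(g, b) > red_margin

def should_eject(pixels, frac=0.2):
    """Fire the air-nozzle ejector when enough of the blob looks like Cu.

    pixels: list of (r, g, b) tuples belonging to one scrap blob.
    """
    cu = sum(1 for p in pixels if is_copper(*p))
    return cu / len(pixels) >= frac
```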

Image Recognition Using Colored-hear Transformation Based On Human Synesthesia (인간의 공감각에 기반을 둔 색청변환을 이용한 영상 인식)

  • Shin, Seong-Yoon;Moon, Hyung-Yoon;Pyo, Seong-Bae
    • Journal of the Korea Society of Computer and Information / v.13 no.2 / pp.135-141 / 2008
  • In this paper, we propose colored-hearing recognition based on synesthesia, the human trait in which vision and hearing are shared senses. Considering how humans recognize structured objects through visual analysis, we studied how to let blind persons perceive something close to the real scene captured by a camera. First, object boundaries are detected in the image data representing a specific scene. Then four features are extracted from the picture: the location of each object in the image, the feeling of its average color, its distance, and its area. Finally, these features are mapped to audition factors, which are used to convey the scene to blind persons. The proposed colored-hearing transformation provides fast and detailed perception and can transmit the information for each sense at the same time. We obtained good results when applying this concept to image recognition for blind persons.

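The feature-to-audition mapping described above (object location, average color, distance, and area onto hearing factors) can be sketched as below. The concrete scalings — pan for horizontal position, volume for distance, pitch for hue, duration for area — are assumptions for illustration, not the paper's mapping:

```python
def to_audio(obj, img_w=640, max_dist=10.0):
    """Map the visual features of one object to audition factors.

    obj: dict with 'x' (pixel column), 'dist' (meters),
    'hue' (average color hue, degrees), 'area' (pixels).
    """
    pan = 2.0 * obj["x"] / img_w - 1.0              # -1 (left) .. +1 (right)
    volume = max(0.0, 1.0 - obj["dist"] / max_dist)  # nearer objects are louder
    pitch = 220.0 * 2 ** (obj["hue"] / 360.0)        # one octave over the hue circle
    duration = min(2.0, obj["area"] / 10000.0)       # larger objects sound longer
    return {"pan": pan, "volume": volume, "pitch": pitch, "duration": duration}
```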

A vision-based system for dynamic displacement measurement of long-span bridges: algorithm and verification

  • Ye, X.W.;Ni, Y.Q.;Wai, T.T.;Wong, K.Y.;Zhang, X.M.;Xu, F.
    • Smart Structures and Systems / v.12 no.3_4 / pp.363-379 / 2013
  • Dynamic displacement of structures is an important index for in-service structural condition and behavior assessment, but accurate measurement of structural displacement for large-scale civil structures such as long-span bridges remains a challenging task. In this paper, a vision-based dynamic displacement measurement system using digital image processing technology is developed, featuring non-contact, long-distance, and high-precision structural displacement measurement. The hardware of this system is mainly composed of a high-resolution industrial CCD (charge-coupled device) digital camera and an extended-range zoom lens. By continuously tracing and identifying a target on the structure, the structural displacement is derived through cross-correlation analysis between the predefined pattern and the captured digital images with the aid of a pattern-matching algorithm. To validate the developed system, MTS tests of sinusoidal motions under different vibration frequencies and amplitudes and shaking-table tests with different excitations (the El Centro earthquake wave and a sinusoidal motion) were carried out. Additionally, in-situ verification experiments were performed to measure the mid-span vertical displacement of the suspension Tsing Ma Bridge under operational conditions and of the cable-stayed Stonecutters Bridge during loading tests. The results show that the developed system exhibits an excellent capability for real-time measurement of structural displacement and can serve as a good complement to traditional sensors.
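The core of such a system is the pattern-matching step: find the shift of the predefined template that maximizes the zero-mean normalized cross-correlation with the live image. A 1-D sketch of that search (a real implementation scans 2-D windows and converts pixels to millimeters via a calibrated scale factor, details not given here):

```python
def ncc(patch, templ):
    """Zero-mean normalized cross-correlation of two equal-length gray profiles."""
    n = len(patch)
    mp, mt = sum(patch) / n, sum(templ) / n
    num = sum((p - mp) * (t - mt) for p, t in zip(patch, templ))
    dp = sum((p - mp) ** 2 for p in patch) ** 0.5
    dt = sum((t - mt) ** 2 for t in templ) ** 0.5
    return num / (dp * dt) if dp and dt else 0.0

def best_shift(signal, templ):
    """Shift of templ along signal that maximizes NCC; the displacement
    of the tracked target is the change of this shift between frames."""
    k = len(templ)
    scores = [ncc(signal[i:i + k], templ) for i in range(len(signal) - k + 1)]
    return max(range(len(scores)), key=scores.__getitem__)
```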

A Vision System for Traffic Sign Recognition (교통표지판 인식을 위한 비젼시스템)

  • Kim, Tae-Woo;Kang, Yong-Seok;Cha, Sam;Bae, Cheol-Soo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.2 no.2 / pp.45-50 / 2009
  • This paper presents an active vision system for on-line traffic sign recognition. The system is composed of two cameras, one equipped with a wide-angle lens and the other with a telephoto lens, and a PC with an image processing board. The system first detects candidate traffic signs in the wide-angle image using color, intensity, and shape information. For each candidate, the telephoto camera is directed to the candidate's predicted position to capture it at a large size in the image. The recognition algorithm makes intensive use of the built-in functions of an off-the-shelf image processing board to achieve both easy implementation and fast recognition. The results of on-road experiments show the feasibility of the system.


Computer Vision-based Method of Detecting an Approaching Vehicle for the Safety of Bus Passengers Getting Off (버스 승객의 안전한 하차를 위한 컴퓨터비전 기반의 차량 탐지 시스템 개발)

  • Lee Kwang-Soon;Lee Kyung-Bok;Rho Kwang-Hyun;Han Min-Hong
    • Journal of the Institute of Convergence Signal Processing / v.6 no.1 / pp.1-7 / 2005
  • This paper describes a computer vision-based system for detecting, in daytime, vehicles approaching in the rear and rear-side region between the sidewalk and a bus stopped on a city road. The system informs the bus driver and passengers of the appearance of such vehicles for the safety of passengers getting off. A camera mounted above the bus exit door captures the rear and rear-side view of the bus whenever the bus stops. The system sets a search area between the bus and the sidewalk in this image and detects vehicles using image differencing and Sobel filtering within that area. From the central point of a detected vehicle, its distance, speed, and direction are estimated from its location, width, and length. The system alerts the driver and passengers when it judges that a situation dangerous to alighting passengers has occurred. Experiments on a bus driving on the road yielded a detection rate of more than 87%.

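The detection step above, image differencing combined with Sobel filtering inside the search area, can be sketched as follows. The thresholds are assumed values, and grouping the surviving points into a vehicle hypothesis (location, width, length) is omitted:

```python
def sobel_mag(img, r, c):
    """Sobel gradient magnitude at (r, c) of a grayscale image (list of rows)."""
    gx = (img[r-1][c+1] + 2*img[r][c+1] + img[r+1][c+1]
          - img[r-1][c-1] - 2*img[r][c-1] - img[r+1][c-1])
    gy = (img[r+1][c-1] + 2*img[r+1][c] + img[r+1][c+1]
          - img[r-1][c-1] - 2*img[r-1][c] - img[r-1][c+1])
    return (gx * gx + gy * gy) ** 0.5

def moving_edges(prev, curr, diff_thr=20, edge_thr=100):
    """Interior pixels that both changed since the last frame (differencing)
    and lie on a strong edge (Sobel) -- candidate vehicle points."""
    pts = []
    for r in range(1, len(curr) - 1):
        for c in range(1, len(curr[0]) - 1):
            if (abs(curr[r][c] - prev[r][c]) > diff_thr
                    and sobel_mag(curr, r, c) > edge_thr):
                pts.append((r, c))
    return pts
```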