• Title/Abstract/Keyword: vision camera

Search results: 1,376 (processing time: 0.026 seconds)

Attitude and Position Estimation of a Helmet Using Stereo Vision (스테레오 영상을 이용한 헬멧의 자세 및 위치 추정)

  • Shin, Ok-Shik;Heo, Se-Jong;Park, Chan-Gook
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.38 no.7
    • /
    • pp.693-701
    • /
    • 2010
  • In this paper, an attitude and position estimation algorithm based on a stereo camera system is proposed for a helmet tracker. The stereo camera system consists of two CCD cameras, a helmet, infrared LEDs, and a frame grabber. Fifteen infrared LEDs serve as the feature points used to determine the attitude and position of the helmet. These features are arranged on the helmet in triangular patterns with differing spacings. The vision-based attitude and position algorithm consists of feature segmentation, projective reconstruction, model indexing, and attitude estimation. In this paper, an attitude estimation algorithm using the UQ (unit quaternion) is proposed; the UQ guarantees that the resulting rotation matrix is orthonormal. The performance of the presented algorithm is verified by simulation and experiment.
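
The unit-quaternion property mentioned in the abstract can be illustrated with a short sketch (not the paper's code; the `[w, x, y, z]` convention is an assumption): normalizing the quaternion before conversion is what guarantees an orthonormal rotation matrix.

```python
import numpy as np

def quat_to_rot(q):
    """Convert a unit quaternion [w, x, y, z] to a 3x3 rotation matrix.

    Normalizing first is what guarantees the result is orthonormal
    (a proper rotation), the property the UQ parameterization provides.
    """
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
```

For example, the quaternion for a 90° rotation about z maps the x-axis onto the y-axis, and the resulting matrix satisfies R·Rᵀ = I regardless of input scaling.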

Lateral Control of Vision-Based Autonomous Vehicle using Neural Network (신경회로망을 이용한 비젼기반 자율주행차량의 횡방향제어)

  • 김영주;이경백;김영배
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2000.11a
    • /
    • pp.687-690
    • /
    • 2000
  • Lately, many studies have been conducted to protect human lives and property by curbing accidents caused by human carelessness or mistakes. One of these efforts is the development of the autonomous vehicle. The general control method for a vision-based autonomous vehicle system is to determine the navigation direction by analyzing lane images from a camera and to navigate using a proper control algorithm. In this paper, characteristic points are extracted from lane images using a lane recognition algorithm based on the Sobel operator, and the vehicle is then controlled using two proposed auto-steering algorithms. The first method uses the geometric relation of the camera: after transforming from image coordinates to vehicle coordinates, the steering angle is calculated as the Ackermann angle. The second uses a neural network; it does not require the camera's geometric relation, is easy to apply as a steering algorithm, and most closely resembles the driving style of a human driver. The proposed controller is a multilayer neural network trained with the Levenberg-Marquardt backpropagation algorithm, which performed much better than other methods such as conjugate gradient or gradient descent.
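
The first steering method can be sketched as follows. This is a minimal geometric illustration, not the paper's exact formulation: the lookahead-point construction and the wheelbase value are assumptions, but the final step is the standard Ackermann relation δ = atan(wheelbase / R).

```python
import math

def ackermann_steer(lateral_offset, lookahead, wheelbase):
    """Steering angle (rad) toward a lane point seen `lookahead` meters
    ahead with `lateral_offset` meters of lateral error (vehicle frame).

    R is the radius of the circular arc through the lookahead point;
    the Ackermann angle is then atan(wheelbase / R).
    """
    if abs(lateral_offset) < 1e-9:
        return 0.0  # already heading straight at the point
    radius = (lookahead**2 + lateral_offset**2) / (2.0 * lateral_offset)
    return math.atan(wheelbase / radius)
```

The sign of the lateral offset carries through to the sign of the steering command, so left and right errors produce symmetric angles.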


Vision-Based Indoor Object Tracking Using Mean-Shift Algorithm (평균 이동 알고리즘을 이용한 영상기반 실내 물체 추적)

  • Kim Jong-Hun;Cho Kyeum-Rae;Lee Dae-Woo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.12 no.8
    • /
    • pp.746-751
    • /
    • 2006
  • In this paper, we present a tracking algorithm for indoor moving objects, a passive method using a camera and image processing. Dynamics-based estimators such as the Kalman filter, extended Kalman filter, and particle filter have been studied for tracking moving objects. These algorithms perform well in real-time tracking, but they have a limit: if the shape of the object changes or the object lies on a complex background, they fail to track it. Overcoming this requires complicated image processing, so integrating a dynamics-based estimator with such image processing yields a large, inefficient algorithm. To eliminate this inefficiency, an image-based estimator, the mean-shift algorithm, is suggested. The algorithm is implemented with a color histogram; that is, it decides the coordinates of the object's center from the probability density the histogram assigns to the image. Even when the object's shape changes, it is not disturbed by a complex background and can keep tracking the object. This paper shows results with a real camera system, and determines 3D coordinates using the mean-shift output and the relationship between the world frame and the camera frame.
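
The core mean-shift iteration described above can be sketched in a few lines. This is a generic illustration, not the paper's implementation: it assumes the color-histogram back-projection has already produced a per-pixel likelihood map, and simply moves a window to the weighted centroid of the pixels it covers until it stops moving.

```python
import numpy as np

def mean_shift(weights, window, center, iters=20):
    """Track the mode of a likelihood map (e.g. a color-histogram
    back-projection) by repeatedly moving a window to the weighted
    centroid of the pixels under it.

    weights : 2-D array of per-pixel likelihoods
    window  : (half_height, half_width) of the search window
    center  : (row, col) starting position
    """
    h, w = weights.shape
    hh, hw = window
    r, c = center
    for _ in range(iters):
        r0, r1 = max(0, r - hh), min(h, r + hh + 1)
        c0, c1 = max(0, c - hw), min(w, c + hw + 1)
        patch = weights[r0:r1, c0:c1]
        total = patch.sum()
        if total == 0:
            break  # no evidence under the window
        rows, cols = np.mgrid[r0:r1, c0:c1]
        nr = int(round((rows * patch).sum() / total))
        nc = int(round((cols * patch).sum() / total))
        if (nr, nc) == (r, c):
            break  # converged on the mode
        r, c = nr, nc
    return r, c
```

Because only the likelihood map matters, the window climbs to the density peak even when the object's outline deforms, which is the robustness property the abstract highlights.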

On low cost model-based monitoring of industrial robotic arms using standard machine vision

  • Karagiannidis, Aris;Vosniakos, George C.
    • Advances in robotics research
    • /
    • v.1 no.1
    • /
    • pp.81-99
    • /
    • 2014
  • This paper contributes towards the development of a computer vision system for telemonitoring of industrial articulated robotic arms. The system aims to provide precise real-time measurements of the joint angles by employing low-cost cameras and visual markers on the body of the robot. To achieve this, a mathematical model connecting image features to joint angles was developed, covering rotation of a single joint whose axis is parallel to the visual projection plane. The feature examined during image processing is the varying area of a circular target placed on the body of the robot, as registered by the camera during rotation of the arm. In order to distinguish between rotation directions, four targets were used, placed every 90° and observed by two cameras at suitable angular distances. The results were deemed acceptable considering camera cost and the lighting conditions of the workspace. A computational error analysis explored how deviations from the ideal camera positions affect the measurements and led to appropriate corrections. The method is deemed extensible to multiple-joint motion of a known kinematic chain.
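
The area-to-angle relation underlying the method can be sketched as follows. This is a simplified model, not the paper's calibrated one: it assumes an ideal orthographic view, under which a flat circular target tilted by θ projects to an ellipse of area A = A₀·cos θ.

```python
import math

def joint_angle_from_area(observed_area, frontal_area):
    """Estimate a joint rotation angle from the foreshortened area of a
    circular target on the arm.

    Under orthographic projection A = A0 * cos(theta), so
    theta = acos(A / A0). The direction ambiguity (which the paper
    resolves with four targets and two cameras) is left to the caller.
    """
    ratio = min(1.0, max(0.0, observed_area / frontal_area))
    return math.acos(ratio)
```

A target seen at half its frontal area, for instance, corresponds to a 60° tilt; the clamping guards against measurement noise pushing the ratio outside [0, 1].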

Vision-based support in the characterization of superelastic U-shaped SMA elements

  • Casciati, F.;Casciati, S.;Colnaghi, A.;Faravelli, L.;Rosadini, L.;Zhu, S.
    • Smart Structures and Systems
    • /
    • v.24 no.5
    • /
    • pp.641-648
    • /
    • 2019
  • The authors investigate the feasibility of applying a vision-based displacement-measurement technique to the characterization of an SMA damper recently introduced in the literature. The experimental campaign tests a steel frame on a uni-axial shaking table driven by sinusoidal signals in the frequency range from 1 Hz to 5 Hz. Three different cameras are used to collect the images, namely an industrial camera and two commercial smartphones, and the achieved results are compared. The camera showing the better performance is then used to test the same frame after its base isolation. U-shaped shape-memory-alloy (SMA) elements are installed as dampers at the isolation level. The accelerations of the shaking table and of the frame basement are measured by accelerometers. A system of markers is glued on these system components, as well as along the U-shaped elements serving as dampers. The different phases of the test are discussed in the attempt to obtain as much information as possible on the behavior of the SMA elements. Several tests were carried out until the thinner U-shaped element failed.

A Study on Detection of Lane and Situation of Obstacle for AGV using Vision System (비전 시스템을 이용한 AGV의 차선인식 및 장애물 위치 검출에 관한 연구)

  • 이진우;이영진;이권순
    • Journal of Korean Port Research
    • /
    • v.14 no.3
    • /
    • pp.303-312
    • /
    • 2000
  • In this paper, we describe an image processing algorithm able to recognize the road lane, and thereby the interrelation between the AGV and other vehicles. We performed AGV driving tests with a color CCD camera set up on top of the vehicle to acquire the digital signal. This paper is composed of two parts. One is an image preprocessing part that measures the condition of the lane and the vehicle; it finds line information using an RGB-ratio cutting algorithm, edge detection, and the Hough transform. The other obtains the situation of other vehicles using image processing and a viewport. First, the 2-dimensional image information from the vision sensor is interpreted into 3-dimensional information using the angle and position of the CCD camera. Through these processes, once the vehicle knows the driving conditions, namely the lane angle, the distance error, and the real positions of other vehicles, the reference steering angle can be calculated.
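
The Hough transform step in the preprocessing chain can be sketched as follows. This is a textbook voting scheme, not the paper's implementation: each edge point votes for every line ρ = x·cos θ + y·sin θ it could lie on, and peaks in the (ρ, θ) accumulator are reported as lanes.

```python
import numpy as np

def hough_lines(edge_points, shape, n_theta=180, top_k=1):
    """Minimal Hough transform for straight lines.

    edge_points : iterable of (x, y) edge pixels
    shape       : (height, width) of the image, used to bound rho
    Returns the top_k (rho, theta) pairs by vote count.
    """
    h, w = shape
    max_rho = int(np.hypot(h, w))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * max_rho, n_theta), dtype=np.int32)
    for x, y in edge_points:
        # one vote per theta bin for the line through (x, y)
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + max_rho, np.arange(n_theta)] += 1
    best = np.argsort(acc, axis=None)[::-1][:top_k]
    out = []
    for idx in best:
        r_idx, t_idx = np.unravel_index(idx, acc.shape)
        out.append((r_idx - max_rho, thetas[t_idx]))
    return out
```

A vertical lane edge at x = 20, for example, collects all of its votes in the ρ = 20, θ = 0 bin.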


A Machine Vision Algorithm for Measuring the Diameter of Eggcrate Grid (에그크레이트(Eggcrate) 격자(Grid)의 내접원 직경 측정을 위한 머신비전 알고리즘)

  • Kim, Chae-Soo;Park, Kwang-Soo;Kim, Woo-Sung;Hwang, Hark;Lee, Moon-Kyu
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.17 no.4
    • /
    • pp.85-96
    • /
    • 2000
  • An eggcrate assembly is an important part holding and supporting the 16,000 tubes that contain hot, contaminated water in the steam generator of a nuclear power plant. As a great number of tubes must be inserted into the eggcrate assembly, the dimensions of each eggcrate grid are one of the critical factors determining whether tube insertion is possible. In this paper, we propose a machine vision algorithm for measuring the inner-circle diameter of each eggcrate grid, whose shape is not exactly quadrangular. The overall procedure of the algorithm is composed of camera calibration, eggcrate image preprocessing, grid height adjustment, and inner-circle diameter estimation. The algorithm was tested on real specimens, and the results show that it works fairly well.
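
The inner-circle estimation stage can be illustrated with a crude geometric sketch. This is an assumption-laden simplification, not the paper's estimator: it approximates the inscribed-circle diameter of a near-quadrangular cell as twice the minimum distance from the cell's centroid to its four edges.

```python
import numpy as np

def inner_circle_diameter(corners):
    """Approximate the inscribed-circle diameter of a (near-)quadrangular
    grid cell.

    corners : four (x, y) vertices in consecutive order.
    Returns twice the minimum centroid-to-edge distance.
    """
    pts = np.asarray(corners, dtype=float)
    centroid = pts.mean(axis=0)
    dists = []
    for i in range(4):
        a, b = pts[i], pts[(i + 1) % 4]
        ab = b - a
        # project the centroid onto segment ab, clamped to its endpoints
        t = np.clip(np.dot(centroid - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        dists.append(np.linalg.norm(centroid - (a + t * ab)))
    return 2.0 * min(dists)
```

For a unit square the estimate recovers the exact inscribed diameter of 1; for skewed cells it is only a lower-bound-style approximation, which is why a dedicated estimation stage is needed in practice.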


A Study on Visual Servoing Image Information for Stabilization of Line-of-Sight of Unmanned Helicopter (무인헬기의 시선안정화를 위한 시각제어용 영상정보에 관한 연구)

  • 신준영;이현정;이민철
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2004.10a
    • /
    • pp.600-603
    • /
    • 2004
  • A UAV (unmanned aerial vehicle) is an aerial vehicle that can accomplish a mission without a pilot. UAVs were initially developed for military purposes such as reconnaissance. Nowadays their usage is expanding into various fields of civil industry, such as map making, broadcasting, and environmental observation. These UAVs need a vision system to offer accurate information to the operator on the ground and to control the UAV itself. In particular, an LOS (line-of-sight) system must precisely control the pointing direction of a system that tracks an object using a vision sensor such as a CCD camera, so it is very important to the vision system. In this paper, we propose a method to recognize an object in the image acquired from a camera mounted on gimbals and to provide the displacement between the center of the monitor and the center of the object.
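
The displacement signal the abstract describes can be sketched in a few lines. This is a generic illustration (the segmentation into a binary object mask is assumed to have happened upstream): the pixel offset between the object centroid and the image center is exactly the error a gimbal LOS controller would drive toward zero.

```python
import numpy as np

def los_error(mask, image_shape):
    """Pixel displacement (dx, dy) from the image center to the centroid
    of a binary object mask.

    mask        : 2-D boolean array, True on object pixels
    image_shape : (height, width) of the image
    """
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    h, w = image_shape
    return cx - (w - 1) / 2.0, cy - (h - 1) / 2.0
```

A zero return on both axes means the tracked object sits exactly on the line of sight.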


Design of a Color Machine Vision System for the Automatic Sorting of Soybeans (대두의 자동 선별을 위한 컬러 기계시각장치의 설계)

  • Kim, Tae-Ho;Mun, Chang-Su;Park, Su-U;Jeong, Won-Gyo;Do, Yong-Tae
    • Proceedings of the KIEE Conference
    • /
    • 2003.11b
    • /
    • pp.231-234
    • /
    • 2003
  • This paper describes the structure, operation, image processing, and decision-making techniques of a color machine vision system designed for the automatic sorting of soybeans. The system consists of a feeder, a conveyor belt, a line-scan camera, lights, an ejector, and a PC. Unlike manufactured goods, agricultural products including soybeans have quite uneven features, and the criteria for sorting good and bad beans vary depending on the inspector. We tackle these problems by letting the system learn the inspection parameters from good samples selected manually by a machine user before running the system for sorting. Real-time processing is another important design consideration, and four parallel DSPs are employed to increase the processing speed. The designed system was tested with real soybeans, and the result was successful.
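
The learn-from-good-samples step can be sketched as a simple statistical rule. This is an illustrative stand-in, not the paper's DSP implementation: per-channel acceptance limits are taken as mean ± k standard deviations of the manually selected good samples, and any pixel outside those limits is flagged.

```python
import numpy as np

def learn_limits(good_samples, k=3.0):
    """Learn per-channel acceptance limits from good samples.

    good_samples : N x 3 array of RGB values from user-selected good beans
    Returns (lower, upper) limit vectors at mean +/- k standard deviations.
    """
    s = np.asarray(good_samples, dtype=float)
    mu, sigma = s.mean(axis=0), s.std(axis=0)
    return mu - k * sigma, mu + k * sigma

def is_good(pixel, limits):
    """True when every channel of `pixel` lies inside the learned limits."""
    lo, hi = limits
    return bool(np.all(pixel >= lo) and np.all(pixel <= hi))
```

Widening k trades fewer false rejects of odd-looking good beans against letting more defects through, mirroring how inspector criteria vary.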


Three Dimensional Geometric Feature Detection Using Computer Vision System and Laser Structured Light (컴퓨터 시각과 레이저 구조광을 이용한 물체의 3차원 정보 추출)

  • Hwang, H.;Chang, Y.C.;Im, D.H.
    • Journal of Biosystems Engineering
    • /
    • v.23 no.4
    • /
    • pp.381-390
    • /
    • 1998
  • An algorithm to extract 3-D geometric information about a static object was developed using a 2-D computer vision system and a laser structured-lighting device. Multiple parallel lines were used as the structured light pattern. The proposed algorithm is composed of three stages. In the first stage, camera calibration, which determines the coordinate transformation between the image plane and the real 3-D world, is performed using 6 known pairs of points. In the second stage, the height of the object is computed from the shift of the projected laser lines on the object. Finally, using the height information at each 2-D image point, the corresponding 3-D information is computed from the camera calibration results. For arbitrary geometric objects, the maximum error of the 3-D features extracted with the proposed algorithm was less than 1~2 mm. The results showed that the proposed algorithm is accurate for detecting the 3-D geometric features of an object.
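
The second stage, height from laser-line shift, can be sketched with the standard triangulation relation. This is a simplified geometry, not the paper's exact setup: it assumes the camera looks straight down and the laser plane is inclined by a known angle from vertical, so a stripe landing on a surface of height h is displaced sideways by d = h·tan(angle).

```python
import math

def height_from_shift(pixel_shift, mm_per_pixel, laser_angle_deg):
    """Triangulate object height from the lateral shift of a projected
    laser line.

    pixel_shift     : stripe displacement in pixels
    mm_per_pixel    : image scale from the camera calibration stage
    laser_angle_deg : laser inclination from vertical
    Inverts d = h * tan(angle) to give h = d / tan(angle).
    """
    d_mm = pixel_shift * mm_per_pixel
    return d_mm / math.tan(math.radians(laser_angle_deg))
```

With a 45° laser the shift in millimetres equals the height directly, which makes that inclination a common choice in bench setups.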
