• Title/Summary/Keyword: camera vision

Search Results: 1,386

Position Control of an Object Using Vision Sensor (비전 센서를 이용한 물체의 위치 제어)

  • Ha, Eun-Hyeon;Choi, Goon-Ho
    • Journal of the Semiconductor & Display Technology / v.10 no.2 / pp.49-56 / 2011
  • In recent years, owing to the development of image processing technology, research on building control systems around vision sensors has been stimulated. However, the time delay must be considered, because obtaining the result of image processing takes time and can become an obstacle to real-time control. In this paper, the locations of two objects are recognized from a single camera image using a pattern matching technique and fed back to a position control system. We also show that the time-delay problem can be overcome using a PID controller. A number of experiments were carried out to demonstrate the validity of this study.
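
The abstract does not give controller details, so the following is only a minimal sketch of the idea: a discrete PID loop whose position feedback arrives several samples late, emulating the image-processing delay of a vision sensor. The plant model, delay length, and gains are illustrative assumptions, not values from the paper.

```python
from collections import deque

dt = 0.01          # control period [s]
delay_steps = 5    # assumed vision-processing delay: 5 samples = 50 ms
Kp, Ki, Kd = 2.0, 0.5, 0.1   # assumed PID gains

target = 1.0       # desired position
x, v = 0.0, 0.0    # plant state: position, velocity (assumed dynamics below)
integral, prev_err = 0.0, 0.0
pipeline = deque([0.0] * delay_steps, maxlen=delay_steps)

for k in range(1000):
    y_delayed = pipeline[0]   # the controller only sees an old sample
    pipeline.append(x)        # the camera's current measurement enters the pipeline

    err = target - y_delayed
    integral += err * dt
    derivative = (err - prev_err) / dt
    u = Kp * err + Ki * integral + Kd * derivative   # PID control law
    prev_err = err

    # simple damped double-integrator plant standing in for the real object
    a = u - 0.5 * v
    v += a * dt
    x += v * dt

print(f"position after {1000 * dt:.1f} s: {x:.3f} (target {target})")
```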

Selection and Allocation of Point Data with Wavelet Transform in Reverse Engineering (역공학에서 웨이브렛 변황을 이용한 점 데이터의 선택과 할당)

  • Ko, Tae-Jo;Kim, Hee-Sool
    • Journal of the Korean Society for Precision Engineering / v.17 no.9 / pp.158-165 / 2000
  • Reverse engineering reproduces products by directly extracting geometric information from physical objects such as clay models and wooden mock-ups. The fundamental task in reverse engineering is to acquire the geometric data needed to model the objects. This research proposes a novel data-acquisition method aimed at unmanned, fast, and precise measurement, realized by fusing a CCD camera with a structured light beam and a touch trigger sensor. The vision system provides global information about the object, but because the amount of vision data is very large, the number of points passed to the touch sensor and their allocation are critical for productivity. We therefore apply the wavelet transform to reduce the number of data points and to allocate the touch-probe positions. Simulated and experimental results show that the method performs well for data reduction.
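
As a rough illustration of the data-reduction idea (not the authors' code), the sketch below applies a one-level Haar wavelet transform to a 1-D measured profile and allocates touch-probe points densely only where the detail coefficients indicate local geometric variation. The profile, thresholds, and sampling ratios are assumptions.

```python
import numpy as np

# a measured 1-D profile standing in for one scan line of the vision data
profile = np.sin(np.linspace(0, 3 * np.pi, 256)) + 0.02 * np.random.randn(256)

# one-level Haar wavelet decomposition
even, odd = profile[0::2], profile[1::2]
approx = (even + odd) / np.sqrt(2)      # coarse shape
detail = (even - odd) / np.sqrt(2)      # local variation

# allocate probe points: dense where the detail energy is high, sparse elsewhere
threshold = 3 * np.median(np.abs(detail))   # assumed threshold
busy = np.abs(detail) > threshold
keep = np.zeros(profile.size, dtype=bool)
keep[::16] = True                        # sparse baseline coverage (assumed ratio)
keep[np.repeat(busy, 2)] = True          # dense coverage in detailed regions

probe_points = np.flatnonzero(keep)
print(f"{profile.size} vision points reduced to {probe_points.size} probe points")
```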


MPC-based Active Steering Control using Multi-rate Kalman Filter for Autonomous Vehicle Systems with Vision (비젼 기반 자율주행을 위한 다중비율 예측기 설계와 모델예측 기반 능동조향 제어)

  • Kim, Bo-Ah;Lee, Young-Ok;Lee, Seung-Hi;Chung, Chung-Choo
    • The Transactions of The Korean Institute of Electrical Engineers / v.61 no.5 / pp.735-743 / 2012
  • In this paper, we present model predictive control (MPC) applied to a lane keeping system (LKS) based on a vision module. Because of the slow sampling rate of the vision system, a conventional LKS using single-rate control may produce an uncomfortable steering control rate at high vehicle speeds. By applying MPC with a multi-rate Kalman filter to active steering control, the proposed MPC-based active steering control system prevents undesirable saturation of the steering command. The effectiveness of the MPC is validated by simulations of an LKS equipped with a camera module with a slow sampling rate, on a curved lane with a minimum radius of 250 m at a vehicle speed of 30 m/s.
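
A minimal sketch of the multi-rate estimation idea, with assumed dynamics and noise levels: the Kalman filter predicts the lateral state at the fast control rate and applies the measurement update only when the slow vision module delivers a new lane-offset sample.

```python
import numpy as np

dt = 0.01                    # control/steering rate: 100 Hz (assumed)
vision_every = 10            # vision rate: 10 Hz (assumed)

F = np.array([[1.0, dt], [0.0, 1.0]])   # lateral offset / offset-rate model
H = np.array([[1.0, 0.0]])              # vision measures the offset only
Q = np.diag([1e-5, 1e-3])               # process noise (assumed)
R = np.array([[0.05**2]])               # vision measurement noise (assumed)

x = np.zeros((2, 1))                     # state estimate
P = np.eye(2)

rng = np.random.default_rng(0)
true_offset = 0.5
for k in range(500):
    # predict at every control sample
    x = F @ x
    P = F @ P @ F.T + Q

    # correct only when a new camera frame is available
    if k % vision_every == 0:
        z = np.array([[true_offset + rng.normal(0, 0.05)]])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P

print("estimated lateral offset:", float(x[0, 0]))
```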

Development of a Tank Crew Protection System Using Moving Object Area Detection from Vision based (비전 기반 움직임 영역 탐지를 이용한 전차 승무원 보호 시스템 개발)

  • Choi, Kwang-Mo;Jang, Dong-Sik
    • Journal of the Korea Institute of Military Science and Technology / v.8 no.2 s.21 / pp.14-21 / 2005
  • This paper describes a computer-vision-based system for detecting the tank crew's (loader's) hand, arm, head, and upper body in the danger area between the turret ceiling and the upper breech mechanism. The system warns the gunner and commander of the risk of crushing, for the safety of the operating mission. The camera is mounted on the top portion of the turret ceiling. From its images, the system searches for moving objects and detects them using image differencing, the Laplacian operator, and a clustering algorithm within the danger area. It alarms the tank crew when the situation is judged dangerous for the operating mission. The experimental results show that the detection rate remains between 81 and 98 percent.
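
The following is an illustrative sketch, not the fielded system: motion is detected by frame differencing inside a predefined danger area, and an alarm is raised when enough pixels change. The synthetic frames, danger-area coordinates, and thresholds are assumptions.

```python
import numpy as np

H, W = 240, 320
danger = (slice(40, 120), slice(100, 220))   # assumed danger area (rows, cols)

prev = np.zeros((H, W), dtype=np.uint8)
curr = prev.copy()
curr[60:100, 140:180] = 200                  # simulated crew member's arm entering

diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16)).astype(np.uint8)
motion = diff > 30                           # change-detection threshold (assumed)

changed_in_danger = motion[danger].sum()
if changed_in_danger > 200:                  # minimum changed area before alarming (assumed)
    ys, xs = np.nonzero(motion)
    print(f"ALARM: motion at rows {ys.min()}-{ys.max()}, cols {xs.min()}-{xs.max()}")
else:
    print("no motion in danger area")
```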

A Study on Efficient Image Processing and CAD-Vision System Interface (효율적인 화상자료 처리와 시각 시스템과 CAD시스템의 인터페이스에 관한 연구)

  • Park, Jin-Woo;Kim, Ki-Dong
    • Journal of Korean Institute of Industrial Engineers / v.18 no.2 / pp.11-22 / 1992
  • Up to now, most research on production automation has concentrated on local automation, e.g., CAD, CAM, and robotics. To achieve total automation, however, these local modules must be linked into a unified, integrated system. One such missing link is between CAD and the computer vision system, and this paper is an attempt to bridge that gap. We propose algorithms that carry out edge detection, thinning, and pruning on image data of manufactured parts, which are obtained from a video camera and transmitted to a computer. We also propose feature extraction and surface determination algorithms that extract, from the image data, information compatible with IGES CAD data. In addition, we suggest a methodology to reduce the search effort over CAD databases, based on a graph sub-matching algorithm in the GEFG (Generalized Edge Face Graph) representation of each part.
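
The sketch below covers only the first stage, edge detection from the gradient magnitude of a Sobel operator; the thinning, pruning, and IGES mapping described in the abstract are not reproduced, and the synthetic part image and threshold are assumptions.

```python
import numpy as np

def sobel_edges(img, thresh=0.25):
    """Return a boolean edge map from the Sobel gradient magnitude of a normalized image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    mag = np.hypot(gx, gy)
    return mag > thresh * mag.max()

# synthetic "part" image: a bright rectangle on a dark background
part = np.zeros((64, 64))
part[16:48, 16:48] = 1.0
edges = sobel_edges(part)
print("edge pixels found:", int(edges.sum()))
```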


A Study on Detection of Object Position and Displacement for Obstacle Recognition of UCT (무인 컨테이너 운반차량의 장애물 인식을 위한 물체의 위치 및 변위 검출에 관한 연구)

  • 이진우;이영진;조현철;손주한;이권순
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 1999.10a / pp.321-332 / 1999
  • Detecting object movement is important for obstacle recognition and path searching by unmanned container transporters (UCTs) equipped with a vision sensor. This paper presents a method for extracting objects and tracing the trajectory of a moving object using a CCD camera, and describes a method for recognizing object shapes with a neural network. Pixel points are transformed into object positions in real space using the proposed viewport. The technique is applied in a single-camera vision system based on a floor map.
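
The abstract does not define the proposed viewport, so the sketch below assumes a planar-homography model: four known pixel/floor correspondences determine a homography that maps any pixel on the floor plane to real-space coordinates. All numeric values are illustrative.

```python
import numpy as np

# assumed calibration correspondences: (u, v) pixels -> (X, Y) metres on the floor
pix = np.array([[100, 400], [540, 400], [600, 100], [40, 100]], dtype=float)
wld = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 10.0], [0.0, 10.0]])

# direct linear transform for the 3x3 homography H (h33 fixed to 1)
A, b = [], []
for (u, v), (X, Y) in zip(pix, wld):
    A.append([u, v, 1, 0, 0, 0, -u * X, -v * X]); b.append(X)
    A.append([0, 0, 0, u, v, 1, -u * Y, -v * Y]); b.append(Y)
h = np.linalg.solve(np.array(A), np.array(b))
H = np.append(h, 1.0).reshape(3, 3)

def pixel_to_floor(u, v):
    X, Y, w = H @ np.array([u, v, 1.0])
    return X / w, Y / w

print("pixel (320, 250) ->", pixel_to_floor(320, 250))
```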


A Study on the Point Placement Task of Robot System Based on the Vision System (비젼시스템을 이용한 로봇시스템의 점배치실험에 관한 연구)

  • Jang, Wan-Shik;You, Chang-gyou
    • Journal of the Korean Society for Precision Engineering / v.13 no.8 / pp.175-183 / 1996
  • This paper presents a three-dimensional robot task using a vision-based control method. A minimum of two cameras is required to place points on the end effectors of n-degree-of-freedom manipulators relative to other bodies. This is accomplished using a sequential estimation scheme that permits placement of these points in each of the two-dimensional image planes of the monitoring cameras. The estimation model generalizes known three-axis manipulator kinematics to accommodate unknown relative camera position and orientation, using six uncertainty-of-view parameters estimated by an iterative method.
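
As an assumed illustration of a sequential estimation scheme (not the paper's exact model), the sketch below runs a recursive least-squares estimator that recovers the uncertain view parameters of a simple affine camera map from successive image-plane observations.

```python
import numpy as np

rng = np.random.default_rng(1)
true_params = np.array([0.8, -0.2, 120.0])   # assumed image model: u = a*X + b*Y + c

theta = np.zeros(3)                          # parameter estimate
P = np.eye(3) * 1e3                          # estimate covariance

for _ in range(200):
    X, Y = rng.uniform(-1, 1, size=2)        # a point placed by the manipulator
    phi = np.array([X, Y, 1.0])              # regressor
    u = true_params @ phi + rng.normal(0, 0.5)   # noisy image-plane observation

    # recursive least-squares update
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + K * (u - phi @ theta)
    P = P - np.outer(K, phi) @ P

print("estimated view parameters:", np.round(theta, 3))
```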


Industrial Bin-Picking Applications Using Active 3D Vision System (능동 3D비전을 이용한 산업용 로봇의 빈-피킹 공정기술)

  • Tae-Seok Jin
    • Journal of the Korean Society of Industry Convergence / v.26 no.2_2 / pp.249-254 / 2023
  • The use of robots in automated factories requires accurate bin picking to ensure that objects are correctly identified and selected. For atypical objects with multiple reflections from their surfaces, this is a challenging task. In this paper, we developed a random 3D bin-picking system by integrating a low-cost vision system with a robot system. The vision system identifies the position and posture of candidate parts; the robot system then validates whether one of the candidates is pickable and, if so, picks up the part and places it accurately in the correct location.
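
A schematic sketch of the pick loop described in the abstract; the candidate poses, the pickability check, and the robot actions are placeholders with assumed names, not the actual vision or robot API.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    x: float      # estimated part position [m]
    y: float
    z: float
    rz: float     # estimated yaw [rad]
    score: float  # matching confidence reported by the vision system

def is_pickable(c: Candidate) -> bool:
    """Placeholder check: confidence high enough and pose within an assumed reach."""
    in_reach = 0.2 < c.x < 0.9 and abs(c.y) < 0.5
    return c.score > 0.6 and in_reach

candidates = [                      # poses as they might come from 3D matching
    Candidate(0.95, 0.10, 0.05, 0.3, 0.82),
    Candidate(0.55, 0.20, 0.07, 1.1, 0.74),
    Candidate(0.40, -0.10, 0.06, 0.0, 0.55),
]

for c in sorted(candidates, key=lambda c: c.score, reverse=True):
    if is_pickable(c):
        print(f"pick part at ({c.x:.2f}, {c.y:.2f}, {c.z:.2f}), yaw {c.rz:.2f} rad")
        break
else:
    print("no pickable candidate; re-acquire the scene")
```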

Development of an algorithm for solving correspondence problem in stereo vision (스테레오 비젼에서 대응문제 해결을 위한 알고리즘의 개발)

  • Im, Hyuck-Jin;Gweon, Dae-Gab
    • Journal of the Korean Society for Precision Engineering / v.10 no.1 / pp.77-88 / 1993
  • In this paper, we propose a stereo vision system that solves the correspondence problem under large disparity and sudden environmental changes, which result from the small distance between the cameras and the working objects. A specific feature is first divided into predefined elementary features, and these are then combined to obtain coded data for solving the correspondence problem. A neural network extracts the elementary features from the specific feature and provides robustness to noise and to some changes in shape. Fourier transformation and log-polar mapping are used to obtain neural network input data that are invariant to shift, scale, and rotation. Finally, an associative memory produces the coded data of the specific feature from the combination of elementary features. Even for specific features with some variation in shape, satisfactory three-dimensional data were obtained from the corresponding codes.
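
A minimal sketch of the invariance idea, under assumed grid sizes and a synthetic pattern: the 2-D FFT magnitude removes translation, and resampling that magnitude onto a log-polar grid turns rotation and scale into shifts.

```python
import numpy as np

img = np.zeros((128, 128))
img[40:90, 50:80] = 1.0                       # synthetic feature patch

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))   # shift-invariant magnitude

# resample the magnitude onto a log-polar grid (nearest neighbour)
n_r, n_t = 64, 64
cy, cx = spectrum.shape[0] // 2, spectrum.shape[1] // 2
max_r = min(cy, cx) - 1
rho = np.exp(np.linspace(0, np.log(max_r), n_r))        # logarithmic radii
theta = np.linspace(0, 2 * np.pi, n_t, endpoint=False)  # angles

rr = (cy + np.outer(rho, np.sin(theta))).round().astype(int)
cc = (cx + np.outer(rho, np.cos(theta))).round().astype(int)
log_polar = spectrum[rr.clip(0, spectrum.shape[0] - 1),
                     cc.clip(0, spectrum.shape[1] - 1)]

print("log-polar signature shape:", log_polar.shape)
```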


Automatic indoor progress monitoring using BIM and computer vision

  • Deng, Yichuan;Hong, Hao;Luo, Han;Deng, Hui
    • International conference on construction engineering and project management / 2017.10a / pp.252-259 / 2017
  • The existing manual methods of recording the actual progress of a construction site have several drawbacks: they rely heavily on the experience of professional engineers and are work-intensive, time-consuming, and error-prone. A method integrating computer vision and BIM (Building Information Modeling) is presented for automatic indoor progress monitoring. The developed method can accurately calculate the engineering quantity of a target component from time-lapse images. First, sample images of the on-site target are collected to train a classifier. After the construction images have been processed by edge detection and the classifier, a voting algorithm based on geometry and vector operations delimits the target contour. Then, following the camera calibration principle, image pixel coordinates are converted into real-world coordinates, which are corrected using the geometric information in the BIM model. Finally, the actual engineering quantity is calculated.
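
A sketch of the coordinate-conversion step under assumed intrinsics and extrinsics: a pixel is back-projected as a ray and intersected with a wall plane whose equation would come from the BIM model. All numeric values are illustrative, not calibration results from the paper.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],    # assumed camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                          # assumed camera orientation (looking along +Z)
t = np.array([0.0, 0.0, 0.0])          # assumed camera position at the world origin

# wall plane from BIM: n . X = d  (a plane 5 m in front of the camera, assumed)
n = np.array([0.0, 0.0, 1.0])
d = 5.0

def pixel_to_wall(u, v):
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray in camera frame
    ray_world = R.T @ ray_cam                            # rotate into the world frame
    origin = -R.T @ t                                    # camera centre in world frame
    s = (d - n @ origin) / (n @ ray_world)               # ray-plane intersection
    return origin + s * ray_world

corner_a = pixel_to_wall(300, 200)
corner_b = pixel_to_wall(420, 360)
area = abs((corner_b[0] - corner_a[0]) * (corner_b[1] - corner_a[1]))
print(f"wall points {corner_a.round(2)} and {corner_b.round(2)}, patch area {area:.2f} m^2")
```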
