• Title/Abstract/Keywords: vision camera

1,376 search results (processing time: 0.028 s)

비젼을 이용한 LDM의 위치 제어 방식 (A position control method of LDM using vision system)

  • 김영렬;김주웅;엄기환;이현관
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2003년도 하계종합학술대회 논문집 V / pp.2505-2508 / 2003
  • In this paper, we propose a method to control the position of an LDM (Linear DC Motor) using a vision system. In the proposed method, a vision system detects the position, and the main computer calculates the PID control output, which is delivered to the 80il actuator circuit over serial communication. To confirm the usefulness of the proposed method, we carried out position-control experiments on a small LDM using, as the vision system, a CCD camera with a performance of 30 frames/sec.

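The control loop described above can be sketched as follows: a vision sensor reports the LDM position at 30 frames/sec, a PID controller computes the output, and the output drives the actuator. The gains and the first-order toy plant here are illustrative assumptions, not values from the paper.

```python
# Toy sketch of a vision-fed PID position loop (gains and plant are assumptions).

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def run_loop(target, position, steps=600, dt=1.0 / 30.0):
    """Simulate a 30 frames/sec vision feedback loop on a toy first-order plant."""
    pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=dt)
    for _ in range(steps):
        u = pid.update(target - position)   # control output sent over serial
        position += u * dt                  # toy plant: velocity proportional to u
    return position

final = run_loop(target=10.0, position=0.0)
```

With these assumed gains the toy loop settles close to the target within the simulated 20 seconds.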

Automated Optical Inspection 시스템의 이미지 획득과정을 전산모사하는 Vision Inspector 개발 (Development of Vision Inspector for Simulating Image Acquisition in Automated Optical Inspection System)

  • 정상철;고낙훈;김대찬;서승원;최태일;이승걸
    • 한국광학회:학술대회논문집 / 한국광학회 2008년도 하계학술발표회 논문집 / pp.403-404 / 2008
  • This report describes the development of the Vision Inspector program, which can numerically simulate the image acquisition process of a machine vision system for the automatic optical inspection of products. The simulated system consists of an illuminator, a product to be inspected, and a camera with an image sensor, and the final image is obtained by ray tracing.

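The pipeline above (illuminator, product, camera, ray-traced image) can be sketched numerically with a point light, a flat Lambertian product surface, and a pinhole camera. All geometry, resolutions, and the reflectance model below are illustrative assumptions, not the report's actual model.

```python
# Toy image-acquisition simulation: back-project each pixel onto the product
# plane and shade it from a point illuminator (all parameters are assumptions).
import math

def render(width=8, height=8):
    light = (0.0, 0.0, 5.0)          # point illuminator above the surface
    focal = 2.0                      # pinhole camera "focal length" in pixel units
    cam_z = 4.0                      # camera height above the product plane z = 0
    image = [[0.0] * width for _ in range(height)]
    for v in range(height):
        for u in range(width):
            # back-project the pixel through the pinhole onto the plane z = 0
            x = (u - width / 2 + 0.5) / focal * cam_z
            y = (v - height / 2 + 0.5) / focal * cam_z
            # Lambertian shading: irradiance falls off with distance and angle
            dx, dy, dz = light[0] - x, light[1] - y, light[2]
            dist2 = dx * dx + dy * dy + dz * dz
            cos_theta = dz / math.sqrt(dist2)   # surface normal is +z
            image[v][u] = cos_theta / dist2
    return image

img = render()
```

As expected of a centred point light, pixels near the image centre come out brighter than the corners.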

로봇 축구 대회를 위한 영상 처리 시스템 (A Vision System for Robot Soccer Game)

  • 고국원;최재호;김창효;김경훈;김주곤;이수호;조형석
    • 한국정밀공학회:학술대회논문집 / 한국정밀공학회 1996년도 추계학술대회 논문집 / pp.434-438 / 1996
  • In this paper we present the multi-agent robot system and the vision system developed for participating in the micro robot soccer tournament. The multi-agent robot system consists of micro robots, a vision system, a host computer, and a communication module. The micro robots are equipped with two mini DC motors with encoders and gearboxes, an R/F receiver, a CPU, and infrared sensors for obstacle detection. The vision system is used to recognize the position of the ball and of the opponent robots, and the position and orientation of our robots. It is composed of a color CCD camera and a vision processing unit (AISI vision computer). The vision algorithm is based on a morphological method, and it takes about 90 msec to detect the ball, our three robots, and the three opponent robots with reasonable accuracy.

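The morphological idea behind detection of this kind can be sketched as binary opening (erosion then dilation), which removes isolated noise pixels before the blob centroid is taken as the object position. The image contents below are made up for illustration and do not come from the paper.

```python
# Toy morphological detection: opening (erode + dilate) then blob centroid.

def erode(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # a pixel survives erosion only if its whole 3x3 neighbourhood is set
            out[y][x] = int(all(img[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if img[y][x]:
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        if 0 <= y + dy < h and 0 <= x + dx < w:
                            out[y + dy][x + dx] = 1
    return out

def centroid(img):
    pts = [(x, y) for y, row in enumerate(img) for x, v in enumerate(row) if v]
    return (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))

# a 3x3 "ball" blob plus one isolated noise pixel
frame = [[0] * 10 for _ in range(10)]
for y in range(4, 7):
    for x in range(4, 7):
        frame[y][x] = 1
frame[1][8] = 1                       # noise pixel: removed by the opening
opened = dilate(erode(frame))
```

Opening erases the lone noise pixel while preserving the compact blob, whose centroid is then read off directly.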

용접자동화를 위한 주사빔을 이용한 시각센서에 관한 연구 (A Study on the Vision Sensor Using Scanning Beam for Welding Process Automation)

  • 유원상;나석주
    • 대한기계학회논문집A / Vol. 20 No. 3 / pp.891-900 / 1996
  • A vision sensor based on optical triangulation, with a laser as an auxiliary light source, can detect not only the seam position but also the shape of the seam. In this study, a vision sensor using a scanning laser beam was investigated. To design a vision sensor that accounts for the reflectivity of the sensing object and satisfies the desired resolution and measuring range, the equation of the focused laser beam, which has a Gaussian irradiance profile, was first formulated; second, the image-forming sequence was formulated; and third, the relation between a displacement on the measuring surface and the corresponding displacement in the camera plane was formulated. The focused beam diameter over the measuring range could thus be determined, and the influence of the relative location of the laser and the camera plane could be estimated. The measuring range and the resolution of the vision sensor, which was based on the Scheimpflug condition, could also be calculated. Based on these results, a vision sensor was developed and an adequate calibration technique was proposed. An image processing algorithm that recognizes the center of the joint and its shape information was also investigated. Using the developed vision sensor and image processing algorithm, the shape information of the vee, butt, and lap joints was extracted.
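The triangulation relation the sensor relies on can be sketched as: a height change of the surface shifts the imaged laser spot on the camera plane, and the geometry converts the shift back to a height. The baseline, focal length, pixel size, and projection angle below are illustrative assumptions, not the paper's design values.

```python
# Toy optical-triangulation conversion: sensor spot shift -> surface height.
import math

def height_from_shift(shift_px, pixel_size=1e-5, focal=0.025,
                      baseline=0.1, angle_deg=30.0):
    """Convert a laser-spot shift on the sensor into a surface height change."""
    dx = shift_px * pixel_size                          # shift in metres
    # lens magnification for a surface at the nominal stand-off distance
    standoff = baseline / math.tan(math.radians(angle_deg))
    magnification = focal / standoff
    # surface displacement, undoing magnification and the projection angle
    return dx / (magnification * math.tan(math.radians(angle_deg)))

h = height_from_shift(shift_px=20)
```

With these assumed numbers a 20-pixel spot shift corresponds to a height change of 2.4 mm, illustrating how the geometry fixes the resolution/range trade-off discussed in the abstract.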

Vision 시스템의 차량 인식률 향상에 관한 연구 (A Study on the Improvement of Vehicle Recognition Rate of Vision System)

  • 오주택;이상용;이상민;김영삼
    • 한국ITS학회 논문지 / Vol. 10 No. 3 / pp.16-24 / 2011
  • Automotive electronic control systems are developing rapidly in step with the legal and social demands to secure driver safety, and as hardware prices fall and sensors and processors become more capable, a variety of driver assistance systems using sensors such as radar, cameras, and lasers are being put into practical use. In a preceding study, we developed a vision-based dangerous-driving analysis program that uses images acquired from a CCD camera to recognize the lane of the test vehicle and the vehicles located nearby or approaching, so that the causes and consequences of a driver's dangerous driving can be analyzed. However, the vision system developed in that study showed a sharply reduced recognition rate for lanes and vehicles where sunlight is insufficient, such as in tunnels and at sunrise and sunset. In this study, we therefore develop a brightness-adaptation algorithm and incorporate it into the vision system so that the recognition rate for lanes and vehicles is improved anytime and anywhere, allowing the causes of dangerous driving to be analyzed clearly.
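A brightness-adaptation step of the kind described above can be sketched with histogram equalization, a standard way to stretch the contrast of frames captured in tunnels or at sunrise/sunset. This is an illustrative stand-in, not the paper's actual algorithm, and the sample image is made up.

```python
# Histogram equalization of a 2-D list of grey values (illustrative stand-in).

def equalize(image, levels=256):
    flat = [p for row in image for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # cumulative distribution of grey values
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    n = len(flat)
    cdf_min = next(v for v in cdf if v > 0)
    if n == cdf_min:                     # flat image: nothing to stretch
        return [row[:] for row in image]
    # map each grey value through the normalised CDF onto [0, levels-1]
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in image]

dark = [[10, 10, 12], [12, 14, 14]]      # a low-contrast "tunnel" patch
bright = equalize(dark)
```

The three grey levels of the dark patch are spread across the full range, which is exactly the effect a lane/vehicle detector needs in low-light frames.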

로봇 OLP 보상을 위한 시각 서보잉 응용에 관한 연구 (A Study on Visual Servoing Application for Robot OLP Compensation)

  • 김진대;신찬배;이재원
    • 한국정밀공학회지 / Vol. 21 No. 4 / pp.95-102 / 2004
  • It is necessary to improve the accuracy and adaptability of intelligent robot systems to their working environment, and vision sensors have been studied for this purpose for a long time. However, camera and robot calibration are very difficult to perform, because three-dimensional reconstruction and many other processes are required in practice. This paper proposes image-based visual servoing to avoid the problems of the old calibration techniques and to support OLP (Off-Line Programming) path compensation. A virtual camera is modeled from the real camera's parameters, and the virtual images obtained from it make the perception process easier. The initial path generated by OLP is then compensated at the pixel level using the real and virtual images, respectively. Consequently, the proposed vision-assisted OLP teaching removes the calibration and reconstruction processes in the real working space. In a virtual simulation, improved performance is observed, and the robot path error is corrected from the image differences.
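The compensation idea above can be sketched as: the pixel offset between a feature seen by the real camera and the same feature rendered by the virtual camera is mapped back to the workspace and added to the OLP path point. The metres-per-pixel scale, the coordinates, and the planar mapping are all illustrative assumptions, much simpler than the paper's servoing formulation.

```python
# Toy OLP path compensation from a real-vs-virtual image pixel difference.

def compensate(path_point, real_px, virtual_px, metres_per_pixel=0.001):
    """Shift an OLP path point (x, y, z) by the observed image-space error."""
    du = real_px[0] - virtual_px[0]      # horizontal pixel error
    dv = real_px[1] - virtual_px[1]      # vertical pixel error
    return (path_point[0] + du * metres_per_pixel,
            path_point[1] + dv * metres_per_pixel,
            path_point[2])

corrected = compensate((0.50, 0.20, 0.10),
                       real_px=(330, 245), virtual_px=(320, 240))
```

A 10-pixel horizontal and 5-pixel vertical discrepancy becomes a 10 mm / 5 mm correction of the taught point under the assumed scale.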

단일 비전에서 칼만 필터와 차선 검출 필터를 이용한 모빌 로봇 주행 위치.자세 계측 제어에 관한 연구 (A Study on Measurement and Control of Position and Pose of a Mobile Robot Using a Kalman Filter and a Lane Detecting Filter in Monocular Vision)

  • 이용구;송현승;노도환
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 2000년도 제15차 학술회의논문집 / pp.81-81 / 2000
  • We use a camera to apply the human vision system to measurement, which requires knowledge of the camera parameters. The camera parameters consist of internal parameters and external parameters. By fixing the scale factor and focal length among the internal parameters, we can acquire the external parameters, and we want to use these parameters for an automatically driven vehicle equipped with a camera; in this respect the external parameters are the important ones. To get the lane coordinates in the image, we propose a lane detection filter. After searching for the lanes, we can find the vanishing point, and from it the y-axis rotation component (β). Using these parameters, we can find the x-axis translation component (Xo). Before making the stepping motor rotate to drive the y-axis rotation component (β) to zero, we estimate the image coordinates of the lane at time (t+1). Using this point, we apply a Kalman filter to the system, and then calculate the new parameters which minimize the error.

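The predict-then-correct cycle above can be sketched with a minimal one-dimensional Kalman filter tracking a lane coordinate from frame to frame. The noise variances and the measurement sequence below are illustrative assumptions, not the paper's model.

```python
# Minimal 1-D Kalman filter: predict the lane coordinate, correct with the
# new measurement (q = process noise variance, r = measurement noise variance).

def kalman_1d(measurements, q=1e-3, r=0.5):
    x, p = measurements[0], 1.0       # initial state estimate and covariance
    estimates = [x]
    for z in measurements[1:]:
        p += q                        # predict: covariance grows by process noise
        k = p / (p + r)               # Kalman gain
        x += k * (z - x)              # correct with the new measurement
        p *= (1 - k)                  # update covariance
        estimates.append(x)
    return estimates

noisy = [10.0, 10.4, 9.7, 10.2, 9.9, 10.1]   # jittery lane coordinate per frame
smoothed = kalman_1d(noisy)
```

The filtered track stays inside the spread of the raw measurements while damping the frame-to-frame jitter.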

단일카메라 3차원 입자영상추적유속계-액적내부 유동측정 (Single Camera 3D-Particle Tracking Velocimetry-Measurements of the Inner Flows of a Water Droplet)

  • 도덕희;성형진;김동혁;조경래;편용범;조용범
    • 한국가시화정보학회:학술대회논문집 / 한국가시화정보학회 2006년도 추계학술대회 논문집 / pp.1-6 / 2006
  • A single-camera stereoscopic three-dimensional measurement system has been developed based on a 3D-PTV algorithm. The system consists of one camera (1k × 1k) and a host computer. To attain three-dimensional measurements, a plate with stereo holes was installed inside the lens system. Three-dimensional measurement was successfully attained by adopting the conventional 3D-PTV camera calibration methods. As an application of the constructed system, a water droplet mixed with alcohol was placed on a transparent plastic plate with a contact diameter of 4 mm, and the particle motions inside the droplet were investigated. The measurement uncertainty of the constructed system was 0.04 mm, 0.04 mm, and 0.09 mm for the X, Y, and Z coordinates.

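A single camera with a stereo-hole plate still effectively provides two viewpoints, so the depth recovery reduces to the classic disparity relation Z = f·B/d. The focal length (in pixels), baseline, and disparity below are illustrative assumptions, not the system's calibrated values.

```python
# Toy stereo depth recovery: Z = focal * baseline / disparity.

def depth_from_disparity(disparity_px, focal_px=1000.0, baseline_mm=4.0):
    """Depth (mm) of a particle from its pixel disparity between the two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px

z = depth_from_disparity(50.0)   # 1000 * 4 / 50 = 80 mm
```

The inverse relation also shows why the Z uncertainty (0.09 mm here) exceeds the X/Y uncertainty: a fixed pixel error in disparity maps to a depth error that grows with distance.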

수직이착륙 무인항공기 자동 착륙을 위한 영상기반 항법 (Vision-based Navigation for VTOL Unmanned Aerial Vehicle Landing)

  • 이상훈;송진모;배종수
    • 한국군사과학기술학회지 / Vol. 18 No. 3 / pp.226-233 / 2015
  • Pose estimation is an important operation for many vision tasks. This paper presents a method of estimating the camera pose using a known landmark, for the purpose of autonomous vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV) landing. The proposed method uses a distinctive methodology to solve the pose estimation problem: we propose to combine the extrinsic parameters from known and unknown 3-D (three-dimensional) feature points and an inertial estimate of the camera's 6-DOF (degree-of-freedom) pose into one linear inhomogeneous equation. This allows us to use singular value decomposition (SVD) to neatly solve the given optimization problem. We present experimental results that demonstrate the ability of the proposed method to estimate the camera's 6-DOF pose with ease of implementation.
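The core step above is solving a linear inhomogeneous system A·x = b in the least-squares sense; the paper does this with SVD. As a self-contained sketch we solve a toy overdetermined system through the normal equations (AᵀA)x = Aᵀb instead, which gives the same least-squares solution for well-conditioned problems; the system and its entries are made up for illustration.

```python
# Toy least-squares solve of an overdetermined A x = b (2 unknowns) via the
# normal equations; the paper's method would use SVD for the same purpose.

def lstsq_2(A, b):
    """Least-squares solution of an n x 2 system via the normal equations."""
    # accumulate A^T A (2x2, symmetric) and A^T b (2-vector)
    s00 = sum(r[0] * r[0] for r in A)
    s01 = sum(r[0] * r[1] for r in A)
    s11 = sum(r[1] * r[1] for r in A)
    t0 = sum(r[0] * y for r, y in zip(A, b))
    t1 = sum(r[1] * y for r, y in zip(A, b))
    # invert the 2x2 system by Cramer's rule
    det = s00 * s11 - s01 * s01
    return ((s11 * t0 - s01 * t1) / det, (s00 * t1 - s01 * t0) / det)

# fit y = 2x + 1 from three exact observations: each row of A is (x, 1)
A = [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
b = [1.0, 3.0, 5.0]
slope, intercept = lstsq_2(A, b)
```

SVD is preferred over the normal equations in practice (as in the paper) because it stays numerically stable when AᵀA is nearly singular.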

비전 카메라 기반의 무논환경 자율주행 로봇을 위한 중심영역 추출 정보를 이용한 주행기준선 추출 알고리즘 (Guidance Line Extraction Algorithm using Central Region Data of Crop for Vision Camera based Autonomous Robot in Paddy Field)

  • 최근하;한상권;박광호;김경수;김수현
    • 로봇학회논문지 / Vol. 11 No. 1 / pp.1-8 / 2016
  • In this paper, we propose a new guidance-line extraction algorithm for a vision-camera-based autonomous agricultural robot in a paddy field. Finding the central point or area of each rice row is the important step in guidance-line extraction. To improve the accuracy of the guidance line, we use the central-region data of the crop, exploiting the fact that the rice leaves converge toward the central area of the rice row. The guidance line is extracted from the intersection points of extended virtual lines using modified robust regression; the extended virtual lines are the extensions of the segmented straight lines created on the edges of the rice plants in the image by the Hough transform. We have also verified the accuracy of the proposed algorithm by experiments in a real wet paddy.
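The Hough-transform step named above can be sketched minimally: binary edge pixels vote in (θ, ρ) space, and the best-voted cell gives the dominant line of a crop row. The accumulator resolution and the synthetic edge points are illustrative assumptions, not the paper's settings.

```python
# Minimal Hough transform: edge points vote for (theta, rho) line candidates.
import math

def hough_best_line(points, n_theta=180):
    votes = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            # normal-form line equation: rho = x cos(theta) + y sin(theta)
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            votes[(t, rho)] = votes.get((t, rho), 0) + 1
    # the most-voted accumulator cell is the dominant line
    (t_best, rho_best), _ = max(votes.items(), key=lambda kv: kv[1])
    return math.pi * t_best / n_theta, rho_best

# a vertical crop row: edge points sharing x = 5
row = [(5, y) for y in range(20)]
theta, rho = hough_best_line(row)
```

The vertical row of points is recovered as a line with θ near 0 and ρ = 5; in the paper's setting, several such lines would then be intersected and fed to the robust regression.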