• Title/Summary/Keyword: Vision Sensing


A Study on the Vision Sensor Using Scanning Beam for Welding Process Automation (용접자동화를 위한 주사빔을 이용한 시각센서에 관한 연구)

  • You, Won-Sang;Na, Suck-Joo
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.20 no.3
    • /
    • pp.891-900
    • /
    • 1996
  • The vision sensor, which is based on the optical triangulation principle with a laser as an auxiliary light source, can detect not only the seam position but also the shape of the seam. In this study, a vision sensor using a scanning laser beam was investigated. To design a vision sensor that considers the reflectivity of the sensed object and satisfies the desired resolution and measuring range, the equation of the focused laser beam with a Gaussian irradiance profile was formulated first; secondly, the image forming sequence was analyzed; and thirdly, the relation between a displacement on the measuring surface and the corresponding displacement on the camera plane was formulated. The focused beam diameter in the measuring range could therefore be determined, and the influence of the relative location between the laser and the camera plane could be estimated. The measuring range and resolution of the vision sensor, which was based on the Scheimpflug condition, could also be calculated. From these results a vision sensor was developed, and an adequate calibration technique was proposed. An image processing algorithm that recognizes the center of the joint and its shape information was also investigated. Using the developed vision sensor and image processing algorithm, the shape information of vee, butt and lap joints was extracted.
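
The optical-triangulation principle described above can be summarized in a short sketch: the camera ray through the imaged laser spot is intersected with the known laser light plane to recover the 3D surface point. This is a minimal illustration assuming a calibrated pinhole camera; the function and parameter names are illustrative and not taken from the paper.

```python
import numpy as np

def triangulate_on_light_plane(pixel_uv, K, plane_point, plane_normal):
    """Minimal optical-triangulation sketch (illustrative, not the paper's method).

    pixel_uv:     (u, v) image coordinates of the detected laser spot
    K:            3x3 camera intrinsic matrix (assumed calibrated)
    plane_point:  any 3D point on the laser light plane, in camera coordinates
    plane_normal: normal vector of the laser light plane, in camera coordinates
    Returns the 3D surface point in camera coordinates.
    """
    # Back-project the pixel into a viewing ray through the camera center.
    ray = np.linalg.inv(K) @ np.array([pixel_uv[0], pixel_uv[1], 1.0])
    # Intersect the ray P = t * ray with the plane (P - plane_point) . n = 0.
    t = np.dot(plane_point, plane_normal) / np.dot(ray, plane_normal)
    return t * ray
```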

Recent Trends and Prospects of 3D Content Using Artificial Intelligence Technology (인공지능을 이용한 3D 콘텐츠 기술 동향 및 향후 전망)

  • Lee, S.W.;Hwang, B.W.;Lim, S.J.;Yoon, S.U.;Kim, T.J.;Kim, K.N.;Kim, D.H.;Park, C.J.
    • Electronics and Telecommunications Trends
    • /
    • v.34 no.4
    • /
    • pp.15-22
    • /
    • 2019
  • Recent technological advances in three-dimensional (3D) sensing devices and machine learning techniques such as deep learning have enabled data-driven 3D applications. Research on artificial intelligence has advanced over the past few years, and 3D deep learning has been introduced. This is the result of the availability of high-quality big data, increases in computing power, and the development of new algorithms; before the introduction of 3D deep learning, the main targets for deep learning were one-dimensional (1D) audio files and two-dimensional (2D) images. The research field of deep learning has extended from discriminative models, such as classification/segmentation/reconstruction models, to generative models, such as those for style transfer and the generation of non-existing data. Unlike 2D learning data, 3D learning data are not easy to acquire. Although low-cost 3D data acquisition sensors have become increasingly popular owing to advances in 3D vision technology, the generation/acquisition of 3D data is still very difficult. Even if 3D data can be acquired, post-processing remains a significant problem. Moreover, it is not easy to directly apply existing network models such as convolutional networks owing to the various ways in which 3D data is represented. In this paper, we summarize technological trends in AI-based 3D content generation.
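
As the abstract notes, standard convolutional networks do not apply directly to unordered 3D point sets. A common workaround in point-based networks (e.g., PointNet-style models) is a shared per-point transform followed by a symmetric pooling operation; the toy numpy sketch below shows only that order-invariance idea, with illustrative weights and shapes that are not taken from the surveyed work.

```python
import numpy as np

def global_point_feature(points, w1, w2):
    """Toy permutation-invariant feature for an unordered point cloud.

    points: (N, 3) array of 3D points; w1: (3, 64), w2: (64, 128) weight matrices
    (illustrative shapes). The max-pool makes the result independent of point order.
    """
    h = np.maximum(points @ w1, 0.0)   # shared per-point layer with ReLU
    h = np.maximum(h @ w2, 0.0)        # second shared per-point layer
    return h.max(axis=0)               # symmetric pooling -> (128,) global feature
```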

The Application of the Welding Joint Tracking System (용접 이음 추적시스템의 응용)

  • Lee, Jeong-Ick;Koh, Byung-Kab
    • Transactions of the Korean Society of Machine Tool Engineers
    • /
    • v.16 no.2
    • /
    • pp.92-99
    • /
    • 2007
  • Welding fabrication invariably involves three distinct sequential steps: preparation, actual process execution, and post-weld inspection. One of the major problems in automating these steps and developing autonomous welding systems is the lack of proper sensing strategies. Conventionally, machine vision is used in robotic arc welding only for the correction of pre-taught welding paths in a single pass. In this paper, newly developed vision processing techniques are presented in detail, and their application in welding fabrication is covered. Finally, software for the joint tracking system is proposed.
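
Where machine vision is used only to correct a pre-taught welding path, the correction step itself is simple: each taught torch point is shifted by the seam offset measured by the sensor. The sketch below illustrates only that idea; the data layout and names are assumptions, not the paper's software.

```python
def correct_taught_path(taught_points, measured_offsets):
    """Shift pre-taught torch positions (x, y, z) by vision-measured seam offsets (dx, dy).

    Illustrative only: assumes the offsets are expressed in the same frame as the taught path.
    """
    corrected = []
    for (x, y, z), (dx, dy) in zip(taught_points, measured_offsets):
        corrected.append((x + dx, y + dy, z))
    return corrected
```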

Development of a vision sensor for measuring the weld groove parameters in arc welding process (자동 아크 용접공정의 용접개선변수 측정을 위한 시각 시스템)

  • 김호학;부광석;조형석
    • Journal of Welding and Joining
    • /
    • v.8 no.2
    • /
    • pp.58-69
    • /
    • 1990
  • In conventional arc welding, position errors of the weld torch with respect to the weld seam and variations of the groove dimension are induced by inaccurate fit-up and fixturing. In this study, a vision system has been developed to recognize and compensate for the position error and dimensional inaccuracy. The system uses a structured laser light illuminated on the weld groove and perceived by a CCD camera. A new algorithm to detect the edge of the reflected laser light is introduced for real-time processing. The developed system was applied to arbitrary weld paths with various types of joints in the arc welding process. The experimental results show that the proposed system can detect the weld groove parameters with good accuracy and yields good tracking performance.
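
The paper introduces its own real-time edge-detection algorithm for the reflected laser light; as a generic illustration of the same sensing step, the sketch below locates the stripe in each image column with an intensity-weighted centroid. The threshold and variable names are assumptions, not the paper's algorithm.

```python
import numpy as np

def laser_stripe_centers(gray_image, threshold=50):
    """Sub-pixel laser stripe position per image column (generic sketch).

    gray_image: 2D array with a bright laser stripe.
    Returns one row coordinate per column, or NaN where the stripe is absent.
    """
    img = gray_image.astype(float)
    img[img < threshold] = 0.0                      # suppress background
    rows = np.arange(img.shape[0])[:, None]
    weights = img.sum(axis=0)
    centers = np.full(img.shape[1], np.nan)
    valid = weights > 0
    centers[valid] = (rows * img).sum(axis=0)[valid] / weights[valid]
    return centers
```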

Development of a Robot arm capable of recognizing 3-D object using stereo vision

  • Kim, Sungjin;Park, Seungjun;Park, Hongphyo;Won, Sangchul
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.128.6-128
    • /
    • 2001
  • In this paper, we present a methodology of sensing and control for a robot system designed to be capable of grasping an object and moving it to a target point. A stereo vision system is employed to determine the depth map, which represents the distance from the camera. In the stereo vision system, we use a center-referenced projection to represent the discrete match space for stereo correspondence. This center-referenced disparity space contains new occlusion points in addition to the match points, which we exploit to create a concise representation of correspondence and occlusion. From the depth map we then find the target object's pose and position in 3-D space using a model-based recognition method.
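
Once correspondence (and occlusion) has been resolved, converting disparity to depth follows the standard pinhole stereo relation Z = fB/d. The sketch below shows only that conversion, assuming a rectified stereo pair; the occlusion marker and names are illustrative.

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m, occluded_value=0):
    """Depth map from a disparity map via Z = f * B / d (rectified pinhole stereo assumed).

    Pixels flagged as occluded (disparity == occluded_value) receive NaN depth.
    """
    d = disparity.astype(float)
    depth = np.full_like(d, np.nan)
    valid = d != occluded_value
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth
```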

Determination of Object Position Using Robot Vision (로보트 비전을 이용한 대상물체의 위치 결정에 관한 연구)

  • Park, K.T.
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.13 no.9
    • /
    • pp.104-113
    • /
    • 1996
  • In a robot system, robot manipulation requires information about the task and the objects to be handled, which may possess a variety of positions and orientations. In current industrial robot systems, determining the position and orientation of objects under industrial environments is one of the major problems. In order to pick up an object, the robot needs information about the position and orientation of the object, as well as the relation between the object and the gripper. When sensing is accomplished by a pinhole-model camera, the mathematical relationship between object points and their images is expressed in terms of perspective, i.e., central projection. In this paper, a new approach to determine the information of the supporting points related to the position and orientation of an object using a robot vision system is developed and verified in an experimental setup. The results will be useful for industrial, agricultural, and autonomous robots.
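
The central (perspective) projection mentioned above maps a world point X to an image point via x ~ K[R|t]X. A minimal sketch of that pinhole relation follows; the matrix names are the usual textbook ones, not identifiers from the paper.

```python
import numpy as np

def project_points(K, R, t, points_3d):
    """Central projection of world points with a pinhole camera model.

    K: 3x3 intrinsics, R: 3x3 rotation, t: (3,) translation, points_3d: (N, 3).
    Returns (N, 2) pixel coordinates.
    """
    cam = (R @ points_3d.T).T + t       # world -> camera coordinates
    uvw = (K @ cam.T).T                 # apply intrinsic parameters
    return uvw[:, :2] / uvw[:, 2:3]     # perspective divide by depth
```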

Automatic Seam Tracking for Plasma Arc Welding of a Corrugation Panel (파형부재의 플라즈마 아크용접을 위한 자동 용접선 추적)

  • Yang, Joo-Woong;Park, Young-Jun
    • Proceedings of the KSME Conference
    • /
    • 2003.11a
    • /
    • pp.1506-1511
    • /
    • 2003
  • This paper describes an automatic weld seam tracking method for a plasma arc welding system designed for a corrugation panel that consists of a linear section and a curved section with various curvatures. Due to the complexity of the panel shape, it is difficult to find the seam and operate the torch manually in the welding process. Therefore, a laser vision sensor for seam tracking is designed to sense the seam position and control the torch automatically. To achieve precise seam tracking, the design of the sensor head, image simulation, and calibration are carried out. Based on a series of experimental results, a compensation algorithm is added and real-time error compensation is achieved. The experimental results show that this vision sensor works effectively; it will provide more precise welding performance and greater convenience to the operator.
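
Real-time error compensation of the kind described here typically feeds the lateral seam error measured by the vision sensor back to the torch cross-slide each control cycle. The sketch below is a simple proportional correction with a step limit; the gain and limit values are illustrative assumptions, not the paper's tuning.

```python
def torch_correction_step(seam_error_mm, gain=0.8, max_step_mm=0.5):
    """One lateral torch correction step from the vision-measured seam error (illustrative).

    A proportional term is clamped so a single cycle never moves the torch too far.
    """
    step = gain * seam_error_mm
    return max(-max_step_mm, min(max_step_mm, step))
```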

Self Localization of Mobile Robot Using Sonar Sensing and Map Building

  • Kim, Ji-Min;Lee, Ki-Seong;Jeong, Tae-Won
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1931-1935
    • /
    • 2004
  • The localization problem is a critical issue for mobile robots, because it is a basic problem in the practical use of a mobile robot that must decide what to do, where to move, or how to reach a goal. Many robot localization technologies (such as GPS, vision, and sonar sensors) are already used in development, but improving accuracy brings the problem of increased hardware cost and additional electric power for each approach. The core issue is to develop a sensing algorithm that is accurate and yet economical. We used an ultrasonic sensor to implement comparatively accurate localization economically. Using the sensing data, we could build a grid map and estimate the position of the mobile robot. In this paper, we aim to obtain a satisfactory solution to this problem using an ultrasonic sensor.
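
Grid-map building from sonar readings is commonly done with a log-odds style occupancy update along each beam: cells before the measured range are made more likely to be free, and the cell at the measured range more likely to be occupied. The sketch below is a simplified single-ray version (real sonar beams have a wide cone); the cell size and update increments are assumed values, not the paper's.

```python
import numpy as np

def update_occupancy_grid(grid, robot_xy, beam_angle, measured_range,
                          cell_size=0.05, occ_inc=0.9, free_dec=-0.4):
    """Simplified log-odds occupancy update along one sonar beam (illustrative parameters).

    grid: 2D array of log-odds values; robot_xy: (x, y) in metres; beam_angle in radians.
    """
    steps = int(measured_range / cell_size)
    for i in range(steps + 1):
        x = robot_xy[0] + i * cell_size * np.cos(beam_angle)
        y = robot_xy[1] + i * cell_size * np.sin(beam_angle)
        gx, gy = int(x / cell_size), int(y / cell_size)
        if 0 <= gx < grid.shape[0] and 0 <= gy < grid.shape[1]:
            grid[gx, gy] += occ_inc if i == steps else free_dec
    return grid
```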

A New Linear Explicit Camera Calibration Method (새로운 선형의 외형적 카메라 보정 기법)

  • Do, Yongtae
    • Journal of Sensor Science and Technology
    • /
    • v.23 no.1
    • /
    • pp.66-71
    • /
    • 2014
  • Vision is the most important sensing capability for both humans and sensory smart machines, such as intelligent robots. The sensed real 3D world and its 2D camera image can be related mathematically by a process called camera calibration. In this paper, we present a novel linear solution for camera calibration. Unlike most existing linear calibration methods, the proposed technique can identify camera parameters explicitly. Through the step-by-step procedure of the proposed method, the real physical elements of the perspective projection transformation matrix between 3D points and the corresponding 2D image points can be identified. This explicit solution will be useful for many practical 3D sensing applications, including robotics. We verified the proposed method using various cameras under different conditions.
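
Explicit calibration means recovering the physical parameters themselves rather than only the combined projection matrix. The paper gives its own step-by-step linear procedure; as a generic illustration of what explicit recovery yields, the sketch below decomposes an estimated 3x4 projection matrix P = K[R|t] with an RQ factorization (an assumption of this sketch, not the paper's method).

```python
import numpy as np
from scipy.linalg import rq

def decompose_projection_matrix(P):
    """Split P = K [R | t] into explicit intrinsic and extrinsic parameters.

    Generic RQ-based decomposition (illustrative, not the paper's own linear steps).
    """
    K, R = rq(P[:, :3])
    signs = np.diag(np.sign(np.diag(K)))   # enforce positive focal lengths
    K, R = K @ signs, signs @ R
    t = np.linalg.inv(K) @ P[:, 3]
    return K / K[2, 2], R, t
```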

An Analysis for Urban Change Using Satellite Images and GIS (GIS와 위성영상을 이용한 도시의 변화량 분석)

  • Shin, Ke-Jong;Yu, Young-Geol;Hwang, Eui-Jin
    • Journal of the Korean GEO-environmental Society
    • /
    • v.6 no.4
    • /
    • pp.73-80
    • /
    • 2005
  • The domestic remote sensing field mainly uses Landsat TM images, which are suited to monitoring wide areas. In this study, the land cover change of rural and urban areas is analyzed as a time series using satellite images, and a vision for balanced urban development is proposed. An analysis of urban change, which provides fundamental data for city planning, is carried out through the integration of GIS spatial analysis techniques and remote sensing using satellite data.
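
Time-series land cover change analysis of this kind can be summarized by a change matrix that counts how pixels move between land-cover classes across two dates. The sketch below assumes two co-registered classified rasters with integer class labels; the names and shapes are illustrative.

```python
import numpy as np

def land_cover_change_matrix(classified_t1, classified_t2, n_classes):
    """Count pixel transitions between land-cover classes at two dates (illustrative sketch).

    classified_t1, classified_t2: co-registered 2D arrays of integer class labels.
    Rows index the class at the earlier date, columns the class at the later date.
    """
    idx = classified_t1.ravel() * n_classes + classified_t2.ravel()
    counts = np.bincount(idx, minlength=n_classes * n_classes)
    return counts.reshape(n_classes, n_classes)
```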
