• Title/Summary/Keyword: vision of teaching

Search Results: 97

Implementation of Automatic Teaching System for Subassembly Process in Shipbuilding (선박 소조립 공정용 로봇 자동교시 시스템의 구현)

  • 김정호;유중돈;김진오;신정식;김성권
    • Journal of Welding and Joining / v.14 no.2 / pp.96-105 / 1996
  • Robot systems are widely utilized in the shipbuilding industry to enhance productivity by automating the welding process. To increase productivity further, the time spent on robot teaching must be reduced. In this work, an automatic teaching system is developed for the subassembly process in the shipbuilding industry. A laser/vision sensor is designed to detect the weld seam, and the image of the fillet joint is processed using the arm method. Positions of weld seams defined in the CAD database are transformed into robot coordinates, and the dynamic programming technique is applied to find a sub-optimal weld path. Experiments are carried out to verify the system performance. The results show that the proposed automatic teaching system performs successfully and can be applied to the robot system in the subassembly process.
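The abstract above mentions applying dynamic programming to find a sub-optimal weld path. As an illustration only (the paper's exact formulation is not given), here is a Held-Karp dynamic-programming sketch in Python that orders hypothetical seam start points, already transformed into robot coordinates, so as to minimise torch travel:

```python
import math
from itertools import combinations

def path_order(seams, home=(0.0, 0.0)):
    """Order weld seams to minimise torch travel using Held-Karp
    dynamic programming (tractable for the small seam counts of a
    single panel). `seams` holds (x, y) seam start points already
    transformed into robot coordinates; `home` is the torch rest pose."""
    n = len(seams)
    pts = [home] + list(seams)
    d = [[math.dist(a, b) for b in pts] for a in pts]
    # best[(S, j)]: (cost, parent) of the cheapest route from home
    # that visits exactly the seams in S and ends at seam j.
    best = {(frozenset([j]), j): (d[0][j], 0) for j in range(1, n + 1)}
    for size in range(2, n + 1):
        for S in combinations(range(1, n + 1), size):
            fs = frozenset(S)
            for j in S:
                rest = fs - {j}
                best[(fs, j)] = min(
                    (best[(rest, k)][0] + d[k][j], k) for k in rest)
    full = frozenset(range(1, n + 1))
    cost, j = min((best[(full, j)][0], j) for j in range(1, n + 1))
    order, S = [], full
    while S:
        order.append(j - 1)                    # back to 0-based index
        S, (_, j) = S - {j}, best[(S, j)]
    return order[::-1], cost
```

For larger seam counts the exact DP becomes infeasible, which is presumably why the paper speaks of a "sub-optimum" path; a heuristic such as nearest-neighbour with 2-opt refinement would take its place.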

Teaching Assistant System using Computer Vision (컴퓨터 비전을 이용한 강의 도우미 시스템)

  • Kim, Tae-Jun;Park, Chang-Hoon;Choi, Kang-Sun
    • Journal of Practical Engineering Education / v.5 no.2 / pp.109-115 / 2013
  • In this paper, a teaching assistant system using computer vision is presented. With the proposed system, lecturers can easily and seamlessly use various lecture contents such as lecture notes and related video clips. To switch between different lecture contents and control multimedia, lecturers simply draw pre-defined symbols on the board without pausing the class. In the proposed teaching assistant system, a feature descriptor, the so-called shape context, is used to recognize the pre-defined symbols.
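The shape context descriptor named above is, in its standard form, a log-polar histogram of the relative positions of a shape's edge points. A minimal Python sketch of that idea (the bin counts and the mean-distance normalisation are illustrative choices, not the paper's parameters):

```python
import math

def shape_context(points, n_r=3, n_theta=8):
    """Log-polar 'shape context' histograms: for each point, bin the
    log-distance and angle of every other point. Bin counts and the
    mean-distance scale normalisation here are illustrative choices."""
    n = len(points)
    dists = [math.dist(points[i], points[j])
             for i in range(n) for j in range(n) if i != j]
    mean_d = sum(dists) / len(dists)   # normalise distances for scale invariance
    descriptors = []
    for i, (px, py) in enumerate(points):
        hist = [[0] * n_theta for _ in range(n_r)]
        for j, (qx, qy) in enumerate(points):
            if i == j:
                continue
            dx, dy = qx - px, qy - py
            r = math.hypot(dx, dy) / mean_d
            rbin = min(n_r - 1, max(0, int(math.log2(r) + n_r - 1)))
            theta = math.atan2(dy, dx) % (2 * math.pi)
            tbin = min(n_theta - 1, int(theta / (2 * math.pi) * n_theta))
            hist[rbin][tbin] += 1
        descriptors.append(hist)
    return descriptors
```

Matching a drawn symbol against the pre-defined templates then reduces to comparing these histograms, e.g. with a chi-squared distance.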

An Automatic Teaching Method by Vision Information for A Robotic Assembly System

  • Ahn, Cheol-Ki;Lee, Min-Cheol;Kim, Jong-Hyung
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1999.10a / pp.65-68 / 1999
  • In this study, an off-line automatic teaching method using vision information for robotic assembly tasks is proposed. Many industrial robots are still taught and programmed with a teaching pendant: the robot is guided by a human operator to the desired application locations, and these motions are recorded, later edited in the robot language used by the robot controller, and played back repetitively to perform the robot task. This conventional teaching method is time-consuming and somewhat dangerous. In the proposed method, the operator teaches the desired locations on the image acquired through a CCD camera mounted on the robot hand. The robot-language program is automatically generated and transferred to the robot controller. This teaching process is implemented through off-line programming (OLP) software developed for the robotic assembly system used in this study. In order to transform locations in image coordinates into robot coordinates, a calibration process is established. The proposed teaching method is implemented and evaluated on the assembly system for soldering electronic parts on a circuit board. A six-axis articulated robot executes the assembly task according to the off-line automatic teaching.
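The calibration step that maps image coordinates to robot coordinates can be illustrated, under the common simplifying assumption of a planar work surface (a circuit board), as fitting an affine map from three image/robot point pairs. This is a sketch only, not the authors' calibration procedure:

```python
def solve_affine(img_pts, rob_pts):
    """Fit (x, y) = A @ (u, v, 1) from three image/robot point pairs,
    assuming the work surface is a plane. Solved exactly with
    Cramer's rule on the 3x3 system [[u, v, 1]] @ coeffs = w."""
    (u1, v1), (u2, v2), (u3, v3) = img_pts
    det = u1 * (v2 - v3) - v1 * (u2 - u3) + (u2 * v3 - u3 * v2)
    def row(w1, w2, w3):
        # coefficients a, b, c of w = a*u + b*v + c
        a = (w1 * (v2 - v3) - v1 * (w2 - w3) + (w2 * v3 - w3 * v2)) / det
        b = (u1 * (w2 - w3) - w1 * (u2 - u3) + (u2 * w3 - u3 * w2)) / det
        c = (u1 * (v2 * w3 - v3 * w2) - v1 * (u2 * w3 - u3 * w2)
             + w1 * (u2 * v3 - u3 * v2)) / det
        return a, b, c
    xs = [p[0] for p in rob_pts]
    ys = [p[1] for p in rob_pts]
    return row(*xs), row(*ys)

def to_robot(affine, uv):
    """Map a taught pixel location into robot work-plane coordinates."""
    (ax, bx, cx), (ay, by, cy) = affine
    u, v = uv
    return (ax * u + bx * v + cx, ay * u + by * v + cy)
```

A full hand-eye calibration would also model lens distortion and camera pose; with more than three reference points, a least-squares fit would replace the exact solve.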

A User Interface for Vision Sensor based Indirect Teaching of a Robotic Manipulator (시각 센서 기반의 다 관절 매니퓰레이터 간접교시를 위한 유저 인터페이스 설계)

  • Kim, Tae-Woo;Lee, Hoo-Man;Kim, Joong-Bae
    • Journal of Institute of Control, Robotics and Systems / v.19 no.10 / pp.921-927 / 2013
  • This paper presents a user interface for vision-based indirect teaching of a robotic manipulator with Kinect and IMU (Inertial Measurement Unit) sensors. The user interface system is designed to control the manipulator more easily in joint space, Cartesian space, and the tool frame. We use the skeleton data of the user from the Kinect and wrist-mounted IMU sensors to calculate the user's joint angles and wrist movement for robot control. The interface system proposed in this paper allows the user to teach the manipulator without a pre-programming process. This will shorten robot teaching time and eventually increase productivity. Simulation and experimental results are presented to verify the performance of the robot control and interface system.
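Computing a user's joint angle from skeleton data, as described above, reduces to the angle between two limb vectors at the joint. A minimal sketch (the shoulder/elbow/wrist joint choice is an illustrative assumption, and any filtering or IMU fusion is omitted):

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b given three 3-D skeleton positions a, b, c,
    e.g. shoulder, elbow, wrist as read from a Kinect skeleton
    stream. Returns the angle in radians."""
    v1 = [ai - bi for ai, bi in zip(a, b)]   # limb vector b -> a
    v2 = [ci - bi for ci, bi in zip(c, b)]   # limb vector b -> c
    dot = sum(x * y for x, y in zip(v1, v2))
    norm = (math.sqrt(sum(x * x for x in v1))
            * math.sqrt(sum(x * x for x in v2)))
    # clamp against floating-point drift before acos
    return math.acos(max(-1.0, min(1.0, dot / norm)))
```

In a joint-space teaching mode, angles computed this way per frame would be streamed to the manipulator controller as joint setpoints.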

3-D vision sensor for arc welding industrial robot system with coordinated motion

  • Shigehiru, Yoshimitsu;Kasagami, Fumio;Ishimatsu, Takakazu
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1992.10b / pp.382-387 / 1992
  • In order to obtain the desired arc welding performance, we previously developed an arc welding robot system that enables coordinated motions of dual-arm robots. In this system one robot arm holds the welding target as a positioning device, and the other robot moves the welding torch. In such a dual-arm robot system, the positioning accuracy of the robots is an important problem, since conventional industrial robots do not have sufficient absolute positioning accuracy. To cope with this problem, our robot system employed the teaching-playback method, in which absolute errors are compensated by the operator's visual feedback. With this system, ideal arc welding that accounts for the posture of the welding target and the direction of gravity became possible. One problem still remained, however, even though we developed an original teaching method for dual-arm robots with coordinated motions: manual teaching tasks are still tedious, since they require fine movements with intense attention. Therefore, we developed a 3-dimensional vision-guided robot control method for our welding robot system with coordinated motions. In this paper we present the 3-dimensional vision sensor that guides our arc welding robot system. The sensing device is compactly designed and mounted on the tip of the arc welding robot. The sensor detects the 3-dimensional shape of the groove on the target work which needs to be welded, and the welding robot is controlled to trace the groove accurately. The principle of the 3-dimensional measurement is based on the slit-ray projection method: two laser slit-ray projectors and one CCD TV camera are compactly mounted, and careful image processing enables 3-dimensional data processing without suffering from disturbance light. The 3-dimensional information of the target groove is combined with the rough teaching data given by the operator in advance, so the teaching tasks are simplified.
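The slit-ray projection principle underlying this sensor can be sketched as laser/camera triangulation. The geometry below (idealised pinhole camera, projector offset along x, slit plane tilted by theta) is an assumption for illustration, not the authors' exact optical setup:

```python
import math

def slit_ray_depth(u, v, f, baseline, theta):
    """Triangulate a laser-stripe pixel (u, v) back to a 3-D point.
    Idealised geometry: pinhole camera at the origin looking along +z
    with focal length f (in pixels), slit projector offset by
    `baseline` along x, slit plane tilted by `theta` toward the
    optical axis, so the plane contains points (baseline - z*tan(theta), y, z)."""
    # camera ray through the pixel: (u*z/f, v*z/f, z); intersect with
    # the laser plane x = baseline - z*tan(theta) and solve for z
    z = baseline / (u / f + math.tan(theta))
    return (u * z / f, v * z / f, z)
```

Sweeping the stripe across the groove and triangulating each stripe pixel yields the 3-dimensional groove profile that the robot then traces.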

T-joint Laser Welding of Circular and Square Pipes Using the Vision Tracking System (용접선 추적 비전장치를 이용한 원형-사각 파이프의 T형 조인트 레이저용접)

  • Son, Yeong-Il;Park, Gi-Yeong;Lee, Gyeong-Don
    • Laser Solutions / v.12 no.1 / pp.19-24 / 2009
  • Because of its fast and precise welding performance, laser welding is emerging as an excellent new welding method. However, precise focusing and robust seam tracking are required to apply laser welding in practical fields. In order to laser-weld a T-joint such as a circular pipe on a square pipe, which occurs in three-dimensional structures such as an aluminum space frame, a visual sensor system was developed to automate focusing and seam tracking. The developed sensor system consists of a digital CCD camera, a structured laser, and a vision processor. It is moved and positioned by a 2-axis motorized stage, which is attached to a 6-axis robot manipulator together with a laser welding head. After the stripe-type structured laser illuminates the target surface, images are captured through the digital CCD camera. From the image, the seam error and defocusing error are calculated using image processing algorithms, which include efficient techniques for handling continuously changing image patterns. These errors are corrected by the stage off-line during welding or teaching. Laser welding of a circular pipe on a square pipe was successful with the vision tracking system, which reduced the path positioning and defocusing errors caused by robot teaching or geometrical variation of the specimens and jig holding.

Preservice Science Teachers' Previous Experience, Beliefs, and Visions of Science Teaching and Learning

  • Kang, Kyung-Hee;Lee, Sun-Kyung
    • Journal of The Korean Association For Science Education / v.24 no.1 / pp.90-108 / 2004
  • This study aims to understand preservice science teachers' previous experience, their beliefs about teaching and learning, and their visions of themselves as future teachers. The data were collected from two individual interviews with 7 voluntary students and analyzed qualitatively for category construction. As results, we present two cases showing that the participants' different views of teaching science are strongly related to their previous experiences as learners and observers in schools, and that there is apparent consistency between each participant's beliefs about science teaching and learning and their own vision of teaching in a science classroom. Implications of the results for preservice science teacher education are discussed.

3-D vision sensor system for arc welding robot with coordinated motion by transputer system

  • Ishida, Hirofumi;Kasagami, Fumio;Ishimatsu, Takakazu
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1993.10b / pp.446-450 / 1993
  • In this paper we propose an arc welding robot system in which two robots work in coordination and employ a vision sensor. In this system one robot arm holds the welding target as a positioning device, and the other robot moves the welding torch. The vision sensor consists of two laser slit-ray projectors and one CCD TV camera and is mounted on the top of one robot. The vision sensor detects the 3-dimensional shape of the groove on the target work which needs to be welded, and the two robots are moved in coordination to trace the groove accurately. In order to realize fast image processing, five sets of high-speed parallel processing units (transputers) are employed. The teaching tasks for the coordinated motions are simplified considerably by this vision sensor. Experimental results demonstrate the applicability of our system.

New Learning Environment of Linear Algebra in Korea

  • Lee Sang-Gu;Han Yoonmee
    • Research in Mathematical Education / v.9 no.1 s.21 / pp.59-68 / 2005
  • We are introducing a new learning environment for linear algebra at Sungkyunkwan University, and this is changing our teaching methods. Korea's e-Campus Vision 2007 is a program, begun in 2003, to equip lecture rooms with projection equipment, a View cam, a tablet PC, and an internet D-base. Now our linear algebra classes at Sungkyunkwan University can be taught in a modern learning environment. Lectures can easily be recorded, and students can review them right after class. At Sungkyunkwan University almost 100% of all large and medium-size lecture rooms had been remodeled by March 2005 and are in use. We introduce this system in detail and describe how this learning environment changed our teaching method. An analysis of the positive effects will be added.

Utilization of Vision in Off-Line Teaching for assembly robot (조립용 로봇의 오프라인 교시를 위한 영상 정보의 이용에 관한 연구)

  • 안철기
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 2000.04a / pp.543-548 / 2000
  • In this study, an interactive programming method for a robot in electronic part assembly tasks is proposed. Many industrial robots are still taught and programmed with a teach pendant: the robot is guided by a human operator to the desired application locations, and these motions are recorded, later edited in the robot language used by the robot controller, and played back repetitively to perform the robot task. This conventional teaching method is time-consuming and somewhat dangerous. In the proposed method, the operator teaches the desired locations on the image acquired through a CCD camera mounted on the robot hand. The robot-language program is automatically generated and downloaded to the robot controller. This teaching process is implemented through off-line programming (OLP) software developed for the robotic assembly system used in this study. In order to transform locations in image coordinates into robot coordinates, a calibration process is established. The proposed teaching method is implemented and evaluated on an assembly system for soldering electronic parts on a circuit board. A six-axis articulated robot executes the assembly task in the system according to the off-line teaching.
