• Title/Summary/Keyword: Camera Teaching

Search Results: 33

Construction of a Web-based e-Teaching Portfolio for the Efficient Management

  • Kim, Yun-Hae;Park, Se-Ho;Ha, Jin-Cheol
    • Journal of Engineering Education Research
    • /
    • v.15 no.4
    • /
    • pp.35-40
    • /
    • 2012
  • This study analyzes the current state (management, access, revision, transfer, and others) of teaching portfolios by surveying teaching portfolio managers (staff, researchers, teaching assistants, etc.) at 6 universities in the southeast of Korea. The rationale for this focus is that existing teaching portfolios either already suffer from problems in transfer, access, revision, and/or management, or are likely to develop such problems in the future. To solve this problem, this study builds a web-based e-teaching portfolio. According to the analysis, an engineering education system had been established at all 6 universities. The teaching portfolio was only partially digitalized in these systems, and problems in converting analog data into digital data made it difficult to construct a complete e-teaching portfolio. This study therefore focused on constructing an e-teaching portfolio without developing any additional system, by making positive use of the existing systems, and on selecting the appropriate components from among the existing teaching portfolio components. To convert the analog data into the digital data required for this study, we used a digital camera as the conversion device and converted the teaching portfolio components into forms appropriate for the e-teaching portfolio. Finally, we adapted the existing system for the e-teaching portfolio by using these devices and components.

Utilization of Vision in Off-Line Teaching for assembly robot (조립용 로봇의 오프라인 교시를 위한 영상 정보의 이용에 관한 연구)

  • Ahn, Cheol-Ki
    • Proceedings of the Korean Society of Machine Tool Engineers Conference
    • /
    • 2000.04a
    • /
    • pp.543-548
    • /
    • 2000
  • In this study, an interactive programming method for a robot performing electronic part assembly tasks is proposed. Many industrial robots are still taught and programmed with a teach pendant: the robot is guided by a human operator to the desired application locations, and these motions are recorded, later edited in the robot language used by the robot controller, and played back repetitively to perform the robot task. This conventional teaching method is time-consuming and somewhat dangerous. In the proposed method, the operator teaches the desired locations on the image acquired through a CCD camera mounted on the robot hand. The robot-language program is automatically generated and downloaded to the robot controller. This teaching process is implemented through off-line programming (OLP) software developed for the robotic assembly system used in this study. In order to transform locations in image coordinates into robot coordinates, a calibration process is established. The proposed teaching method is implemented and evaluated on an assembly system for soldering electronic parts onto a circuit board, where a six-axis articulated robot executes the assembly task according to the off-line teaching.
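
The calibration step mentioned in this abstract maps locations picked on the camera image into robot coordinates. The following is a minimal sketch of one such mapping, assuming a planar workspace and an affine calibration fitted from a few taught reference points; the point values and function names are illustrative, not taken from the paper.

```python
import numpy as np

def fit_affine_calibration(image_pts, robot_pts):
    """Estimate a 2x3 affine map from image (u, v) to robot (x, y) coordinates.

    image_pts, robot_pts: (N, 2) arrays of corresponding points, N >= 3.
    Solves [u v 1] @ A.T = [x y] in the least-squares sense.
    """
    image_pts = np.asarray(image_pts, dtype=float)
    robot_pts = np.asarray(robot_pts, dtype=float)
    ones = np.ones((image_pts.shape[0], 1))
    design = np.hstack([image_pts, ones])              # (N, 3)
    # Least-squares solution for each robot axis.
    coeffs, *_ = np.linalg.lstsq(design, robot_pts, rcond=None)
    return coeffs.T                                    # (2, 3)

def image_to_robot(affine, u, v):
    """Map a pixel location to planar robot coordinates."""
    return affine @ np.array([u, v, 1.0])

if __name__ == "__main__":
    # Hypothetical reference points taught once during calibration.
    image_ref = [(100, 80), (520, 90), (510, 400), (110, 390)]
    robot_ref = [(250.0, -120.0), (250.0, 120.0), (430.0, 115.0), (430.0, -118.0)]
    affine = fit_affine_calibration(image_ref, robot_ref)
    print(image_to_robot(affine, 300, 240))  # location picked on the image
```

A full implementation would also estimate lens distortion and the camera-to-hand transform, but the least-squares fit above captures the core image-to-robot mapping idea.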

An Automatic Teaching Method by Vision Information for A Robotic Assembly System

  • Ahn, Cheol-Ki;Lee, Min-Cheol;Kim, Jong-Hyung
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 1999.10a
    • /
    • pp.65-68
    • /
    • 1999
  • In this study, an off-line automatic teaching method using vision information for a robotic assembly task is proposed. Many industrial robots are still taught and programmed with a teaching pendant: the robot is guided by a human operator to the desired application locations, and these motions are recorded, later edited in the robot language used by the robot controller, and played back repetitively to perform the robot task. This conventional teaching method is time-consuming and somewhat dangerous. In the proposed method, the operator teaches the desired locations on the image acquired through a CCD camera mounted on the robot hand. The robot-language program is automatically generated and transferred to the robot controller. This teaching process is implemented through off-line programming (OLP) software developed for the robotic assembly system used in this study. In order to transform locations in image coordinates into robot coordinates, a calibration process is established. The proposed teaching method is implemented and evaluated on the assembly system for soldering electronic parts onto a circuit board, where a six-axis articulated robot executes the assembly task according to the off-line automatic teaching.
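
This entry and the previous one both generate the robot-language program automatically from the image-taught locations and transfer it to the controller. Below is a minimal sketch of that generation step, assuming already-calibrated (x, y) targets; the MOVE/SOLDER mnemonics are placeholders rather than any real controller language.

```python
def generate_robot_program(points, z_approach=50.0, z_work=5.0):
    """Emit a simple point-to-point program for taught solder locations.

    points: list of (x, y) robot coordinates in mm, already converted
    from image coordinates by the calibration step.
    """
    lines = ["; auto-generated from off-line teaching"]
    for i, (x, y) in enumerate(points, start=1):
        lines.append(f"MOVE P{i}_APPROACH X{x:.2f} Y{y:.2f} Z{z_approach:.2f}")
        lines.append(f"MOVE P{i}_WORK     X{x:.2f} Y{y:.2f} Z{z_work:.2f}")
        lines.append("SOLDER TIME 0.8")
        lines.append(f"MOVE P{i}_RETRACT  X{x:.2f} Y{y:.2f} Z{z_approach:.2f}")
    lines.append("END")
    return "\n".join(lines)

if __name__ == "__main__":
    taught = [(310.5, -42.0), (312.0, -18.5), (315.2, 4.1)]  # from image teaching
    print(generate_robot_program(taught))
```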

A Study on Visual Servoing Application for Robot OLP Compensation (로봇 OLP 보상을 위한 시각 서보잉 응용에 관한 연구)

  • Kim, Jin-Dae;Shin, Chan-Bae;Lee, Jae-Won
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.21 no.4
    • /
    • pp.95-102
    • /
    • 2004
  • It is necessary to improve the accuracy and adaptability of intelligent robot systems to their working environment, and vision sensors have long been studied for this purpose. However, camera and robot calibration is difficult to perform because three-dimensional reconstruction and many other processes are required in real applications. This paper suggests image-based visual servoing to avoid the problems of the old calibration technique and to support OLP (Off-Line Programming) path compensation. A virtual camera can be modeled from the real camera parameters, and the virtual images obtained from it make the perception process easier. In addition, the initial path generated from the OLP can be compensated at the pixel level using the real and virtual images, respectively. Consequently, the proposed visually assisted OLP teaching removes the calibration and reconstruction processes from the real working space. In a virtual simulation, better performance is observed, and the robot path error is compensated using the image differences.
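
The abstract compensates the OLP path from pixel-level differences between the real image and the image rendered by the virtual camera, rather than through explicit calibration. A minimal sketch of that idea follows, using a simple proportional image-based correction; the pixel-to-millimetre gain and the feature inputs are assumed placeholders.

```python
import numpy as np

def feature_error(real_uv, virtual_uv):
    """Pixel error between the feature seen by the real camera and the
    same feature rendered by the virtual camera along the planned path."""
    return np.asarray(real_uv, dtype=float) - np.asarray(virtual_uv, dtype=float)

def compensate_path(olp_path_xy, real_features, virtual_features,
                    gain_mm_per_px=0.08):
    """Shift each OLP waypoint by the scaled image error at that waypoint.

    olp_path_xy: (N, 2) planned robot positions in mm.
    real_features / virtual_features: (N, 2) pixel positions of the tracked
    feature in the real and virtual images, respectively.
    """
    path = np.asarray(olp_path_xy, dtype=float)
    corrections = np.array([
        gain_mm_per_px * feature_error(r, v)
        for r, v in zip(real_features, virtual_features)
    ])
    return path + corrections

if __name__ == "__main__":
    planned = [(100.0, 0.0), (120.0, 0.0), (140.0, 0.0)]
    real = [(322, 241), (325, 243), (329, 240)]      # measured in real images
    virtual = [(320, 240), (320, 240), (320, 240)]   # rendered by virtual camera
    print(compensate_path(planned, real, virtual))
```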

Automatic Extraction of Component Window for Auto-Teaching of PCB Assembly Inspection Machines (PCB 조립검사기의 자동티칭을 위한 부품윈도우 자동추출 방법)

  • Kim, Jun-Oh;Park, Tae-Hyoung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.11
    • /
    • pp.1089-1095
    • /
    • 2010
  • We propose an image segmentation method for the auto-teaching system of PCB (Printed Circuit Board) assembly inspection machines. The inspection machine acquires images of all components on the PCB and then compares each image with its standard image to find assembly errors such as misalignment, inverse polarity, and tombstoning. The component window, which is the area of the component to be acquired by the camera, is one of the teaching data items for operating the inspection machine. To reduce the teaching time of the machine, we develop a new image processing method to extract the component window automatically from the image of the PCB. The proposed method segments the component window by excluding the soldered parts as well as the board background. We binarize the input image using the HSI color model because it is difficult to discriminate between components and background in RGB colors. A linear combination of the binarized images then enhances the component window against the background. Using the horizontal and vertical projections of the histogram, we finally obtain the component window. Experimental results are presented to verify the usefulness of the proposed method.
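
The segmentation steps described above (binarization in a hue/saturation/intensity space, combination of the binary images, then horizontal and vertical histogram projections) can be sketched with OpenCV as below. The thresholds are illustrative, and OpenCV's HSV conversion stands in for the HSI model used in the paper.

```python
import cv2
import numpy as np

def extract_component_window(bgr_image):
    """Return (x, y, w, h) of the component window, excluding solder and board.

    Approximation of the described procedure: binarize in a hue/saturation/
    intensity space, combine the binary masks, then locate the window from
    horizontal and vertical projections of the mask.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)

    # Illustrative thresholds: bright, low-texture regions are treated as
    # solder; strongly saturated green regions as board background.
    _, solder_mask = cv2.threshold(v, 200, 255, cv2.THRESH_BINARY)
    board_mask = cv2.inRange(hsv, (35, 60, 40), (85, 255, 255))

    # Component = everything that is neither solder nor board background.
    component_mask = cv2.bitwise_not(cv2.bitwise_or(solder_mask, board_mask))

    # Horizontal and vertical projections of the component mask.
    col_proj = component_mask.sum(axis=0)
    row_proj = component_mask.sum(axis=1)
    cols = np.where(col_proj > 0.2 * col_proj.max())[0]
    rows = np.where(row_proj > 0.2 * row_proj.max())[0]
    if cols.size == 0 or rows.size == 0:
        return None
    x, y = int(cols[0]), int(rows[0])
    return x, y, int(cols[-1] - cols[0] + 1), int(rows[-1] - rows[0] + 1)

if __name__ == "__main__":
    img = cv2.imread("component.png")          # image of one PCB component
    if img is not None:
        print(extract_component_window(img))
```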

Development of Automation System of Assembly Line On the Back Cover of a Camera (카메라 백 카버 생산 조립 라인의 자동화 시스템 개발)

  • Lee, Man-Hyung
    • Proceedings of the Korean Society of Machine Tool Engineers Conference
    • /
    • 2000.04a
    • /
    • pp.153-158
    • /
    • 2000
  • This paper addresses an intelligent robot control system using off-line programming to teach a precise assembly task of electronic components in a flexible way. The investigated task consists of three jobs: a heat caulking test, soldering on a circuit board, and checking of soldering defects on the back cover of a camera. This study investigates the remodelling of the most complicated cell, in terms of accuracy and fault rate, among the twelve cells in a camera back-cover assembly line. We have attempted to enhance soldering quality, to add task flexibility, to reduce the failure rate, and to increase product reliability. This study modifies the cell structure and improves the soldering conditions. The developed system implements real-time control of assembly with vision data and realizes easier task teaching through off-line programming.

T-joint Laser Welding of Circular and Square Pipes Using the Vision Tracking System (용접선 추적 비전장치를 이용한 원형-사각 파이프의 T형 조인트 레이저용접)

  • Son, Yeong-Il;Park, Gi-Yeong;Lee, Gyeong-Don
    • Laser Solutions
    • /
    • v.12 no.1
    • /
    • pp.19-24
    • /
    • 2009
  • Because of its fast and precise welding performance, laser welding is becoming an excellent new welding method. However, precise focusing and robust seam tracking are required to apply laser welding in practice. In order to laser weld a T joint of a circular pipe on a square pipe, which can be found in three-dimensional structures such as an aluminum space frame, a visual sensor system was developed to automate focusing and seam tracking. The developed sensor system consists of a digital CCD camera, a structured laser, and a vision processor. It is moved and positioned by a 2-axis motorized stage attached, together with a laser welding head, to a 6-axis robot manipulator. After the stripe-type structured laser illuminates the target surface, images are captured through the digital CCD camera. From each image, the seam error and defocusing error are calculated using image processing algorithms, which include efficient techniques for handling continuously changing image patterns. These errors are corrected by the stage off-line during welding or teaching. Laser welding of a circular pipe on a square pipe was successful with the vision tracking system, which reduced the path positioning and defocusing errors caused by robot teaching or geometrical variation of the specimens and jig holding.
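
The sensor described above derives a seam error and a defocusing error from the image of a structured laser stripe. A minimal sketch of that extraction step follows, assuming a dark background with a single bright, roughly horizontal stripe; the geometry constants are placeholders.

```python
import numpy as np

def stripe_profile(gray):
    """Row index of the brightest pixel in each image column.

    gray: 2-D array of the camera image containing one laser stripe.
    """
    return np.argmax(gray, axis=0).astype(float)

def seam_and_defocus_error(gray, ref_row, mm_per_px=0.05):
    """Estimate lateral seam error and defocus error from a stripe image.

    The seam shows up as the largest jump in the stripe profile; the
    defocus error is taken from the mean stripe row relative to the
    reference row recorded at the correct focal distance.
    """
    profile = stripe_profile(gray)
    jumps = np.abs(np.diff(profile))
    seam_col = int(np.argmax(jumps))                 # column of the joint
    image_center = gray.shape[1] / 2.0
    seam_error_mm = (seam_col - image_center) * mm_per_px
    defocus_error_mm = (profile.mean() - ref_row) * mm_per_px
    return seam_error_mm, defocus_error_mm

if __name__ == "__main__":
    # Synthetic 100x200 image: stripe at row 40, stepping to row 55 after column 120.
    img = np.zeros((100, 200), dtype=np.uint8)
    img[40, :120] = 255
    img[55, 120:] = 255
    print(seam_and_defocus_error(img, ref_row=45.0))
```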

Effective teaching using textbooks and AI web apps (교과서와 AI 웹앱을 활용한 효과적인 교육방식)

  • Sobirjon, Habibullaev;Yakhyo, Mamasoliev;Kim, Ki-Hawn
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2022.01a
    • /
    • pp.211-213
    • /
    • 2022
  • Images in textbooks influence the learning process. Students often see pictures before reading the text, and these pictures can enhance the students' power of imagination. The findings of some studies show that the images in textbooks can increase students' creativity. However, when learning major subjects, reading a textbook or looking at a picture alone may not be enough to understand the topics and fully grasp the concepts. Studies show that viewers remember 95% of a message when watching a video, far more than when reading text. If we can combine textbooks and videos, the resulting teaching method can be very effective. The "TEXT + IMAGE + VIDEO (Animation)" concept could be more beneficial than the ordinary one. We propose a solution based on machine-learning image classification. This paper covers the features, approaches, and detailed objectives of our project. For now, we have developed a prototype of this project as a web app that only works when accessed via smartphone. Once you have accessed the web app through your smartphone, it asks for permission to use the camera. When you bring your smartphone's camera close to a picture in the textbook, the app displays the video related to that picture below it.
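
The prototype classifies the textbook picture seen by the phone camera and returns the matching video. Below is a minimal server-side sketch of that flow using Flask and a pre-trained Keras classifier; the model file, label list, and video URLs are hypothetical and not from the paper.

```python
import io

import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request
from PIL import Image

app = Flask(__name__)

# Hypothetical assets: a trained classifier and a label-to-video lookup table.
model = tf.keras.models.load_model("textbook_figure_classifier.h5")
LABELS = ["figure_1_1", "figure_1_2", "figure_2_3"]
VIDEOS = {
    "figure_1_1": "https://example.com/videos/figure_1_1.mp4",
    "figure_1_2": "https://example.com/videos/figure_1_2.mp4",
    "figure_2_3": "https://example.com/videos/figure_2_3.mp4",
}

@app.route("/classify", methods=["POST"])
def classify():
    """Receive a camera frame, classify the textbook figure, return a video URL."""
    file = request.files["image"]
    image = Image.open(io.BytesIO(file.read())).convert("RGB").resize((224, 224))
    batch = np.asarray(image, dtype=np.float32)[None] / 255.0
    probs = model.predict(batch)[0]
    label = LABELS[int(np.argmax(probs))]
    return jsonify({"label": label, "video_url": VIDEOS[label]})

if __name__ == "__main__":
    app.run()
```

The smartphone front end would capture a frame, POST it to this endpoint, and embed the returned video URL beneath the textbook picture.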

A Study on the Arc Position which Influence on Quality of Plug Welding in the Vehicle Body (차체 플러그 용접품질에 영향을 미치는 아크 위치에 대한 실험적 기초 연구)

  • Lee, Kyung-Min;Kim, Jae-Seong;Lee, Bo-Young
    • Journal of Welding and Joining
    • /
    • v.30 no.3
    • /
    • pp.66-70
    • /
    • 2012
  • Welding is an essential process in the automotive industry. Most of the welding used for the auto body is spot welding, and CO2 arc welding is used only for a small portion. In production, use of the CO2 arc welding process is decreasing and spot welding is increasing, because the welding quality of CO2 arc welding is poor and defects occur frequently. However, the CO2 arc welding process must still be used for parts with robot interference and for closed parts where spot welding cannot reach. CO2 welding is divided into lap welding and plug arc spot welding. In plug arc spot welding, burn-through and underfill are caused by various welding conditions such as different thickness combinations of the base metal, the teaching point, welding over two steps, and inconsistent voltage/current. This results in poor weld quality and reduced productivity. In this study, we evaluate the effect of the teaching point through the weld pool behavior and bead geometry in arc spot welding at the plug hole. The welding position is the horizontal position, and galvanized steel sheet of 2.0 mm thickness with a plug hole of 6 mm diameter was used. The teaching point was varied among the center, top, bottom, left, and right of the plug hole. At each condition, the weld pool behavior was observed using a high-speed camera. As a result, we find that the center of the plug hole is the optimal teaching point; at the other teaching points, underfill occurred at the plug hole. This phenomenon is caused by gravity and surface tension. For arc spot welding at the plug hole, the teaching point should therefore be set at the center of the plug hole.

3-D vision sensor for arc welding industrial robot system with coordinated motion

  • Shigehiru, Yoshimitsu;Kasagami, Fumio;Ishimatsu, Takakazu
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 1992.10b
    • /
    • pp.382-387
    • /
    • 1992
  • In order to obtain the desired arc welding performance, we have already developed an arc welding robot system that enables coordinated motion of dual robot arms: one robot arm holds the welding target as a positioning device, while the other robot moves the welding torch. In such a dual-arm robot system, the positioning accuracy of the robots is one important problem, since conventional industrial robots do not have sufficient absolute positioning accuracy. To cope with this problem, our robot system employs a teaching playback method, in which absolute errors are compensated by the operator's visual feedback. With this system, ideal arc welding that considers the posture of the welding target and the direction of gravity becomes possible. However, another problem remains even though we developed an original teaching method for dual-arm robots with coordinated motions: manual teaching tasks are still tedious, since they require fine movements with intensive attention. Therefore, we developed a 3-dimensional vision-guided robot control method for our welding robot system with coordinated motions. In this paper we present our 3-dimensional vision sensor for guiding the arc welding robot system with coordinated motions. The sensing device is compactly designed and mounted on the tip of the arc welding robot. The sensor detects the 3-dimensional shape of the groove on the target work that needs to be welded, and the welding robot is controlled to trace the groove accurately. The principle of the 3-dimensional measurement is based on the slit-ray projection method: two laser slit-ray projectors and one CCD TV camera are compactly mounted, and careful image processing enables 3-dimensional data processing without suffering from disturbance light. The 3-dimensional information about the target groove is combined with the rough teaching data given by the operator in advance; therefore, the teaching tasks are simplified.
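
The slit-ray projection measurement described above recovers depth from how far the laser stripe appears displaced in the camera image. A minimal sketch of that triangulation under a simplified geometry (known focal length, baseline, and slit angle) is shown below; all constants are illustrative.

```python
import numpy as np

def slit_ray_depth(stripe_rows, center_row, f_px, baseline_mm, slit_angle_deg):
    """Depth (mm) along the laser stripe, one value per image column.

    Simplified triangulation geometry: the slit-ray projector is offset
    from the camera by `baseline_mm` and its light plane makes
    `slit_angle_deg` with the optical axis. The stripe's row displacement
    d from the principal point (`center_row`) then gives
        Z = f * b / (d + f * tan(alpha)).
    """
    d = np.asarray(stripe_rows, dtype=float) - center_row
    tan_a = np.tan(np.radians(slit_angle_deg))
    return f_px * baseline_mm / (d + f_px * tan_a)

def stripe_rows_from_image(gray):
    """Detect the stripe as the brightest row in each column."""
    return np.argmax(gray, axis=0)

if __name__ == "__main__":
    # Synthetic 480x640 image with a stripe whose row varies with the groove depth.
    img = np.zeros((480, 640), dtype=np.uint8)
    rows = (260 + 10 * np.sin(np.linspace(0, np.pi, 640))).astype(int)
    img[rows, np.arange(640)] = 255
    depth = slit_ray_depth(stripe_rows_from_image(img), center_row=240,
                           f_px=800.0, baseline_mm=60.0, slit_angle_deg=30.0)
    print(depth.min(), depth.max())
```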
