• Title/Summary/Keyword: Vision sensor

A Study on Adaptive Control to Fill Weld Groove by Using Multi-Torches in SAW (SAW 용접시 다중 토치를 이용한 용접부 적응제어에 관한 연구)

  • 문형순;정문영;배강열
    • Journal of Welding and Joining / v.17 no.6 / pp.90-99 / 1999
  • A significant portion of the total manufacturing time in pipe fabrication is spent on welding, following the primary machining and fit-up processes. To achieve a reliable weld bead appearance, automatic seam tracking and adaptive control to fill the groove are urgently needed. Vision sensors have been successfully applied to seam tracking in welding processes. However, adaptive filling control of a multi-torch system to achieve the appropriate welded area has not yet been implemented for SAW (submerged arc welding). The term adaptive control is often used to describe recent advances in welding process control, but strictly it applies only to a system that can cope with dynamic changes in system performance. In welding applications, the term may not carry its conventional control-theory definition; rather, it is used in a more descriptive sense to express the need for the process to adapt to changing welding conditions. This paper proposes several methodologies for obtaining a good bead appearance with a multi-torch SAW system equipped with a vision sensor. The adaptive filling control methodologies adjust the welding current/voltage combination, the arc voltage/welding current/wire feed speed combination, and the welding speed based on vision sensor measurements. The algorithm combining the welding current/voltage combination with welding speed control produced a sounder weld bead appearance than the voltage/current combination alone.
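
As a rough illustration of the welding-speed branch of such adaptive filling control (a sketch under assumed names and values, not the authors' implementation), the snippet below assumes the vision sensor supplies the groove cross-sectional area and solves the deposition mass balance for travel speed:

```python
import math

def travel_speed_for_fill(groove_area_mm2: float,
                          wire_diameter_mm: float,
                          wire_feed_speed_mm_s: float,
                          deposition_efficiency: float = 0.95) -> float:
    """Travel speed (mm/s) at which the deposited metal just fills the groove.

    Mass balance per unit length of weld (hypothetical simplification):
        groove_area * travel_speed = wire_area * wire_feed_speed * efficiency
    """
    wire_area = math.pi * (wire_diameter_mm / 2.0) ** 2  # wire cross-section
    return wire_area * wire_feed_speed_mm_s * deposition_efficiency / groove_area_mm2

# e.g. 4.0 mm wire fed at 40 mm/s into a vision-measured 60 mm^2 groove
speed = travel_speed_for_fill(60.0, 4.0, 40.0)
```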

3D Omni-directional Vision SLAM using a Fisheye Lens and Laser Scanner (어안 렌즈와 레이저 스캐너를 이용한 3차원 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.21 no.7 / pp.634-640 / 2015
  • This paper proposes a novel three-dimensional mapping algorithm for omni-directional vision SLAM based on a fisheye image and laser scanner data. The performance of SLAM has been improved by various estimation methods, multi-function sensors, and sensor fusion. Conventional 3D SLAM approaches, which mainly employ RGB-D cameras to obtain depth information, are not suitable for mobile robot applications because an RGB-D system with multiple cameras is bulky and computes depth for omni-directional images slowly. In this paper, we use a fisheye camera installed facing downward and a two-dimensional laser scanner mounted at a fixed distance from the camera. We calculate fusion points from the planar obstacle coordinates provided by the two-dimensional laser scanner and the obstacle outlines extracted from the omni-directional image, which captures the entire surroundings at once. The effectiveness of the proposed method is confirmed by comparing maps built with the proposed algorithm against the real maps.
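
A minimal sketch of the fusion-point idea, assuming an equidistant fisheye model (r = f·θ), a downward-facing camera at height z_cam, and laser returns on the floor plane; the calibration constants and function names are illustrative, not taken from the paper:

```python
import numpy as np

F_PX = 320.0            # assumed fisheye focal length in pixels
CX, CY = 640.0, 480.0   # assumed principal point

def laser_to_pixel(x: float, y: float, z_cam: float):
    """Project a laser return (x, y) on the floor plane into the
    downward-facing fisheye image (to associate laser and image data)."""
    rng = np.hypot(x, y)
    theta = np.arctan2(rng, z_cam)      # angle off the downward optical axis
    r = F_PX * theta                    # equidistant projection r = f * theta
    az = np.arctan2(y, x)
    return CX + r * np.cos(az), CY + r * np.sin(az)

def fusion_point(x: float, y: float, z_cam: float, outline_r_px: float):
    """Combine the laser's planar position with the obstacle-top radius seen
    in the image to recover a 3-D point (x, y, obstacle height)."""
    theta_top = outline_r_px / F_PX     # invert the equidistant model
    rng = np.hypot(x, y)
    height = z_cam - rng / np.tan(theta_top)
    return np.array([x, y, height])
```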

Vision-based Reduction of Gyro Drift for Intelligent Vehicles (지능형 운행체를 위한 비전 센서 기반 자이로 드리프트 감소)

  • Kyung, MinGi;Nguyen, Dang Khoi;Kang, Taesam;Min, Dugki;Lee, Jeong-Oog
    • Journal of Institute of Control, Robotics and Systems / v.21 no.7 / pp.627-633 / 2015
  • Accurate heading information is crucial for the navigation of intelligent vehicles. In outdoor environments, GPS is usually used for vehicle navigation. However, in GPS-denied environments such as dense building areas, tunnels, underground areas, and indoor environments, non-GPS solutions are required. Yaw rates from a single gyro sensor could be one such solution. When using gyro sensors, however, the drift problem must be resolved. HDR (Heuristic Drift Reduction) can reduce the average heading error during straight-line movement, but it shows rather large errors in some moving environments, especially along curved paths. This paper presents a method called VDR (Vision-based Drift Reduction), which uses a low-cost vision sensor to compensate for HDR errors.
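
The abstract does not give the VDR equations; the following is a minimal complementary-filter sketch of the general idea, with an assumed correction gain, in which the gyro is integrated and the estimate is pulled toward an absolute vision-derived heading whenever one is available:

```python
def vdr_step(heading_deg: float, yaw_rate_dps: float, dt: float,
             vision_heading_deg=None, gain: float = 0.05) -> float:
    """One heading update: dead-reckon the gyro, then correct drift with a
    vision-derived absolute heading (e.g. from tracked corridor lines)."""
    heading_deg += yaw_rate_dps * dt                      # gyro integration
    if vision_heading_deg is not None:
        # shortest signed angular error in (-180, 180]
        err = (vision_heading_deg - heading_deg + 180.0) % 360.0 - 180.0
        heading_deg += gain * err                         # bounded correction
    return heading_deg % 360.0
```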

Identification of structural systems and excitations using vision-based displacement measurements and substructure approach

  • Lei, Ying;Qi, Chengkai
    • Smart Structures and Systems / v.30 no.3 / pp.273-286 / 2022
  • In recent years, vision-based monitoring has received great attention, but structural identification using vision-based displacement measurements is far less established. In particular, simultaneous identification of structural systems and unknown excitations from vision-based displacement measurements remains challenging because the unknown excitations do not appear directly in the observation equations. Moreover, measurement accuracy deteriorates over a wider field of view, so with monocular vision only a portion of the structure is measured rather than the whole structure. In this paper, the identification of structural systems and excitations using vision-based displacement measurements is investigated. A substructure identification approach is adopted to address the limited field of view of vision-based monitoring. For the identification of a target substructure, the substructure interaction forces are treated as unknown inputs. A smoothing extended Kalman filter with unknown inputs and without direct feedthrough is proposed for the simultaneous identification of the substructure and the unknown inputs from vision-based displacement measurements. The smoothing makes the identification robust to measurement noise. The proposed algorithm is first validated on a three-span continuous beam bridge under an impact load, and then on the more difficult identification of a frame under unknown wind excitation. Both examples validate the good performance of the proposed method.
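
The paper's filter is a smoothing extended Kalman filter; as a much-simplified linear, non-smoothing analogue (in the spirit of unknown-input filters such as Gillijns and De Moor's, with covariance bookkeeping abbreviated), one step might look like this:

```python
import numpy as np

def ui_kf_step(x, P, y, A, G, C, Q, R):
    """Kalman step with unknown input d and no direct feedthrough:
        x_k = A x_{k-1} + G d_{k-1} + w,   y_k = C x_k + v
    d reaches y only through the state, so it is estimated from the innovation."""
    x_pred = A @ x                          # predict ignoring the unknown input
    P_pred = A @ P @ A.T + Q
    S = C @ P_pred @ C.T + R                # innovation covariance
    S_inv = np.linalg.inv(S)
    F = C @ G                               # sensitivity of y to d_{k-1}
    innov = y - C @ x_pred
    d_hat = np.linalg.solve(F.T @ S_inv @ F, F.T @ S_inv @ innov)
    x_pred = x_pred + G @ d_hat             # re-inject the estimated input
    K = P_pred @ C.T @ S_inv
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new, d_hat
```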

A Study on a Visual Sensor System for Weld Seam Tracking in Robotic GMA Welding (GMA 용접로봇용 용접선 시각 추적 시스템에 관한 연구)

  • 김재웅;김동호
    • Proceedings of the Korean Society of Precision Engineering Conference / 2000.11a / pp.643-646 / 2000
  • In this study, we constructed a preview-sensing visual sensor system for real-time weld seam tracking in GMA welding. The sensor consists of a CCD camera, a band-pass filter, a diode laser with a cylindrical lens, and a vision board for inter-frame processing. We used a commercial robot system that includes a GMA welding machine. To extract the weld seam, we applied inter-frame processing on the vision board, which removes image noise caused by spatter and fume. Because inter-frame processing yields a clean image, the weld seam can be extracted by the simplest means, such as a first derivative computed with the central difference method. We also applied a moving average to the successive seam position data to reduce fluctuation. In experiments, the developed robot system with the visual sensor was able to track the most common weld seams, such as fillet joints, V-grooves, and lap joints whose seams vary in both the planar and height directions.
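
A compact sketch of that processing chain (hypothetical array layout: the laser stripe is the brightest pixel in each image column), combining inter-frame noise suppression, central-difference seam detection, and a moving average:

```python
import numpy as np

def seam_position(prev_frame: np.ndarray, frame: np.ndarray,
                  history: list, window: int = 5) -> float:
    """Locate the weld seam from two consecutive laser-stripe images."""
    # Spatter and fume flash in single frames; a pixelwise minimum of two
    # consecutive frames suppresses them (stand-in for inter-frame processing).
    clean = np.minimum(prev_frame, frame)
    stripe_rows = clean.argmax(axis=0).astype(float)  # stripe row per column
    slope = np.gradient(stripe_rows)                  # central differences
    seam_col = float(np.argmax(np.abs(slope)))        # sharpest profile break
    history.append(seam_col)
    return float(np.mean(history[-window:]))          # moving-average smoothing
```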

A 3-D Vision Sensor Implementation on Multiple DSPs TMS320C31 (다중 TMS320C31 DSP를 사용한 3-D 비젼센서 Implementation)

  • Oksenhendler, V.;Bensrhair, Abdelaziz;Miche, Pierre;Lee, Sang-Goog
    • Journal of Sensor Science and Technology / v.7 no.2 / pp.124-130 / 1998
  • High-speed 3D vision systems are essential for autonomous robot and vehicle control applications. In our study, a stereo vision process was developed. It consists of three steps: extraction of edges in the right and left images, matching of corresponding edges, and calculation of the 3D map. The process is implemented on a VME 150/40 Imaging Technology vision system, a modular system composed of a display card, an acquisition card, a 4-Mbyte image frame memory, and three computational cards. The programmable accelerator computational modules run at 40 MHz and are based on the TMS320C31 DSP, with a 64×32-bit instruction cache and two 1024×32-bit internal RAMs. Each module is equipped with 512 Kbytes of static RAM, 4 Mbytes of image memory, 1 Mbyte of flash EEPROM, and a serial port. Data transfers and communications between modules are provided by three 8-bit global video buses and three locally configurable 8-bit pipeline video buses; the VME bus is dedicated to system management. Tasks are distributed across the DSPs as follows: two DSPs perform edge detection, one for the right image and one for the left, while the third computes the matching process and the 3D calculation. With 512×512-pixel images, the sensor generates dense 3D maps at a rate of about 1 Hz, depending on scene complexity. Results could surely be improved by using specially suited multiprocessor cards.
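
Stripped of the DSP partitioning, the three-step pipeline can be sketched in a few lines (rectified images assumed; the edge detector and matching rule here are deliberately naive stand-ins, not the paper's algorithms):

```python
import numpy as np

def stereo_depth_map(left, right, f_px: float, baseline_m: float,
                     edge_thresh: float = 30.0, max_disp: int = 64):
    """1) extract edges in both images, 2) match edges along scanlines,
    3) triangulate depth  z = f * B / d  at each matched edge."""
    def edges(img):
        gx = np.abs(np.diff(img.astype(float), axis=1))
        return np.pad(gx, ((0, 0), (0, 1))) > edge_thresh
    el, er = edges(left), edges(right)
    depth = np.zeros(left.shape, dtype=float)
    for row in range(left.shape[0]):
        for col in np.flatnonzero(el[row]):
            lo = max(col - max_disp, 0)
            cands = np.flatnonzero(er[row, lo:col])   # right-image candidates
            if cands.size:
                d = col - (lo + cands[-1])            # smallest-disparity match
                depth[row, col] = f_px * baseline_m / d
    return depth
```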

Anomaly Event Detection Algorithm of Single-person Households Fusing Vision, Activity, and LiDAR Sensors

  • Lee, Do-Hyeon;Ahn, Jun-Ho
    • Journal of the Korea Society of Computer and Information / v.27 no.6 / pp.23-31 / 2022
  • Due to the recent COVID-19 outbreak, an aging population, and an increase in single-person households, the amount of time household members spend on activities at home has increased significantly. In this study, we propose an algorithm for detecting anomalous events affecting members of single-person households, including the elderly, by combining human movement and fall detection from a vision sensor algorithm using home CCTV, an activity sensor algorithm using the acceleration sensor built into a smartphone, and a 2D LiDAR sensor-based algorithm. Each single-sensor algorithm, however, has the disadvantage that certain situations are hard to detect because of the sensor's inherent limitations. Accordingly, rather than relying on any single sensor-based algorithm, we developed a fusion method that combines the algorithms to detect anomalies across diverse situations. We evaluated the algorithms on data collected by each sensor and show, through several scenarios, that even where one algorithm alone cannot detect an anomalous event accurately, the algorithms complement one another to detect it efficiently.
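
A toy version of such rule-based fusion (the flags, availability handling, and voting rule are invented for illustration, not the paper's method) could combine the three detector outputs like this:

```python
def fuse_anomaly(vision_fall: bool, activity_still: bool, lidar_fall: bool,
                 vision_available: bool = True,
                 lidar_available: bool = True) -> bool:
    """Raise an anomaly when the available single-sensor detectors agree.

    Any one sensor can be blinded (camera occlusion, phone left on a table,
    LiDAR shadowing), so detections are pooled instead of trusted alone."""
    votes = [activity_still]                 # smartphone detector always runs
    if vision_available:
        votes.append(vision_fall)
    if lidar_available:
        votes.append(lidar_fall)
    # two agreeing detectors, or a lone remaining detector firing
    return sum(votes) >= 2 or (len(votes) == 1 and votes[0])
```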

Multi-point displacement monitoring of bridges using a vision-based approach

  • Ye, X.W.;Yi, Ting-Hua;Dong, C.Z.;Liu, T.;Bai, H.
    • Wind and Structures / v.20 no.2 / pp.315-326 / 2015
  • To overcome the drawbacks of traditional contact-type sensors for structural displacement measurement, vision-based technology aided by digital image processing algorithms has received increasing attention from the structural health monitoring (SHM) community. Advanced vision-based systems have been widely used to measure the structural displacement of civil engineering structures owing to their merits of non-contact, long-distance, and high-resolution measurement. However, few currently available vision-based systems can measure structural displacement synchronously at multiple points on the investigated structure. In this paper, a method for vision-based multi-point structural displacement measurement is presented. A series of moving-load experiments on a scale arch bridge model is carried out to validate the accuracy and reliability of the vision-based system for multi-point displacement measurement. The structural displacements of five points on the bridge deck are measured by the vision-based system and compared with those obtained by linear variable differential transformers (LVDTs). The comparative study demonstrates that the vision-based system is an effective and reliable means of multi-point structural displacement measurement.
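
The abstract does not detail the tracking algorithm; a common vision-based approach, shown here as an assumed stand-in using OpenCV normalized cross-correlation, tracks each target template and converts pixel motion to millimetres with a calibrated scale factor:

```python
import cv2

def track_displacements(frame, templates, ref_points, mm_per_px: float):
    """Displacement (dx, dy) in mm of each target in one video frame.

    templates  : list of target image patches cut from a reference frame
    ref_points : top-left template positions in the reference frame
    mm_per_px  : scale factor from calibration (target size / pixel size)
    """
    disps = []
    for tpl, (x0, y0) in zip(templates, ref_points):
        res = cv2.matchTemplate(frame, tpl, cv2.TM_CCOEFF_NORMED)
        _, _, _, (x, y) = cv2.minMaxLoc(res)      # best-match location
        disps.append(((x - x0) * mm_per_px, (y - y0) * mm_per_px))
    return disps
```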

Vision-based Localization for AUVs using Weighted Template Matching in a Structured Environment (구조화된 환경에서의 가중치 템플릿 매칭을 이용한 자율 수중 로봇의 비전 기반 위치 인식)

  • Kim, Donghoon;Lee, Donghwa;Myung, Hyun;Choi, Hyun-Taek
    • Journal of Institute of Control, Robotics and Systems / v.19 no.8 / pp.667-675 / 2013
  • This paper presents vision-based techniques for underwater landmark detection, map-based localization, and SLAM (Simultaneous Localization and Mapping) in structured underwater environments. A variety of underwater tasks require an underwater robot to navigate autonomously, but the sensors available for accurate localization are limited. Among them, a vision sensor is very useful for short-range tasks despite harsh underwater conditions, including low visibility, noise, and large areas of featureless topography. To overcome these problems and utilize a vision sensor for underwater localization, we propose a novel vision-based object detection technique and apply it to MCL (Monte Carlo Localization) and EKF (Extended Kalman Filter)-based SLAM algorithms. In the image processing step, weighted correlation coefficient-based template matching and color-based image segmentation are proposed to improve on the conventional approach. In the localization step, to apply the landmark detection results to MCL and EKF-SLAM, dead-reckoning information and landmark detection results are used for the prediction and update phases, respectively. The performance of the proposed technique is evaluated in experiments with an underwater robot platform in an indoor water tank, and the results are discussed.
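
One plausible reading of "weighted correlation coefficient-based template matching" (the weighting scheme below is an assumption, not the paper's exact formula) is a correlation coefficient in which each pixel contributes according to a weight mask:

```python
import numpy as np

def weighted_ncc(patch: np.ndarray, template: np.ndarray,
                 weights: np.ndarray) -> float:
    """Correlation coefficient with per-pixel weights; heavily weighted pixels
    (e.g. the landmark interior, least disturbed by turbidity) dominate."""
    w = weights / weights.sum()
    pm, tm = (w * patch).sum(), (w * template).sum()   # weighted means
    pc, tc = patch - pm, template - tm
    num = (w * pc * tc).sum()
    den = np.sqrt((w * pc ** 2).sum() * (w * tc ** 2).sum())
    return float(num / den) if den > 0 else 0.0
```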

Collision Avoidance for Indoor Mobile Robotics using Stereo Vision Sensor (스테레오 비전 센서를 이용한 실내 모바일 로봇 충돌 회피)

  • Kwon, Ki-Hyeon;Nam, Si-Byung;Lee, Se-Hun
    • Journal of the Korea Academia-Industrial cooperation Society / v.14 no.5 / pp.2400-2405 / 2013
  • We detect obstacles for a UGV (unmanned ground vehicle) from a compound image generated by a stereo vision sensor, masking the color image with the depth image. The stereo vision sensor gathers distance information with its stereo camera pair. The obstacle information from the depth compound image is sent to the mobile robot, which can then localize itself in the indoor area. We test the performance of the mobile robot in terms of the distance between the obstacle and the robot's position, and evaluate the color, depth, and compound images respectively. We also measure performance as the number of frames per second processed by the host machine. The results show that the compound image yields improved performance in both distance measurement and frame rate.
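
A minimal sketch of the masking step (array shapes and the range threshold are assumptions): keep only color pixels whose stereo depth falls inside the collision range, leaving an image that is empty when the path ahead is clear:

```python
import numpy as np

def compound_image(color: np.ndarray, depth_m: np.ndarray,
                   max_range_m: float = 2.0):
    """Mask the color image with the depth image: only valid pixels closer
    than max_range_m survive, yielding the obstacle-only compound image."""
    mask = (depth_m > 0) & (depth_m < max_range_m)
    out = np.zeros_like(color)
    out[mask] = color[mask]
    return out, mask

def obstacle_bearing(mask: np.ndarray):
    """Mean horizontal position of nearby obstacle pixels as a 0..1 fraction
    of the image width; None when no obstacle is within range."""
    ys, xs = np.nonzero(mask)
    return float(xs.mean()) / mask.shape[1] if xs.size else None
```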