• Title/Summary/Keyword: camera vision


Smart Vision Sensor for Satellite Video Surveillance Sensor Network (위성 영상감시 센서망을 위한 스마트 비젼 센서)

  • Kim, Won-Ho;Im, Jae-Yoo
    • Journal of Satellite, Information and Communications
    • /
    • v.10 no.2
    • /
    • pp.70-74
    • /
    • 2015
  • In this paper, a satellite-communication-based video surveillance system consisting of ultra-small aperture terminals with a small-size smart vision sensor is proposed. Events such as forest fire, smoke, and intruder movement are detected automatically in the field, and false alarms are minimized by using intelligent, highly reliable video analysis algorithms. The smart vision sensor must satisfy requirements of high confidence, high hardware endurance, seamless communication, and easy maintenance. To meet these requirements, a real-time digital signal processor, a camera module, and a satellite transceiver are integrated into a smart vision sensor-based ultra-small aperture terminal, and high-performance video analysis and image coding algorithms are embedded. The video analysis functions and their performance were verified, and their practicality confirmed, through computer simulation and tests of a vision sensor prototype.
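
The abstract above mentions automatic detection of events such as smoke and intruder movement but gives no algorithmic detail. As a rough illustration only, the following minimal sketch shows the kind of frame-differencing motion detector an embedded vision sensor might run; OpenCV is assumed, and the threshold and minimum blob area are placeholder values, not taken from the paper.

```python
# Hypothetical motion/event detector sketch; thresholds are illustrative only.
import cv2

def detect_motion(prev_gray, curr_gray, diff_thresh=25, min_area=500):
    """Return bounding boxes of regions that changed between two grayscale frames."""
    diff = cv2.absdiff(prev_gray, curr_gray)                  # per-pixel change
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)               # merge nearby blobs
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

cap = cv2.VideoCapture(0)                                     # any camera source
ok, frame = cap.read()
if not ok:
    raise SystemExit("no camera available")
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = detect_motion(prev, gray)
    if boxes:
        print("event candidate:", boxes)                      # a real terminal would report via satellite
    prev = gray
```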

THE DEVELOPMENT OF THE NARROW GAP MULTI-PASS WELDING SYSTEM USING LASER VISION SYSTEM

  • Park, Hee-Chang;Park, Young-Jo;Song, Keun-Ho;Lee, Jae-Woong;Jung, Yung-Hwa;Luc Didier
    • Proceedings of the KWS Conference
    • /
    • 2002.10a
    • /
    • pp.706-713
    • /
    • 2002
  • In the multi-pass welding of pressure vessels or ships, a mechanical touch sensor system is generally used together with a manipulator to measure the gap and depth of the narrow gap and to perform seam tracking. Unfortunately, such mechanical touch sensors may produce measurement errors caused by deterioration of the measuring device. An automation system for narrow-gap multi-pass welding has been developed using a laser vision system that can track the seam line of the narrow gap and control the welding power. The joint profile of the narrow gap, 250 mm deep and 28 mm wide, is captured by a laser vision camera. The image is then processed to define the tracking positions of the torch during welding, so that the laser vision system can correct the lateral and vertical position of the torch in real time. The laser vision system can also perform adaptive control of welding conditions such as welding current and welding speed, which conventional mechanical touch systems cannot do. The developed automation system will be adopted to reduce the idle time of welders, which occurs frequently in conventional long welding processes, and to improve the reliability of the weld quality as well.
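
The abstract gives no implementation detail for how the joint profile image is turned into torch corrections. The sketch below is only a simplified illustration, assuming the laser stripe appears as the brightest pixel in each image column; the calibration constants and nominal torch position are made-up values, not from the paper.

```python
# Illustrative stripe-extraction sketch (not the authors' implementation).
import numpy as np

MM_PER_PX_X = 0.05                    # hypothetical lateral calibration (mm per pixel)
MM_PER_PX_Y = 0.05                    # hypothetical vertical calibration (mm per pixel)
TORCH_COL, TORCH_ROW = 320, 240       # assumed nominal torch position in the image

def stripe_profile(gray):
    """Row index of the brightest pixel in every column, i.e. the laser stripe."""
    return np.argmax(gray, axis=0)

def torch_correction(gray):
    """Lateral/vertical offset (mm) of the groove bottom relative to the torch."""
    profile = stripe_profile(gray)
    groove_col = int(np.argmax(profile))          # deepest point (largest row index)
    groove_row = int(profile[groove_col])
    return ((groove_col - TORCH_COL) * MM_PER_PX_X,
            (groove_row - TORCH_ROW) * MM_PER_PX_Y)

# Synthetic test image: a bright V-shaped stripe whose bottom is at column 320.
img = np.zeros((480, 640), dtype=np.uint8)
cols = np.arange(640)
rows = 200 + np.maximum(0, 100 - np.abs(cols - 320)) // 2
img[rows, cols] = 255
print(torch_correction(img))                      # -> (0.0, 0.5)
```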


Evaluation of Robot Vision Control Scheme Based on EKF Method for Slender Bar Placement in the Appearance of Obstacles (장애물 출현 시 얇은 막대 배치작업에 대한 EKF 방법을 이용한 로봇 비젼제어기법 평가)

  • Hong, Sung-Mun;Jang, Wan-Shik;Kim, Jae-Meung
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.32 no.5
    • /
    • pp.471-481
    • /
    • 2015
  • This paper presents robot vision control schemes using the Extended Kalman Filter (EKF) method for slender-bar placement when obstacles appear during robot movement. The vision system model used in this study involves six camera parameters ($C_1{\sim}C_6$). To develop the robot vision control scheme, the six parameters are first estimated. Then, based on the estimated parameters, the robot's joint angles are estimated for the slender-bar placement. In particular, the robot trajectory affected by obstacles is divided into three obstacle regions: the beginning region, the middle region, and the near-target region. Finally, the effect of the number of obstacles on the proposed robot vision control schemes is investigated in each obstacle region by performing slender-bar placement experiments.
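
The abstract does not reproduce the measurement model behind the six camera parameters, so the following is only a generic sketch of the recursive estimation step involved: a Kalman-style update of a static six-parameter vector from noisy two-dimensional image measurements. The linear measurement matrices stand in for the paper's actual camera model and are randomly generated here.

```python
# Generic recursive parameter-estimation sketch; the real projection model is
# replaced by random linear measurement matrices for illustration.
import numpy as np

rng = np.random.default_rng(0)

def ekf_update(x, P, z, H, R):
    """One measurement update of a static parameter vector x with covariance P."""
    y = z - H @ x                                  # innovation
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # gain
    return x + K @ y, (np.eye(len(x)) - K @ H) @ P

true_C = np.array([1.0, -0.5, 0.3, 2.0, 0.7, -1.2])   # made-up "true" parameters
x, P, R = np.zeros(6), np.eye(6) * 10.0, np.eye(2) * 0.25
for _ in range(300):
    H = rng.normal(size=(2, 6))                    # pose-dependent stand-in Jacobian
    z = H @ true_C + rng.normal(scale=0.5, size=2) # noisy image measurement
    x, P = ekf_update(x, P, z, H, R)
print(np.round(x, 2))                              # approaches true_C
```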

Development of a Vision-based Blank Alignment Unit for Press Automation Process (프레스 자동화 공정을 위한 비전 기반 블랭크 정렬 장치 개발)

  • Oh, Jong-Kyu;Kim, Daesik;Kim, Soo-Jong
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.1
    • /
    • pp.65-69
    • /
    • 2015
  • A vision-based blank alignment unit for a press automation line is introduced in this paper. A press is a machine tool that changes the shape of a blank by applying pressure and is widely used in industries requiring mass production. In traditional press automation lines, a mechanical centering unit consisting of guides and ball bearings is employed to align a blank before a robot inserts it into the press. However, it can only align blanks of limited sizes and shapes, and it cannot be applied to processes where more than two blanks are inserted simultaneously. To overcome these problems, we developed a vision-sensor-based centering unit for press automation lines. The specification of the vision system was determined by considering the blank dimensions and the required accuracy. Vision application software with pattern recognition, camera calibration, and monitoring functions was designed to successfully detect multiple blanks. Through experiments with an industrial robot, we validated that the proposed system can align blanks of various sizes and shapes and successfully detect more than two blanks inserted simultaneously.
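
As an illustration of the pattern-recognition step described above (not the authors' software), the sketch below locates blank outlines in a top-down grayscale image and reports each blank's centre and rotation so a robot could correct its pick pose. The pixel-to-millimetre scale, the file name, and the minimum area are assumed placeholders.

```python
# Hypothetical blank-detection sketch using OpenCV contour analysis.
import cv2

MM_PER_PX = 0.8                       # assumed calibration constant (mm per pixel)

def locate_blanks(gray, min_area_px=5000):
    """Return (x_mm, y_mm, angle_deg) for each sufficiently large blank outline."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    poses = []
    for c in contours:
        if cv2.contourArea(c) < min_area_px:
            continue                                   # skip noise and reflections
        (cx, cy), _, angle = cv2.minAreaRect(c)        # centre and rotation of the blank
        poses.append((cx * MM_PER_PX, cy * MM_PER_PX, angle))
    return poses

img = cv2.imread("blanks.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
if img is not None:
    for x_mm, y_mm, angle_deg in locate_blanks(img):
        print(f"blank at ({x_mm:.1f}, {y_mm:.1f}) mm, rotated {angle_deg:.1f} deg")
```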

The Development of the Narrow Gap Multi-Pass Welding System Using Laser Vision System

  • Park, H.C.;Park, Y.J.;Song, K.H.;Lee, J.W.;Jung, Y.H.;Didier, L.
    • International Journal of Korean Welding Society
    • /
    • v.2 no.1
    • /
    • pp.45-51
    • /
    • 2002
  • In the multi-pass welding of pressure vessels or ships, a mechanical touch sensor system is generally used together with a manipulator to measure the gap and depth of the narrow gap and to perform seam tracking. Unfortunately, such mechanical touch sensors may produce measurement errors caused by deterioration of the measuring device. An automation system for narrow-gap multi-pass welding has been developed using a laser vision system that can track the seam line of the narrow gap and control the welding power. The joint profile of the narrow gap, 250 mm deep and 28 mm wide, is captured by a laser vision camera. The image is then processed to define the tracking positions of the torch during welding, so that the laser vision system can correct the lateral and vertical position of the torch in real time. The laser vision system can also perform adaptive control of welding conditions such as welding current and welding speed, which conventional mechanical touch systems cannot do. The developed automation system will be adopted to reduce the idle time of welders, which occurs frequently in conventional long welding processes, and to improve the reliability of the weld quality as well.


Secure Internet of Things Based Human Detection in Computer Vision

  • Fatima Ashraf;Sheraz Arshad Malik;Muhammad Ayub Sabir
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.10
    • /
    • pp.154-158
    • /
    • 2024
  • Billions of the objects around us become IoT devices when they are connected to the Internet and controlled in this way to collect and share data. Privacy measures are required to keep this data safe from security attacks in the Internet of Things. Computer vision is used for monitoring people, and computer vision algorithms, applications, and tools are primarily used in IoT for analyzing human movement. Traditional systems and algorithms cannot detect humans reliably, and the use of thermal cameras degrades the detection of human movement. In this paper, we propose a new IoT system combined with recent computer vision techniques to detect a person's position. It is a useful technology for keeping an eye on a house or office: it raises an alert if anybody enters the premises and causes harm. For this purpose, a credit-card-sized Raspberry Pi board is used, and the histogram of oriented gradients (HOG) algorithm is used to detect the person in the image.
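
The HOG-based person detection step mentioned in the abstract can be sketched with OpenCV's built-in pedestrian detector, as shown below; the camera index, frame size, and alert action are placeholders rather than details from the paper.

```python
# Minimal HOG person-detection loop using OpenCV's default people detector.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)             # e.g. a Raspberry Pi camera exposed as /dev/video0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 480))                    # keep detection fast on a small board
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(boxes) > 0:
        print(f"person detected in {len(boxes)} region(s)")  # a real system would raise an alert
```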

Development of Vision Control Scheme of Extended Kalman Filtering for Robot's Position Control (실시간 로봇 위치 제어를 위한 확장 칼만 필터링의 비젼 제어 기법 개발)

  • Jang, W.S.;Kim, K.S.;Park, S.I.;Kim, K.Y.
    • Journal of the Korean Society for Nondestructive Testing
    • /
    • v.23 no.1
    • /
    • pp.21-29
    • /
    • 2003
  • It is very important to reduce the computational time needed to estimate the parameters of a vision control algorithm for real-time robot position control. Unfortunately, the commonly used batch estimation requires too much computational time because it is an iterative method, so it is ill suited to real-time robot position control. The Extended Kalman Filter (EKF), on the other hand, has many advantages for calculating the parameters of a vision system because it is a simple and efficient recursive procedure. This study therefore develops an EKF algorithm for real-time robot vision control. The vision system model used in this study involves six parameters that account for the inner parameters of the camera (orientation, focal length, etc.) and the outer parameters (the relative location between robot and camera). The EKF is first applied to estimate these parameters and then, with the estimated parameters, to estimate the robot's joint angles used for operation. Finally, the practicality of the EKF-based vision control scheme is experimentally verified by performing robot position control.
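
The key contrast in the abstract is between batch (iterative) estimation and the recursive EKF update. Purely as an illustration with a synthetic linear model (not the paper's camera model), the sketch below shows that a recursive update does a fixed amount of work per new measurement, whereas the batch solution re-solves a growing least-squares problem each time, while both arrive at nearly the same estimate.

```python
# Synthetic comparison of batch re-estimation vs. a recursive (Kalman-style) update.
import numpy as np

rng = np.random.default_rng(1)
n = 6                                   # six camera parameters, as in the paper
true_p = rng.normal(size=n)

H_all, z_all = [], []
x, P = np.zeros(n), np.eye(n) * 10.0    # recursive estimate and its covariance
R = np.eye(2) * 0.09

for _ in range(100):
    H = rng.normal(size=(2, n))
    z = H @ true_p + rng.normal(scale=0.3, size=2)
    H_all.append(H)
    z_all.append(z)

    # Batch: stack every measurement seen so far and solve a growing least-squares problem.
    batch_est = np.linalg.lstsq(np.vstack(H_all), np.hstack(z_all), rcond=None)[0]

    # Recursive: constant work per measurement, no need to revisit old data.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(n) - K @ H) @ P

print(np.round(batch_est - x, 3))       # the two estimates nearly coincide after enough data
```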

Design of Optimized pRBFNNs-based Night Vision Face Recognition System Using PCA Algorithm (PCA알고리즘을 이용한 최적 pRBFNNs 기반 나이트비전 얼굴인식 시스템 설계)

  • Oh, Sung-Kwun;Jang, Byoung-Hee
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.1
    • /
    • pp.225-231
    • /
    • 2013
  • In this study, we propose the design of an optimized pRBFNNs-based night vision face recognition system using the PCA algorithm. It is difficult to obtain images with a CCD camera because of low brightness in unlit surroundings. The quality of images distorted by low illuminance is improved by using a night vision camera and histogram equalization. The AdaBoost algorithm is also used to distinguish face from non-face image areas. The dimensionality of the obtained image data is reduced using the PCA method, and pRBFNNs are introduced as the recognition module. The proposed pRBFNNs consist of three functional modules: the condition part, the conclusion part, and the inference part. In the condition part of the fuzzy rules, the input space is partitioned by Fuzzy C-Means clustering. In the conclusion part of the rules, the connection weights of the pRBFNNs are represented as three kinds of polynomials: linear, quadratic, and modified quadratic. The essential design parameters of the networks are optimized by means of Differential Evolution.
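
Two of the steps mentioned above, histogram equalization of low-light images and PCA dimensionality reduction, can be sketched as follows. This is only an illustration with random stand-in data; the image size, number of retained components, and data are placeholders, and the pRBFNN classifier itself is not shown.

```python
# Illustrative preprocessing (histogram equalization) and PCA projection sketch.
import cv2
import numpy as np

def equalize(gray_uint8):
    """Contrast enhancement of a low-illuminance grayscale image."""
    return cv2.equalizeHist(gray_uint8)

def pca_fit(face_vectors, n_components=20):
    """Return the mean face and the top principal components of a face data set."""
    X = np.asarray(face_vectors, dtype=np.float64)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def pca_project(face_vector, mean, components):
    """Project one face vector onto the principal components."""
    return components @ (face_vector - mean)

# Usage sketch with random stand-in data (real inputs would be cropped, equalized faces).
faces = np.random.default_rng(2).integers(0, 255, size=(50, 32 * 32)).astype(np.float64)
mean, comps = pca_fit(faces, n_components=20)
code = pca_project(faces[0], mean, comps)
print(code.shape)                       # (20,) low-dimensional feature for the recognition module
```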

Bayesian Sensor Fusion of Monocular Vision and Laser Structured Light Sensor for Robust Localization of a Mobile Robot (이동 로봇의 강인 위치 추정을 위한 단안 비젼 센서와 레이저 구조광 센서의 베이시안 센서융합)

  • Kim, Min-Young;Ahn, Sang-Tae;Cho, Hyung-Suck
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.4
    • /
    • pp.381-390
    • /
    • 2010
  • This paper describes a procedure for map-based localization of mobile robots using a sensor fusion technique in structured environments. Combining sensors with different characteristics and limited sensing capabilities offers complementarity and cooperation, yielding better information about the environment. In this paper, for robust self-localization of a mobile robot equipped with a monocular camera and a laser structured light sensor, the environment information acquired from the two sensors is combined and fused by a Bayesian sensor fusion technique based on a probabilistic reliability function of each sensor, predefined through experiments. For self-localization using monocular vision, the robot utilizes image features consisting of vertical edge lines extracted from the camera images, which serve as natural landmark points in the self-localization process. When using the laser structured light sensor, the robot instead utilizes geometrical features composed of corners and planes as natural landmark shapes, extracted from range data at a constant height above the navigation floor. Although each feature group alone is sometimes sufficient to localize the robot, all features from the two sensors are used and fused simultaneously for reliable localization under various environmental conditions. To verify the advantage of multi-sensor fusion, a series of experiments is performed, and the experimental results are discussed in detail.
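
The core fusion idea, weighting each sensor's pose estimate by its reliability, can be illustrated with a simple Gaussian (information-form) fusion of two pose estimates. The numbers below are invented for illustration; the paper's method uses experimentally determined reliability functions rather than the fixed covariances assumed here.

```python
# Simplified reliability-weighted fusion of two independent Gaussian pose estimates.
import numpy as np

def fuse_gaussian(mu_a, cov_a, mu_b, cov_b):
    """Bayesian fusion of two independent Gaussian estimates of the same pose."""
    info_a, info_b = np.linalg.inv(cov_a), np.linalg.inv(cov_b)
    cov = np.linalg.inv(info_a + info_b)            # fused covariance
    mu = cov @ (info_a @ mu_a + info_b @ mu_b)      # reliability-weighted mean
    return mu, cov

pose_vision = np.array([1.02, 2.10, 0.31])          # (x, y, heading) from vertical-edge features
cov_vision  = np.diag([0.04, 0.04, 0.02])           # camera: less certain about position
pose_laser  = np.array([0.98, 2.03, 0.35])          # (x, y, heading) from corner/plane features
cov_laser   = np.diag([0.01, 0.01, 0.05])           # laser: less certain about heading

pose, cov = fuse_gaussian(pose_vision, cov_vision, pose_laser, cov_laser)
print(np.round(pose, 3))
```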

A Study on the Development of a Robot Vision Control Scheme Based on the Newton-Raphson Method for the Uncertainty of Circumstance (불확실한 환경에서 N-R방법을 이용한 로봇 비젼 제어기법 개발에 대한 연구)

  • Jang, Min Woo;Jang, Wan Shik;Hong, Sung Mun
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.40 no.3
    • /
    • pp.305-315
    • /
    • 2016
  • This study aims to develop a robot vision control scheme using the Newton-Raphson (N-R) method for the uncertain circumstances caused by obstacles appearing during robot movement. The vision system model used for this study involves six camera parameters (C1-C6). First, an estimation scheme for the six camera parameters is developed. Then, based on the six parameters estimated for each of the three cameras, a scheme for estimating the robot's joint angles is developed for the placement of a slender bar. In particular, for slender-bar placement under these uncertain circumstances, the discontinuous robot trajectory caused by obstacles is divided into three obstacle regions: the beginning region, the middle region, and the near-target region. The effects of obstacles when using the proposed robot vision control scheme are then investigated in each obstacle region by performing slender-bar placement experiments.
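
For readers unfamiliar with the N-R scheme named in the title, the sketch below shows a Newton-Raphson iteration that drives a predicted image point toward a target by adjusting joint angles. The two-joint forward model is a toy stand-in; the paper's actual model uses the six estimated camera parameters and three cameras.

```python
# Toy Newton-Raphson sketch for finding joint angles that reach an image-plane target.
import numpy as np

def forward_model(q):
    """Toy two-joint planar model mapping joint angles to an image-plane point."""
    return np.array([np.cos(q[0]) + np.cos(q[0] + q[1]),
                     np.sin(q[0]) + np.sin(q[0] + q[1])])

def numerical_jacobian(f, q, eps=1e-6):
    """Central-difference Jacobian of f at q."""
    m, n = len(f(q)), len(q)
    J = np.zeros((m, n))
    for j in range(n):
        dq = np.zeros(n)
        dq[j] = eps
        J[:, j] = (f(q + dq) - f(q - dq)) / (2 * eps)
    return J

def newton_raphson(f, target, q0, tol=1e-8, max_iter=50):
    q = np.asarray(q0, dtype=float)
    for _ in range(max_iter):
        r = f(q) - target                           # residual in the image plane
        if np.linalg.norm(r) < tol:
            break
        J = numerical_jacobian(f, q)
        q = q - np.linalg.solve(J, r)               # N-R step (use pinv(J) if J is not square)
    return q

q = newton_raphson(forward_model, target=np.array([1.2, 0.8]), q0=np.array([0.0, 1.2]))
print(np.round(forward_model(q), 3))                # close to the requested target
```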