• Title/Summary/Keyword: camera vision


Design and Implementation of Automatic Detection Method of Corners of Grid Pattern from Distortion Corrected Image (왜곡보정 영상에서의 그리드 패턴 코너의 자동 검출 방법의 설계 및 구현)

  • Cheon, Sweung-Hwan;Jang, Jong-Wook;Jang, Si-Woong
    • Journal of the Korea Institute of Information and Communication Engineering / v.17 no.11 / pp.2645-2652 / 2013
  • For a variety of vision systems, such as car omni-directional surveillance systems and robot vision systems, many cameras are installed and used. To detect the corners of the grid pattern in AVM (Around View Monitoring) systems, the non-linear radial distortion image obtained from the wide-angle camera is first corrected, and then the grid corners must be detected in the distortion-corrected image. Although corner detection methods for AVM systems such as Sub-Pixel and Hough transformation exist, automatic detection is difficult with Sub-Pixel, and accuracy is difficult with Hough transformation. Therefore, by designing, implementing, and evaluating the automatic detection method proposed in this paper, which detects corners accurately in the distortion-corrected image, we showed that it can be applied to AVM systems.

Modular Neural Network Recognition System for Robot Endeffector Recognition (로봇 Endeffector 인식을 위한 다중 모듈 신경회로망 인식 시스템)

  • 신진욱;박동선
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.5C / pp.618-626 / 2004
  • In this paper, we describe a robot endeffector recognition system based on a Modular Neural Network (MNN). The proposed recognition system can be used in a vision system that tracks a given object using a sequence of images from a camera unit. The main objectives of the designed MNN are to precisely recognize the given robot endeffector and to minimize the processing time. Since the robot endeffector can be viewed in many different shapes in 3-D space, an MNN structure, which contains a set of feedforward neural networks, is more attractive for recognizing the given object. Each single neural network learns the endeffector with a cluster of training patterns. The training patterns for each network share similar characteristics, so they can be easily trained. The trained MNN is less sensitive to noise and shows better performance in recognizing the endeffector. The recognition rate of the MNN is enhanced by 14% over a single neural network. A vision system with the MNN can precisely recognize the endeffector and place it at the center of a display for a remote operator.
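As a loose sketch of the modular idea (one small network per view cluster, with a gate routing each sample to the module of the nearest cluster), the following uses tiny logistic-regression modules on synthetic 2-D features; the paper's actual network sizes, image features, and combination rule are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logistic(X, y, lr=0.5, epochs=200):
    """One tiny single-layer module: logistic regression trained
    with batch gradient descent."""
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])  # bias column
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def module_predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

# Two "view clusters" of the target, each with its own module, plus
# background samples; 2-D points stand in for image features.
pos_a = rng.normal([2, 2], 0.3, (50, 2))   # endeffector, view A
pos_b = rng.normal([-2, 2], 0.3, (50, 2))  # endeffector, view B
neg = rng.normal([0, -2], 0.5, (100, 2))   # background clutter

modules, centroids = [], []
for pos in (pos_a, pos_b):
    X = np.vstack([pos, neg])
    y = np.r_[np.ones(len(pos)), np.zeros(len(neg))]
    modules.append(train_logistic(X, y))
    centroids.append(pos.mean(0))

def mnn_predict(x):
    """Route the sample to the module of the nearest cluster centroid,
    then return that module's confidence."""
    k = int(np.argmin([np.linalg.norm(x - c) for c in centroids]))
    return module_predict(modules[k], x[None, :])[0]
```

Because each module only has to model one cluster of similar views, its decision boundary stays simple, which is the training advantage the abstract describes.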

Study on the Remote Controllability of Vision Based Unmanned Vehicle Using Virtual Unmanned Vehicle Driving Simulator (가상 무인 차량 시뮬레이터를 이용한 영상 기반 무인 차량의 원격 조종성 연구)

  • Kim, Sunwoo;Han, Jong-Boo;Kim, Sung-Soo
    • Transactions of the Korean Society of Mechanical Engineers A / v.40 no.5 / pp.525-530 / 2016
  • In this paper, we propose an image-shaking index to evaluate the remote controllability of vision-based unmanned vehicles. To analyze the usefulness of the proposed index, we performed subjective tests using a virtual unmanned vehicle driving simulator. The developed driving simulator consists of real-time multibody dynamics software for the unmanned vehicle, a motion simulator, and a driver console. We performed dynamic simulations to obtain the motion of the unmanned vehicle running on various road surfaces, such as ISO roughness level A~E roads. The motion of the vehicle body is reflected in the motion simulator. Then, to enable remote-control operation, we provide operators with the image data captured by the camera sensor on the simulator. We verify the usefulness of the proposed image-shaking index by comparing it with the subjective index provided by the operators.
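The abstract does not define the image-shaking index; one plausible minimal reading is the mean absolute frame-to-frame intensity change over the sequence, sketched below (both the definition and the synthetic data are assumptions, not the paper's formulation):

```python
import numpy as np

def shaking_index(frames):
    """Illustrative image-shaking index: mean absolute per-pixel
    intensity change between consecutive frames, averaged over
    the whole sequence."""
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))  # frame-to-frame change
    return diffs.mean()

# Synthetic sequences: a static scene versus the same scene
# jittered vertically, as a shaking camera would produce.
rng = np.random.default_rng(1)
scene = rng.random((32, 32))
static = [scene] * 6
shaky = [np.roll(scene, s, axis=0) for s in (0, 2, -1, 3, 0, 1)]
```

A static sequence scores exactly zero, while the jittered one scores higher, so the index orders sequences the way an operator's subjective shakiness rating plausibly would.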

Audio-Visual Fusion for Sound Source Localization and Improved Attention (음성-영상 융합 음원 방향 추정 및 사람 찾기 기술)

  • Lee, Byoung-Gi;Choi, Jong-Suk;Yoon, Sang-Suk;Choi, Mun-Taek;Kim, Mun-Sang;Kim, Dai-Jin
    • Transactions of the Korean Society of Mechanical Engineers A / v.35 no.7 / pp.737-743 / 2011
  • Service robots are equipped with various sensors such as a vision camera, sonar sensors, laser scanners, and microphones. Although these sensors have their own functions, some of them can be made to work together to perform more complicated functions. Audio-visual fusion is a typical and powerful combination of audio and video sensors, because audio information is complementary to visual information and vice versa. Human beings also mainly depend on visual and auditory information in their daily life. In this paper, we conduct two studies using audio-visual fusion: one on enhancing the performance of sound localization, and the other on improving robot attention through sound localization and face detection.
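The abstract does not detail the sound-localization method; a common minimal building block is estimating the inter-microphone delay from the cross-correlation peak, sketched here with synthetic signals (the paper's actual audio front end may differ):

```python
import numpy as np

def tdoa_samples(sig_l, sig_r):
    """Estimate the delay (in samples) of the right-microphone signal
    relative to the left one by locating the cross-correlation peak.
    The delay maps to a source direction given microphone spacing."""
    corr = np.correlate(sig_r, sig_l, mode="full")
    # "full" mode indexes lags from -(len(sig_l)-1) upward.
    return int(np.argmax(corr)) - (len(sig_l) - 1)

# A short noise burst arriving 5 samples later at the right microphone.
rng = np.random.default_rng(2)
pulse = rng.normal(size=64)
left = np.concatenate([pulse, np.zeros(16)])
right = np.concatenate([np.zeros(5), pulse, np.zeros(11)])
delay = tdoa_samples(left, right)
```

Fusion then comes in when the bearing implied by the delay is reconciled with a face detection from the camera, which is the attention behaviour the abstract describes.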

Aesthetic Strategies in Steina and Woody Vasulka's Video Art (비디오아티스트 슈테이너 바술카와 우디 바술카의 미적 전략)

  • Lim, Shan
    • The Journal of the Convergence on Culture Technology / v.6 no.3 / pp.261-266 / 2020
  • As pioneers of early video art, Steina Vasulka (1940-) and Woody Vasulka (1937-2019) led not only their own experimental art but also broader changes in contemporary avant-garde performance, music, and visual art. The two artists invented and developed electronic machines for video image processing in collaboration with engineers, and performed creative experiments on the transformation of the digital image. For them, video art was not just a means of documentation. The Vasulkas' artistic practices were not bound by the conventional canons and rules of the art world; rather, they were part of active aesthetic strategies for the coexistence of human vision and machine vision. In particular, their video art recognized video as the key medium of an era in which media technology began to dominate the system of communication, and it established the artist's authority to manipulate the moving image electronically without depending on the video camera. In that regard, their video art deserves appraisal. Therefore, this paper reflects on the Vasulkas' art and life, which have not yet been sufficiently studied, and suggests academic interest in the context of their artistic activities and aesthetic strategies.

Evaluation of orthodontics for treating temporo-mandibular joint disorders using a stereo camera (스테레오 카메라를 이용한 측두하악관절 교정장치(NO SICK)의 성능 평가)

  • Yun, Hong-Ii;Park, Joon-Su;Chung, Koo-Yeong;Shin, Ki-Young;Park, Joon-Ki
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.8 no.5 / pp.359-366 / 2015
  • The TMJ (TemporoMandibular Joint) is considered the most important articulation in the human body for maintaining balance, and thus it is one of the main treatment areas in chiropractic. Instead of chiropractic treatment, NOSICK, a TMJ balancing device, can be used. As there was no device to quantify the effect of NOSICK, a system was developed to measure it. This system is composed of stereo vision, infrared lights, an infrared pass filter, etc., and requires optical markers for the measurement. Eight landmarks that show different displacements when NOSICK is applied were selected on the face. Eleven test subjects were measured with the developed system, both with and without NOSICK applied. The quantifiable displacement of the markers before and after applying NOSICK was successfully measured with the developed system.
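Marker displacement from a stereo camera ultimately rests on triangulation; for a rectified pair, depth follows Z = f·B/d. The focal length, baseline, and pixel positions below are illustrative values, not the paper's calibration:

```python
def stereo_depth(x_left, x_right, focal_px, baseline_mm):
    """Depth of a marker from its horizontal pixel positions in a
    rectified stereo pair: Z = f * B / disparity."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("marker must have positive disparity")
    return focal_px * baseline_mm / disparity

# A marker seen at column 420 (left image) and 380 (right image)
# with an 800 px focal length and a 60 mm baseline.
z = stereo_depth(420.0, 380.0, focal_px=800.0, baseline_mm=60.0)
```

Tracking each facial marker's 3-D position before and after the device is applied yields exactly the quantifiable displacements the abstract reports.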

Cooperative UAV/UGV Platform for a Wide Range of Visual Information (광범위 시야 정보를 위한 UAV와 UGV의 협업 연구)

  • Lee, Jae-Keun;Jung, Hahmin;Kim, Dong Hun
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.3 / pp.225-232 / 2014
  • In this study, a cooperative UAV and UGV platform is proposed to obtain a wide range of visual information. The UAV recognizes a pattern marker on the UGV and tracks the UGV without user control, providing a wide range of visual information for the user in the UGV. The UGV is controlled by a user and is equipped with an aluminum board, on which the UAV can take off and land. The UAV uses two cameras: one to recognize the pattern marker and another to provide a wide range of visual information to the UGV's user. It is verified that the proposed vision-based approach detects and tracks the target marker on the UGV and then lands well. The experimental results show that the proposed approach can effectively construct a cooperative UAV/UGV platform for obtaining a wide range of visual information.

A Study on Development of the Optimization Algorithms to Find the Seam Tracking (용접선 추적을 위한 최적화 알고리즘 개발에 관한 연구)

  • Jin, Byeong-Ju;Lee, Jong-Pyo;Park, Min-Ho;Kim, Do-Hyeong;Wu, Qian-Qian;Kim, Il-Soo;Son, Joon-Sik
    • Journal of Welding and Joining / v.34 no.2 / pp.59-66 / 2016
  • Gas Metal Arc (GMA) welding, also called Metal Inert Gas (MIG) welding, has been an important component of manufacturing industries. A key technology for robotic welding processes is the seam tracking system, which is critical for improving welding quality and capacity. The objectives of this study were to develop intelligent and cost-effective image-processing algorithms for GMA welding based on a laser vision sensor. Welding images were captured from a CCD camera and then processed by the proposed algorithm to track the weld joint location. Algorithms commonly used at the present stage were verified and compared to obtain the optimal one for each step of the image processing. Finally, the validity of the proposed algorithms was examined using weld seam images obtained under different welding environments. The results proved that the proposed algorithm is excellent at removing variable noise to extract the feature points and centerline for seam tracking in GMA welding, and that it can be employed in general industrial applications.
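A common first step in laser-vision seam tracking is extracting the laser centerline as a per-column intensity centroid; the sketch below illustrates that step on a synthetic stripe. The paper's actual pipeline and noise handling are not given in the abstract, so treat this as a generic stand-in:

```python
import numpy as np

def laser_centerline(img):
    """Per-column intensity-weighted centroid of the laser line:
    returns one sub-pixel row estimate per image column."""
    img = np.asarray(img, dtype=float)
    rows = np.arange(img.shape[0])[:, None]
    weight = img.sum(axis=0)
    weight[weight == 0] = 1.0  # avoid division by zero in dark columns
    return (rows * img).sum(axis=0) / weight

# Synthetic stripe: the bright row varies linearly across columns,
# as a laser line projected on a sloped groove face would.
img = np.zeros((40, 20))
true_rows = np.linspace(10, 30, 20).astype(int)
img[true_rows, np.arange(20)] = 1.0
center = laser_centerline(img)
```

Discontinuities or kinks in the recovered centerline are then the feature points that locate the weld joint for the tracking controller.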

Design of Computer Vision Interface by Recognizing Hand Motion (손동작 인식에 의한 컴퓨터 비전 인터페이스 설계)

  • Yun, Jin-Hyun;Lee, Chong-Ho
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.3 / pp.1-10 / 2010
  • As various interfacing devices for computational machines are developed, a new HCI method using hand-motion input is introduced. This interface is a vision-based approach that uses a single camera to detect and track hand movements. In previous research, only skin color was used for detecting and tracking the hand location. In our design, however, skin color and shape information are considered together, which increases the hand-detection ability. We propose a primary orientation edge descriptor for obtaining edge information. This method uses only one hand model, so no training time is needed. The system consists of a detecting part and a tracking part for efficient processing; in the tracking part, the system is quite robust to the orientation of the hand. The system is applied to recognize handwritten numbers in script style using the DNAC algorithm. The proposed algorithm reaches an 82% recognition ratio in detecting the hand region and 90% in recognizing a handwritten number in script style.
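As an illustration of the color cue alone (the shape descriptor is the paper's contribution and is omitted here), a crude rule-based skin gate in RGB space might look like the following; the thresholds are textbook-style assumptions, not the paper's values:

```python
import numpy as np

def skin_mask(rgb):
    """Crude per-pixel skin-color gate in RGB space: skin tends to
    satisfy R > G > B with margins. Returns a boolean mask."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (g > b) & (r - b > 15)

# One skin-toned pixel and one sky-blue pixel as a sanity check.
patch_skin = np.array([[[200, 140, 110]]], dtype=np.uint8)
patch_sky = np.array([[[100, 150, 220]]], dtype=np.uint8)
```

Combining such a mask with an edge-based shape score is what lets the detector reject skin-colored background regions that a color-only tracker would latch onto.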

Studies of vision monitoring system using a background separation algorithm during radiotherapy (방사선 치료시 배경분리알고리즘을 이용한 비젼모니터링 시스템에 대한 연구)

  • Park, Kiyong;Choi, Jaehyun;Park, Jeawon
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.2 / pp.359-366 / 2016
  • In radiation therapy, it is most important to deliver the exact dose to the tumor site, maximizing the local tumor control rate while minimizing the radiation received by normal tissue. Initially, therapists detected patient movement by eye, and the required accuracy imposed a direct fatigue burden on the therapist. A later approach used a web camera to compute the difference between each updated image and a reference image, judging that motion had occurred when the result exceeded a threshold. However, that system cannot quantitatively analyze the patient's movement, and when the treatment bed of the therapy device moves, the background changes and the patient cannot be separated from it. In this paper, we attempt to overcome these limitations using an alpha (α) filter index: the patient's movement is quantified, the patient is separated from the background image of the treatment environment, and only the patient's movement during treatment is sensed, reducing the problems caused by patient motion.
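One plausible reading of the alpha-filter idea is an exponential running-average background model with thresholded differencing, sketched below; the smoothing constant, threshold, and update rule are assumptions, not the paper's exact formulation:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Exponential running average: the background slowly absorbs
    each new frame, so slow scene changes (e.g. a moving treatment
    bed) fade into the model while fast patient motion does not."""
    return (1 - alpha) * bg + alpha * frame

def motion_mask(bg, frame, thresh=0.2):
    """Pixels that differ from the background model beyond the
    threshold are flagged as patient motion."""
    return np.abs(frame - bg) > thresh

# Synthetic sequence: a dim static room, with a bright "movement"
# patch appearing only in the last frame.
rng = np.random.default_rng(3)
bg = rng.random((16, 16)) * 0.1
frames = [bg.copy() for _ in range(20)]
frames[-1][4:8, 4:8] = 1.0

model = frames[0]
for f in frames[1:]:
    mask = motion_mask(model, f)        # detect against current model
    model = update_background(model, f)  # then adapt the model
```

After the loop, the mask flags only the patch that appeared in the final frame, which is the "sense only the patient's movement" behaviour the abstract targets.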