• Title/Summary/Keyword: Real-Time Computer Vision

Real-Time Facial Recognition Using the Geometric Informations

  • Lee, Seong-Cheol;Kang, E-Sok
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.55.3-55 / 2001
  • The implementation of human-like robots has advanced in various areas such as mechanical arms, legs, and applications of the five senses. Vision applications have been developed over several decades, and face recognition in particular has become a prominent issue. In addition, advances in computer systems make it possible to process complex algorithms in real time. Most human identification systems use discriminating methods based on fingerprints, irises, and so on; these methods restrict the motion of the person being identified. Recently, researchers in human identification have become interested in facial recognition using machine vision. Thus, the objective of this paper is the implementation of real-time ... (A loosely grounded sketch of geometry-based matching follows this entry.)

  • PDF
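
The abstract is cut off before the method details, so the following is only a loosely grounded sketch of geometry-based face matching: scale-invariant ratios of distances between facial landmarks, compared with a Euclidean tolerance. The landmark set, the normalisation by inter-ocular distance, and the tolerance are assumptions, not the paper's method.

```python
# Hypothetical sketch of geometric face matching; the paper's exact feature
# set is not given in the abstract, so the signature below is illustrative only.
import numpy as np

def geometric_signature(landmarks: np.ndarray) -> np.ndarray:
    """Build a scale-invariant signature from 2-D facial landmarks.

    `landmarks` is an (N, 2) array (e.g. eye corners, nose tip, mouth corners)
    obtained from any face landmark detector (assumed, not part of the paper).
    """
    # Pairwise distances between all landmarks
    diffs = landmarks[:, None, :] - landmarks[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    iu = np.triu_indices(len(landmarks), k=1)
    vec = dists[iu]
    # Normalise by the inter-ocular distance (landmarks 0 and 1 assumed to be the eyes)
    return vec / dists[0, 1]

def match(sig_a: np.ndarray, sig_b: np.ndarray, tol: float = 0.15) -> bool:
    """Declare a match when the two signatures are close in Euclidean distance."""
    return np.linalg.norm(sig_a - sig_b) < tol
```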

Smart Ship Container With M2M Technology (M2M 기술을 이용한 스마트 선박 컨테이너)

  • Sharma, Ronesh;Lee, Seong Ro
    • The Journal of Korean Institute of Communications and Information Sciences / v.38C no.3 / pp.278-287 / 2013
  • Modern information technologies continue to provide industries with new and improved methods. With the rapid development of machine-to-machine (M2M) communication, smart container supply chain management is being built on high-performance sensors, computer vision, Global Positioning System (GPS) satellites, and Global System for Mobile (GSM) communication. Existing supply chain management has limitations in real-time container tracking. This paper focuses on the study and implementation of real-time container supply chain management, with the development of a container identification system and an automatic alert system for interrupts and for normal periodic alerts. The concept and methods of smart container modeling are introduced, and the structure is explained prior to the implementation of the smart container tracking and alert system. First, the paper introduces the container code identification and recognition algorithm, implemented in Visual Studio 2010 with OpenCV (a computer vision library) and Tesseract (an OCR engine) for real-time operation. Second, it discusses the automatic alert systems currently provided for real-time container tracking and their limitations. Finally, it summarizes the challenges and possibilities of future work on real-time container tracking solutions that combine ubiquitous mobile and satellite networks with high-performance sensors and computer vision. Together, these components can deliver supply chain management with excellent operation and security.
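
The abstract names OpenCV and Tesseract for the container-code reading step; below is a minimal sketch of that step using the Python bindings (cv2 and pytesseract) instead of the paper's Visual Studio 2010 / C++ environment. The Otsu binarisation, the `--psm 7` setting, and the 4-letters-plus-7-digits pattern check are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch of container-code reading: OpenCV pre-processing + Tesseract OCR.
import re
import cv2
import pytesseract

def read_container_code(frame):
    """Return a candidate ISO 6346-style code (4 letters + 7 digits) or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Binarise so the painted code stands out for the OCR engine
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(
        binary,
        config="--psm 7 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")
    candidate = re.sub(r"\s", "", text)
    found = re.search(r"[A-Z]{4}\d{7}", candidate)
    return found.group(0) if found else None
```

In practice the recognized code would then be attached to GPS position and sensor readings before being sent over the GSM link, as the abstract describes.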

The Camera Tracking of Real-Time Moving Object on UAV Using the Color Information (컬러 정보를 이용한 무인항공기에서 실시간 이동 객체의 카메라 추적)

  • Hong, Seung-Beom
    • Journal of the Korean Society for Aviation and Aeronautics / v.18 no.2 / pp.16-22 / 2010
  • This paper proposes a real-time moving-object tracking system for a UAV using color information. Previous object-tracking research has focused on recognizing a single or multiple moving objects with a fixed camera, including recognition against complex backgrounds. In contrast, this paper implements a moving-object tracking system that uses the camera's pan/tilt function after extracting the object's region. First, the system detects the moving object with an RGB/HSI color model and obtains the object's coordinates in the acquired image using a compact bounding box. Second, the camera origin is aligned with the top-left coordinate of the compact bounding box, and the moving object is tracked using the camera's pan/tilt function. The system is implemented with LabVIEW 8.6 and NI Vision Builder AI from National Instruments, and it shows good camera-tracking performance in a laboratory environment.
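
A minimal OpenCV rendition of the detection step described above: threshold the frame in a colour space, take the largest connected region, and report the bounding box's top-left corner as the pan/tilt error signal. The paper's implementation is LabVIEW 8.6 / NI Vision Builder AI, and the HSV range here is a placeholder, not the authors' target colour.

```python
# Illustrative colour-based tracking step (assumed colour range, OpenCV instead of NI Vision).
import cv2
import numpy as np

LOWER = np.array([0, 120, 80])     # placeholder lower HSV bound for the target colour
UPPER = np.array([10, 255, 255])   # placeholder upper HSV bound

def track_step(frame):
    """Return the (x, y) top-left corner of the target's bounding box, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    # The pan/tilt controller would drive the camera so the image origin
    # approaches this top-left corner, as described in the abstract.
    return x, y
```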

Vision Processing for Precision Autonomous Landing Approach of an Unmanned Helicopter (무인헬기의 정밀 자동착륙 접근을 위한 영상정보 처리)

  • Kim, Deok-Ryeol;Kim, Do-Myoung;Suk, Jin-Young
    • Journal of Institute of Control, Robotics and Systems / v.15 no.1 / pp.54-60 / 2009
  • In this paper, a precision landing approach is implemented based on real-time image processing. A full-scale landmark for automatic landing is used. Canny edge detection is applied to identify the outer quadrilateral, while a circular Hough transform is used to recognize the inner circle. Position information on the ground landmark is uplinked to the unmanned helicopter via the ground control computer in real time so that the flight controller can guide the air vehicle through an accurate landing approach. A ground test and a couple of flight tests of the autonomous landing approach show that the image processing and automatic landing operation system perform well during the landing approach phase at altitudes from 20 m down to 1 m above ground level.
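
The abstract names the two image-processing steps explicitly, so here is a brief OpenCV sketch of them: Canny edges plus a four-vertex contour fit for the outer quadrilateral, and cv2.HoughCircles for the inner circle. All thresholds are placeholder values, not the paper's tuned parameters.

```python
# Sketch of the landmark detection pipeline named in the abstract.
import cv2
import numpy as np

def find_landmark(frame):
    """Return (outer quadrilateral vertices, detected circles) for a landing mark."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Outer quadrilateral: largest contour that simplifies to 4 vertices
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    quad = None
    for c in sorted(contours, key=cv2.contourArea, reverse=True):
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4:
            quad = approx
            break
    # Inner circle via the circular Hough transform
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                               param1=150, param2=40, minRadius=10, maxRadius=200)
    return quad, circles
```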

Representing Navigation Information on Real-time Video in Visual Car Navigation System

  • Joo, In-Hak;Lee, Seung-Yong;Cho, Seong-Ik
    • Korean Journal of Remote Sensing / v.23 no.5 / pp.365-373 / 2007
  • The car navigation system is a key application in geographic information systems and telematics. A recent trend in car navigation is the use of real video captured by a camera mounted on the vehicle, because video represents the real world more faithfully than a conventional map. In this paper, we suggest a visual car navigation system that visually represents route guidance. It improves drivers' understanding of the real world by capturing real-time video and displaying navigation information overlaid directly on the video. The system integrates real-time data acquisition, conventional route finding and guidance, computer vision, and augmented-reality display. We also designed a visual navigation controller, which controls the other modules and dynamically determines how navigation information is represented visually according to the current location and driving circumstances. We briefly describe the implementation of the system.
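
As a small illustration of the overlay idea (not the authors' implementation), the sketch below draws a projected route polyline and a guidance label directly onto a video frame with OpenCV; computing `route_px` from GPS and camera pose is assumed to happen elsewhere, and the label text is a placeholder.

```python
# Minimal sketch of overlaying navigation information on live video.
import cv2
import numpy as np

def draw_guidance(frame, route_px, label="Turn right in 200 m"):
    """Draw a projected route polyline and a guidance label on a video frame."""
    pts = np.array(route_px, dtype=np.int32).reshape(-1, 1, 2)
    cv2.polylines(frame, [pts], isClosed=False, color=(0, 255, 0), thickness=4)
    cv2.putText(frame, label, (20, 40), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (0, 255, 255), 2, cv2.LINE_AA)
    return frame
```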

Development of Grading and Sorting System of Dried Oak Mushrooms via Color Computer Vision System (컬러 컴퓨터시각에 의거한 건표고 등급 선별시스템 개발)

  • Kim, S.C.;Choi, D.Y.;Choi, S.;Hwang, H.
    • Journal of Biosystems Engineering / v.32 no.2 s.121 / pp.130-135 / 2007
  • An on-line, real-time grading and sorting system for dried oak mushrooms was developed for on-site application. Quality grades of the mushrooms were determined according to an industrial specification, using three-dimensional visual quality features. A progressive color computer vision system with white LED illumination was implemented to develop an algorithm that extracts external quality patterns of the dried oak mushrooms. Cap (top) and gill (stem) surface images were acquired sequentially, and a side image was obtained using a mirror. Algorithms were developed for extracting the size, roundness, pattern, and color of the cap, the thickness and color of the gill, and the amount of rolled edge of the dried mushroom. Using these quality factors, mushrooms were first classified as normal or abnormal, and normal mushrooms were further classified into 30 grades. The sorting device was built with a microprocessor-controlled electro-pneumatic system with stainless-steel buckets. Grading accuracy was around 97%, and average processing time was 0.4 s.
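
A hypothetical sketch of how size, roundness, and colour features of the kind listed above could be computed from a segmented cap image with OpenCV; the actual feature definitions, the rolled-edge measure, and the 30-grade classification rules are not reproduced here.

```python
# Illustrative cap-feature extraction for grading (assumed feature forms).
import cv2
import numpy as np

def cap_features(bgr, mask):
    """Compute area, roundness and mean colour for a binary cap mask."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    roundness = 4.0 * np.pi * area / (perimeter ** 2)   # 1.0 for a perfect circle
    mean_bgr = cv2.mean(bgr, mask=mask)[:3]
    return {"area_px": area, "roundness": roundness, "mean_bgr": mean_bgr}
```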

Spot Inspection System for Camera Target Lens using the Computer Aided Vision System (비젼을 이용한 카메라 렌즈 이물질 검사 시스템 개발)

  • 이일환;안우정;박희재;황두현;김왕도
    • Proceedings of the Korean Society of Precision Engineering Conference / 1996.04a / pp.271-275 / 1996
  • In this paper, an automatic spot inspection system has been developed for camera target lenses using a computer-aided vision system. The developed system comprises a light source, magnifying optics, a vision camera, an XY robot, and a PC. An efficient algorithm for spot detection has been implemented, so spots as small as a few micrometers can be identified effectively in real time. The developed system has been fully interfaced with the XY robot system and PLCs, yielding a practical spot inspection system. The system has been applied to a practical camera manufacturing process and showed its efficiency. (An assumed-form detection sketch follows this entry.)

  • PDF
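
The abstract does not detail the detection algorithm, so the following is only an assumed-form sketch: small dark blobs on an otherwise uniform lens image are isolated with an inverted adaptive threshold and filtered by area. The threshold parameters and area limits are placeholders, not the system's calibrated values.

```python
# Assumed-form sketch of spot (foreign particle) detection on a lens image.
import cv2

def find_spots(gray, min_area=3, max_area=400):
    """Return bounding boxes of candidate dust/spot defects in a grayscale image."""
    # Dark spots become white foreground after an inverted adaptive threshold
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 10)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if min_area <= cv2.contourArea(c) <= max_area]
```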

Study on Machine Vision Algorithms for LCD Defects Detection (LCD 결함 검출을 위한 머신 비전 알고리즘 연구)

  • Jung, Min-Chul
    • Journal of the Semiconductor & Display Technology / v.9 no.3 / pp.59-63 / 2010
  • This paper proposes computer vision inspection algorithms for the various LCD defects found in a manufacturing process. Modular vision processing steps are required to detect the different types of LCD defects. The key modules include RGB filtering for pixel defects, gray-scale morphological processing and the Hough transform for line defects, and adaptive thresholding for spot defects. The proposed algorithms give users detailed information on the type of defect in the LCD panel, the size of each defect, and its location. The machine vision inspection system is implemented in C on an embedded Linux system for high-speed real-time image processing. Experimental results show that the proposed algorithms are quite successful.
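
The three modules are named explicitly, so here is a compact OpenCV sketch of each: RGB filtering for pixel defects, gray-scale morphology plus a Hough line transform for line defects, and adaptive thresholding for spot defects. The paper's system is written in C on embedded Linux; all parameter values below are placeholders rather than its calibrated settings.

```python
# Sketch of the three defect-detection modules named in the abstract.
import cv2
import numpy as np

def pixel_defects(bgr):
    """RGB filtering: flag pixels whose channel values leave an assumed panel range."""
    low, high = np.array([20, 20, 20]), np.array([235, 235, 235])  # placeholder range
    in_range = cv2.inRange(bgr, low, high)
    return cv2.bitwise_not(in_range)          # defect pixels as white

def line_defects(gray):
    """Gray-scale morphology to enhance streaks, then a probabilistic Hough line transform."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 1))
    enhanced = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)
    edges = cv2.Canny(enhanced, 30, 90)
    return cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                           minLineLength=60, maxLineGap=5)

def spot_defects(gray):
    """Adaptive threshold to expose local brightness deviations (spot defects)."""
    return cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY_INV, 41, 8)
```

Each module returns a mask or a set of line segments, from which defect type, size, and location can be reported, as the abstract describes.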

Development of a Ubiquitous Vision System for Location-awareness of Multiple Targets by a Matching Technique for the Identity of a Target; a New Approach

  • Kim, Chi-Ho;You, Bum-Jae;Kim, Hag-Bae
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.68-73 / 2005
  • Various techniques have been proposed for the detection and tracking of targets in order to develop real-world computer vision systems, e.g., visual surveillance systems, intelligent transport systems (ITSs), and so forth. In particular, a distributed vision system is required to realize these techniques over a wide area. In this paper, we develop a ubiquitous vision system for location-awareness of multiple targets. Each vision sensor composing the system can perform exact segmentation of a target using color and motion information and can track multiple targets visually in real time. We construct the ubiquitous vision system as a multiagent system by regarding each vision sensor as an agent (a vision agent), and we solve the identity-matching problem that arises during handover with a protocol-based approach. For this we propose the identified contract net (ICN) protocol. The ICN protocol is independent of the number of vision agents and requires no calibration between them, which improves the speed, scalability, and modularity of the system. We applied the ICN protocol in the ubiquitous vision system constructed for our experiments; the system gives reliable results, and the ICN protocol operated successfully through several experiments. (A rough sketch of such a handover exchange follows this entry.)

  • PDF
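
The abstract describes the ICN protocol only at a high level, so the sketch below is a loose, contract-net-style rendition of an identity handover between vision agents; the message fields, the histogram-intersection score, and the award rule are assumptions rather than the authors' specification.

```python
# Loose sketch of a contract-net style identity handover between vision agents.
from dataclasses import dataclass

@dataclass
class HandoverAnnounce:
    target_id: int        # identity to be maintained across agents
    color_hist: list      # appearance cue used for identity matching (assumed)
    last_xy: tuple        # last observed position in the announcing agent's view

class VisionAgent:
    def __init__(self, name):
        self.name = name
        self.tracked = {}   # target_id -> appearance model

    def bid(self, announce: HandoverAnnounce, observed_hist) -> float:
        """Return a match score; the announcing agent awards the highest bidder."""
        # Placeholder similarity: histogram intersection
        return sum(min(a, b) for a, b in zip(announce.color_hist, observed_hist))

    def award(self, announce: HandoverAnnounce):
        """The winning agent takes over the identity; no camera calibration is needed."""
        self.tracked[announce.target_id] = announce.color_hist
```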

Text-To-Vision Player - Converting Text to Vision Based on TVML Technology -

  • Hayashi, Masaki
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2009.01a / pp.799-802 / 2009
  • We have been studying the next generation of video creation solutions based on TVML (TV program Making Language) technology. TVML is a well-known scripting language for computer animation, and a TVML player interprets the script to create video content using real-time 3DCG and synthesized voices. TVML has a long history, having been proposed by NHK back in 1996; however, for years the only available player has been the one made by NHK. We have developed a new TVML player from scratch and named it the T2V (Text-To-Vision) Player. Because it was developed from scratch, the code is compact, light, fast, extendable, and portable. Moreover, the new T2V Player not only plays back TVML scripts but also converts input written in XML format, or even plain text, into video by means of 'text filters' that can be added as plug-ins to the Player. We plan to release it as freeware in early 2009 in order to stimulate user-generated content and various kinds of services on the Internet and in the media industry. We believe our T2V Player will be a key technology for this upcoming new movement.

  • PDF