• Title/Summary/Keyword: Real-Time Computer Vision

Steering Gaze of a Camera in an Active Vision System: Fusion Theme of Computer Vision and Control (능동적인 비전 시스템에서 카메라의 시선 조정: 컴퓨터 비전과 제어의 융합 테마)

  • 한영모
    • Journal of the Institute of Electronics Engineers of Korea SC / v.41 no.4 / pp.39-43 / 2004
  • A typical theme of active vision systems is gaze-fixing of a camera. Here, gaze-fixing means steering the orientation of the camera so that a given point on the object always stays at the center of the image. For this we need to combine a function that analyzes image data with a function that controls the orientation of the camera. This paper presents an algorithm for gaze-fixing of a camera in which image analysis and orientation control are designed in a single framework. To avoid implementation difficulties and to target real-time applications, we design the algorithm as a simple closed form without using any information related to camera calibration or structure estimation.
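
As a rough illustration of the gaze-fixing idea above, the sketch below drives a pan/tilt unit so that a tracked image point moves toward the image center. It is a minimal proportional scheme, not the paper's closed-form algorithm; the `pan_tilt_unit` interface, image size, and gains are assumed placeholders.

```python
import numpy as np

# Minimal sketch of image-based gaze fixing: steer pan/tilt so that a tracked
# point (u, v) is pushed toward the image center. The pan_tilt_unit object and
# the gains are hypothetical placeholders, not the paper's closed-form law.

IMG_W, IMG_H = 640, 480
K_PAN, K_TILT = 0.002, 0.002         # proportional gains (rad/s per pixel), assumed

def gaze_fix_step(u, v, pan_tilt_unit):
    """One control step: map the pixel error to pan/tilt rate commands."""
    err_u = u - IMG_W / 2.0          # horizontal error in pixels
    err_v = v - IMG_H / 2.0          # vertical error in pixels
    pan_tilt_unit.command_rates(-K_PAN * err_u,   # rotate to cancel horizontal error
                                -K_TILT * err_v)  # rotate to cancel vertical error
    return np.hypot(err_u, err_v)    # remaining pixel error, for monitoring
```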

Real-Time Face Tracking Algorithm Robust to Illumination Variations (조명 변화에 강인한 실시간 얼굴 추적 알고리즘)

  • Lee, Yong-Beom;You, Bum-Jae;Lee, Seong-Whan;Kim, Kwang-Bae
    • Proceedings of the KIEE Conference / 2000.07d / pp.3037-3040 / 2000
  • Real-time object tracking has emerged as an important component in several application areas including machine vision, surveillance, human-computer interaction, image-based control, and so on, and various algorithms have been developed over the years. In many cases, however, they have shown limited results under uncontrolled conditions such as illumination changes or cluttered backgrounds. In this paper, we present a novel, computationally efficient algorithm for tracking a human face robustly under illumination changes and against cluttered backgrounds. Previous algorithms usually define the color model as a 2D membership function in a color space without consideration of illumination changes. Our new algorithm, in contrast, constructs a 3D color model by analysing a large number of images acquired under various illumination conditions. The algorithm is applied to a mobile head-eye robot and tested in various uncontrolled environments. It can track a human face at more than 100 frames per second excluding image acquisition time.
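
A minimal sketch of the general idea of a color model built from images taken under many illumination conditions: accumulate a 3D color histogram from face patches and backproject it onto new frames to find face-colored pixels. The BGR color space, bin counts, and thresholds are assumptions; the paper's actual model construction is not reproduced here.

```python
import cv2
import numpy as np

# Sketch: accumulate a 3D color histogram (over B, G, R) from face patches
# captured under varied illumination, then backproject it onto new frames to
# find face-colored pixels. Color space, bin counts, and thresholds are assumed.

BINS = (16, 16, 16)
RANGES = [0, 256, 0, 256, 0, 256]

def build_color_model(face_patches):
    """face_patches: list of BGR face crops taken under varied illumination."""
    hist = np.zeros(BINS, dtype=np.float32)
    for patch in face_patches:
        hist += cv2.calcHist([patch], [0, 1, 2], None, list(BINS), RANGES)
    hist *= 255.0 / max(float(hist.max()), 1e-9)   # scale to 0..255 for backprojection
    return hist

def face_mask(frame_bgr, hist):
    """Return a binary mask of likely face pixels in the frame."""
    backproj = cv2.calcBackProject([frame_bgr], [0, 1, 2], hist, RANGES, 1)
    _, mask = cv2.threshold(backproj, 32, 255, cv2.THRESH_BINARY)
    return cv2.medianBlur(mask, 5)
```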

Design of Autonomous Stair Robot System (자율주행 형 계단 승하강용 로봇 시스템 설계)

  • 홍영호;김동환;임충혁
    • Journal of Institute of Control, Robotics and Systems / v.9 no.1 / pp.73-81 / 2003
  • An autonomous stair-climbing robot that recognizes stairs and climbs up and down them by utilizing robot vision, photo sensors, and an appropriate climbing algorithm is introduced. Four arms associated with four wheels allow the robot to climb up and down more safely and faster than a simple track-type robot. The robot can adjust its wheel base according to the stair width, and hence can adapt to stairs of varying width with different algorithms for climbing up and down. The command and image data acquired from the robot are transferred to the main computer through RF wireless modules, and the data are delivered to a remote computer over a network connection after suitable data compression; thus, real-time image monitoring is implemented effectively.
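
The remote-monitoring path described above (compress frames and forward them for real-time viewing) can be sketched roughly as follows; the host, port, camera index, and JPEG quality are placeholders, and the RF wireless link itself is outside the scope of this sketch.

```python
import socket
import struct
import cv2

# Rough sketch of "compress and forward frames for remote monitoring": grab a
# frame, JPEG-compress it, and send it over TCP with a length prefix. The host,
# port, camera index, and quality setting are assumptions.

REMOTE_HOST, REMOTE_PORT = "192.0.2.10", 5000   # placeholder address
JPEG_QUALITY = 70

def stream_frames():
    cap = cv2.VideoCapture(0)                   # robot camera (assumed index 0)
    sock = socket.create_connection((REMOTE_HOST, REMOTE_PORT))
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            ok, buf = cv2.imencode(".jpg", frame,
                                   [cv2.IMWRITE_JPEG_QUALITY, JPEG_QUALITY])
            if ok:
                data = buf.tobytes()
                sock.sendall(struct.pack(">I", len(data)) + data)  # length-prefixed frame
    finally:
        cap.release()
        sock.close()
```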

PCB Defects Detection using Connected Component Classification (연결 성분 분류를 이용한 PCB 결함 검출)

  • Jung, Min-Chul
    • Journal of the Semiconductor & Display Technology / v.10 no.1 / pp.113-118 / 2011
  • This paper proposes computer vision inspection algorithms for PCB defects found in a manufacturing process. The proposed method can detect open circuits and short circuits on a bare PCB without using any reference images. It performs adaptive thresholding on the ROI (Region of Interest) of a target image and median filtering to remove noise, and then analyzes the connected components of the binary image. In this paper, the connected components of the circuit pattern are defined as 6 types. The proposed method classifies the connected components of the target image into these 6 types and determines any unclassified component to be a defect of the circuit. Analysis of the original target image detects open circuits, while analysis of the complement image finds short circuits. The machine vision inspection system is implemented in the C language on an embedded Linux system for high-speed real-time image processing. Experimental results show that the proposed algorithms are quite successful.
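
The pre-processing and labeling pipeline described in the abstract can be sketched as below. The classification into the paper's six connected-component types is replaced by a placeholder predicate, and the threshold and filter parameters are assumptions.

```python
import cv2

# Sketch of the described pipeline: adaptive thresholding of the ROI, median
# filtering, connected-component labeling, and flagging of components that do
# not match an expected pattern type. The paper's six-type classification is
# replaced here by a placeholder rule.

def is_known_pattern_type(stats_row):
    """Placeholder for classifying a component into one of the six circuit-pattern
    types defined in the paper; this trivial area rule is an assumption only."""
    x, y, w, h, area = stats_row
    return area > 50

def find_defect_candidates(gray_roi):
    """gray_roi: 8-bit grayscale ROI of the bare PCB image."""
    binary = cv2.adaptiveThreshold(gray_roi, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 31, 5)
    binary = cv2.medianBlur(binary, 5)
    defects = []
    # The original image is analyzed for open circuits, its complement for shorts.
    for img in (binary, cv2.bitwise_not(binary)):
        n, _, stats, _ = cv2.connectedComponentsWithStats(img, connectivity=8)
        for i in range(1, n):                         # label 0 is the background
            if not is_known_pattern_type(stats[i]):
                defects.append(tuple(stats[i][:4]))   # (x, y, w, h) of the defect
    return defects
```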

Real-Time Fire Detection Method Using YOLOv8 (YOLOv8을 이용한 실시간 화재 검출 방법)

  • Tae Hee Lee;Chun-Su Park
    • Journal of the Semiconductor & Display Technology / v.22 no.2 / pp.77-80 / 2023
  • Since fires in uncontrolled environments pose serious risks to society and individuals, many researchers have been investigating technologies for early detection of fires that occur in everyday life. Recently, with the development of deep learning vision technology, research on fire detection models using neural network backbones such as the Transformer and the Convolutional Neural Network has been actively conducted. Vision-based fire detection systems can solve many of the problems of physical sensor-based fire detection systems. This paper proposes a fire detection method using the latest YOLOv8, which improves on existing fire detection methods. The proposed method develops a system that detects sparks and smoke in input images by training the YOLOv8 model on a general-purpose fire detection dataset. We also demonstrate the superiority of the proposed method through experiments comparing it with existing methods.
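
A minimal sketch of the inference side of such a detector using the ultralytics package; the weights file name (a YOLOv8 model assumed to be fine-tuned on a fire dataset) and the confidence threshold are assumptions, not artifacts of the paper.

```python
from ultralytics import YOLO

# Sketch of YOLOv8-based fire/smoke detection on single frames. The weights
# file "fire_yolov8.pt" (assumed to be fine-tuned on a fire-detection dataset)
# and the confidence threshold are placeholders.

model = YOLO("fire_yolov8.pt")

def detect_fire(frame_bgr, conf=0.4):
    """Return a list of (class_name, confidence, [x1, y1, x2, y2]) detections."""
    detections = []
    for result in model.predict(frame_bgr, conf=conf, verbose=False):
        for box in result.boxes:
            cls_id = int(box.cls[0])
            detections.append((model.names[cls_id],
                               float(box.conf[0]),
                               box.xyxy[0].tolist()))
    return detections
```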

AdaBoost-based Real-Time Face Detection & Tracking System (AdaBoost 기반의 실시간 고속 얼굴검출 및 추적시스템의 개발)

  • Kim, Jeong-Hyun;Kim, Jin-Young;Hong, Young-Jin;Kwon, Jang-Woo;Kang, Dong-Joong;Lho, Tae-Jung
    • Journal of Institute of Control, Robotics and Systems / v.13 no.11 / pp.1074-1081 / 2007
  • This paper presents a method for real-time face detection and tracking that combines the AdaBoost and CAMShift algorithms. The AdaBoost algorithm selects important features, called weak classifiers, from among many possible image features by tuning the weight of each feature over the learning candidates. Although it shows excellent performance in extracting the object, its computing time is very high because multi-scale windows must be searched over the image region, so direct application of the method is not easy for real-time tasks in settings such as multitasking operating systems, robots, and mobile environments. The CAMShift method, on the other hand, is an improvement of the Mean-Shift algorithm for video streaming environments and tracks the object of interest at high speed based on the hue values of the target region, but its detection efficiency is poor in environments with dynamic illumination. We propose a combination of AdaBoost and CAMShift that improves computing speed while retaining good face detection performance. The method is validated on real image sequences containing single and multiple faces.
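
The detect-then-track pattern described above can be sketched with OpenCV: a Haar cascade (an AdaBoost-based detector) finds the face once, and CamShift then tracks it using a hue histogram. The cascade file, histogram bins, and termination criteria are assumptions, and this outline omits the paper's re-detection logic.

```python
import cv2

# Sketch of combining an AdaBoost-based detector with CamShift tracking: the
# Haar cascade locates the face, then CamShift follows it frame-to-frame using
# a hue histogram of the detected region. Parameters are assumptions.

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

def detect_face(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, 1.1, 4)
    return tuple(int(v) for v in faces[0]) if len(faces) else None  # (x, y, w, h)

def make_hue_hist(frame_bgr, window):
    x, y, w, h = window
    hsv = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [16], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def track_step(frame_bgr, hist, window):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    rot_box, window = cv2.CamShift(backproj, window, term_crit)
    return rot_box, window   # rotated box for display, window for the next frame
```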

Real-Time Earlobe Detection System on the Web

  • Kim, Jaeseung;Choi, Seyun;Lee, Seunghyun;Kwon, Soonchul
    • International journal of advanced smart convergence / v.10 no.4 / pp.110-116 / 2021
  • This paper proposes a real-time earlobe detection system using deep learning on the web. Existing deep learning-based detection methods typically find independent objects such as cars, mugs, cats, and people. We propose a way to receive an image through the camera of the user's device in a web environment and detect the earlobe on the server. First, we take a picture of the user's face with the device camera on the web so that the user's ears are visible. We then send the photographed face to the server to find the earlobe, and based on the detected results we display an earring model on the user's earlobe on the web. We trained an existing YOLOv5 model using a dataset of about 200 images annotated with bounding boxes on the earlobe, and estimated the position of the earlobe with the trained model. Through this process, we propose a real-time earlobe detection system on the web. The proposed method detects earlobes in real time and loads 3D models on the web in real time.
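
The server-side detection step can be sketched roughly as below by loading a custom-trained YOLOv5 model through torch.hub; the weights path and confidence threshold are assumptions, and the web camera capture and 3D earring rendering are not shown.

```python
import torch

# Server-side sketch of the earlobe detection step: load a custom-trained
# YOLOv5 model via torch.hub and return earlobe bounding boxes. The weights
# path "earlobe_yolov5.pt" and the confidence threshold are assumptions; the
# web capture and 3D earring rendering parts are omitted.

model = torch.hub.load("ultralytics/yolov5", "custom", path="earlobe_yolov5.pt")
model.conf = 0.5   # confidence threshold (assumed)

def detect_earlobes(image_path):
    """Return a list of [x1, y1, x2, y2, confidence] boxes, one per earlobe."""
    results = model(image_path)
    return [row[:5].tolist() for row in results.xyxy[0]]  # rows: (x1, y1, x2, y2, conf, cls)
```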

Smart Vision Sensor for Satellite Video Surveillance Sensor Network (위성 영상감시 센서망을 위한 스마트 비젼 센서)

  • Kim, Won-Ho;Im, Jae-Yoo
    • Journal of Satellite, Information and Communications / v.10 no.2 / pp.70-74 / 2015
  • In this paper, a satellite-communication-based video surveillance system consisting of ultra-small aperture terminals with a small-size smart vision sensor is proposed. Events such as forest fires, smoke, and intruder movement are detected automatically in the field, and false alarms are minimized by using intelligent, highly reliable video analysis algorithms. The smart vision sensor must satisfy requirements for high confidence, high hardware endurance, seamless communication, and easy maintenance. To satisfy these requirements, a real-time digital signal processor, a camera module, and a satellite transceiver are integrated into a smart-vision-sensor-based ultra-small aperture terminal, and high-performance video analysis and image coding algorithms are embedded. The video analysis functions and performance were verified, and the practicality of the system was confirmed, through computer simulation and tests of a vision sensor prototype.
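
One of the field-analysis functions mentioned above, intruder-movement detection, can be illustrated with a very small background-subtraction sketch; the thresholds and history length are assumptions, and the paper's embedded DSP algorithms are not reproduced.

```python
import cv2

# Tiny sketch of an intruder-movement check of the kind a field vision sensor
# might run: background subtraction followed by a test on how much of the scene
# changed. History length and thresholds are assumptions.

subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)
MIN_CHANGED_FRACTION = 0.01   # fraction of pixels that must change (assumed)

def movement_alarm(frame_bgr):
    """Return True if enough of the scene changed to raise a motion event."""
    mask = subtractor.apply(frame_bgr)
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)  # drop shadow pixels
    mask = cv2.medianBlur(mask, 5)
    return cv2.countNonZero(mask) > MIN_CHANGED_FRACTION * mask.size
```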

Development of Vision System Model for Manipulator's Assemble task (매니퓰레이터의 조립작업을 위한 비젼시스템 모델 개발)

  • 장완식
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.6 no.2 / pp.10-18 / 1997
  • This paper presents the development of real-time estimation and control details for a computer-vision-based robot control method. This is accomplished using a sequential estimation scheme that permits placement of feature points in each of the two-dimensional image planes of the monitoring cameras. The estimation model is developed from a model that generalizes the known 4-axis Scorbot manipulator kinematics to accommodate unknown relative camera position and orientation, among other factors. This model uses six uncertainty-of-view parameters estimated by an iterative method. The method is tested experimentally in two ways: first, the validity of the estimation model is tested using a self-built test model; second, the practicality of the presented control method is verified by performing the 4-axis manipulator's assembly task. These results show that the control scheme is precise and robust. This feature can open the door to a range of applications of multi-axis robots such as deburring and welding.
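
The "six uncertainty-of-view parameters estimated by an iterative method" can be illustrated generically with a Gauss-Newton least-squares loop that fits view parameters to observed image points; the stand-in projection model below is an assumption and does not reproduce the paper's generalized Scorbot/camera model.

```python
import numpy as np

# Generic sketch of estimating six view parameters by iterative nonlinear least
# squares (Gauss-Newton with a numerical Jacobian). The projection function is a
# simple stand-in, not the paper's generalized Scorbot/camera model.

def project(params, world_pts):
    """Stand-in view model: focal length f, principal point (cx, cy), and a
    camera translation (tx, ty, tz) applied to the 3D points."""
    f, cx, cy, tx, ty, tz = params
    p = world_pts + np.array([tx, ty, tz])
    return np.column_stack([f * p[:, 0] / p[:, 2] + cx,
                            f * p[:, 1] / p[:, 2] + cy])

def estimate_view_params(params0, world_pts, observed_uv, iters=20, eps=1e-6):
    params = np.asarray(params0, dtype=float)
    for _ in range(iters):
        r = (observed_uv - project(params, world_pts)).ravel()   # residuals
        J = np.zeros((r.size, params.size))                      # numerical Jacobian of r
        for j in range(params.size):
            dp = np.zeros_like(params)
            dp[j] = eps
            J[:, j] = ((observed_uv - project(params + dp, world_pts)).ravel() - r) / eps
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)            # Gauss-Newton step
        params = params + step
        if np.linalg.norm(step) < 1e-8:
            break
    return params
```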

A Study on the Vision-Based Inspection System for Ball-Stud (비전을 이용한 볼-스터드 검사 시스템에 관한 연구)

  • 장영훈;권태종;한창수;문영식
    • Journal of the Korean Society for Precision Engineering / v.15 no.12 / pp.7-13 / 1998
  • In this paper, an automatic ball-stud inspection system has been developed using a computer-aided vision system. An index table is used for rapid measurement, and multiple cameras are used to obtain high resolution in the physical system. Camera calibration was carried out to ensure reliable inspection. Image processing and data analysis algorithms for the ball-stud inspection system were investigated and executed quickly with high accuracy. As a result, the ball-stud inspection system can be used with high resolution in real time.
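
A toy version of one measurement step: segment the part from a gray image, take the largest contour, and convert its pixel diameter to millimetres with a calibration scale factor. The scale factor and threshold are assumptions, and the paper's index table and multi-camera calibration are not reproduced.

```python
import cv2

# Toy sketch of one inspection measurement: threshold the gray part image, take
# the largest contour as the ball, and convert its pixel diameter to millimetres
# using a calibration scale factor. The scale factor and threshold are assumed.

MM_PER_PIXEL = 0.05   # from camera calibration (assumed value)

def measure_ball_diameter(gray):
    """gray: 8-bit grayscale image of the ball-stud; returns diameter in mm or None."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (_, _), radius_px = cv2.minEnclosingCircle(largest)
    return 2.0 * radius_px * MM_PER_PIXEL
```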
