• Title/Summary/Keyword: Real-Time Computer Vision


Development of an Intelligent Control System to Integrate Computer Vision Technology and Big Data of Safety Accidents in Korea

  • KANG, Sung Won; PARK, Sung Yong; SHIN, Jae Kwon; YOO, Wi Sung; SHIN, Yoonseok
    • International conference on construction engineering and project management / 2022.06a / pp.721-727 / 2022
  • Construction safety remains an ongoing concern, and project managers have increasingly been forced to cope with myriad uncertainties related to human operations on construction sites and the lack of a skilled workforce in hazardous circumstances. Various construction fatality monitoring systems have been widely proposed as alternatives to overcome these difficulties and to improve safety management performance. In this study, we propose an intelligent, automatic control system that can proactively protect workers using both the analysis of big data on past safety accidents and the real-time detection of worker non-compliance in using personal protective equipment (PPE) on a construction site. These data are obtained using computer vision technology and data analytics, integrated and reinforced by lessons learned from the analysis of big data on safety accidents that occurred over the last 10 years. The system offers data-informed recommendations for high-risk workers and proactively eliminates the possibility of safety accidents. As an illustrative case, we selected a pilot project and applied the proposed system to workers in uncontrolled environments. Decreases in workers' PPE non-compliance rates, improvements in variable compliance rates, reductions in severe fatalities through worker-customized guidelines, and accelerated achievement of safety performance goals are expected.
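
The abstract above describes real-time detection of PPE non-compliance from camera frames. Below is a minimal sketch of how such a compliance check could be expressed, assuming an upstream detector already returns person and hardhat bounding boxes in pixel coordinates; the detector, box format, and thresholds are assumptions, not details from the paper.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels


def head_region(person: Box, fraction: float = 0.3) -> Box:
    """Approximate the head area as the top fraction of a person box."""
    x1, y1, x2, y2 = person
    return (x1, y1, x2, y1 + (y2 - y1) * fraction)


def overlap_ratio(a: Box, b: Box) -> float:
    """Intersection area of a and b divided by the area of b."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_b = max(1e-6, (b[2] - b[0]) * (b[3] - b[1]))
    return inter / area_b


def non_compliant_workers(persons: List[Box], helmets: List[Box],
                          min_overlap: float = 0.5) -> List[int]:
    """Return indices of person boxes with no helmet covering the head region."""
    flagged = []
    for i, person in enumerate(persons):
        head = head_region(person)
        if not any(overlap_ratio(head, h) >= min_overlap for h in helmets):
            flagged.append(i)
    return flagged
```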


Mapping of Real-Time 3D object movement

  • Tengis, Tserendondog; Batmunkh, Amar
    • International Journal of Internet, Broadcasting and Communication / v.7 no.2 / pp.1-8 / 2015
  • Tracking an object in 3D space in real time is a significant task in domains ranging from autonomous robots to smart vehicles. Traditional methods rely on dedicated data-acquisition equipment such as radar and lasers. Advances in computing have accelerated image processing, allowing three-dimensional stereo vision to be used for localizing and tracking objects in space. This paper describes a system for tracking the three-dimensional motion of an object using color information in real time. We create stereo images using a pair of simple web cameras, and raw object-position data are collected under realistic, noisy conditions. The system was tested using OpenCV and Matlab, and the experimental results are presented here.
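
As an illustration of the kind of pipeline described above, the sketch below segments a colored object in each image of a rectified stereo pair with OpenCV and back-projects the matched centroids to a 3D point. The HSV range, focal length, and baseline are placeholder values, not parameters from the paper.

```python
import cv2
import numpy as np

# Assumed camera parameters (illustrative values, not from the paper).
FOCAL_PX = 700.0   # focal length in pixels
BASELINE_M = 0.12  # distance between the two webcams in metres

# Assumed HSV range for the tracked colour (here, a red object).
LOWER = np.array([0, 120, 70], dtype=np.uint8)
UPPER = np.array([10, 255, 255], dtype=np.uint8)


def colour_centroid(bgr_frame):
    """Return the (u, v) centroid of pixels inside the colour range, or None."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]


def triangulate(left_uv, right_uv, cx, cy):
    """Back-project matched centroids to a 3D point (pinhole, rectified pair)."""
    disparity = left_uv[0] - right_uv[0]
    if disparity <= 0:
        return None
    z = FOCAL_PX * BASELINE_M / disparity
    x = (left_uv[0] - cx) * z / FOCAL_PX
    y = (left_uv[1] - cy) * z / FOCAL_PX
    return x, y, z
```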

Object Recognition and Pose Estimation Based on Deep Learning for Visual Servoing (비주얼 서보잉을 위한 딥러닝 기반 물체 인식 및 자세 추정)

  • Cho, Jaemin; Kang, Sang Seung; Kim, Kye Kyung
    • The Journal of Korea Robotics Society / v.14 no.1 / pp.1-7 / 2019
  • Recently, smart factories have attracted much attention as a result of the 4th Industrial Revolution. Existing factory automation technologies are generally designed for simple repetition without using vision sensors, and even small-object assembly still depends on manual work. To replace existing systems with new technologies such as bin picking and visual servoing, precision and real-time operation are essential. Therefore, we focus on these core elements by using a deep learning algorithm to detect and classify the target object in real time and by analyzing the object's features. Among the many strong deep learning algorithms such as Mask R-CNN and Fast R-CNN, we chose the YOLO CNN because it runs in real time and combines the two tasks mentioned above. Then, from the line and interior features extracted from the target object, we obtain the final outline and estimate the object's pose.
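
A minimal sketch of the detection side of such a visual-servoing loop is shown below, using the ultralytics package as a stand-in for the YOLO network in the paper and default COCO weights as a hypothetical placeholder; the paper's custom training, feature extraction, and pose estimation are not reproduced.

```python
import cv2
from ultralytics import YOLO  # stand-in for the YOLO network used in the paper

model = YOLO("yolov8n.pt")  # hypothetical weights; the paper trains its own classes
cap = cv2.VideoCapture(0)   # assumed camera index

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    result = model(frame, verbose=False)[0]
    if len(result.boxes) > 0:
        # Use the highest-confidence detection as the servoing target.
        box = result.boxes.xyxy[result.boxes.conf.argmax()].tolist()
        cx, cy = (box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0
        # The pixel offset from the image centre is the error a visual-servo
        # controller would drive to zero by moving the camera or manipulator.
        err_u, err_v = cx - w / 2.0, cy - h / 2.0
        print(f"target offset: ({err_u:.1f}, {err_v:.1f}) px")

cap.release()
```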

Real Time Recognition of Finger-Language Using Color Information and Fuzzy Clustering Algorithm

  • Kim, Kwang-Baek; Song, Doo-Heon; Woo, Young-Woon
    • Journal of information and communication convergence engineering / v.8 no.1 / pp.19-22 / 2010
  • A sign language that helps hearing-impaired people communicate is not familiar to most hearing people. In this paper, we propose a method for real-time sign language recognition with a vision system using color information and a fuzzy clustering system. We use the YCbCr color model and a Canny edge mask to locate the hands and their boundary lines. After extracting the regions of the two hands by applying an 8-directional contour tracking algorithm and morphological information, the system uses fuzzy C-means (FCM) clustering to classify the sign language signals. In experiments, the proposed method proved sufficiently efficient.
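
The sketch below illustrates the first stages named in the abstract with OpenCV: skin segmentation in the YCbCr space, morphological clean-up, and contour extraction standing in for the 8-directional contour tracking. The skin thresholds are typical textbook values rather than the paper's, and the FCM classification step is omitted.

```python
import cv2
import numpy as np

# Assumed skin range in OpenCV's Y/Cr/Cb channel order (textbook values, not from the paper).
SKIN_LOWER = np.array([0, 133, 77], dtype=np.uint8)
SKIN_UPPER = np.array([255, 173, 127], dtype=np.uint8)


def hand_regions(bgr_frame, min_area=2000):
    """Segment skin-coloured regions in YCbCr and return the two largest contours."""
    ycrcb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, SKIN_LOWER, SKIN_UPPER)
    # Morphological opening/closing plays the role of the paper's noise clean-up.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= min_area]
    # The two largest blobs are taken to be the two hands; their shapes would
    # then be fed to the fuzzy clustering classifier (not shown).
    return sorted(contours, key=cv2.contourArea, reverse=True)[:2]
```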

Development of a Real-time Translation Application using Screen Capture and OCR in Android Environment (안드로이드 환경에서 화면 캡쳐와 OCR을 활용한 실시간 번역 애플리케이션 개발)

  • Seung-Woo Lee; Sung Jin Kim; Young Hyun Yoon; Jai Soon Baek
    • Proceedings of the Korean Society of Computer Information Conference / 2023.07a / pp.267-268 / 2023
  • This paper addresses the development of a real-time translation application on Android using screen capture and OCR. The application, developed in Kotlin, captures the screen region selected by the user, extracts the text with OCR, and translates it using the Google Cloud Vision API and Cloud Translation API. We show that this improves the convenience of using foreign-language applications and can help users understand and share information. The technology opens up possibilities for use in a wider variety of fields.
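
The paper's application is written in Kotlin for Android; the sketch below uses the Python client libraries for the same two Google Cloud services purely to illustrate the OCR-then-translate call sequence. The file name and target language are illustrative, and valid Cloud credentials (e.g. via GOOGLE_APPLICATION_CREDENTIALS) are assumed.

```python
from google.cloud import vision
from google.cloud import translate_v2 as translate


def ocr_and_translate(image_bytes: bytes, target_language: str = "ko") -> str:
    """Extract text from a captured screen image and translate it."""
    vision_client = vision.ImageAnnotatorClient()
    response = vision_client.text_detection(image=vision.Image(content=image_bytes))
    if not response.text_annotations:
        return ""
    # The first annotation holds the full detected text block.
    source_text = response.text_annotations[0].description
    translation = translate.Client().translate(source_text,
                                               target_language=target_language)
    return translation["translatedText"]


if __name__ == "__main__":
    with open("capture.png", "rb") as f:  # hypothetical screen-capture file
        print(ocr_and_translate(f.read()))
```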


Fine-tuning Neural Network for Improving Video Classification Performance Using Vision Transformer (Vision Transformer를 활용한 비디오 분류 성능 향상을 위한 Fine-tuning 신경망)

  • Kwang-Yeob Lee; Ji-Won Lee; Tae-Ryong Park
    • Journal of IKEEE / v.27 no.3 / pp.313-318 / 2023
  • This paper proposes a fine-tuned neural network as a way to improve the performance of video classification based on the Vision Transformer. Recently, the need for real-time video analysis based on deep learning has emerged. Because of the characteristics of the CNN models used in image classification, it is difficult for them to analyze the associations between consecutive frames. We compare and analyze the Vision Transformer and Non-local neural network models, both built on the attention mechanism, to find the optimal model. In addition, we propose an optimal fine-tuned neural network model by applying various fine-tuning methods as transfer learning. In the experiments, the model was trained on the UCF101 dataset, and its performance was then verified by applying transfer learning to the UTA-RLDD dataset.
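
A minimal frame-level fine-tuning sketch in the spirit of the abstract is given below, assuming a pretrained Vision Transformer from the timm library and a freeze-the-backbone variant; the paper's exact fine-tuning schemes and its video-level handling are not reproduced.

```python
import timm
import torch
from torch import nn, optim

NUM_CLASSES = 101  # e.g. UCF101; change for the transfer-learning target set

# Pretrained ViT with a fresh classification head for the target classes.
model = timm.create_model("vit_base_patch16_224", pretrained=True,
                          num_classes=NUM_CLASSES)

# One common fine-tuning variant: freeze the backbone, train only the new head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("head")

optimizer = optim.AdamW((p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()


def train_step(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step on a batch of 224x224 RGB frames."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```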

Steering Gaze of a Camera in an Active Vision System: Fusion Theme of Computer Vision and Control (능동적인 비전 시스템에서 카메라의 시선 조정: 컴퓨터 비전과 제어의 융합 테마)

  • 한영모
    • Journal of the Institute of Electronics Engineers of Korea SC / v.41 no.4 / pp.39-43 / 2004
  • A typical theme of active vision systems is gaze-fixing of a camera. Here, gaze-fixing means steering the camera's orientation so that a given point on the object always stays at the center of the image. This requires combining a function that analyzes image data with a function that controls the camera's orientation. This paper presents an algorithm for gaze-fixing in which image analysis and orientation control are designed within a single framework. To avoid implementation difficulties and to target real-time applications, the algorithm is designed as a simple closed form that uses no information related to camera calibration or structure estimation.
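
The paper derives its own calibration-free closed-form law, which is not reproduced here; the sketch below only illustrates the basic coupling between the image-plane error and incremental pan/tilt commands, using a simple proportional rule and an assumed gain.

```python
def pan_tilt_update(u, v, width, height, gain=0.001):
    """Proportional steering sketch: the pixel error from the image centre
    drives incremental pan/tilt commands so the target drifts toward the
    centre. This is an illustration of the image-to-control coupling only,
    not the paper's closed-form gaze-fixing law."""
    err_u = u - width / 2.0   # horizontal pixel error
    err_v = v - height / 2.0  # vertical pixel error
    d_pan = -gain * err_u     # rotate against the horizontal error
    d_tilt = -gain * err_v    # rotate against the vertical error
    return d_pan, d_tilt
```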

Real-Time Face Tracking Algorithm Robust to illumination Variations (조명 변화에 강인한 실시간 얼굴 추적 알고리즘)

  • Lee, Yong-Beom; You, Bum-Jae; Lee, Seong-Whan; Kim, Kwang-Bae
    • Proceedings of the KIEE Conference / 2000.07d / pp.3037-3040 / 2000
  • Real-time object tracking has emerged as an important component in several application areas, including machine vision, surveillance, human-computer interaction, and image-based control, and various algorithms have been developed over the years. In many cases, however, they have shown limited results in uncontrolled situations such as illumination changes or cluttered backgrounds. In this paper, we present a novel, computationally efficient algorithm for tracking a human face robustly under illumination changes and against cluttered backgrounds. Previous algorithms usually define the color model as a 2D membership function in a color space without considering illumination changes. Our new algorithm, however, constructs a 3D color model by analyzing a large number of images acquired under various illumination conditions. The algorithm is applied to a mobile head-eye robot and tested in various uncontrolled environments. It can track a human face at more than 100 frames per second, excluding image acquisition time.
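
As a rough illustration of color-model-based face tracking, the sketch below uses a classic 2D hue-saturation histogram with OpenCV's CamShift; the paper's illumination-robust 3D color model and its head-eye robot integration are not reproduced, and the initial face box is assumed to come from elsewhere (e.g. a detector).

```python
import cv2
import numpy as np


def track_face(video_source=0, init_box=(200, 150, 100, 100)):
    """Colour-histogram tracking with CamShift. `init_box` is an assumed
    initial face window; the 2D hue-saturation model here only approximates
    the paper's illumination-robust 3D colour model."""
    cap = cv2.VideoCapture(video_source)
    ok, frame = cap.read()
    if not ok:
        return
    x, y, w, h = init_box
    # Build a hue-saturation histogram of the initial face region.
    roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    box = init_box
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Back-project the histogram and let CamShift follow the dense region.
        prob = cv2.calcBackProject([hsv], [0, 1], hist, [0, 180, 0, 256], 1)
        rot_rect, box = cv2.CamShift(prob, box, term)
        cv2.polylines(frame, [np.int32(cv2.boxPoints(rot_rect))], True, (0, 255, 0), 2)
        cv2.imshow("face", frame)
        if cv2.waitKey(1) == 27:  # Esc to quit
            break
    cap.release()
```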


Design of Autonomous Stair Robot System (자율주행 형 계단 승하강용 로봇 시스템 설계)

  • 홍영호; 김동환; 임충혁
    • Journal of Institute of Control, Robotics and Systems / v.9 no.1 / pp.73-81 / 2003
  • An autonomous stair-climbing robot is introduced that recognizes stairs and climbs up and down them using robot vision, photo sensors, and an appropriate climbing algorithm. Four arms coupled with four wheels allow the robot to climb up and down more safely and faster than a simple track-type robot. The robot can adjust its wheelbase according to the stair width, so it can adapt to stairs of variable width with different algorithms for climbing up and down. The command and image data acquired from the robot are transferred to the main computer through RF wireless modules and then delivered to a remote computer over a network after suitable data compression, so real-time image monitoring is implemented effectively.
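
The monitoring link described above (compress frames, push them to a remote computer) can be sketched as below, using JPEG compression and a plain TCP socket as stand-ins for the robot's RF modules and network delivery; the host, port, and quality settings are placeholders.

```python
import socket
import struct

import cv2


def stream_frames(host="192.168.0.10", port=5000, camera_index=0, quality=70):
    """JPEG-compress camera frames and push them over TCP, length-prefixed.
    Host, port, and quality are illustrative; the robot in the paper uses its
    own RF-module link and compression settings."""
    cap = cv2.VideoCapture(camera_index)
    with socket.create_connection((host, port)) as sock:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            ok, jpeg = cv2.imencode(".jpg", frame,
                                    [int(cv2.IMWRITE_JPEG_QUALITY), quality])
            if not ok:
                continue
            payload = jpeg.tobytes()
            # 4-byte big-endian length prefix, then the compressed frame.
            sock.sendall(struct.pack(">I", len(payload)) + payload)
    cap.release()
```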

PCB Defects Detection using Connected Component Classification (연결 성분 분류를 이용한 PCB 결함 검출)

  • Jung, Min-Chul
    • Journal of the Semiconductor & Display Technology / v.10 no.1 / pp.113-118 / 2011
  • This paper proposes computer vision inspection algorithms for PCB defects found in a manufacturing process. The proposed method can detect open circuits and short circuits on a bare PCB without using any reference images. It performs adaptive threshold processing on the ROI (region of interest) of a target image and median filtering to remove noise, and then analyzes the connected components of the binary image. In this paper, the connected components of a circuit pattern are defined as six types. The proposed method classifies the connected components of the target image into these six types and determines an unclassified component to be a defect of the circuit. The analysis of the original target image detects open circuits, while the analysis of the complement image finds short circuits. The machine vision inspection system is implemented in C on an embedded Linux system for high-speed, real-time image processing. Experimental results show that the proposed algorithms are quite successful.
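
The pre-processing chain named in the abstract maps directly onto standard OpenCV calls, sketched below with illustrative parameter values; the six-type component classifier itself is not reproduced.

```python
import cv2


def binarize_and_label(gray_roi):
    """Adaptive thresholding, median filtering, and connected-component
    labelling, mirroring the pre-processing steps named in the abstract
    (block size, offset, and kernel size are illustrative)."""
    binary = cv2.adaptiveThreshold(gray_roi, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY, 31, 5)
    binary = cv2.medianBlur(binary, 5)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    # Each row of `stats` holds (x, y, width, height, area) for one component;
    # a type classifier over these components would flag unclassified ones as
    # open-circuit defects. Repeating the analysis on (255 - binary) would
    # target short circuits, as the abstract describes.
    return n, labels, stats, centroids
```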