• Title/Summary/Keyword: Vision-based Tracking (비전 기반 추적)

135 search results

Deep Learning Based Autonomous-Driving Cart Using ROS for Computation Offloading (컴퓨팅 계산 오프로딩 위해 ROS를 사용한 딥러닝 기반의 자율주행카트)

  • Han, Jisu;Park, Ji-Yoon;Kim, Chae-won;Park, Sang-soo;Kim, Hieonn
    • Proceedings of the Korea Information Processing Society Conference / 2021.05a / pp.100-103 / 2021
  • Attempts to combine IoT and artificial intelligence have made considerable progress in recent years. To overcome the limited computing power of small IoT devices, this paper proposes a technique that offloads complex computations over wireless communication using ROS. The proposed autonomous-driving cart system detects and tracks each individual customer using the cart by means of computer vision algorithms and a LiDAR sensor, and applies a speech recognition algorithm to implement a converged autonomous-driving cart capable of emotionally engineered communication between human and machine.
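
A minimal sketch of the offloading idea described in this entry, assuming hypothetical topic names (/cart/image_raw/compressed, /cart/cmd_vel) and a stubbed detector; the paper's actual nodes, vision models, and LiDAR fusion are not reproduced here. The heavy inference runs on a remote ROS node, while the cart itself only publishes camera frames and executes the returned velocity commands.

```python
# Sketch of ROS-based computation offloading: the IoT cart publishes frames,
# this node (running on a powerful host) does the heavy work and sends commands.
import rospy
from sensor_msgs.msg import CompressedImage
from geometry_msgs.msg import Twist

class OffloadServer:
    def __init__(self):
        rospy.init_node("offload_server")
        self.cmd_pub = rospy.Publisher("/cart/cmd_vel", Twist, queue_size=1)
        rospy.Subscriber("/cart/image_raw/compressed", CompressedImage,
                         self.on_frame, queue_size=1, buff_size=2**22)

    def on_frame(self, msg):
        # Placeholder for person detection/tracking on the received JPEG frame.
        target_offset = self.detect_target(msg.data)   # hypothetical helper
        cmd = Twist()
        cmd.linear.x = 0.3                      # follow at a fixed forward speed
        cmd.angular.z = -0.005 * target_offset  # steer toward the tracked person
        self.cmd_pub.publish(cmd)

    def detect_target(self, jpeg_bytes):
        return 0.0  # stub: horizontal pixel offset of the tracked customer

if __name__ == "__main__":
    OffloadServer()
    rospy.spin()
```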

Automatic Collection of Production Performance Data Based on Multi-Object Tracking Algorithms (다중 객체 추적 알고리즘을 이용한 가공품 흐름 정보 기반 생산 실적 데이터 자동 수집)

  • Lim, Hyuna;Oh, Seojeong;Son, Hyeongjun;Oh, Yosep
    • The Journal of Society for e-Business Studies / v.27 no.2 / pp.205-218 / 2022
  • Digital transformation in manufacturing has been accelerating recently, and as a result, technologies for collecting data from the shop floor are becoming increasingly important. Existing approaches focus primarily on obtaining specific manufacturing data using various sensors and communication technologies. To expand the channels of field data collection, this study proposes a method for automatically collecting manufacturing data based on vision-based artificial intelligence: real-time image information is analyzed with object detection and tracking technologies to obtain manufacturing data. The research team collects object motion information for each frame by applying YOLO (You Only Look Once) and DeepSORT as the object detection and tracking algorithms. The motion information is then converted into two kinds of manufacturing data (production performance and production time) through post-processing. A dynamically moving factory model is created to obtain training data for deep learning, and operating scenarios are proposed to reproduce real-world shop-floor situations. The operating scenario assumes a flow shop consisting of six facilities. When manufacturing data were collected according to these operating scenarios, the accuracy was 96.3%.
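
As a rough illustration of the post-processing step (track data to production performance and time), the sketch below assumes a simplified record layout of (frame index, track ID, x-center) and a single exit line per facility; the frame rate and line position are invented values, not the paper's configuration.

```python
# Turn per-frame tracking output (e.g. from YOLO + DeepSORT) into a production
# count and per-part cycle times via a simple line-crossing rule.
from collections import defaultdict

FPS = 30            # assumed camera frame rate
EXIT_LINE_X = 900   # assumed pixel x-coordinate where parts leave a facility

def production_from_tracks(records):
    """records: iterable of (frame_idx, track_id, x_center) tuples."""
    first_seen, crossed_at = {}, {}
    last_x = defaultdict(lambda: None)
    for frame_idx, track_id, x in sorted(records):
        if track_id not in first_seen:
            first_seen[track_id] = frame_idx
        prev = last_x[track_id]
        # Count a part once, when its center crosses the exit line left-to-right.
        if prev is not None and prev < EXIT_LINE_X <= x and track_id not in crossed_at:
            crossed_at[track_id] = frame_idx
        last_x[track_id] = x
    counts = len(crossed_at)
    cycle_times = [(crossed_at[t] - first_seen[t]) / FPS for t in crossed_at]
    return counts, cycle_times

# Example: two tracked parts, one of which crosses the exit line.
demo = [(0, 1, 100), (30, 1, 800), (60, 1, 950), (0, 2, 50), (30, 2, 400)]
print(production_from_tracks(demo))   # -> (1, [2.0])
```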

A Study on Abalone Young Shells Counting System using Machine Vision (머신비전을 이용한 전복 치패 계수에 관한 연구)

  • Park, Kyung-min;Ahn, Byeong-Won;Park, Young-San;Bae, Cherl-O
    • Journal of the Korean Society of Marine Environment & Safety / v.23 no.4 / pp.415-420 / 2017
  • In this paper, an algorithm for counting objects on a conveyor system using machine vision is proposed. Object counting systems based on image processing have been applied in a variety of industries, for example to measure floating populations and traffic volume. The methods most commonly used for object counting rely on template matching and machine learning for detection and tracking, but their processing time must be short enough to detect objects on a quickly moving conveyor belt. To satisfy this requirement, the proposed image-processing algorithm is a region-based method. In the experiment, young abalone shells that are similar in shape, size, and color were counted on a conveyor system moving in one direction. The algorithm obtains information on objects in the region of interest and compares it with the continuously changing second frame, counting an object when the information from the first and second images matches. The count is exact when the young shells are evenly spaced without overlap, and missed objects are estimated from size information when objects move without extra space between them. The proposed algorithm can be applied to various object-counting controls on conveyor systems.
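
The region-based idea lends itself to a short OpenCV sketch: segment the region of interest, take connected components, and correct the count by blob size when shells touch. The ROI, area thresholds, and the reduction of the frame-to-frame matching step to per-frame blob analysis are all assumptions for illustration.

```python
# Region-based counting in a conveyor ROI: threshold, connected components,
# and a size-based correction for touching shells.
import cv2
import numpy as np

ROI = (200, 0, 200, 480)          # x, y, w, h of the counting region (assumed)
MIN_AREA, SINGLE_AREA = 150, 600  # pixels; would be calibrated per installation

def count_in_frame(frame_bgr):
    x, y, w, h = ROI
    gray = cv2.cvtColor(frame_bgr[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    count = 0
    for label in range(1, n):                  # label 0 is the background
        area = stats[label, cv2.CC_STAT_AREA]
        if area < MIN_AREA:
            continue                           # ignore noise blobs
        # Touching shells form one large blob; estimate how many it contains.
        count += max(1, int(round(area / SINGLE_AREA)))
    return count
```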

Vision-based Real-time Vehicle Detection and Tracking Algorithm for Forward Collision Warning (전방 추돌 경보를 위한 영상 기반 실시간 차량 검출 및 추적 알고리즘)

  • Hong, Sunghoon;Park, Daejin
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.7 / pp.962-970 / 2021
  • The majority of vehicle accidents are caused by driver inattention, such as drowsy driving. A forward collision warning system (FCWS) can significantly reduce the number and severity of accidents by detecting the risk of collision with the vehicle ahead and giving the driver an advance warning signal. This paper describes an FCWS for driving safety based on a low-power embedded system. The algorithm computes the time to collision (TTC) from detection, tracking, and distance calculation for the vehicle ahead, together with the current vehicle speed, using a single camera. In addition, high- and low-level program optimization techniques are introduced so that the system can operate in real time even on a low-performance embedded platform. The system was tested on the embedded system with recorded driving video; with the optimization techniques applied, execution was about 170 times faster than the non-optimized version.
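
A minimal sketch of the TTC computation described here, assuming a pinhole-camera distance estimate from the detected vehicle's bounding-box width; the focal length, nominal vehicle width, and warning threshold are invented calibration values.

```python
# TTC = distance to the vehicle ahead / closing speed, with distance estimated
# from the apparent width of the detected vehicle in a single camera image.
FOCAL_PX = 1000.0        # camera focal length in pixels (assumed calibration)
VEHICLE_WIDTH_M = 1.8    # assumed real-world width of a passenger car

def distance_m(bbox_width_px):
    """Monocular distance estimate: d = f * W / w."""
    return FOCAL_PX * VEHICLE_WIDTH_M / bbox_width_px

def time_to_collision(bbox_width_px, closing_speed_mps):
    """Return TTC in seconds, or None if the gap is not closing."""
    if closing_speed_mps <= 0:
        return None
    return distance_m(bbox_width_px) / closing_speed_mps

# Example: a 90 px wide vehicle ahead while closing at 10 m/s -> TTC of 2.0 s.
ttc = time_to_collision(90, 10.0)
if ttc is not None and ttc < 2.5:
    print(f"Forward collision warning: TTC = {ttc:.1f} s")
```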

The Basic Position Tracking Technology of Power Connector Receptacle based on the Image Recognition (영상인식 기반 파워 컨넥터 리셉터클의 위치 확인을 위한 기초 연구)

  • Ko, Yun-Seok
    • The Journal of the Korea institute of electronic communication sciences / v.12 no.2 / pp.309-314 / 2017
  • Recently, fields such as service robots, autonomous electric vehicles, and torpedo ladle cars operated autonomously to improve the efficiency of steel-mill management have been receiving great attention. However, developing an automatic power supply that requires no human intervention remains a problem. In this paper, a computer-vision-based technique for tracking the position of a power connector receptacle is studied, which can recognize the receptacle and identify its position; its feasibility is finally verified using the OpenCV library.
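
One simple way to realize the receptacle position check with OpenCV is template matching, sketched below; the template image, file names, and acceptance threshold are assumptions rather than the paper's verified procedure.

```python
# Locate the power-connector receptacle in a camera frame by template matching.
import cv2

def find_receptacle(frame_path, template_path, threshold=0.8):
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None                     # receptacle not confidently found
    h, w = template.shape
    cx, cy = max_loc[0] + w // 2, max_loc[1] + h // 2
    return cx, cy, max_val              # center pixel and match score

# print(find_receptacle("dock_view.png", "receptacle_template.png"))
```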

Gaze Detection by Computing Facial and Eye Movement (얼굴 및 눈동자 움직임에 의한 시선 위치 추적)

  • Park, Kang-Ryoung (박강령)
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.2 / pp.79-88 / 2004
  • Gaze detection is the task of locating, by computer vision, the position on a monitor screen at which a user is looking. Gaze detection systems have numerous fields of application, such as man-machine interfaces that help the handicapped use computers and view control in three-dimensional simulation programs. In this work, we implement gaze detection as a computer vision system with a single IR-LED-based camera. To detect the gaze position, we locate facial features, which is performed effectively with the IR-LED-based camera and an SVM (Support Vector Machine). When a user gazes at a position on the monitor, we compute the 3D positions of those features based on 3D rotation and translation estimation and an affine transform. The gaze position due to facial movement is then computed from the normal vector of the plane determined by the computed 3D feature positions. In addition, a trained neural network is used to detect the gaze position due to eye movement. Experimental results show that the facial and eye gaze position on the monitor can be obtained with an RMS error of about 4.8 cm between the computed positions and the real ones.
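
A minimal sketch of the facial-gaze step described above: take three of the computed 3D facial feature positions, use the normal of the plane they define as the gaze direction, and intersect that ray with the monitor plane. The coordinates and the screen-plane convention (z = 0) are made-up illustrations.

```python
# Facial gaze from the normal vector of the plane through three 3D features.
import numpy as np

def face_plane_normal(p1, p2, p3):
    """Unit normal of the plane through three 3D feature points."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

def gaze_point_on_monitor(face_center, normal, screen_z=0.0):
    """Intersect the gaze ray (face_center + t * normal) with the plane z = screen_z."""
    t = (screen_z - face_center[2]) / normal[2]   # assumes the face is turned toward the screen
    hit = np.asarray(face_center) + t * normal
    return hit[:2]                                # (x, y) on the monitor plane

# Made-up feature coordinates in cm: two eye corners and the mouth center.
eye_l, eye_r, mouth = (-3, 1, 50), (3, 1, 50), (0, -4, 52)
n = face_plane_normal(eye_l, eye_r, mouth)
print(gaze_point_on_monitor(np.array([0.0, 0.0, 50.0]), n))
```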

A Speaker Detection System based on Stereo Vision and Audio (스테레오 시청각 기반의 화자 검출 시스템)

  • An, Jun-Ho;Hong, Kwang-Seok
    • Journal of Internet Computing and Services / v.11 no.6 / pp.21-29 / 2010
  • In this paper, we propose a system that detects the person currently speaking among a number of users. The proposed speaker detection system based on stereo vision and audio consists mainly of the following: position estimation of speaker candidates using a stereo camera and microphones, detection of the current speaker, and acquisition of speaker information on a mobile device. We use Haar-like features and the AdaBoost algorithm to detect the faces of speaker candidates with the stereo camera, and the positions of the candidates are estimated by triangulation. Next, the time delay of arrival (TDOA) is estimated by cross power spectrum phase (CPSP) analysis to find the direction of the sound source with two microphones. Finally, we acquire the speaker's information, including position, voice, and face, by comparing the information from the stereo camera with that from the two microphones. Furthermore, the proposed system includes a TCP client/server connection method for the mobile service.
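
The CPSP (cross power spectrum phase, also known as GCC-PHAT) step for the two-microphone TDOA estimate can be sketched in a few lines of NumPy; the sample rate and the synthetic test signal below are assumptions.

```python
# TDOA estimation via cross power spectrum phase (GCC-PHAT).
import numpy as np

def cpsp_tdoa(sig_a, sig_b, fs):
    """Delay (in seconds) by which sig_b lags sig_a."""
    n = len(sig_a) + len(sig_b)
    A = np.fft.rfft(sig_a, n=n)
    B = np.fft.rfft(sig_b, n=n)
    cross = np.conj(A) * B
    cross /= np.abs(cross) + 1e-12              # PHAT weighting: keep phase only
    corr = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    corr = np.concatenate((corr[-max_shift:], corr[:max_shift + 1]))
    lag = np.argmax(np.abs(corr)) - max_shift
    return lag / fs

# Synthetic check: the same noise burst arriving 40 samples later at mic B.
fs, delay = 16000, 40
rng = np.random.default_rng(0)
burst = rng.standard_normal(2048)
mic_a = np.concatenate((burst, np.zeros(200)))
mic_b = np.concatenate((np.zeros(delay), burst, np.zeros(200 - delay)))
print(cpsp_tdoa(mic_a, mic_b, fs))              # approximately 40 / 16000 = 0.0025 s
```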

The design and implementation of Object-based bioimage matching on a Mobile Device (모바일 장치기반의 바이오 객체 이미지 매칭 시스템 설계 및 구현)

  • Park, Chanil;Moon, Seung-jin
    • Journal of Internet Computing and Services / v.20 no.6 / pp.1-10 / 2019
  • Object-based image matching algorithms have been widely used in the image processing and computer vision fields, and a variety of applications based on them have recently been developed for object recognition, 3D modeling, video tracking, and biomedical informatics. One prominent example of an image matching feature is the Scale Invariant Feature Transform (SIFT) scheme. However, many applications using the SIFT algorithm have been implemented on a stand-alone basis rather than in a client-server architecture. In this paper, we implement a client-server system that uses the SIFT algorithm to identify and match objects in biomedical images and provide useful information to the user on a recently released mobile platform. The main methodological contribution of this work is leveraging the convenient user interface and ubiquitous Internet connection of the mobile device for interactive delineation, segmentation, representation, matching, and retrieval of biomedical images. With these technologies, the paper showcases examples of reliable image matching across different views of an object in semantic image search applications for biomedical informatics.
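
A minimal sketch of what the server-side matching step might look like with OpenCV's SIFT implementation and Lowe's ratio test; the file names and acceptance threshold are assumptions, not the paper's implementation.

```python
# Match a query bioimage from the mobile client against a stored reference
# image using SIFT keypoints and Lowe's ratio test.
import cv2

def match_bioimages(query_path, reference_path, min_good_matches=20):
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    kp_q, des_q = sift.detectAndCompute(query, None)
    kp_r, des_r = sift.detectAndCompute(reference, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_q, des_r, k=2)
            if m.distance < 0.75 * n.distance]      # Lowe's ratio test
    return len(good) >= min_good_matches, len(good)

# matched, n_good = match_bioimages("client_upload.jpg", "reference_cell.jpg")
```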

A study on implementation of background subtraction algorithm using LMS algorithm and performance comparative analysis (LMS algorithm을 이용한 배경분리 알고리즘 구현 및 성능 비교에 관한 연구)

  • Kim, Hyun-Jun;Gwun, Taek-Gu;Joo, Yank-Ick;Seo, Dong-Hoan
    • Journal of Advanced Marine Engineering and Technology / v.39 no.1 / pp.94-98 / 2015
  • Recently, with the rapid advancement of information and computer vision technology, CCTV systems using object recognition and tracking have been studied in a variety of fields. However, it is difficult to recognize objects precisely outdoors because pixel values vary with moving background elements such as shadows, lighting changes, and other moving parts of the scene. To adapt the background model to outdoor conditions, this paper analyzes a variety of background models and proposes a background update algorithm based on a weight factor. The experimental results show that, with the proposed algorithm, the accuracy of object detection is maintained and the number of misrecognized objects is reduced compared to the previous study.
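
A weight-factor-based background update can be sketched as a running average in which each new frame nudges the background estimate by a small weight; the weight and the foreground threshold below are assumed values, and the paper's full comparison of background models is not reproduced.

```python
# Weighted background update and simple foreground segmentation.
import cv2
import numpy as np

ALPHA = 0.01      # adaptation weight: larger values track lighting changes faster
THRESH = 30       # gray-level difference that counts as foreground

def update_and_segment(frame_bgr, background):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if background is None:
        # First frame initializes the background model.
        return gray, np.zeros(gray.shape, dtype=np.uint8)
    foreground = (np.abs(gray - background) > THRESH).astype(np.uint8) * 255
    # Weighted update: B <- (1 - alpha) * B + alpha * current frame.
    background = (1.0 - ALPHA) * background + ALPHA * gray
    return background, foreground
```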

Odor Cognition and Source Tracking of an Intelligent Robot based upon Wireless Sensor Network (센서 네트워크 기반 지능 로봇의 냄새 인식 및 추적)

  • Lee, Jae-Yeon;Kang, Geun-Taek;Lee, Won-Chang
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.1 / pp.49-54 / 2011
  • In this paper, we present a mobile robot that can recognize chemical odors, measure their concentration, and track their source indoors. The mobile robot has an olfactory function that classifies several gases used in the experiments, such as ammonia, ethanol, and their mixture, with a neural network algorithm, and measures the concentration of each gas with fuzzy rules. In addition, it can not only navigate to a desired position while avoiding obstacles using its vision system, but also transmit odor information and warning messages obtained during operation to other nodes through multi-hop communication in the wireless sensor network. We propose methods for odor classification, concentration measurement, and source tracking for a mobile robot in a wireless sensor network, using a hybrid algorithm that combines the vision system and gas sensors. The experimental studies demonstrate the efficiency of the proposed algorithm for odor recognition, concentration measurement, and source tracking.
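
As a rough illustration of one plausible source-tracking rule consistent with this setup, the sketch below steers the robot toward the sensor-network node reporting the highest concentration; the node positions and readings are invented, and the paper additionally combines such cues with vision-based obstacle avoidance.

```python
# Pick the next waypoint as the node with the highest reported gas concentration.
import math

def next_waypoint(robot_xy, node_reports):
    """node_reports: list of ((x, y), concentration_ppm) from the sensor network."""
    (tx, ty), _ = max(node_reports, key=lambda r: r[1])
    heading = math.atan2(ty - robot_xy[1], tx - robot_xy[0])
    return (tx, ty), heading

nodes = [((0.0, 0.0), 12.0), ((4.0, 1.0), 55.0), ((2.0, 5.0), 30.0)]
target, heading = next_waypoint((1.0, 1.0), nodes)
print(target, round(math.degrees(heading), 1))   # (4.0, 1.0) 0.0
```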