• Title/Summary/Keyword: Track Recognition


Color Pattern Recognition and Tracking for Multi-Object Tracking in Artificial Intelligence Space (인공지능 공간상의 다중객체 구분을 위한 컬러 패턴 인식과 추적)

  • Tae-Seok Jin
    • Journal of the Korean Society of Industry Convergence / v.27 no.2_2 / pp.319-324 / 2024
  • In this paper, the Artificial Intelligence Space (AI-Space) for human-robot interfaces is presented, which can enable human-computer interfacing, networked camera conferencing, industrial monitoring, and service and training applications. We present a method for representing, tracking, and following objects (human, robot, chair) by fusing distributed multiple vision systems in AI-Space. The article presents the integration of color distributions into particle filtering. Particle filters provide a robust tracking framework under ambiguous conditions. We propose to track the moving objects (human, robot, chair) by generating hypotheses not in the image plane but on the top-view reconstruction of the scene.
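The color-distribution particle filter described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the random-walk motion model, the two-bin histograms, and the `observe_hist` callback (standing in for the fused camera views) are all assumptions.

```python
import math
import random

def bhattacharyya(p, q):
    # Similarity between two normalized color histograms (1.0 = identical).
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def particle_filter_step(particles, observe_hist, ref_hist, noise=1.0, rng=random):
    """One predict-weight-resample cycle on the top-view (x, y) plane.

    observe_hist(x, y) returns the color histogram measured around (x, y);
    it is supplied by the caller, standing in for the fused camera views.
    """
    # Predict: random-walk motion model on the ground plane.
    moved = [(x + rng.gauss(0, noise), y + rng.gauss(0, noise)) for x, y in particles]
    # Weight: color-distribution likelihood against the reference histogram.
    weights = [bhattacharyya(observe_hist(x, y), ref_hist) for x, y in moved]
    total = sum(weights)
    if total == 0:
        weights = [1.0 / len(moved)] * len(moved)
    else:
        weights = [w / total for w in weights]
    # State estimate: weighted mean of the predicted particles.
    est = (sum(w * x for w, (x, _) in zip(weights, moved)),
           sum(w * y for w, (_, y) in zip(weights, moved)))
    # Resample proportionally to weight.
    return rng.choices(moved, weights=weights, k=len(moved)), est
```

Because hypotheses live on the top-view plane rather than in any single image, the same filter can consume observations from several distributed cameras at once.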

Modular Neural Network Recognition System for Robot Endeffector Recognition (로봇 Endeffector 인식을 위한 다중 모듈 신경회로망 인식 시스템)

  • 신진욱;박동선
    • The Journal of Korean Institute of Communications and Information Sciences / v.29 no.5C / pp.618-626 / 2004
  • In this paper, we describe a robot endeffector recognition system based on Modular Neural Networks (MNN). The proposed recognition system can be used for a vision system which tracks a given object using a sequence of images from a camera unit. The main objective of the designed MNN is to precisely recognize the given robot endeffector and to minimize the processing time. Since the robot endeffector can be viewed in many different shapes in 3-D space, an MNN structure, which contains a set of feedforward neural networks, can be more attractive for recognizing the given object. Each single neural network learns the endeffector with a cluster of training patterns. The training patterns for each neural network share similar characteristics so that they can be easily trained. The trained MNN is less sensitive to noise and shows better performance in recognizing the endeffector. The recognition rate of the MNN is enhanced by 14% over a single neural network. A vision system with the MNN can precisely recognize the endeffector and place it at the center of a display for a remote operator.
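The modular structure described above, where each network specializes in one cluster of similar views and a gate routes inputs to the right specialist, can be sketched as below. The prototype-distance gating and all names are illustrative assumptions, not the paper's architecture.

```python
import math

class ModularRecognizer:
    """Minimal sketch of a modular recognizer: each module specializes in
    one cluster of endeffector views; a gate routes an input to the module
    whose training-cluster prototype is nearest."""

    def __init__(self):
        self.prototypes = []   # mean feature vector of each module's cluster
        self.modules = []      # one classifier (any callable) per cluster

    def add_module(self, cluster, classifier):
        # Prototype = centroid of the cluster's training feature vectors.
        n, dim = len(cluster), len(cluster[0])
        self.prototypes.append([sum(v[i] for v in cluster) / n for i in range(dim)])
        self.modules.append(classifier)

    def predict(self, x):
        # Gate: pick the specialist trained on the most similar views.
        dists = [math.dist(x, p) for p in self.prototypes]
        return self.modules[dists.index(min(dists))](x)
```

Because each module only ever sees one cluster of similar shapes, its training problem stays small, which is the intuition behind the reported speed and robustness gains.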

A Comparison of Gesture Recognition Performance Based on Feature Spaces of Angle, Velocity and Location in HMM Model (HMM인식기 상에서 방향, 속도 및 공간 특징량에 따른 제스처 인식 성능 비교)

  • 윤호섭;양현승
    • Journal of KIISE:Software and Applications / v.30 no.5_6 / pp.430-443 / 2003
  • The objective of this paper is to evaluate the most useful feature vector space using angle, velocity, and location features from gesture trajectories, which are obtained by extracting hand regions from consecutive input images and tracking them by connecting their positions. For this purpose, a gesture tracking algorithm using color and motion information is developed. The recognition module is an HMM, which can adapt to time-varying data. The proposed algorithm was applied to a database containing 4,800 alphabetical handwriting gestures of 20 persons, each of whom was asked to draw his/her handwriting gestures five times for each of the 48 characters.
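The three feature families compared above (angle, velocity, location) can all be derived from the same tracked trajectory of hand centers. A rough sketch of that extraction step is below; the grid quantization for the location feature is an illustrative choice, not the paper's codebook, and the HMM consuming these features is omitted.

```python
import math

def trajectory_features(points, grid=4, size=256):
    """Per-step angle, velocity, and quantized location codes from a hand
    trajectory: a list of (x, y) centers from successive frames."""
    feats = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        angle = math.degrees(math.atan2(dy, dx)) % 360   # direction feature
        velocity = math.hypot(dx, dy)                    # speed feature
        # Location feature: index of the grid cell containing the point,
        # assuming a size x size image split into grid x grid cells.
        cell = int(y1 * grid // size) * grid + int(x1 * grid // size)
        feats.append((angle, velocity, cell))
    return feats
```

Feeding each feature stream to its own HMM and comparing recognition rates is then what lets the paper rank the three feature spaces.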

A Robust Fingertip Extraction and Extended CAMSHIFT based Hand Gesture Recognition for Natural Human-like Human-Robot Interaction (강인한 손가락 끝 추출과 확장된 CAMSHIFT 알고리즘을 이용한 자연스러운 Human-Robot Interaction을 위한 손동작 인식)

  • Lee, Lae-Kyoung;An, Su-Yong;Oh, Se-Young
    • Journal of Institute of Control, Robotics and Systems / v.18 no.4 / pp.328-336 / 2012
  • In this paper, we propose a robust fingertip extraction and extended Continuously Adaptive Mean Shift (CAMSHIFT) based robust hand gesture recognition for natural human-like HRI (Human-Robot Interaction). Firstly, for efficient and rapid hand detection, the hand candidate regions are segmented by a combination of a robust $YC_bC_r$ skin color model and Haar-like feature-based AdaBoost. Using the extracted hand candidate regions, we estimate the palm region and fingertip position from distance-transformation-based voting and geometrical features of hands. From the hand orientation and palm center position, we find the optimal fingertip position and its orientation. Then, using extended CAMSHIFT, we reliably track the 2D hand gesture trajectory with the extracted fingertip. Finally, we apply conditional density propagation (CONDENSATION) to recognize the pre-defined temporal motion trajectories. Experimental results show that the proposed algorithm not only rapidly extracts the hand region with an accurately extracted fingertip and its angle but also robustly tracks the hand under different illumination, size, and rotation conditions. Using these results, we successfully recognize multiple hand gestures.
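At the core of CAMSHIFT is a mean-shift update of the tracking window over a skin-probability map, with the window size re-adapted from the zeroth image moment each iteration. The sketch below shows one such update on a plain 2D list; it is a simplified stand-in for the paper's extended CAMSHIFT, and the window-size rule is an assumption.

```python
def mean_shift_step(prob, cx, cy, w, h):
    """One mean-shift update of a (cx, cy, w, h) tracking window over a
    skin-probability map `prob` (2D list of floats in [0, 1])."""
    x0, x1 = max(0, cx - w // 2), min(len(prob[0]), cx + w // 2 + 1)
    y0, y1 = max(0, cy - h // 2), min(len(prob), cy + h // 2 + 1)
    m00 = m10 = m01 = 0.0
    for y in range(y0, y1):
        for x in range(x0, x1):
            p = prob[y][x]
            m00 += p          # zeroth moment: total probability mass
            m10 += x * p      # first moments give the centroid
            m01 += y * p
    if m00 == 0:
        return cx, cy, w, h   # no skin mass under the window: stay put
    ncx, ncy = round(m10 / m00), round(m01 / m00)
    # CAMSHIFT-style adaptation: window size tracks the probability mass.
    s = max(2, round(2 * (m00 ** 0.5)))
    return ncx, ncy, s, s
```

Iterating this step until the centroid stops moving, then re-running it each frame, gives the continuously adaptive tracker the abstract builds on.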

Hand Gesture Sequence Recognition using Morphological Chain Code Edge Vector (형태론적 체인코드 에지벡터를 이용한 핸드 제스처 시퀀스 인식)

  • Lee Kang-Ho;Choi Jong-Ho
    • Journal of the Korea Society of Computer and Information / v.9 no.4 s.32 / pp.85-91 / 2004
  • The use of gestures provides an attractive alternative to cumbersome interface devices for human-computer interaction. This has motivated a very active research area concerned with computer vision-based analysis and interpretation of hand gestures. The most important issues in gesture recognition are the simplification of the algorithm and the reduction of processing time. Mathematical morphology, based on geometrical set theory, is well suited to this processing. The key idea of the proposed algorithm is to track a trajectory of center points in primitive elements extracted by morphological shape decomposition. The trajectory of morphological center points includes information on shape orientation. Based on this characteristic, we propose a morphological gesture sequence recognition algorithm using feature vectors calculated from the trajectory of morphological center points. Through experiments, we demonstrate the efficiency of the proposed algorithm.
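Turning a trajectory of center points into a chain-code edge vector amounts to quantizing each step's direction into one of eight codes. A minimal sketch (the standard 8-direction Freeman chain code, used here as an illustration of the idea rather than the paper's exact encoding):

```python
import math

def chain_code(points):
    """8-direction Freeman chain code of a center-point trajectory:
    each step is quantized to the nearest of 8 directions
    (0 = +x axis, counting toward the +y axis)."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        ang = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        codes.append(round(ang / (math.pi / 4)) % 8)
    return codes
```

The resulting short integer sequence is cheap to match against stored gesture templates, which is where the claimed reduction in processing time comes from.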


Using Non-Local Features to Improve Named Entity Recognition Recall

  • Mao, Xinnian;Xu, Wei;Dong, Yuan;He, Saike;Wang, Haila
    • Proceedings of the Korean Society for Language and Information Conference / 2007.11a / pp.303-310 / 2007
  • Named Entity Recognition (NER) is always limited by its lower recall resulting from the asymmetric data distribution, where the NONE class dominates the entity classes. This paper presents an approach that exploits non-local information to improve NER recall. Several kinds of non-local features encoding entity token occurrence, entity boundary, and entity class are explored under the Conditional Random Fields (CRFs) framework. Experiments on the SIGHAN 2006 MSRA (CityU) corpus indicate that non-local features can effectively enhance the recall of state-of-the-art NER systems. Incorporating the non-local features into NER systems that use local features alone, our best system achieves a 23.56% (25.26%) relative error reduction on recall and a 17.10% (11.36%) relative error reduction on the F1 score; the improved F1 score of 89.38% (90.09%) is significantly superior to the best NER system, with an F1 of 86.51% (89.03%), that participated in the closed track.
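The idea of a non-local feature is that evidence about a token can come from anywhere in the document, not just its local window. The sketch below adds document-level token-occurrence features of that general kind on top of whatever local features a CRF already uses; the feature names are illustrative, not the paper's feature templates.

```python
from collections import Counter

def nonlocal_token_features(doc_tokens):
    """For each token, compute document-level (non-local) features to be
    merged with a CRF's local features: how often the surface form occurs
    in the whole document, and whether it recurs elsewhere."""
    counts = Counter(doc_tokens)
    feats = []
    for tok in doc_tokens:
        feats.append({
            "token": tok,
            "doc_count": counts[tok],                # entity-token occurrence
            "repeats_elsewhere": counts[tok] > 1,    # same surface form recurs
        })
    return feats
```

A recurring surface form that was labeled as an entity once is likely an entity at its other mentions too, which is exactly the signal that lifts recall on the dominated entity classes.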


Development of Color Recognition Algorithm for Traffic Lights using Deep Learning Data (딥러닝 데이터 활용한 신호등 색 인식 알고리즘 개발)

  • Baek, Seoha;Kim, Jongho;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association / v.14 no.2 / pp.45-50 / 2022
  • Vehicle motion in an urban environment is determined by the surrounding traffic flow, which makes understanding the flow a factor that dominantly affects the motion planning of the vehicle. The traffic flow in this urban environment is assessed using various urban infrastructure information. This paper presents a color recognition algorithm for traffic lights to perceive the traffic condition, a key item among the various urban infrastructure information. Deep learning-based open-source vision software localizes the traffic lights around the host vehicle. The detections are processed into input data based on whether they lie on the route of the ego vehicle. The colors of the traffic lights are estimated from pixel values in the camera image. The proposed algorithm is validated in intersection situations with traffic lights on the test track. The results show that the proposed algorithm guarantees precise recognition of traffic lights associated with the ego vehicle's path in urban intersection scenarios.
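Estimating a light's color from pixel values can be as simple as comparing mean channel intensities inside the detected bounding box. The sketch below is a simplified stand-in for the paper's estimation step; the channel-ratio thresholds are assumptions.

```python
def classify_light_color(pixels):
    """Classify a traffic-light ROI as red/yellow/green from its list of
    (R, G, B) pixel tuples by comparing mean channel values."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) / n
    g = sum(p[1] for p in pixels) / n
    b = sum(p[2] for p in pixels) / n
    if r > 1.5 * g:
        return "red"
    if g > 1.5 * r:
        return "green"
    if r > 1.2 * b and g > 1.2 * b:   # red and green both high -> yellow
        return "yellow"
    return "unknown"
```

In practice the ROI comes from the deep-learning detector, and only detections gated to the ego vehicle's route are classified, so a neighboring lane's light cannot override the relevant one.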

The Bullet Launcher with A Pneumatic System to Detect Objects by Unique Markers

  • Jasmine Aulia;Zahrah Radila;Zaenal Afif Azhary;Aulia M. T. Nasution;Detak Yan Pratama;Katherin Indriawati;Iyon Titok Sugiarto;Wildan Panji Tresna
    • Journal of information and communication convergence engineering / v.21 no.3 / pp.252-260 / 2023
  • A bullet launcher can be developed as a smart instrument, especially for use in the military sector, that can track, identify, detect, mark, lock, and shoot a target by implementing an image-processing system. In this research, the application of an object recognition system, laser encoding as a unique marker, 2-dimensional movement, and a pneumatic shooter has been studied intensively. The results showed that the object recognition system could detect various colors, patterns, sizes, and laser blinking. The average error of the object distance measured using the camera is ±4%, ±5%, and ±6% for circle, square, and triangle forms, respectively. Meanwhile, the average shot accuracy on objects is 95.24% and 85.71% in indoor and outdoor conditions, respectively. The average prototype response time is 1.11 s. Moreover, the highest shooting accuracy at 50 cm was 98.32%.

Study on the Real-time COVID-19 Confirmed Case Web Monitoring System (실시간 코로나19 확진자 웹 모니터링 시스템에 대한 연구)

  • You, Youngkyon;Jo, Seonguk;Ko, Dongbeom;Park, Jeongmin
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.1 / pp.171-179 / 2022
  • This paper introduces a monitoring and tracking system for confirmed COVID-19 patients based on data collected by installing a device that can manage the access list at the entrance of each building on campus. The existing QR-based electronic access list cannot measure the temperature of the person entering the building, and it is inconvenient that members have to scan their QR codes with a smartphone. In addition, when the state manages information about confirmed patients and contacts on campus, it is not easy for members to quickly share and track that information. These gaps could allow a person in close contact with an infected person to become another patient before being identified. Therefore, this paper introduces a device using a face recognition library and a temperature sensor installed at the entrance of each building on campus, enabling the administrator to monitor access status and quickly track the members of each building in real time.

Research on Drivable Road Area Recognition and Real-Time Tracking Techniques Based on YOLOv8 Algorithm (YOLOv8 알고리즘 기반의 주행 가능한 도로 영역 인식과 실시간 추적 기법에 관한 연구)

  • Jung-Hee Seo
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.3 / pp.563-570 / 2024
  • This paper proposes a method to recognize and track drivable lane areas to assist the driver. The main topic is designing a deep learning-based network that predicts drivable road areas using computer vision and deep learning technology, based on images acquired in real time through a camera installed at the center of the windshield inside the vehicle. This study aims to develop a new model trained with data obtained directly from cameras using the YOLO algorithm. It is expected to assist the driver by visualizing the exact location of the vehicle on the actual road, consistent with the camera image, and by displaying and tracking the drivable lane area. As a result of the experiment, it was possible to track the drivable road area in most cases, but in bad weather, such as heavy rain at night, there were cases where lanes were not accurately recognized, so improvements in model performance are needed to solve this problem.
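Once a segmentation model (such as a YOLOv8-seg variant) yields a binary drivable-area mask, visualizing the vehicle's position within it reduces to simple mask geometry. The helper below is an illustrative sketch under that assumption, not the paper's pipeline: it reports how far the drivable region's centroid sits from the image center, a cue for drawing the vehicle's lateral position.

```python
def drivable_center_offset(mask):
    """Given a binary drivable-area mask (rows of 0/1, e.g. a thresholded
    segmentation output), return the horizontal offset of the drivable
    region's centroid from the image center, in pixels.
    Positive means the drivable area lies to the right."""
    w = len(mask[0])
    xs = [x for row in mask for x, v in enumerate(row) if v]
    if not xs:
        return None   # nothing recognized as drivable in this frame
    centroid = sum(xs) / len(xs)
    return centroid - (w - 1) / 2
```

Frames where the function returns None, or where the offset jumps abruptly, correspond to the failure cases the abstract mentions (e.g. heavy rain at night) and can be flagged instead of displayed.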