• Title/Summary/Keyword: video tracking

Real-Time Landmark Detection using Fast Fourier Transform in Surveillance (서베일런스에서 고속 푸리에 변환을 이용한 실시간 특징점 검출)

  • Kang, Sung-Kwan;Park, Yang-Jae;Chung, Kyung-Yong;Rim, Kee-Wook;Lee, Jung-Hyun
    • Journal of Digital Convergence / v.10 no.7 / pp.123-128 / 2012
  • In this paper, we propose a landmark-detection system for more accurate object recognition. The system is divided into a learning stage and a detection stage. The learning stage builds an interest-region model that defines a search region for each landmark, as prior information for the detection stage, and trains a detector for each landmark to find it within its search region. The detection stage then uses the interest-region model to set up the search region for each landmark in an input image. The proposed system uses the Fast Fourier Transform because it makes landmark detection fast, and the system is less likely to lose track of objects. The proposed method was applied to surveillance video, and the system tracked stably even when target objects moved at irregular speeds. Experimental results on various data sets show that the proposed approach outperforms previous methods.
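The abstract does not detail how the FFT is used, but a common way to get fast landmark/template matching is cross-correlation computed in the frequency domain. A minimal sketch under that assumption, with `region` and `template` as hypothetical grayscale NumPy arrays (template smaller than the region):

```python
import numpy as np

def fft_match(region, template):
    """Locate a template inside a search region via FFT-based cross-correlation
    (correlation theorem: corr = IFFT(FFT(region) * conj(FFT(template)))).
    Returns the (row, col) offset of the best match."""
    padded = np.zeros_like(region, dtype=np.float64)
    th, tw = template.shape
    padded[:th, :tw] = template - template.mean()   # zero-pad to align the spectra
    r = region - region.mean()
    corr = np.fft.ifft2(np.fft.fft2(r) * np.conj(np.fft.fft2(padded))).real
    return np.unravel_index(np.argmax(corr), corr.shape)

# e.g. offset = fft_match(search_region, landmark_template)
```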

Dynamic Hand Gesture Recognition using Guide Lines (가이드라인을 이용한 동적 손동작 인식)

  • Kim, Kun-Woo;Lee, Won-Joo;Jeon, Chang-Ho
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.5 / pp.1-9 / 2010
  • Dynamic hand gesture recognition is generally composed of a preprocessing step, a hand-tracking step, and a hand-shape detection step. In this paper, we present an improved dynamic hand gesture recognition method that raises performance in the preprocessing and hand-shape detection steps. In preprocessing, we remove noise quickly using a dynamic table and detect skin color accurately on complex backgrounds by controlling the skin-color range in a YCbCr color-space skin detection method. In particular, we increase recognition speed in the hand-shape detection step by using guidelines to detect the start and stop images that delimit a dynamic gesture; a guideline is the edge of the input hand image and of the hand shape used for comparison. We evaluated the method on nine webcam video clips, divided into complex-background and simple-background sets. Compared with a recognition method based on learning, the proposed method showed a similar recognition ratio but higher recognition speed, lower CPU usage, and lower memory usage.
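The key preprocessing idea, thresholding skin color in YCbCr space with an adjustable range, can be sketched roughly as follows with OpenCV; the Cr/Cb bounds below are common illustrative defaults, not the paper's adapted values, and `detect_skin` is a name chosen here:

```python
import cv2
import numpy as np

def detect_skin(frame_bgr, cr_range=(133, 173), cb_range=(77, 127)):
    """Binary mask of skin-colored pixels in the YCrCb color space.
    The Cr/Cb ranges are illustrative defaults; the paper adapts the range to the scene."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, cr_range[0], cb_range[0]], dtype=np.uint8)
    upper = np.array([255, cr_range[1], cb_range[1]], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Light morphological clean-up (stands in for the paper's dynamic-table denoising).
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
```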

The Case Study of Elementary School Teachers Who Have Experienced Teacher Participation-oriented Education Program (TPEP) for Elementary School Teachers to Improve Class Expertise in Science Classes - Focusing on Visual Attention - (교사 참여형 교육프로그램(TPEP)을 경험한 초등교사의 과학 수업 전문성 변화 사례 - 시각적 주의를 중심으로 -)

  • Kim, Jang-Hwan;Shin, Won-Sub;Shin, Dong-Hoon
    • Journal of Korean Elementary Science Education / v.39 no.1 / pp.133-144 / 2020
  • The purpose of this study is to identify the effect of a Teacher Participation-oriented Education Program (TPEP) for elementary school teachers, designed to improve class expertise in science classes, with a focus on visual attention. The participants were two elementary school teachers in Seoul who taught science. The lesson topics used in this study were 'Structure and Function of Our Body' in the second semester of fifth grade and 'Volcano and Earthquake' in the second semester of fourth grade. The mobile eye tracker used was SMI's ETG 2w, a binocular tracking system. The actual class execution time, the participants' visual attention, and their average visual intake time were analyzed by class phase. The results are as follows. First, after applying the TPEP, the actual class execution time closely matched the lesson plan. Second, visual attention in areas related to teaching and learning activities was higher after applying the TPEP, and factors affecting the progress of the class and the teachers' cognitive load were identified quantitatively and objectively through visual attention. Third, analysis of the participants' average visual intake time showed statistically significant differences in all classes. Fourth, in the lecture-type class, the differences were statistically significant in the introduction (video), activity 1, activity 2, and activity 3 stages. The TPEP can extend elementary science class expertise, such as self-analysis of classes, eye tracking, language, gesture, and class design, beyond traditional class analysis and consulting.

Development of Stereoscopic 3D Personalized Adventure Game: HowSee (3D 체험형 어드벤처 게임 "HowSee" 개발)

  • Lim, C.J.;Chung, S.T.;Choi, Seung-Hoon;Lee, Chang-Nam;Song, Jang-Sup
    • Journal of Korea Game Society / v.11 no.5 / pp.13-22 / 2011
  • With advances in display technology, social interest in stereoscopic 3D (S3D) content development is high, but interactive S3D content creation is not standardized and production efficiency remains very poor. This paper describes the development of "HowSee", an interactive S3D adventure game whose user interface is based on eye tracking of real-time video input from an infrared camera, and presents an efficient S3D content production method drawn from that development process. The trial and error encountered in the unstandardized S3D game content production process, and the ways it was resolved, can serve as a basis for streamlining S3D game content creation.

A Forest Fire Detection Algorithm Using Image Information (영상정보를 이용한 산불 감지 알고리즘)

  • Seo, Min-Seok;Lee, Choong Ho
    • Journal of the Institute of Convergence Signal Processing / v.20 no.3 / pp.159-164 / 2019
  • Detecting wildfire using only color information in images is a very difficult problem. This paper proposes an algorithm that detects forest fire areas by analyzing both the color and the motion of regions in video containing forest fire. The proposed algorithm removes the background using a Gaussian mixture based background segmentation algorithm, which does not depend on lighting conditions. In addition, the image is converted from RGB to HSV to extract flame candidates based on color. Each extracted flame candidate is labeled and tracked, and a region that moves is judged not to be a flame; a candidate region that stays in the same position for more than two minutes is regarded as flame. Experimental results with the implemented algorithm confirm its validity.
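The two main steps named in the abstract, Gaussian-mixture background subtraction and HSV-based flame-color filtering, look roughly like this in OpenCV; the hue/saturation/value thresholds are illustrative guesses, not the paper's values, and the two-minute persistence check is only noted in a comment:

```python
import cv2
import numpy as np

bg_subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def flame_candidates(frame_bgr):
    """Keep flame-colored foreground pixels as candidates. Candidates that move are
    discarded later; regions static for over two minutes are declared flame (not shown)."""
    fg_mask = bg_subtractor.apply(frame_bgr)                 # Gaussian mixture model
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Reddish-to-yellow hues with high saturation and value (illustrative thresholds).
    color_mask = cv2.inRange(hsv, np.array([0, 120, 150]), np.array([35, 255, 255]))
    return cv2.bitwise_and(fg_mask, color_mask)
```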

A Study for Detecting a Gazing Point Based on Reference Points (참조점을 이용한 응시점 추출에 관한 연구)

  • Kim, S.I.;Lim, J.H.;Cho, J.M.;Kim, S.H.;Nam, T.W.
    • Journal of Biomedical Engineering Research / v.27 no.5 / pp.250-259 / 2006
  • Information about eye movement is used in various fields such as psychology, ophthalmology, physiology, rehabilitation medicine, web design, and human-machine interfaces (HMI). Various devices for detecting eye movement have been developed, but they are too expensive. Common eye-movement tracking methods include EOG (electro-oculography), the Purkinje image tracker, the scleral search coil technique, and video-oculography (VOG). The purpose of this study is to implement an algorithm that tracks the location of the gazing point from the pupil. Two kinds of location data were combined to track the gazing point: reference points (infrared LEDs) reflected from the eyeball, and the center of the pupil obtained with a CCD camera. The reference points were captured with the CCD camera under infrared light, which is not visible to the human eye. Images captured with and without infrared light shone on the eyeball were saved, and the reflected reference points were detected from the brightness difference between the two images. The circumcenter of a triangle was used to find the center of the pupil, and the location of the gazing point was expressed relative to the pupil center and the reference points.
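The circumcenter step at the end (taking the pupil center as the circumcenter of a triangle, presumably formed by three points on the pupil boundary) is plain coordinate geometry; a minimal sketch, assuming the three boundary points have already been extracted:

```python
def circumcenter(a, b, c):
    """Circumcenter of triangle (a, b, c): the point equidistant from all three vertices.
    With three points on the pupil boundary, this approximates the pupil center."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-9:
        raise ValueError("points are (nearly) collinear")
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy
```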

Vision-based Real-time Vehicle Detection and Tracking Algorithm for Forward Collision Warning (전방 추돌 경보를 위한 영상 기반 실시간 차량 검출 및 추적 알고리즘)

  • Hong, Sunghoon;Park, Daejin
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.7 / pp.962-970 / 2021
  • The majority of vehicle accidents are caused by driver inattention, such as drowsy driving. A forward collision warning system (FCWS) can significantly reduce the number and severity of accidents by detecting the risk of collision with the vehicle ahead and giving the driver an early warning signal. This paper describes a low-power embedded-system-based FCWS for safety. The algorithm computes the time to collision (TTC) from detection, tracking, and distance estimation of the vehicle ahead using a single camera, together with the current vehicle speed. In addition, high-level and low-level program optimization techniques are introduced so that the system runs in real time even on a low-performance embedded system. The system was tested on the embedded platform with recorded driving video. With the optimization techniques applied, execution was about 170 times faster than the non-optimized version.
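The central quantity, time to collision, is simply the gap to the lead vehicle divided by the closing speed; a minimal sketch of that arithmetic (the paper's distance estimation and optimizations are not reproduced, and the function name is chosen here):

```python
def time_to_collision(distance_m, ego_speed_mps, lead_speed_mps=0.0):
    """TTC in seconds: distance divided by closing speed.
    Returns None when the gap is not closing (no collision risk)."""
    closing_speed = ego_speed_mps - lead_speed_mps
    if closing_speed <= 0:
        return None
    return distance_m / closing_speed

# Example: 30 m gap, ego at 20 m/s, lead at 12 m/s -> TTC = 30 / 8 = 3.75 s
```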

Unauthorized person tracking system in video using CNN-LSTM based location positioning

  • Park, Chan;Kim, Hyungju;Moon, Nammee
    • Journal of the Korea Society of Computer and Information / v.26 no.12 / pp.77-84 / 2021
  • In this paper, we propose a system that uses image data and beacon data to classify authorized and unauthorized persons entering a group facility. From the image data collected through an IP camera, person objects are extracted with YOLOv4, and beacon signal data (UUID, RSSI) are collected through an application to build a fingerprinting-based radio map. User location data are then extracted from the beacon signals after CNN-LSTM-based learning, which compensates for signal instability and improves location accuracy. The proposed system achieved an accuracy of 93.47%. In the future, it is expected to be combined with access authentication processes, such as the QR codes introduced during COVID-19, to track people who have not passed through the authentication process.
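The abstract names a CNN-LSTM trained on sequences of beacon RSSI fingerprints to smooth out signal instability; a rough PyTorch sketch of such a model, with the layer sizes, beacon count, and input shape chosen here for illustration rather than taken from the paper:

```python
import torch.nn as nn

class CnnLstmLocator(nn.Module):
    """Toy CNN-LSTM: a 1-D convolution encodes each time step's RSSI fingerprint,
    an LSTM models the temporal fluctuation, and a linear head outputs an (x, y) position."""
    def __init__(self, n_beacons=8, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU())
        self.lstm = nn.LSTM(16 * n_beacons, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)        # predicted (x, y) coordinates

    def forward(self, x):                       # x: (batch, time, n_beacons) RSSI values
        b, t, n = x.shape
        feats = self.conv(x.reshape(b * t, 1, n)).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])            # position estimate at the last time step
```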

Escape Route Prediction and Tracking System using Artificial Intelligence (인공지능을 활용한 도주경로 예측 및 추적 시스템)

  • Yang, Bum-Suk;Park, Dea-Woo
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.8 / pp.1130-1135 / 2022
  • In Seoul, about 75,000 CCTVs are installed across 25 district (ward) offices. Each ward office has built a control center and performs 24-hour CCTV video monitoring for the safety of citizens. By signing MOUs with related organizations and providing them with ward-office CCTV images, the Seoul Metropolitan Government is building a smart city integrated platform that enables rapid response to emergency situations and keeps citizens safe. In this paper, when an incident occurs at a Seoul Metropolitan Government office, people and vehicles in the CCTV images are discriminated using AI DNN-based template matching, an MLP algorithm, and a CNN-based YOLO SPP DNN model, and the escape route is predicted. In addition, the system is designed to automatically disseminate image and situation information to adjacent ward offices when vehicles or people escape from the competent ward office. The escape route prediction and tracking system using artificial intelligence can extend the smart city integrated platform nationwide.
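Of the several detectors listed, the template-matching component is the easiest to illustrate; a classical (non-DNN) sketch with OpenCV is shown below purely for orientation — the paper's DNN-based template matching, MLP, and YOLO SPP models are not reproduced, and the threshold is an illustrative value:

```python
import cv2

def match_template(frame_gray, template_gray, threshold=0.8):
    """Slide a template over the frame and return the best-match location
    if its normalized correlation score exceeds the threshold."""
    result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    return max_loc if max_val >= threshold else None
```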

Object Detection Based on Deep Learning Model for Two Stage Tracking with Pest Behavior Patterns in Soybean (Glycine max (L.) Merr.)

  • Yu-Hyeon Park;Junyong Song;Sang-Gyu Kim ;Tae-Hwan Jun
    • Proceedings of the Korean Society of Crop Science Conference / 2022.10a / pp.89-89 / 2022
  • Soybean (Glycine max (L.) Merr.) is a representative food resource. To preserve the integrity of soybean, it is necessary to protect soybean yield and seed quality from the threat of various pests and diseases. Riptortus pedestris is a well-known insect pest that causes the greatest loss of soybean yield in South Korea. This pest not only directly reduces yields but also causes disorders and diseases in plant growth. Unfortunately, no resistant soybean resources have been reported. Therefore, it is necessary to identify the distribution and movement of Riptortus pedestris at an early stage to reduce the damage it causes. Conventionally, the diagnosis of agronomic traits related to pest outbreaks has been performed by the human eye; because human vision is subjective and inconsistent, this is time-consuming, labor-intensive, and requires the assistance of specialists. Therefore, the responses and behavior patterns of Riptortus pedestris to the scent of mixture R were visualized with a 3D model from an artificial-intelligence perspective. The movement patterns of Riptortus pedestris were analyzed using time-series image data, and classification was performed through visual analysis based on a deep learning model. In the object tracking, implemented with a YOLO-series model, the movement paths of the pests showed a negative reaction to mixture R in the video scenes. As a result of 3D modeling using the x, y, and z coordinates of the tracked objects, 80% of the subjects showed behavioral patterns consistent with the mixture R treatment. These studies are now being conducted in the soybean field, and it should be possible to preserve soybean yield by applying a pest control platform at the early growth stage.
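The behavioral readout described, whether a tracked insect approaches or avoids the mixture R source, reduces to comparing distances to the source along the tracked (x, y, z) path; a minimal sketch under that assumption, with all names chosen here for illustration:

```python
import numpy as np

def reaction_to_source(track_xyz, source_xyz):
    """Classify a tracked path as 'attraction' or 'avoidance' depending on whether
    the distance to the scent source shrinks or grows between start and end of the track."""
    track = np.asarray(track_xyz, dtype=float)      # shape: (frames, 3)
    source = np.asarray(source_xyz, dtype=float)
    dist = np.linalg.norm(track - source, axis=1)
    return "attraction" if dist[-1] < dist[0] else "avoidance"
```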
