• Title/Summary/Keyword: Detection-by-tracking


Development of Real-Time Vision-based Eye-tracker System for Head Mounted Display (영상정보를 이용한 HMD용 실시간 아이트랙커 시스템)

  • Roh, Eun-Jung;Hong, Jin-Sung;Bang, Hyo-Choong
    • Journal of the Korean Society for Aeronautical & Space Sciences, v.35 no.6, pp.539-547, 2007
  • In this paper, the development and testing of a real-time eye-tracker system are discussed. The system tracks a user's gaze point through eye movement by means of vision-based pupil detection, which has the advantage of locating the exact positions of the user's eyes. An infrared camera and an LED are used to acquire the user's pupil image and to extract the pupil region, which is difficult to isolate in software alone, from the obtained image. We develop a pupil-tracking algorithm based on a Kalman filter and capture the pupil images with a DSP (Digital Signal Processing) system for real-time image processing. The real-time eye-tracker system tracks the movements of the user's pupils and projects the resulting gaze point onto a background image.
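
A constant-velocity Kalman filter is a common way to smooth detected pupil centers; the sketch below illustrates the idea under an assumed state layout, frame rate, and noise values, with the measurement being whatever pupil coordinate the vision stage produces. It is not the authors' DSP implementation.

```python
import numpy as np

# Constant-velocity Kalman filter over the pupil-center state (x, y, vx, vy).
# Frame period and noise levels are illustrative assumptions.
dt = 1.0 / 30.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # state transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # only (x, y) is measured
Q = np.eye(4) * 1e-2                         # process noise (tuning guess)
R = np.eye(2) * 2.0                          # measurement noise in pixels (guess)

x = np.zeros(4)                              # state estimate
P = np.eye(4) * 100.0                        # state covariance

def kalman_step(measured_center):
    """Predict the pupil position, then correct it with the measured center."""
    global x, P
    x = F @ x                                # predict
    P = F @ P @ F.T + Q
    z = np.asarray(measured_center, dtype=float)
    y = z - H @ x                            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ y                            # correct
    P = (np.eye(4) - K @ H) @ P
    return x[:2]                             # smoothed pupil center (x, y)
```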

Robust Viewpoint Estimation Algorithm for Moving Parallax Barrier Mobile 3D Display (이동형 패럴랙스 배리어 모바일 3D 디스플레이를 위한 강인한 시청자 시역 위치 추정 알고리즘)

  • Kim, Gi-Seok;Cho, Jae-Soo;Um, Gi-Mun
    • Journal of Broadcast Engineering, v.17 no.5, pp.817-826, 2012
  • This paper presents a robust viewpoint estimation algorithm for a moving-parallax-barrier mobile 3D display under sudden illumination changes. We analyze the previous viewpoint estimation algorithm, which consists of the Viola-Jones face detector and feature tracking by optical flow. Sudden changes in illumination degrade the performance of the optical-flow feature tracker. To solve this problem, we define a novel performance measure for the optical-flow tracker. Overall performance is increased by selectively adopting either the Viola-Jones detector or the optical-flow tracker depending on this performance measure. Various experimental results show the effectiveness of the proposed method.
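
The selective switching described above might look like the following OpenCV sketch; the performance measure used here (the fraction of feature points that survive Lucas-Kanade tracking) and its threshold are stand-in assumptions, not the paper's exact measure.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def track_or_redetect(prev_gray, gray, prev_pts, min_ratio=0.6):
    """Track features with pyramidal Lucas-Kanade optical flow; if too few
    points survive (a stand-in performance measure), fall back to the
    Viola-Jones face detector and reseed the feature points."""
    if prev_pts is not None and len(prev_pts) > 0:
        next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, gray, prev_pts, None)
        good = next_pts[status.flatten() == 1]
        if len(good) / len(prev_pts) >= min_ratio:
            return good.reshape(-1, 1, 2)          # tracking still reliable
    # Performance measure failed: re-detect the face and reseed features.
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    roi_mask = np.zeros_like(gray)
    roi_mask[y:y + h, x:x + w] = 255
    return cv2.goodFeaturesToTrack(gray, maxCorners=50, qualityLevel=0.01,
                                   minDistance=5, mask=roi_mask)
```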

Simulation of Dynamic EADs Jamming Performance against Tracking Radar in Presence of Airborne Platform

  • Rim, Jae-Won;Jung, Ki-Hwan;Koh, Il-Suek;Baek, Chung;Lee, Seungsoo;Choi, Seung-Ho
    • International Journal of Aeronautical and Space Sciences, v.16 no.3, pp.475-483, 2015
  • We propose a numerical scheme to simulate the time-domain echo signals at a tracking radar for a realistic scenario in which an EAD (expendable active decoy) and an airborne target are both in dynamic states. For various scenarios in which the target takes different maneuvers, the trajectories of the EAD ejected from the target are accurately calculated by solving the 6-DOF (degree-of-freedom) equations of motion for the EAD. At each sampling time of the echo signal, the locations of the EAD and the target are assumed to be fixed, so the echo power from the EAD can be computed simply using the Friis transmission formula. The power returned from the target is computed from a pre-calculated scattering matrix of the target; in this paper, an IPO (iterative physical optics) method is used to construct the scattering-matrix database, and sinc-function interpolation (the sampling theorem) is applied to compute the scattering at any incidence angle from that database. A simulator based on the proposed scheme is developed to estimate the echo signals, taking into account the movement of the airborne target and the EAD, the scattering of the target, and the RF specifications of the EAD. As an application, we estimate the detection probability of the target in the presence of the EAD through Monte Carlo simulation.
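
The Friis-based echo power of the active decoy can be written down directly as two one-way transmission links plus the decoy's internal amplification; the gains, wavelength, and distance below are placeholder values, not those of the simulated scenario.

```python
import numpy as np

def friis_received_power(p_tx, g_tx, g_rx, wavelength, distance):
    """One-way Friis transmission: received power in watts."""
    return p_tx * g_tx * g_rx * (wavelength / (4.0 * np.pi * distance)) ** 2

def decoy_echo_power(p_radar, g_radar, g_decoy, decoy_gain, wavelength, r):
    """Echo power of an active decoy seen by the radar: radar-to-decoy link,
    internal amplification, then decoy-to-radar link (two Friis hops)."""
    p_at_decoy = friis_received_power(p_radar, g_radar, g_decoy, wavelength, r)
    p_retransmit = p_at_decoy * decoy_gain
    return friis_received_power(p_retransmit, g_decoy, g_radar, wavelength, r)

# Placeholder numbers, for illustration only.
print(decoy_echo_power(p_radar=1e3, g_radar=10**(30/10), g_decoy=10**(5/10),
                       decoy_gain=10**(60/10), wavelength=0.03, r=5e3))
```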

An Camera Information Detection Method for Dynamic Scene (Dynamic scene에 대한 카메라 정보 추출 기법)

  • Ko, Jung-Hwan
    • Journal of the Institute of Electronics and Information Engineers, v.50 no.5, pp.275-280, 2013
  • In this paper, a new stereo object extraction algorithm using a block-based MSE (mean square error) algorithm and the configuration parameters of a stereo camera is proposed. That is, by applying the SSD algorithm between the initial reference image and the next stereo input image, the location coordinates of the target object in the left and right images are acquired, and the pan/tilt system is controlled with these values. Using the moving angle of the pan/tilt system and the configuration parameters of the stereo camera system, the mask window size of the target object is adaptively determined. The newly segmented target image is used as the reference image in the next stage and is automatically updated in the course of target tracking by the same procedure. Meanwhile, the target object is tracked by continuously controlling the convergence and FOV using the sequentially extracted location coordinates of the moving target.
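
A minimal sketch of the block-based SSD matching step described above: a reference block of the target is slid over a search window around the previous position in the next (grayscale) frame, and the location with the smallest sum of squared differences is taken as the new target position. The search radius and boundary handling are illustrative assumptions.

```python
import numpy as np

def ssd_block_match(reference, frame, prev_xy, search_radius=24):
    """Locate the target in a grayscale frame by minimizing the sum of squared
    differences (SSD) between the reference block and candidate blocks around
    the previous top-left position prev_xy = (x, y)."""
    ref = reference.astype(float)
    h, w = ref.shape
    px, py = prev_xy
    best_ssd, best_xy = np.inf, prev_xy
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            x, y = px + dx, py + dy
            if x < 0 or y < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                continue                      # candidate falls outside the frame
            candidate = frame[y:y + h, x:x + w].astype(float)
            ssd = np.sum((candidate - ref) ** 2)
            if ssd < best_ssd:
                best_ssd, best_xy = ssd, (x, y)
    return best_xy                            # top-left corner of the best match
```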

Biomimetic approach object detection sensors using multiple imaging (다중 영상을 이용한 생체모방형 물체 접근 감지 센서)

  • Choi, Myoung Hoon;Kim, Min;Jeong, Jae-Hoon;Park, Won-Hyeon;Lee, Dong Heon;Byun, Gi-Sik;Kim, Gwan-Hyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2016.05a, pp.91-93, 2016
  • Extracting three-dimensional information from 2-D images, generally known as "stereo vision", is a very important step both in the binocular method that uses two cameras and when a monocular camera is used. Compared with the CCTV and automatic object tracking systems in wide use today, a stereo camera that mimics the human eyes makes the site conditions and the task at hand clearer, maximizing the efficiency of avoidance/control operations and multi-tasking. An object tracking system based on conventional 2D images cannot recognize the distance to a target; with the parallax of a stereo image, the distance can be recognized and displayed to the observer, and the object can be controlled more effectively.
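
The distance recovery from stereo parallax alluded to above reduces to the standard triangulation relation Z = f·B/d; the focal length and baseline in the example call are placeholder values.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classical stereo triangulation: depth Z = f * B / d, where f is the
    focal length in pixels, B the camera baseline in meters, and d the
    horizontal disparity in pixels between the left and right images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Placeholder camera parameters, for illustration only.
print(depth_from_disparity(disparity_px=16.0, focal_px=700.0, baseline_m=0.12))
```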

Development of Unmanned Video Recording System using Mobile (모바일을 이용한 무인 영상 녹화 시스템 개발)

  • Ahn, Byeongtae
    • Journal of the Korea Academia-Industrial cooperation Society, v.20 no.6, pp.254-260, 2019
  • Recently, self-camera videos generated and distributed in large volumes have increased rapidly with the appearance of mobile SNS such as Facebook, Instagram, and Twitter. In particular, SNS use on mobile phones is growing significantly in terms of usage, number of connections, and usage time. However, self-recording with a smartphone alone is extremely limited in both scope and frequency of use, and conventional unattended recording systems, which automatically track and record a subject using an infrared signal, are very expensive. Therefore, this paper develops a low-cost unmanned recording system using a mobile phone. The system consists of a commercial mobile camera, a servomotor that pans the camera from side to side, a microcontroller that controls the motor, and a commercial wireless Bluetooth headset for the video's audio input. It is an unmanned automation system based on a mobile phone, with which anyone can record video through automatic subject tracking.
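
A rough sketch, under assumed hardware, of the pan-control loop such a system could run: the horizontal offset of the tracked subject from the image center is converted into an angle command for the servomotor that pans the camera. The serial link, command protocol, and gain are invented for illustration.

```python
import serial   # pyserial; assumed link to the servo microcontroller

FRAME_WIDTH = 1280          # assumed camera resolution (pixels)
K_P = 0.05                  # proportional gain in degrees per pixel (a guess)
servo_angle = 90.0          # current pan angle, starting centered

# Hypothetical serial connection to the motor controller.
link = serial.Serial("/dev/ttyUSB0", 115200, timeout=0.1)

def update_pan(subject_center_x):
    """Proportional pan control: nudge the servo so that the tracked subject
    drifts back toward the horizontal center of the image."""
    global servo_angle
    error_px = subject_center_x - FRAME_WIDTH / 2.0
    servo_angle = min(180.0, max(0.0, servo_angle - K_P * error_px))
    # Hypothetical one-line command protocol: "PAN <angle>\n".
    link.write(f"PAN {servo_angle:.1f}\n".encode())
```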

Research on Human Posture Recognition System Based on The Object Detection Dataset (객체 감지 데이터 셋 기반 인체 자세 인식시스템 연구)

  • Liu, Yan;Li, Lai-Cun;Lu, Jing-Xuan;Xu, Meng;Jeong, Yang-Kwon
    • The Journal of the Korea institute of electronic communication sciences, v.17 no.1, pp.111-118, 2022
  • In computer vision research, two-dimensional human pose estimation is a very broad research direction, especially for pose tracking and behavior recognition, where it has great significance. Acquiring human pose targets, which is essentially the study of how to accurately identify human subjects in images, has been a hot research topic in recent years. Human pose recognition is used both in artificial intelligence and in daily life. The quality of pose recognition is determined mainly by the success rate and the accuracy of the recognition process, which underlines the importance of the recognition rate. In this work, the human body is labeled with 17 key points, and the key points are segmented to ensure the accuracy of the labeling information. For recognition, the comprehensive MS COCO data set is used to train a deep neural network model on a large number of samples, progressing from simple step-by-step training to efficient training so that a good accuracy rate can be obtained.
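
For reference, these are the 17 MS COCO human keypoints mentioned above, together with a minimal sketch of running an off-the-shelf keypoint model on a single image; torchvision's pretrained Keypoint R-CNN is used here only as a stand-in for the paper's own network, and the input file name is hypothetical.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# The 17 MS COCO human keypoints referenced in the abstract.
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

# Pretrained Keypoint R-CNN as a stand-in for the paper's own model.
model = torchvision.models.detection.keypointrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("person.jpg").convert("RGB"))  # hypothetical file
with torch.no_grad():
    output = model([image])[0]

# Keypoints of the highest-scoring person: shape (17, 3) -> (x, y, visibility).
if len(output["scores"]) > 0:
    best = output["keypoints"][output["scores"].argmax()]
    for name, (x, y, v) in zip(COCO_KEYPOINTS, best.tolist()):
        print(f"{name:>14s}: ({x:.1f}, {y:.1f})")
```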

Object Detection Based on Deep Learning Model for Two Stage Tracking with Pest Behavior Patterns in Soybean (Glycine max (L.) Merr.)

  • Yu-Hyeon Park;Junyong Song;Sang-Gyu Kim ;Tae-Hwan Jun
    • Proceedings of the Korean Society of Crop Science Conference, 2022.10a, pp.89-89, 2022
  • Soybean (Glycine max (L.) Merr.) is a representative food resource. To preserve the integrity of soybean, it is necessary to protect soybean yield and seed quality from threats of various pests and diseases. Riptortus pedestris is a well-known insect pest that causes the greatest loss of soybean yield in South Korea. This pest not only reduces yields directly but also causes disorders and diseases in plant growth, and no resistant soybean resources have been reported. Therefore, it is necessary to identify the distribution and movement of Riptortus pedestris at an early stage to reduce the damage caused by the pest. Conventionally, the diagnosis of agronomic traits related to pest outbreaks has been performed by the human eye. However, due to the subjectivity and impermanence of human vision, this is time-consuming, labor-intensive, and requires the assistance of specialists. Therefore, the responses and behavior patterns of Riptortus pedestris to the scent of mixture R were visualized as a 3D model from the perspective of artificial intelligence. The movement patterns of Riptortus pedestris were analyzed using time-series image data, and classification was performed through visual analysis based on a deep learning model. In the object tracking, implemented with a YOLO-series model, the movement paths of the pests show a negative reaction to mixture R in the video scenes. As a result of 3D modeling using the x, y, and z axes of the tracked objects, 80% of the subjects showed behavioral patterns consistent with the mixture R treatment. These studies are also being conducted in the soybean field, and it will be possible to preserve soybean yield by applying a pest control platform at the early growth stage.
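
A simplified sketch of the tracking-by-detection bookkeeping implied above: per-frame bounding boxes (e.g. from a YOLO detector) are associated with existing tracks by nearest centroid, and each track accumulates the trajectory later used for path analysis. The association threshold is an assumed value, not the authors' setting.

```python
import numpy as np

class CentroidTracker:
    """Greedy nearest-centroid association of per-frame detections into tracks."""

    def __init__(self, max_dist=50.0):
        self.max_dist = max_dist      # association threshold in pixels (assumed)
        self.tracks = {}              # track_id -> list of (frame, x, y)
        self.next_id = 0

    def update(self, frame_idx, boxes):
        """boxes: iterable of (x1, y1, x2, y2) detections for this frame."""
        centroids = [((x1 + x2) / 2.0, (y1 + y2) / 2.0) for x1, y1, x2, y2 in boxes]
        unmatched = set(range(len(centroids)))
        for tid, history in self.tracks.items():
            _, lx, ly = history[-1]
            best, best_d = None, self.max_dist
            for i in unmatched:
                d = np.hypot(centroids[i][0] - lx, centroids[i][1] - ly)
                if d < best_d:
                    best, best_d = i, d
            if best is not None:
                history.append((frame_idx, *centroids[best]))
                unmatched.discard(best)
        for i in unmatched:           # unmatched detections start new tracks
            self.tracks[self.next_id] = [(frame_idx, *centroids[i])]
            self.next_id += 1
        return self.tracks
```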

Hardware Design of SURF-based Feature extraction and description for Object Tracking (객체 추적을 위한 SURF 기반 특이점 추출 및 서술자 생성의 하드웨어 설계)

  • Do, Yong-Sig;Jeong, Yong-Jin
    • Journal of the Institute of Electronics and Information Engineers, v.50 no.5, pp.83-93, 2013
  • The SURF algorithm, widely used in object tracking systems as part of many computer vision applications, is a well-known scale- and rotation-invariant feature detection algorithm. Due to its high computational complexity, a hardware accelerator is essential if SURF is to be used as an IP in an embedded environment. However, SURF requires a huge local memory, which increases chip size and decreases the value of the IP in ASIC and SoC system designs. In this paper, we propose a way to implement the SURF algorithm in hardware with greatly reduced local memory by partitioning the algorithm into several sub-IPs that use external memory and a DMA. To validate the proposed method, we developed a simplified object tracking algorithm as an example. The execution speed of the hardware IP was about 31 frames/sec and the logic size was about 74 Kgates in 30 nm technology with 81 Kbytes of local memory, on an embedded platform consisting of an ARM Cortex-M0 processor, an AMBA bus (AHB-lite and APB), a DMA, and an SDRAM controller. Hence, it can be used as a hardware IP in an SoC chip. If an image processing algorithm akin to SURF is implemented with the method proposed in this paper, an efficient hardware design for the target application can be expected.
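
For reference, the feature extraction and description that the IP accelerates corresponds to the following OpenCV usage; SURF lives in the non-free xfeatures2d module of opencv-contrib, so its availability depends on the local build, and the input file name is hypothetical.

```python
import cv2

# SURF is provided by the non-free xfeatures2d module of opencv-contrib;
# availability depends on how OpenCV was built.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

image = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
keypoints, descriptors = surf.detectAndCompute(image, None)

# Each keypoint carries the scale- and rotation-invariant information the
# hardware IP produces; descriptors is an N x 64 (or N x 128) float array.
print(len(keypoints), descriptors.shape if descriptors is not None else None)
```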

Hand Gesture Recognition Algorithm Robust to Complex Image (복잡한 영상에 강인한 손동작 인식 방법)

  • Park, Sang-Yun;Lee, Eung-Joo
    • Journal of Korea Multimedia Society, v.13 no.7, pp.1000-1015, 2010
  • In this paper, we propose a novel algorithm for hand gesture recognition. Hand detection is based on human skin color; boundary energy information is used to locate the hand region accurately, and the moment method is then employed to locate the palm center. Hand gesture recognition is separated into two steps. First, hand posture recognition: parallel neural networks (NNs) are employed, and the pattern of a hand posture is extracted by a fitting-ellipse method, which divides the detected hand region with 12 ellipses and calculates the rate of white pixels on each ellipse line. The pattern is input to an NN with 12 input nodes and 4 output nodes, each of which outputs a value between 0 and 1, so a posture is represented by the combination of the 4 output codes. Second, hand gesture tracking and recognition: a Kalman filter is employed to predict the position of the gesture and build a position sequence, and the distance relationship between positions is used to confirm the gesture. Simulations were performed on Windows XP to evaluate the efficiency of the algorithm. For hand posture recognition, 300 training images were used to train the recognizer and 200 images to test it, of which 194 were recognized correctly. For the hand tracking recognition part, 1,200 gestures were performed (400 of each gesture), and 1,002 were recognized correctly. These results show that the proposed algorithm does an acceptable job of detecting the hand and recognizing its gestures.
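
A minimal sketch of the first two steps described above, skin-color hand segmentation and the moment-based palm-center estimate; the YCrCb skin thresholds are common rule-of-thumb values, not the authors' settings.

```python
import cv2
import numpy as np

def palm_center(frame_bgr):
    """Segment skin-colored pixels in YCrCb space and return the centroid of
    the largest blob, i.e. the moment-based palm-center estimate."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    # Rule-of-thumb Cr/Cb skin range; an assumption, not the paper's values.
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    m = cv2.moments(hand)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])   # palm center (cx, cy)
```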