• Title/Summary/Keyword: video navigation


Implementation of the Integrated Navigation Parameter Extraction from the Aerial Image Sequence Using TMS320C80 MVP (TMS320C80 MVP 상에서의 연속항공영상을 이용한 통합 항법 변수 추출 시스템 구현)

  • Sin, Sang-Yun;Park, In-Jun;Lee, Yeong-Sam;Lee, Min-Gyu;Kim, Gwan-Seok;Jeong, Dong-Uk;Kim, In-Cheol;Park, Rae-Hong;Lee, Sang-Uk
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.3 / pp.49-57 / 2002
  • In this paper, we deal with a real-time implementation of an integrated image-based navigation parameter extraction system using the TMS320C80 MVP (multimedia video processor). Our system consists of relative position estimation and absolute position compensation; the latter is further divided into high-resolution aerial image matching, DEM (digital elevation model) matching, and IRS (Indian remote sensing) satellite image matching. These algorithms are implemented in real time on the MVP. To achieve real-time operation, the aerial image is partitioned and the partitioned images are processed in parallel on the MVP's four parallel processors. We also examine the performance of the implemented integrated system in terms of estimation accuracy, confirming proper operation of our system.
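
The partitioning step described above can be sketched as follows. This is a minimal illustration of dividing an image into strips for the MVP's four parallel processors; the abstract does not state how the image is actually partitioned, so the horizontal-strip layout is an assumption.

```python
def partition_image(image, parts=4):
    """Split an image (list of pixel rows) into near-equal horizontal strips.

    Mirrors the idea of distributing the aerial image across four parallel
    processors; the strip layout is an illustrative assumption.
    """
    h = len(image)
    base, extra = divmod(h, parts)  # spread remainder rows over the first strips
    strips, start = [], 0
    for i in range(parts):
        end = start + base + (1 if i < extra else 0)
        strips.append(image[start:end])
        start = end
    return strips

# each strip can then be matched independently and the results merged
img = [[row] * 4 for row in range(10)]
strips = partition_image(img, parts=4)
```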

Moving Object Detection and Tracking in Multi-view Compressed Domain (비디오 압축 도메인에서 다시점 카메라 기반 이동체 검출 및 추적)

  • Lee, Bong-Ryul;Shin, Youn-Chul;Park, Joo-Heon;Lee, Myeong-Jin
    • Journal of Advanced Navigation Technology / v.17 no.1 / pp.98-106 / 2013
  • In this paper, we propose a moving object detection and tracking method for multi-view camera environments. Based on the similarity and characteristics of the motion vectors and coding block modes extracted from compressed bitstreams, moving blocks are validated, the validated blocks are labeled, and neighboring blobs are merged. To continuously track objects through temporary-stop, crossing, and overlapping events, a window-based object-updating algorithm is proposed for single- and multi-view environments. Object detection and tracking could be performed with an acceptable level of performance, without decoding the video bitstreams, for normal, temporary-stop, crossing, and overlapping cases. The detection and tracking rates in the multi-view environment are over 89% and 84%, respectively, improvements of 6% and 7% over the single-view environment.
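
The validate-label-merge pipeline above can be sketched on a grid of per-macroblock motion vectors. This is a simplified stand-in: it validates blocks by motion-vector magnitude only (the paper also uses coding block modes) and merges 4-connected moving blocks into one label; the `min_mag` threshold is an assumption.

```python
def label_moving_blocks(mv_grid, min_mag=1.0):
    """Validate blocks by motion-vector magnitude, then label connected runs.

    mv_grid is a 2-D grid of (dx, dy) vectors per macroblock, as would be
    extracted from a compressed bitstream.  Blocks with magnitude >= min_mag
    are treated as moving; 4-connected moving blocks share one label.
    """
    h, w = len(mv_grid), len(mv_grid[0])
    moving = [[(dx * dx + dy * dy) ** 0.5 >= min_mag for dx, dy in row]
              for row in mv_grid]
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if moving[y][x] and not labels[y][x]:
                next_label += 1  # start a new blob, flood-fill its members
                stack = [(y, x)]
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and moving[cy][cx] and not labels[cy][cx]:
                        labels[cy][cx] = next_label
                        stack.extend([(cy + 1, cx), (cy - 1, cx),
                                      (cy, cx + 1), (cy, cx - 1)])
    return labels, next_label
```

Each resulting label corresponds to one candidate moving object, which the window-based updater would then track across frames.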

Design and Implementation of Mobile Network Based Long-Range UAV Operational System for Multiple Clients (모바일 네트워크를 이용한 복수의 클라이언트용 무인항공기 원거리 운용 시스템 설계 및 구현)

  • Park, Seong-hyeon;Song, Joon-beom;Roh, Min-shik;Song, Woo-jin;Kang, Beom-soo
    • Journal of Advanced Navigation Technology / v.19 no.3 / pp.217-223 / 2015
  • This paper describes the design and implementation of a UAV network system for multiple clients that enables long-range operation over a commercial mobile network. A prototype data modem is developed with a commercial embedded M2M module to provide access to the mobile network. A central server with a database is constructed to record all real-time flight and video data and to communicate with a ground control system (GCS). GCS versions are developed for central control, for a single UAV, and for smart phones, to be used for different purposes. Performance tests were conducted for data delay, video frame rate, and client state. Flight tests were also performed to verify the reliability of the modem with respect to altitude.

Fast Object Classification Using Texture and Color Information for Video Surveillance Applications (비디오 감시 응용을 위한 텍스쳐와 컬러 정보를 이용한 고속 물체 인식)

  • Islam, Mohammad Khairul;Jahan, Farah;Min, Jae-Hong;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology / v.15 no.1 / pp.140-146 / 2011
  • In this paper, we propose a fast object classification method based on texture and color information for video surveillance. We take advantage of local patches by extracting SURF descriptors and color histograms from images. SURF captures intensity content, while color information strengthens distinctiveness by providing additional cues about patch content. We thus obtain the fast computation of SURF as well as the color cues of objects. We use a bag-of-words model to generate a global descriptor of a region of interest (ROI) or an image from the local features, and a Naïve Bayes model to classify the global descriptor. We also investigate the discriminative SIFT (scale-invariant feature transform) descriptor. Our experiments on 4 object classes show a classification rate of 95.75%.
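
The bag-of-words plus Naïve Bayes stage can be sketched as below. This is a toy illustration with 2-D "descriptors" and a two-word codebook standing in for SURF/color features and a learned vocabulary; the data, codebook, and class names are invented, and a uniform class prior is assumed.

```python
import math

def quantize(feature, codebook):
    # assign a local descriptor to its nearest visual word (index)
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(feature, codebook[i])))

def bow_histogram(features, codebook):
    # global descriptor: counts of visual-word occurrences in the ROI
    hist = [0] * len(codebook)
    for f in features:
        hist[quantize(f, codebook)] += 1
    return hist

def train_nb(histograms_by_class):
    # per-class log word probabilities with Laplace smoothing
    model = {}
    for cls, hists in histograms_by_class.items():
        totals = [sum(col) + 1 for col in zip(*hists)]
        denom = sum(totals)
        model[cls] = [math.log(t / denom) for t in totals]
    return model

def classify(hist, model):
    # multinomial Naive Bayes score, uniform prior assumed
    return max(model, key=lambda cls: sum(c * lp for c, lp in zip(hist, model[cls])))
```

In the paper's setting the codebook would be learned from SURF/color features and the histograms would come from training ROIs; the structure of the scoring is the same.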

Implementation of Golf Swing Accuracy Analysis System using Smart Sensor (스마트 센서를 활용한 골프 스윙 정확도 분석시스템 구현)

  • Ju, Jae-han
    • Journal of Advanced Navigation Technology / v.21 no.2 / pp.200-205 / 2017
  • Modern sports are developing into a sports science, and various analytical simulation systems for improving records are being developed and are helping to improve actual game results. Golf in particular has been popularized among hobbyists and the general public, and there is growing demand for correcting a player's swing posture. In response, many systems have been developed to analyze and correct golf swing postures. The golf swing accuracy analysis system analyzes moments that cannot be seen with the naked eye and presents them in an easy-to-understand form, improving the golf swing through immediate visual feedback. We improved reliability using knowledge of golf swing motion collected from golf swing videos. In addition, the system lets users visually check and analyze their golf swing videos, with each segment analyzed according to various golf swing classification schemes.

Deep Learning-based Action Recognition using Skeleton Joints Mapping (스켈레톤 조인트 매핑을 이용한 딥 러닝 기반 행동 인식)

  • Tasnim, Nusrat;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology / v.24 no.2 / pp.155-162 / 2020
  • Recently, with the development of computer vision and deep learning technology, research on human action recognition has been actively conducted for video analysis, video surveillance, interactive multimedia, and human-machine interaction applications. Many researchers have introduced diverse techniques for human action understanding and classification using RGB images, depth images, skeletons, and inertial data. However, skeleton-based action discrimination is still a challenging research topic for human-machine interaction. In this paper, we propose an end-to-end mapping of skeleton joints into a spatio-temporal image, the so-called dynamic image, for action recognition. An efficient deep convolutional neural network is then devised to classify the action classes. We use the publicly accessible UTD-MHAD skeleton dataset to evaluate the performance of the proposed method. Experimental results show that the proposed system outperforms existing methods, with a high accuracy of 97.45%.
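
The joint-to-image mapping idea can be sketched as below. The abstract does not fully specify the mapping, so this flattening (rows = frames, columns = joints, intensity = normalized x coordinate) is an illustrative assumption, not the paper's exact encoding.

```python
def joints_to_dynamic_image(sequence):
    """Map a skeleton sequence to a 2-D intensity grid ("dynamic image").

    sequence is a list of frames, each a list of (x, y) joint coordinates.
    Rows correspond to frames, columns to joints; each cell encodes the
    joint's x coordinate normalized to 0..255.  (Illustrative stand-in
    for the paper's spatio-temporal mapping.)
    """
    xs = [x for frame in sequence for (x, y) in frame]
    lo, hi = min(xs), max(xs)
    span = (hi - lo) or 1.0  # avoid division by zero for a static skeleton
    return [[round(255 * (x - lo) / span) for (x, y) in frame]
            for frame in sequence]
```

The resulting fixed-size grid is what a 2-D CNN could then consume for classification.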

Robust Threshold Determination on Various Lighting for Marker-based Indoor Navigation (마커 방식 실내 내비게이션을 위한 조명 변화에 강한 임계값 결정 방법)

  • Choi, Tae-Woong;Lee, Hyun-Cheol;Hur, Gi-Taek;Kim, Eun-Seok
    • The Journal of the Korea Contents Association / v.12 no.1 / pp.1-8 / 2012
  • In this paper, a method of determining the optimal threshold for image binarization in marker recognition is suggested, to resolve the problem that marker recognition performance varies widely with changes in indoor lighting. The suggested method determines the optimal threshold by considering the average brightness, the standard deviation, and the maximum deviation of the video image under various indoor lighting circumstances, such as bright light, dim light, and shadows cast by unspecified obstacles. In particular, recognition under the gradated lighting caused by shadows is improved by applying a weight that depends on image brightness. The suggested method was tested on 720×480 video images under various lighting environments, and shows the fast, high performance suitable for mobile indoor navigation.
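
A minimal sketch of a statistics-driven threshold of this kind is shown below. The paper's exact formula is not given in the abstract, so the way the brightness-dependent weight combines the mean and standard deviation here is an assumption.

```python
import statistics

def adaptive_threshold(pixels, weight=0.5):
    """Pick a binarization threshold from global brightness statistics.

    The threshold is shifted from the mean brightness by a fraction of the
    standard deviation, with the shift scaled by normalized brightness so
    dim scenes get a smaller correction.  (Illustrative formula; the
    paper's exact weighting is not stated in the abstract.)
    """
    mean = statistics.mean(pixels)
    stdev = statistics.pstdev(pixels)
    brightness_factor = mean / 255.0  # 0..1
    return mean - weight * brightness_factor * stdev

# a dim frame: the threshold adapts well below the mid-gray 128 default
dim = [40, 50, 60, 55, 45, 35, 65, 50]
t = adaptive_threshold(dim)
binary = [1 if p >= t else 0 for p in dim]
```

A fixed threshold of 128 would binarize this dim frame to all zeros; the adaptive one still separates the marker-dark and marker-light pixels.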

An Efficient Hardware Implementation of CABAC Using H/W-S/W Co-design (H/W-S/W 병행설계를 이용한 CABAC의 효율적인 하드웨어 구현)

  • Cho, Young-Ju;Ko, Hyung-Hwa
    • Journal of Advanced Navigation Technology / v.18 no.6 / pp.600-608 / 2014
  • In this paper, a CABAC hardware module is developed using a H/W-S/W co-design method. After the entire H.264/AVC encoder was developed in C from the reference software (JM), the CABAC hardware IP was developed as a block within the H.264/AVC encoder. The CABAC context modeller is included in the hardware to update changed values during binary encoding, which enables efficient memory usage and an efficient I/O stream design. The hardware IP operates together with the H.264/AVC reference software JM and runs on a Virtex-4 FX60 FPGA on an ML410 board. Functional simulation is done using ModelSim. Compared with an existing register-level CABAC hardware module, development time is greatly reduced and software engineers can design the hardware module more easily. As a result, the slice usage of the CABAC module is less than 1/3 that of the CAVLC module. The proposed co-design method is useful for providing a hardware accelerator where a high-efficiency video encoder in an embedded system needs speeding up.

Development and Performance Evaluation of an Image Detection System for Efficient 4D Images (효율적인 4D 영상을 위한 영상 검출 시스템 개발 및 성능평가)

  • Cho, Kyoung-Woo;Liu, Ze-Qi;Jeon, Min-Ho;Oh, Chang-Heon
    • Journal of Advanced Navigation Technology / v.17 no.6 / pp.792-797 / 2013
  • A 4D film is a film made by adding physical effects to a 3D or conventional film. In order to provide the physical effects to the audience, the data that drive the effects must be added to each frame. In this paper, we propose a video detection system that can efficiently provide physical effects by recognizing the current scene, such as an explosion scene or a snowing scene. The proposed video detection system contains a fire-detection algorithm using the R color and Cr value, and a snow-detection algorithm using the RGB color model. The system is built on an 8051-family MCU. In the performance evaluation, the results show a detection rate of 91% for fire and a false-detection rate of 25% for snow. The system is also capable of providing physical effects automatically.
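
The R-plus-Cr fire rule can be sketched per pixel as below. The thresholds, the red-dominance check, and the BT.601 Cr conversion are illustrative assumptions; the abstract does not give the paper's exact values.

```python
def rgb_to_cr(r, g, b):
    # ITU-R BT.601 chroma-red component, offset into the 0..255 range
    return 0.5 * r - 0.4187 * g - 0.0813 * b + 128

def is_fire_pixel(r, g, b, r_min=180, cr_min=150):
    """Flag a pixel as fire-like when red dominates and Cr is high.

    r_min and cr_min are illustrative thresholds; the paper's exact
    values are not stated in the abstract.
    """
    return r >= r_min and r > g > b and rgb_to_cr(r, g, b) >= cr_min

def fire_ratio(frame):
    # fraction of fire-like pixels; a scene-level decision would compare
    # this ratio against a trigger threshold
    hits = sum(is_fire_pixel(*px) for px in frame)
    return hits / len(frame)

flame = [(255, 120, 30), (250, 140, 40), (240, 100, 20)]  # warm, red-dominant
sky = [(90, 130, 220), (100, 140, 230), (80, 120, 210)]   # cool, blue-dominant
```

A snow rule would follow the same pattern, testing instead for bright, near-achromatic RGB pixels.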

Protection Design for EMI and Indirect Lightning Effect for RS-170a Video Signal (RS-170a 영상 신호에 대한 EMI 및 낙뢰 간접영향 보호 설계)

  • Cho, Seong-jin;Sim, Yong-gi;Kim, Sung-hun;Park, Jun-hyun
    • Journal of Advanced Navigation Technology / v.23 no.5 / pp.444-451 / 2019
  • In this paper, we introduce design considerations for the EMI and lightning-induced transient protection circuit for the RS-170a video signal on avionics equipment. Avionics equipment is subject to the risk of malfunction or physical damage from the indirect effects of lightning strikes or from electromagnetic interference in the external environment. To protect avionics equipment, the effects of electromagnetic interference and lightning strikes on the aircraft should be analyzed and a protection design applied to each avionics device; however, the protection circuit may itself cause signal distortion when the signal level is low and the frequency is high. We introduce a common protection design against EMI and indirect lightning effects, together with considerations for minimizing the signal distortion caused by the RS-170a protection circuit. In addition, we show examples of improvements to an actual equipment design based on the considerations discussed in this paper.