• Title/Summary/Keyword: Matching and Tracking

Noise Removal Filter Algorithm using Spatial Weight in AWGN Environment (AWGN 환경에서 공간 가중치를 이용한 잡음 제거 필터 알고리즘)

  • Cheon, Bong-Won;Kim, Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.207-209 / 2021
  • In recent years, with the development of artificial intelligence and IoT technology, automation and unmanned systems have spread across many fields, and the image processing tasks that underpin them, such as object tracking, medical imaging, and object recognition, are growing in importance. In systems that require detailed data processing, noise removal is applied as a pre-processing step, but existing algorithms tend to blur the image during filtering. This paper therefore proposes a filter algorithm that uses modified spatial weights to minimize information loss during filtering. The proposed algorithm removes AWGN through mask matching and obtains the filter output by adding or subtracting the output of the modified spatial weights. Compared with existing methods, the proposed algorithm shows superior noise-removal characteristics and reconstructs the image while minimizing blurring (an illustrative weighted-filter sketch follows this entry).

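The exact modified weights and mask-matching rule are not given in the abstract, so the following is only a minimal sketch of a spatially weighted mean filter for AWGN, assuming Gaussian distance-based weights inside a square mask; the function name and parameters are hypothetical.

```python
# Minimal sketch of a spatially weighted mean filter for AWGN, assuming
# Gaussian distance-based weights inside a square mask; the paper's modified
# weights and mask-matching step are not reproduced here.
import numpy as np

def spatial_weight_filter(image, mask_size=5, sigma=1.5):
    """Filter a grayscale image with distance-based spatial weights."""
    radius = mask_size // 2
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    weights = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))   # spatial weights
    weights /= weights.sum()

    padded = np.pad(image.astype(float), radius, mode="edge")
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            window = padded[i:i + mask_size, j:j + mask_size]
            out[i, j] = np.sum(window * weights)          # weighted local mean
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.tile(np.linspace(0, 255, 64), (64, 1))     # synthetic gradient image
    noisy = clean + rng.normal(0, 15, clean.shape)        # add AWGN
    restored = spatial_weight_filter(noisy)
    print("noise std before:", np.std(noisy - clean).round(2),
          "after:", np.std(restored - clean).round(2))
```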

A Research on the Vector Search Algorithm for the PIV Flow Analysis of image data with large dynamic range (입자의 이동거리가 큰 영상데이터의 PIV 유동 해석을 위한 속도벡터 추적 알고리즘의 연구)

  • Kim Sung Kyun
    • Proceedings of the Korean Society of Computational Fluids Engineering Conference / 1998.11a / pp.13-18 / 1998
  • The practical use of particle image velocimetry (PIV), a whole-field velocity measurement method, requires fast, reliable, computer-based methods for tracking velocity vectors. Full-search block matching, the most widely studied and applied technique in both PIV and image coding and compression, is computationally costly. Many less expensive alternatives have been proposed, mostly in the area of image coding and compression. Among others, TSS, NTSS, and HPM have been introduced for PIV analysis and found to be successful. However, these algorithms assume a small dynamic range, with a maximum displacement of 7 pixels/frame. To analyze images with large displacements, even/odd field image separation and a simple multi-resolution hierarchical procedure are introduced in this paper. Comparisons with other algorithms are summarized, and results of application to turbulent backward-facing step flow show the improvement of the new algorithm (see the block-matching sketch after this entry).

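As a rough illustration of the TSS-style block matching the abstract builds on (not the authors' hierarchical extension), here is a minimal three-step search in NumPy with the sum of absolute differences as the matching cost; the function names and test pattern are invented. Note that step sizes of 4, 2, and 1 give exactly the 7 pixel/frame range mentioned in the abstract.

```python
# A minimal three-step search (TSS) block-matching sketch, assuming grayscale
# frames and sum-of-absolute-differences (SAD) as the matching cost.
import numpy as np

def sad(block_a, block_b):
    return np.abs(block_a.astype(float) - block_b.astype(float)).sum()

def three_step_search(frame1, frame2, top, left, block=16, step=4):
    """Return the (dy, dx) displacement of one block from frame1 to frame2."""
    ref = frame1[top:top + block, left:left + block]
    best_dy, best_dx = 0, 0
    while step >= 1:
        best_cost, move = None, (0, 0)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = top + best_dy + dy, left + best_dx + dx
                if y < 0 or x < 0 or y + block > frame2.shape[0] or x + block > frame2.shape[1]:
                    continue
                cost = sad(ref, frame2[y:y + block, x:x + block])
                if best_cost is None or cost < best_cost:
                    best_cost, move = cost, (dy, dx)
        best_dy += move[0]
        best_dx += move[1]
        step //= 2                       # halve the search step each iteration
    return best_dy, best_dx

if __name__ == "__main__":
    yy, xx = np.mgrid[0:64, 0:64]
    f1 = 100.0 * (np.sin(xx / 5.0) + np.cos(yy / 7.0))   # smooth synthetic pattern
    f2 = np.roll(f1, shift=(3, -5), axis=(0, 1))         # known displacement
    print(three_step_search(f1, f2, 24, 24))             # expected: (3, -5)
```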

Implementation of Path Finding Method using 3D Mapping for Autonomous Robotic (3차원 공간 맵핑을 통한 로봇의 경로 구현)

  • Son, Eun-Ho;Kim, Young-Chul;Chong, Kil-To
    • Journal of Institute of Control, Robotics and Systems / v.14 no.2 / pp.168-177 / 2008
  • Path finding is a key element in the navigation of a mobile robot. To find a path, a robot should know its position exactly, since position error exposes the robot to many dangerous conditions: it could drive the robot in the wrong direction and cause collision damage from surrounding obstacles. We propose a method for obtaining an accurate robot position. The localization of a mobile robot in its working environment is performed using a vision system and the Virtual Reality Modeling Language (VRML). The robot identifies landmarks located in the environment, and image processing and neural network pattern matching techniques are applied to find the location of the robot. After the self-positioning procedure, the 2-D scene of the vision system is overlaid onto a VRML scene. This paper describes how the self-positioning is realized and shows the overlay between the 2-D and VRML scenes. The suggested method defines a robot's path successfully; an experiment applying the suggested algorithm to a mobile robot shows good path tracking (a simplified landmark-matching sketch follows this entry).
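
The abstract identifies landmarks by image processing and neural-network pattern matching; as a stand-in, the sketch below does landmark localization with plain normalized cross-correlation template matching. The VRML overlay and the neural-network matcher are not reproduced, and all names are hypothetical.

```python
# A minimal sketch of landmark identification by normalized cross-correlation
# template matching, standing in for the paper's pattern-matching step.
import numpy as np

def ncc(patch, template):
    """Normalized cross-correlation between two equally sized patches."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p**2).sum() * (t**2).sum())
    return 0.0 if denom == 0 else float((p * t).sum() / denom)

def find_landmark(image, template):
    """Return the (row, col) of the best template match in the image."""
    ih, iw = image.shape
    th, tw = template.shape
    best_score, best_pos = -2.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            score = ncc(image[r:r + th, c:c + tw], template)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    scene = rng.random((60, 80))
    landmark = scene[20:30, 45:55].copy()        # pretend this is a known landmark
    pos, score = find_landmark(scene, landmark)
    print(pos, round(score, 3))                  # expected: (20, 45) with score ~1.0
```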

Performance Evaluation of ARCore Anchors According to Camera Tracking

  • Shinhyup Lee;Leehwan Hwang;Seunghyun Lee;Taewook Kim;Soonchul Kwon
    • International Journal of Internet, Broadcasting and Communication / v.15 no.4 / pp.215-222 / 2023
  • Augmented reality (AR), which integrates virtual media into reality, is increasingly utilized across various industrial sectors thanks to advancements in 3D graphics and mobile device technologies, and the IT industry is carrying out active R&D on AR platforms. Google plays a significant role in the AR landscape with its ARCore services. An essential aspect of ARCore is the use of anchors, which serve as reference points that maintain the position and orientation of virtual objects within the physical environment. However, if anchor positioning is inaccurate when running AR content, the user's immersive experience is significantly diminished. This study assesses the performance of these anchors. For the evaluation, virtual 3D objects matching the shape and size of real-world objects were strategically positioned to overlap with their physical counterparts. Images of both real and virtual objects were captured along five distinct camera trajectories, and ARCore's performance was analyzed by examining the difference between the captured images (a simple image-difference metric is sketched after this entry).
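
The paper does not state which difference measure was used, so the sketch below assumes a simple mean absolute pixel error per trajectory and synthesizes frames instead of loading the captured images.

```python
# A sketch of how captured real/virtual frame pairs could be compared per
# trajectory; mean absolute pixel error is assumed here as the metric, and
# frames are synthesized rather than read from disk.
import numpy as np

def mean_abs_error(real_frame, virtual_frame):
    """Mean absolute per-pixel difference between two grayscale frames."""
    return float(np.abs(real_frame.astype(float) - virtual_frame.astype(float)).mean())

def evaluate_trajectory(real_frames, virtual_frames):
    """Average the frame-wise error over one camera trajectory."""
    return float(np.mean([mean_abs_error(r, v) for r, v in zip(real_frames, virtual_frames)]))

if __name__ == "__main__":
    base = np.tile(np.linspace(0, 255, 160), (120, 1))    # smooth synthetic frame
    for traj in range(1, 6):                              # five trajectories, as in the paper
        real = [base.copy() for _ in range(10)]
        virtual = [np.roll(f, shift=traj, axis=1) for f in real]   # simulate anchor drift
        print(f"trajectory {traj}: mean abs error = {evaluate_trajectory(real, virtual):.2f}")
```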

A Study on the Environment Recognition System of Biped Robot for Stable Walking (안정적 보행을 위한 이족 로봇의 환경 인식 시스템 연구)

  • Song, Hee-Jun;Lee, Seon-Gu;Kang, Tae-Gu;Kim, Dong-Won;Park, Gwi-Tae
    • Proceedings of the KIEE Conference / 2006.07d / pp.1977-1978 / 2006
  • This paper discusses a vision-based sensor fusion system for biped robot walking. Most research on biped walking robots has focused on the walking algorithm itself. However, developing vision systems for biped walking robots is an important and urgent issue, since such robots are ultimately developed not only for research but to be used in real life. In this research, systems for environment recognition and tele-operation have been developed for task assignment and execution by a biped robot, as well as for a human-robot interaction (HRI) system. To carry out given tasks, an object tracking system using a modified optical flow algorithm and an obstacle recognition system using enhanced template matching and a hierarchical support vector machine algorithm, fed by a wireless vision camera, are implemented together with a sensor fusion system using the other sensors installed in the biped walking robot. Systems for robot manipulation and communication with the user have also been developed (a template-matching/SVM sketch follows this entry).

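Only the "hierarchical support vector machine" idea is illustrated below, as a toy two-level classifier on synthetic features; the optical-flow tracker, the enhanced template matching, and the real sensor fusion are not shown, and the feature/label setup is invented.

```python
# A toy sketch of a hierarchical SVM: a first SVM decides obstacle vs.
# non-obstacle, and a second SVM sub-classifies obstacles. Features, labels,
# and the two-level split are assumptions for illustration only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)

# Synthetic 2-D "image features": class 0 = free space, 1 = low obstacle, 2 = wall.
X = np.vstack([rng.normal(loc, 0.3, (50, 2)) for loc in ([0, 0], [2, 0], [2, 2])])
y = np.repeat([0, 1, 2], 50)

coarse = SVC(kernel="rbf").fit(X, (y > 0).astype(int))    # obstacle / non-obstacle
fine = SVC(kernel="rbf").fit(X[y > 0], y[y > 0])          # which kind of obstacle

def classify(feature):
    """Hierarchical decision: coarse gate first, fine label only for obstacles."""
    feature = np.asarray(feature).reshape(1, -1)
    if coarse.predict(feature)[0] == 0:
        return "free space"
    return f"obstacle type {fine.predict(feature)[0]}"

print(classify([0.1, -0.1]))   # expected: free space
print(classify([2.1, 1.9]))    # expected: obstacle type 2
```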

A Ubiquitous Vision System based on the Identified Contract Net Protocol (Identified Contract Net 프로토콜 기반의 유비쿼터스 시각시스템)

  • Kim, Chi-Ho;You, Bum-Jae;Kim, Hagbae
    • The Transactions of the Korean Institute of Electrical Engineers D / v.54 no.10 / pp.620-629 / 2005
  • In this paper, a new protocol-based approach is proposed for the development of a ubiquitous vision system. The approach treats the ubiquitous vision system as a multi-agent system, so each vision sensor can be regarded as an agent (vision agent). Each vision agent independently performs exact segmentation of a target using color and motion information, visual tracking of multiple targets in real time, and location estimation by a simple perspective transform. The matching problem for the identity of a target during handover between vision agents is solved by the Identified Contract Net (ICN) protocol implemented for this approach. The protocol-based approach is independent of the number of vision agents and, moreover, does not require calibration or overlapping regions between vision agents. Therefore, the ICN protocol improves the speed, scalability, and modularity of the system. The approach was successfully applied to our ubiquitous vision system and operated well in several experiments (a simplified contract-net handover sketch follows this entry).
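
The ICN protocol itself is not specified in the abstract, so the following is only a generic contract-net style handover (announce, bid, award) between vision agents; the bid criterion and all names are invented and should not be read as the paper's protocol.

```python
# A generic contract-net style handover sketch (announce / bid / award), loosely
# inspired by the abstract; this is NOT the paper's ICN protocol, and the bid
# criterion (distance of the target to each agent's view center) is invented.
from dataclasses import dataclass

@dataclass
class VisionAgent:
    name: str
    view_center: tuple          # (x, y) of the region this camera covers best

    def bid(self, target_pos):
        """Lower bid = better suited to take over tracking of the target."""
        dx = target_pos[0] - self.view_center[0]
        dy = target_pos[1] - self.view_center[1]
        return (dx * dx + dy * dy) ** 0.5

def handover(current_agent, other_agents, target_id, target_pos):
    """Announce the target, collect bids, and award tracking to the best bidder."""
    bids = {agent.name: agent.bid(target_pos) for agent in other_agents}
    winner = min(bids, key=bids.get)
    print(f"{current_agent.name} announces target {target_id} at {target_pos}")
    print(f"bids: {bids}")
    print(f"target {target_id} awarded to {winner}")
    return winner

if __name__ == "__main__":
    a, b, c = (VisionAgent("cam_A", (0, 0)),
               VisionAgent("cam_B", (10, 0)),
               VisionAgent("cam_C", (0, 10)))
    handover(a, [b, c], target_id=7, target_pos=(8, 1))   # expect cam_B to win
```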

Integral Error State Feedback VSC for a DC Servo Position Control System (직류서보 위치제어 시스템을 위한 편차적분 상태궤환 가변구조제어기)

  • 박영진;이기상;홍순찬
    • The Proceedings of the Korean Institute of Illuminating and Electrical Installation Engineers / v.8 no.3 / pp.88-95 / 1994
  • A scheme of an Integral Error State Feedback Variable Structure Controller (IESFVSC) is proposed for a DC servo position control system with disturbances that do not satisfy the matching condition. The proposed control system is composed of a servo compensator and a state feedback VSC. The servo compensator enhances the robustness of the control system against various types of disturbance and makes effective tracking possible without using error dynamics. The IESFVSC is applied to the practical design of a robust DC servo control system, and the control performance is verified through theoretical analyses and simulations (a minimal sliding-mode simulation follows this entry).

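As a hedged illustration of the integral-error-state idea, the sketch below simulates a toy DC servo under a sliding-mode law whose surface includes the integral of the position error; the plant parameters and gains are arbitrary, and the disturbance here is matched, unlike the unmatched case treated in the paper.

```python
# A toy sliding-mode position controller with an integral error state for a DC
# servo model; gains, plant parameters, and the (matched) disturbance are
# arbitrary and simpler than the case treated in the paper.
import numpy as np

a, b = 2.0, 10.0                      # toy servo: theta_ddot = -a*theta_dot + b*u + d
lam1, lam2 = 4.0, 4.0                 # sliding surface s = lam1*z + lam2*e + e_dot
k_s, eta = 5.0, 1.0                   # reaching-law gains (eta > max |d|)
dt, r = 0.001, 1.0                    # Euler step and constant position reference

theta, theta_dot, z = 0.0, 0.0, 0.0   # position, velocity, integral of error
for k in range(int(5.0 / dt)):
    t = k * dt
    d = 0.5 * np.sin(2.0 * t)                         # bounded disturbance
    e, e_dot = r - theta, -theta_dot
    s = lam1 * z + lam2 * e + e_dot                   # surface with integral error state
    u = (lam1 * e + lam2 * e_dot + a * theta_dot
         + k_s * s + eta * np.sign(s)) / b            # equivalent control + reaching law
    theta_ddot = -a * theta_dot + b * u + d
    z += e * dt                                       # integral error state
    theta += theta_dot * dt
    theta_dot += theta_ddot * dt

print(f"final tracking error: {r - theta:.4f}")       # should be close to zero
```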

CCTV Based Gender Classification Using a Convolutional Neural Networks (컨볼루션 신경망을 이용한 CCTV 영상 기반의 성별구분)

  • Kang, Hyun Gon;Park, Jang Sik;Song, Jong Kwan;Yoon, Byung Woo
    • Journal of Korea Multimedia Society / v.19 no.12 / pp.1943-1950 / 2016
  • Recently, gender classification has attracted a great deal of attention in the field of video surveillance. It can be useful in many applications, such as detecting crimes against women and business intelligence. In this paper, we propose a method that detects pedestrians in CCTV video and classifies the gender of the detected objects. Many algorithms have been proposed to classify people according to their gender; this paper presents gender classification using a convolutional neural network. The detection phase is performed by an AdaBoost algorithm based on Haar-like features and LBP features. The classifier and detector are trained with datasets generated from CCTV images. The experimental results show a matching rate of 89.9% for male and 90.7% for female videos. The simulations show that the proposed gender classification performs better than conventional classification algorithms (a small CNN classifier sketch follows this entry).
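
A minimal CNN classifier for two-class gender prediction on cropped pedestrian patches might look like the sketch below (PyTorch); the architecture, 64x64 input size, and dummy training step are illustrative and not the network described in the paper.

```python
# A minimal two-class CNN classifier sketch for cropped pedestrian patches,
# assuming 64x64 RGB inputs; not the paper's architecture or training setup.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 16 x 32 x 32
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 x 16 x 16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, 2),                          # two outputs: male / female scores
)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One dummy training step on random data in place of CCTV-derived crops.
images = torch.randn(8, 3, 64, 64)             # batch of detected pedestrian patches
labels = torch.randint(0, 2, (8,))             # 0 = male, 1 = female (dummy labels)
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("dummy training-step loss:", float(loss))
```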

Terrain Aided Inertial Navigation for Precise Planetary Landing (정밀 행성 착륙을 위한 지형 보조 관성 항법 연구)

  • Jeong, Bo-Young;Choi, Yoon-Hyuk;Jo, Su-Jang;Bang, Hyo-Choong
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.38 no.7 / pp.673-683 / 2010
  • This study investigates Terrain Aided Inertial Navigation (TAIN), which combines an Inertial Navigation System (INS) with an optical sensor for precise planetary landing. Image processing is conducted to extract and match feature points between measured terrain data and on-board terrain information. The navigation algorithm, based on an Iterated Extended Kalman Filter (IEKF), compensates for the navigation error and provides more precise navigation information than the INS alone. Simulation results demonstrate the feasibility of the integration for precise planetary landing, and the proposed navigation approach can be implemented in the whole system coupled with guidance and control laws (an IEKF update sketch follows this entry).
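
The abstract names an Iterated Extended Kalman Filter; the sketch below shows a generic IEKF measurement update with a numerical Jacobian and a toy terrain-aided altimeter measurement. The state layout, terrain model, and noise values are invented.

```python
# A generic iterated EKF (IEKF) measurement update with a numerical Jacobian;
# the terrain model, state layout, and noise levels are invented for illustration.
import numpy as np

def numerical_jacobian(h, x, eps=1e-6):
    z0 = np.atleast_1d(h(x))
    H = np.zeros((z0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        H[:, j] = (np.atleast_1d(h(x + dx)) - z0) / eps
    return H

def iekf_update(x_pred, P_pred, z, h, R, n_iter=5):
    """Iterated EKF measurement update: relinearize h around the current iterate."""
    x_i = x_pred.copy()
    for _ in range(n_iter):
        H = numerical_jacobian(h, x_i)
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ np.linalg.inv(S)
        x_i = x_pred + K @ (np.atleast_1d(z) - np.atleast_1d(h(x_i)) - H @ (x_pred - x_i))
    P = (np.eye(x_pred.size) - K @ H) @ P_pred
    return x_i, P

if __name__ == "__main__":
    # Toy terrain-aided measurement: an altimeter reads altitude minus the terrain
    # height under the lander, with terrain(x, y) modeled as a smooth bump.
    terrain = lambda x, y: 50.0 * np.exp(-((x - 100.0) ** 2 + y ** 2) / 5000.0)
    h = lambda s: np.array([s[2] - terrain(s[0], s[1])])       # state s = [x, y, altitude]

    x_pred = np.array([90.0, 10.0, 500.0])                     # INS-predicted state
    P_pred = np.diag([100.0, 100.0, 25.0])
    truth = np.array([100.0, 0.0, 495.0])
    z = h(truth) + np.random.default_rng(5).normal(0.0, 1.0, 1)
    R = np.array([[1.0]])

    x_upd, P_upd = iekf_update(x_pred, P_pred, z, h, R)
    print("updated state:", np.round(x_upd, 2))
```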

Performance Enhancement of the Attitude Estimation using Small Quadrotor by Vision-based Marker Tracking (영상기반 물체추적에 의한 소형 쿼드로터의 자세추정 성능향상)

  • Kang, Seokyong;Choi, Jongwhan;Jin, Taeseok
    • Journal of the Korean Institute of Intelligent Systems / v.25 no.5 / pp.444-450 / 2015
  • The accuracy of a small, low-cost CCD camera is insufficient to provide data for precisely tracking unmanned aerial vehicles (UAVs). This study shows how a UAV can hover over a designated tracking object by using a CCD camera rather than imprecise GPS data. To realize this, UAVs need to recognize their attitude and position in both known and unknown environments, and their localization must occur naturally. Estimating the attitude of a UAV through environment recognition is therefore one of the most important problems for UAV hovering. In this paper, we describe a method for estimating the attitude of a UAV using image information of a marker on the floor. The method combines the position observed from GPS sensors and the attitude estimated from images captured by a fixed camera. Using the a priori known path of the UAV in world coordinates and a perspective camera model, we derive the geometric constraint equations that relate the image-frame coordinates of the floor marker to the estimated UAV attitude. Since the equations are based on the estimated position, measurement error may exist at all times; the proposed method uses the error between the observed and estimated image coordinates to localize the UAV. A Kalman filter scheme is applied, and its performance is verified by image processing results and experiments (a minimal Kalman filter sketch follows this entry).
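
As a minimal illustration of the Kalman filtering step, the sketch below smooths a vision-derived attitude angle with a constant-rate linear Kalman filter; the state model and all noise values are assumptions rather than the paper's filter design.

```python
# A minimal linear Kalman filter sketch for smoothing a vision-derived attitude
# angle (e.g., yaw inferred from the floor marker); the constant-rate model and
# all noise values are assumptions, not the paper's filter design.
import numpy as np

dt = 0.05
F = np.array([[1.0, dt], [0.0, 1.0]])     # state: [angle, angular rate]
H = np.array([[1.0, 0.0]])                # measurement: marker-derived angle only
Q = np.diag([1e-4, 1e-3])                 # process noise (assumed)
R = np.array([[0.05]])                    # measurement noise (assumed)

x = np.array([0.0, 0.0])
P = np.eye(2)

rng = np.random.default_rng(6)
estimates = []
for k in range(100):
    true_angle = 0.5 * np.sin(0.5 * k * dt)            # synthetic true attitude
    z = true_angle + rng.normal(0.0, R[0, 0] ** 0.5)   # noisy vision measurement

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the marker-derived angle
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    estimates.append(x[0])

print(f"last true angle: {true_angle:.3f}, filtered estimate: {estimates[-1]:.3f}")
```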