• Title/Summary/Keyword: Optical flow algorithm

Study on Co-Simulation Method of Dynamics and Guidance Algorithms for Strap-Down Image Tracker Using Unity3D (Unity3D를 이용한 스트랩 다운 영상 추적기의 동역학 및 유도 법칙 알고리즘의 상호-시뮬레이션 방법에 관한 연구)

  • Marin, Mikael;Kim, Taeho;Bang, Hyochoong;Cho, Hanjin;Cho, Youngki;Choi, Yonghoon
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.46 no.11 / pp.911-920 / 2018
  • In this study, we tracked the angle between a guided weapon and its target using a strap-down image seeker and constructed a test bed that can simulate the scenario visually. This paper describes a method to maintain a well-distributed, high-quality feature set when implementing a sparse feature tracking algorithm, such as the Lucas-Kanade optical flow algorithm, for image-based target tracking. We extend the feature tracking problem to the concept of feature management. To realize this, we constructed a visual environment using the Unity3D engine and developed an image processing simulation using OpenCV. For the co-simulation, dynamic system modeling was performed in Matlab Simulink, the visual environment was built in Unity3D, and the computer vision processing was carried out with OpenCV.
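
As a minimal sketch of the sparse Lucas-Kanade tracking with feature replenishment described above, the following Python/OpenCV fragment re-detects corners whenever too many tracks are lost; the detector parameters, the replenishment threshold, and the video file name are illustrative assumptions, not the authors' settings.

```python
import cv2
import numpy as np

MIN_FEATURES = 50   # assumed threshold below which features are re-detected

cap = cv2.VideoCapture("seeker_view.mp4")   # hypothetical rendered Unity3D sequence
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                 qualityLevel=0.01, minDistance=10)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Track existing features with pyramidal Lucas-Kanade optical flow.
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None,
                                                   winSize=(21, 21), maxLevel=3)
    points = next_pts[status.ravel() == 1].reshape(-1, 1, 2)

    # Feature management: replenish when too many tracks are lost,
    # so the feature distribution over the target stays dense.
    if len(points) < MIN_FEATURES:
        fresh = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                        qualityLevel=0.01, minDistance=10)
        if fresh is not None:
            points = np.vstack([points, fresh]).astype(np.float32)

    prev_gray = gray
```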

An Algorithm of Autonomous Navigation for Mobile Robot using Vision Sensor and Ultrasonic Sensor (비전 센서와 초음파 센서를 이용한 이동 로봇의 자율 주행 알고리즘)

  • Lee, Jae-Kwang;Park, Jong-Hun;Heo, Uk-Yeol
    • Proceedings of the KIEE Conference / 2003.11b / pp.19-22 / 2003
  • This paper proposes a navigation algorithm for an autonomous mobile robot with a vision sensor. For obstacle avoidance, we use a curvature trajectory method: translational and rotational speeds are controlled independently, and the mobile robot traces a smooth trajectory composed of circular arcs toward a target point. While avoiding obstacles, the robot can therefore remain goal-directed by following the curvature trajectory.
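
A minimal sketch of a circular-arc steering command of the kind the abstract describes, assuming a pure-pursuit-style arc from the robot to the target point; the paper's actual control law is not given here.

```python
def arc_command(target_x, target_y, v=0.3):
    """Curvature of the circular arc from the robot frame origin (heading +x)
    through the target point, and the matching angular velocity.
    A pure-pursuit-style sketch; the paper's exact control law may differ."""
    d2 = target_x ** 2 + target_y ** 2
    kappa = 2.0 * target_y / d2   # curvature of the arc through (0, 0) and the target
    omega = kappa * v             # rotational speed for the chosen translational speed
    return v, omega

# Example: target 1 m ahead and 0.5 m to the left of the robot.
v, omega = arc_command(1.0, 0.5)
print(f"v = {v:.2f} m/s, omega = {omega:.2f} rad/s")
```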

Omni-directional Vision SLAM using a Motion Estimation Method based on Fisheye Image (어안 이미지 기반의 움직임 추정 기법을 이용한 전방향 영상 SLAM)

  • Choi, Yun Won;Choi, Jeong Won;Dai, Yanyan;Lee, Suk Gyu
    • Journal of Institute of Control, Robotics and Systems / v.20 no.8 / pp.868-874 / 2014
  • This paper proposes a novel mapping algorithm for omni-directional vision SLAM, based on extracting obstacle features with Lucas-Kanade optical flow (LKOF) motion detection from images obtained through fish-eye lenses mounted on a robot. Omni-directional image sensors suffer from distortion because they use a fish-eye lens or a mirror, but they capture all of the robot's surroundings at once, which makes real-time image processing for mobile robots possible. Previous omni-directional vision SLAM research used feature points from fully corrected fisheye images, whereas the proposed algorithm corrects only the feature points belonging to obstacles, which yields faster processing than previous systems. The core of the proposed algorithm can be summarized as follows. First, we capture instantaneous $360^{\circ}$ panoramic images around the robot through downward-facing fish-eye lenses. Second, we remove the feature points of the floor surface using a histogram filter and label the remaining points as obstacle candidates. Third, we estimate the locations of obstacles from the motion vectors computed by LKOF. Finally, the robot position is estimated with an Extended Kalman Filter based on the obstacle positions obtained by LKOF, and a map is created. We confirm the reliability of the proposed motion-estimation-based mapping algorithm by comparing the maps it produces with real maps.
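
The per-frame front end described above (histogram-based floor removal followed by Lucas-Kanade motion vectors on the remaining obstacle candidates) might look roughly like the following Python/OpenCV sketch; the histogram binning and detector parameters are assumptions.

```python
import cv2
import numpy as np

def obstacle_motion_vectors(prev_gray, curr_gray):
    """Sketch of one frame step: suppress floor pixels with a histogram filter,
    then track the remaining obstacle candidates with Lucas-Kanade optical flow.
    Thresholds and parameters are illustrative assumptions."""
    # Histogram filter: treat the most frequent intensity band as floor.
    hist = cv2.calcHist([prev_gray], [0], None, [32], [0, 256]).ravel()
    floor_bin = int(np.argmax(hist))
    lo, hi = floor_bin * 8, floor_bin * 8 + 8
    obstacle_mask = ((prev_gray < lo) | (prev_gray >= hi)).astype(np.uint8) * 255

    # Feature candidates only outside the floor region.
    pts = cv2.goodFeaturesToTrack(prev_gray, 150, 0.01, 7, mask=obstacle_mask)
    if pts is None:
        return np.empty((0, 2)), np.empty((0, 2))

    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    p0 = pts[good].reshape(-1, 2)
    p1 = nxt[good].reshape(-1, 2)
    return p0, p1 - p0   # candidate positions and their LKOF motion vectors
```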

Kalman Filtering-based Traffic Prediction for Software Defined Intra-data Center Networks

  • Mbous, Jacques;Jiang, Tao;Tang, Ming;Fu, Songnian;Liu, Deming
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.6 / pp.2964-2985 / 2019
  • Global data center IP traffic is expected to reach 20.6 zettabytes (ZB) by the end of 2021. Intra-data center networks (intra-DCN) will account for 71.5% of data center traffic flow, the largest portion of the traffic. The understanding of traffic distribution in intra-DCN is still sketchy, which causes a significant amount of bandwidth to go unutilized and creates avoidable choke points. Conventional transport schemes such as Optical Packet Switching (OPS) and Optical Burst Switching (OBS) allow only a one-sided view of the traffic flow in the network, which leads to disjointed and uncoordinated decision-making at each node. For effective resource planning, distributed management needs to be combined with centralized management that anticipates the system's needs and regulates the entire network. Methods derived from Kalman filters have proved effective in planning road networks. Treating the network's available bandwidth as data transport highways, we propose an intelligent enhanced SDN concept applied to an OBS architecture. A management plane (MP) is added to the conventional control plane (CP) and data plane (DP). The MP assembles spatio-temporal traffic parameters from ingress nodes and uses a Kalman-filtering prediction-based algorithm to estimate traffic demand. Prior to packet arrival at edge nodes, it regularly forwards updated resource allocations to the CPs. Simulations were performed on a hybrid (1+1) scheme and on centralized OBS. The results demonstrate that the proposal decreases the packet loss ratio and improves network latency and throughput by up to 84% and 51%, respectively, compared with the traditional scheme.
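
As an illustration of the Kalman-filtering prediction step in the management plane, the sketch below runs a simple level-plus-trend Kalman filter over a traffic time series; the state model, noise covariances, and sample values are assumptions, not the paper's.

```python
import numpy as np

def kalman_predict_traffic(measurements, q=1e-3, r=0.5):
    """One-dimensional level+trend Kalman filter over a traffic time series.
    Illustrative only: the paper's state model and noise settings are not given here."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])    # state transition: level + trend
    H = np.array([[1.0, 0.0]])                # we observe the traffic level only
    Q = q * np.eye(2)                         # process noise (assumed)
    R = np.array([[r]])                       # measurement noise (assumed)

    x = np.array([[measurements[0]], [0.0]])  # initial state
    P = np.eye(2)
    one_step_ahead = []

    for z in measurements:
        # Predict the next traffic level before it is observed.
        x = F @ x
        P = F @ P @ F.T + Q
        one_step_ahead.append(float(x[0, 0]))
        # Update with the new measurement.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    return one_step_ahead

demand = [10.2, 11.0, 12.1, 12.8, 13.9, 15.2]   # hypothetical Gb/s samples from ingress nodes
print(kalman_predict_traffic(demand))
```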

Getting On and Off an Elevator Safely for a Mobile Robot Using RGB-D Sensors (RGB-D 센서를 이용한 이동로봇의 안전한 엘리베이터 승하차)

  • Kim, Jihwan;Jung, Minkuk;Song, Jae-Bok
    • The Journal of Korea Robotics Society / v.15 no.1 / pp.55-61 / 2020
  • Getting on and off an elevator is one of the most important parts of multi-floor navigation for a mobile robot. In this study, we propose methods for recognizing the pose of the elevator doors, planning a safe path, and estimating the motion of the robot using RGB-D sensors so that the robot can get on and off the elevator safely. The accurate pose of the elevator doors is recognized using a particle filter algorithm. After the elevator door opens, the robot builds an occupancy grid map that includes the interior of the elevator to generate a safe path that avoids collisions with obstacles in the elevator. While getting on and off, the robot applies an optical flow algorithm to images of the floor to detect the situation in which it cannot move because of the elevator door sill. Results from various experiments show that the proposed method enables the robot to get on and off the elevator safely.
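
A minimal sketch of the door-sill check described above: if the robot is commanded to move but the floor image shows almost no optical flow, it is flagged as stuck. Farneback flow and both thresholds are assumptions; the abstract does not specify the exact flow algorithm or limits.

```python
import cv2
import numpy as np

def robot_is_stuck(prev_floor, curr_floor, commanded_speed,
                   flow_threshold=0.5, speed_threshold=0.05):
    """If the wheels are commanded to move but the downward-looking floor image
    shows almost no optical flow, assume the robot is caught on the door sill."""
    flow = cv2.calcOpticalFlowFarneback(prev_floor, curr_floor, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mean_motion = float(np.mean(np.linalg.norm(flow, axis=2)))  # pixels per frame
    return commanded_speed > speed_threshold and mean_motion < flow_threshold
```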

Head Pose Estimation by using Morphological Property of Disparity Map

  • Jun, Se-Woong;Park, Sung-Kee;Lee, Moon-Key
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2005.06a / pp.735-739 / 2005
  • This paper presents a new system for estimating the head pose of a person in an interactive indoor environment with dynamic illumination changes and a large working space. The main idea is a new morphological feature for estimating the head angle from a stereo disparity map. When a disparity map is obtained from a stereo camera, a matching confidence value can be derived from the correlation of the stereo images. Applying a threshold to this confidence value yields the specific morphology of the disparity map, i.e., the morphological shape of the disparity region. By analyzing this morphological property, the head pose can be estimated. The algorithm is simple and fast compared with other approaches that rely on facial templates, 2D or 3D models, or optical flow, and it can automatically segment the head and estimate its pose over a wide range of head motion without the manual initialization required by optical-flow-based systems. Experiments show that the system provides reliable head orientation data with real-time performance.
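
One way to reduce the idea above to code is to threshold the disparity map by matching confidence and read an orientation angle from the image moments of the remaining shape; this is only an illustrative stand-in, since the paper's actual morphological analysis is not detailed in the abstract.

```python
import cv2
import numpy as np

def head_angle_from_disparity(disparity, confidence, conf_thresh=0.6):
    """Keep only high-confidence disparity, then read an orientation angle off the
    second-order moments of the remaining morphological shape.
    Illustrative only; the paper's actual shape analysis may differ."""
    shape = np.where(confidence > conf_thresh, disparity, 0).astype(np.uint8)
    shape = cv2.medianBlur(shape, 5)          # clean isolated mismatches
    m = cv2.moments(shape, binaryImage=False)
    if m["m00"] == 0:
        return None
    # Orientation of the shape's principal axis (radians).
    return 0.5 * np.arctan2(2.0 * m["mu11"], m["mu20"] - m["mu02"])
```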

Development of a Vision-based Lane Change Assistance System for Safe Driving (안전주행을 위한 비전 기반의 차선변경보조시스템 개발)

  • Sung, Jun-Yong;Han, Min-Hong;Ro, Kwang-Hyun
    • Journal of the Korea Society of Computer and Information / v.11 no.5 s.43 / pp.329-336 / 2006
  • This paper describes a lane change assistance system that helps drivers change lanes safely by detecting vehicles approaching from the rear side with a computer vision algorithm and notifying the driver whether a lane change is safe. When the driver attempts a lane change, the proposed system detects the approaching vehicles and keeps track of them. After the side lane lines are detected, a region of interest for vehicle detection is determined, and an optical flow technique is applied to detect vehicles within it. Experiments with the proposed algorithm and system showed a vehicle detection rate of 91%, suggesting that the embedded system could be applied to commercial lane change assistance systems in the near future.
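
A rough sketch of the rear-side vehicle check: dense optical flow is computed inside a region of interest bounded by the detected lane line, and strong coherent motion is flagged as an approaching vehicle. The ROI handling, flow algorithm, and threshold are assumptions for illustration.

```python
import cv2
import numpy as np

def rear_vehicle_approaching(prev_gray, curr_gray, roi, min_flow=1.5):
    """Check a rear-side region of interest for strong coherent motion.
    The ROI (x, y, w, h) would come from the detected side lane line;
    the flow threshold is an illustrative assumption."""
    x, y, w, h = roi
    flow = cv2.calcOpticalFlowFarneback(prev_gray[y:y + h, x:x + w],
                                        curr_gray[y:y + h, x:x + w], None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    # Sustained, strong flow inside the ROI is taken as an approaching vehicle.
    return float(np.mean(mag)) > min_flow
```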

Cyber Character Implementation with Recognition and Synthesis of Speech/Image (음성/영상의 인식 및 합성 기능을 갖는 가상캐릭터 구현)

  • Choe, Gwang-Pyo;Lee, Du-Seong;Hong, Gwang-Seok
    • Journal of the Institute of Electronics Engineers of Korea CI / v.37 no.5 / pp.54-63 / 2000
  • In this paper, we implemented a cyber character capable of speech recognition, speech synthesis, motion tracking, and 3D animation. For speech recognition, we used a discrete-HMM algorithm with 128-level K-means vector quantization of MFCC feature vectors. For speech synthesis, we used a demi-syllable TD-PSOLA algorithm. For PC-based motion tracking, we present a fast optical-flow-like method. For animating the 3D model, we used vertex interpolation with Direct3D retained mode. Finally, we integrated these systems into a cyber character that plays a multiplication-table quiz game with the user while always looking at the user by means of the motion tracking system.
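
The 128-level vector quantization front end for the discrete HMM could be sketched as plain K-means over MFCC frames, as below; the placeholder features and codebook settings are assumptions, and a real system would take its MFCCs from a speech front end rather than random data.

```python
import numpy as np

def build_codebook(features, k=128, iters=20, seed=0):
    """Plain K-means codebook over MFCC feature vectors, as a sketch of the
    128-level vector quantization front end for the discrete HMM."""
    rng = np.random.default_rng(seed)
    codebook = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign every frame to its nearest codeword.
        dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each codeword to the mean of its assigned frames.
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = features[labels == j].mean(axis=0)
    return codebook

def quantize(features, codebook):
    """Map MFCC frames to discrete symbols (the observation sequence for the DHMM)."""
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Placeholder 13-dimensional MFCC frames; real features would come from a speech front end.
mfcc = np.random.default_rng(1).normal(size=(2000, 13))
cb = build_codebook(mfcc)
obs = quantize(mfcc, cb)   # symbol sequence fed to the discrete HMM
```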

Moving object segmentation using Markov Random Field (마코프 랜덤 필드를 이용한 움직이는 객체의 분할에 관한 연구)

  • 정철곤;김중규
    • The Journal of Korean Institute of Communications and Information Sciences / v.27 no.3A / pp.221-230 / 2002
  • This paper presents a new moving object segmentation algorithm based on a Markov random field. The algorithm is grounded in signal detection theory: the motion of a moving object is decided by a binary decision rule, and false decisions are corrected by a Markov random field model. The procedure consists of two steps, motion detection and object segmentation. First, motion detection decides the presence of motion from the velocity vectors, which are generated by optical flow, using a binary decision rule. Second, object segmentation removes the resulting noise using Bayes' rule. Experimental results demonstrate the efficiency of the presented method.
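
A compact sketch of the two-step idea above: a binary decision on the optical-flow magnitude followed by ICM relaxation under an Ising-style MRF prior to remove isolated false decisions. The energy terms and thresholds are illustrative, not the paper's.

```python
import numpy as np

def mrf_clean_motion_mask(flow_mag, motion_thresh=1.0, beta=1.5, n_iters=5):
    """Binary motion decision per pixel, then ICM relaxation under an
    Ising-style Markov random field prior to suppress isolated false decisions."""
    labels = (flow_mag > motion_thresh).astype(np.int8)     # step 1: binary decision
    data_cost = np.abs(flow_mag - motion_thresh)            # confidence of that decision

    for _ in range(n_iters):
        # Number of 4-neighbours currently labelled "moving" at each pixel.
        padded = np.pad(labels, 1)
        moving_nbrs = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                       padded[1:-1, :-2] + padded[1:-1, 2:]).astype(np.float32)
        # Energy of labelling a pixel 1 vs 0: data term plus disagreement with neighbours.
        e1 = np.where(flow_mag > motion_thresh, 0.0, data_cost) + beta * (4 - moving_nbrs)
        e0 = np.where(flow_mag > motion_thresh, data_cost, 0.0) + beta * moving_nbrs
        labels = (e1 < e0).astype(np.int8)                   # step 2: MRF correction (ICM)
    return labels
```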

Development of Interactive Video Using Real-time Optical Flow and Masking (옵티컬 플로우와 마스킹에 의한 실시간 인터렉티브 비디오 개발)

  • Kim, Tae-Hee
    • The Journal of the Korea Contents Association / v.11 no.6 / pp.98-105 / 2011
  • Recent advances in computer technology make real-time image processing and special effects possible on personal computers. This paper presents and analyzes a real-time interactive video system. The motivation of this work is an artistic concept that transforms the visual variations over time in a video of sea waves into sound, so that the audience experiences overlapping themselves onto nature. In practice, a video of sea waves taken on a beach is processed with an optical flow algorithm to extract the visual variation between video frames. The result is masked by the silhouette of the audience and projected onto a gallery space. Intensity information is then extracted from the resulting video and translated into piano sounds. The work thus generates an interactive space that realizes the intended concept.
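
A sketch of the mapping stage described above, assuming Farneback optical flow, a binary silhouette mask, and a simple scaling of mean flow magnitude to a MIDI-style piano note; the installation's actual mapping is not specified in the abstract.

```python
import cv2
import numpy as np

def frame_to_pitch(prev_gray, curr_gray, silhouette_mask,
                   low_note=48, high_note=84, max_flow=4.0):
    """Optical flow of the wave video, masked by the audience silhouette,
    averaged and scaled to a MIDI-style piano note.
    Note range and scaling are assumptions for illustration."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    mag[silhouette_mask == 0] = 0.0           # keep motion only where the audience stands
    level = float(np.mean(mag)) / max_flow    # normalised intensity of visual variation
    return int(low_note + min(level, 1.0) * (high_note - low_note))
```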