• Title/Summary/Keyword: Optical flow algorithm

Search Results: 189

Automatic Jitter Evaluation Method from Video using Optical Flow (Optical Flow를 사용한 동영상의 흔들림 자동 평가 방법)

  • Baek, Sang Hyune;Hwang, WonJun
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.8
    • /
    • pp.1236-1247
    • /
    • 2017
  • In this paper, we propose a method for evaluating uncomfortable shaking in video. When a video is shot with a handheld device such as a smartphone, most of the footage contains unwanted shake. Most of this shake is caused by hand tremor during shooting, and many methods for correcting it automatically have been proposed. Comparing these shake correction methods requires evaluating their correction performance, but since there is no standardized performance evaluation method, each correction method comes with its own evaluation procedure, making objective comparison difficult. In this paper, we propose a method for objectively evaluating video shake: the video is analyzed automatically to determine how much shake it contains and how strongly the shake is concentrated at specific times. To measure the shaking index, we propose a jitter model. We applied an algorithm implemented with optical flow to real videos to automatically measure shaking frequency. Finally, we analyzed the resulting shaking indices after applying three different image stabilization methods to nine sample videos.
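The abstract describes scoring shake from per-frame global motion estimated by optical flow. A minimal sketch of one plausible jitter model: treat the moving average of the global motion as the intended camera path and score the high-frequency residual around it (the window size and the residual-norm score are assumptions, not the paper's exact model):

```python
import numpy as np

def jitter_index(global_motion, smooth_win=5):
    """Per-frame jitter score from global motion vectors.

    global_motion: (N, 2) array of frame-to-frame camera motion,
    e.g. the mean optical-flow vector of each frame pair.
    The smooth trajectory is a moving average; jitter is the
    high-frequency residual around it.
    """
    motion = np.asarray(global_motion, dtype=float)
    kernel = np.ones(smooth_win) / smooth_win
    smooth = np.column_stack(
        [np.convolve(motion[:, k], kernel, mode="same") for k in range(2)]
    )
    residual = motion - smooth               # high-frequency shake component
    return np.linalg.norm(residual, axis=1)  # per-frame jitter magnitude
```

A steadily panning clip yields near-zero scores, while hand tremor shows up as a large residual at every frame.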

Flow Scheduling in OBS Networks Based on Software-Defined Networking Control Plane

  • Tang, Wan;Chen, Fan;Chen, Min;Liu, Guo
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.1
    • /
    • pp.1-17
    • /
    • 2016
  • The separated management and operation of commercial IP/optical multilayer networks makes network operators look for a unified control plane (UCP) to reduce their capital and operational expenditure. Software-defined networking (SDN) provides a central control plane with a programmable mechanism, regarded as a promising UCP for future optical networks. The general control and scheduling mechanism in SDN-based optical burst switching (OBS) networks is insufficient: the controller has to process a large number of messages per second, resulting in low network resource utilization. In view of this, this paper presents a burst-flow scheduling mechanism (BFSM) with a scheduling algorithm that takes channel usage into account. The simulation results show that, compared with the general control and scheduling mechanism, BFSM provides higher resource utilization and controller performance for the SDN-based OBS network in terms of burst loss rate, the number of messages to which the controller responds, and the average latency of the controller in processing a message.
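The abstract only says the BFSM algorithm "considers channel usage"; one hypothetical reading is a least-used-channel policy: among wavelength channels free by the burst's arrival time, pick the one with the lowest accumulated busy time, and drop the burst if none is free. The data layout and tie-breaking below are illustrative assumptions, not the paper's specification:

```python
def schedule_burst(channels, arrival, duration):
    """Pick a wavelength channel for an incoming burst.

    channels: list of dicts {"free_at": t, "usage": total_busy_time}.
    Among channels free by `arrival`, choose the least-used one to
    balance load; return None (burst lost) if every channel is busy.
    """
    candidates = [c for c in channels if c["free_at"] <= arrival]
    if not candidates:
        return None  # burst is lost
    best = min(candidates, key=lambda c: c["usage"])
    best["free_at"] = arrival + duration  # channel busy until burst ends
    best["usage"] += duration             # track cumulative channel usage
    return best
```

Scheduling per burst *flow* rather than per control message is what reduces the number of messages the SDN controller must handle.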

Optical Flow-Based Marker Tracking Algorithm for Collaboration Between Drone and Ground Vehicle (드론과 지상로봇 간의 협업을 위한 광학흐름 기반 마커 추적방법)

  • Beck, Jong-Hwan;Kim, Sang-Hoon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.3
    • /
    • pp.107-112
    • /
    • 2018
  • In this paper, an optical flow-based keypoint detection and tracking technique is proposed for collaboration between a flying drone with a vision system and ground robots. Target detection with a moving vision system poses many challenging problems, so we combined an improved FAST algorithm with the Lucas-Kanade method, adopting the better technique for feature detection and for optical-flow motion tracking respectively, which resulted in 40% higher processing speed than previous work. The proposed image binarization method, tailored to the given marker, also helped improve marker detection accuracy. We further studied how to optimize the embedded system, which performs complex computations for intelligent functions with very limited resources, while maintaining the drone's present weight and moving speed. In future work, we aim to develop smarter collaborating robots by learning and recognizing targets even against complex backgrounds.
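The tracking half of the pipeline is the classic Lucas-Kanade least-squares step: stack the spatial gradients in a window around a keypoint and solve for the displacement that best explains the temporal difference. A single-level numpy sketch (the paper's improved FAST detector and pyramid handling are not reproduced here):

```python
import numpy as np

def lucas_kanade(im1, im2, point, win=7):
    """Estimate the optical-flow vector (u, v) at `point` = (x, y)
    between two grayscale frames via the Lucas-Kanade normal equations:
    minimize sum over the window of (Ix*u + Iy*v + It)^2."""
    im1 = im1.astype(float)
    im2 = im2.astype(float)
    Iy, Ix = np.gradient(im1)   # spatial gradients (axis 0 = rows = y)
    It = im2 - im1              # temporal gradient
    x, y = point
    h = win // 2
    sl = (slice(y - h, y + h + 1), slice(x - h, x + h + 1))
    A = np.column_stack([Ix[sl].ravel(), Iy[sl].ravel()])
    b = -It[sl].ravel()
    flow, *_ = np.linalg.lstsq(A, b, rcond=None)
    return flow  # (u, v) in pixels
```

Shifting a smooth blob one pixel to the right recovers a flow close to (1, 0); in practice one would run this at each FAST corner, on an image pyramid for larger motions.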

Multi-Region based Radial GCN algorithm for Human action Recognition (행동인식을 위한 다중 영역 기반 방사형 GCN 알고리즘)

  • Jang, Han Byul;Lee, Chil Woo
    • Smart Media Journal
    • /
    • v.11 no.1
    • /
    • pp.46-57
    • /
    • 2022
  • In this paper, a multi-region based Radial Graph Convolutional Network (MRGCN) algorithm is described that can perform end-to-end action recognition using the optical flow and gradient of the input image. Because this method does not use skeleton information, which is difficult to acquire and complicated to estimate, it can be used in general CCTV environments where only a video camera is available. The novelty of MRGCN is that it expresses the optical flow and gradient of the input image as directional histograms, converts them into six feature vectors to reduce the computational load, and uses a newly developed radial network model to hierarchically propagate the deformation and shape change of the human body in spatio-temporal space. Another important feature is that the data input regions are arranged to overlap one another, so that information is not spatially disconnected between input nodes. In an evaluation of MRGCN's action recognition performance on 30 actions, it achieved a Top-1 accuracy of 84.78%, which is superior to existing GCN-based action recognition methods that use skeleton data as input.
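The key input encoding is collapsing a dense flow (or gradient) field into a directional histogram per region. A minimal sketch of that step, assuming magnitude-weighted angular bins (the bin count, weighting, and normalization are assumptions; the paper's exact six-vector construction is not reproduced):

```python
import numpy as np

def directional_histogram(flow, bins=8):
    """Collapse a dense (H, W, 2) field of per-pixel (dx, dy) vectors
    into a magnitude-weighted, normalized directional histogram, the
    kind of compact feature a graph node could consume."""
    dx, dy = flow[..., 0].ravel(), flow[..., 1].ravel()
    mag = np.hypot(dx, dy)
    ang = np.arctan2(dy, dx) % (2 * np.pi)            # direction in [0, 2*pi)
    idx = (ang / (2 * np.pi) * bins).astype(int) % bins
    hist = np.bincount(idx, weights=mag, minlength=bins)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

Computing one such histogram per (overlapping) spatial region turns a whole frame into a handful of short vectors, which is what keeps the GCN's computational load small.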

Moving object detection for biped walking robot platform (이족로봇 플랫폼을 위한 동체탐지)

  • Kang, Tae-Koo;Hwang, Sang-Hyun;Kim, Dong-Won;Park, Gui-Tae
    • Proceedings of the KIEE Conference
    • /
    • 2006.10c
    • /
    • pp.570-572
    • /
    • 2006
  • This paper discusses a method of moving object detection for biped robot walking. Most research on vision-based object detection has focused on fixed-camera algorithms. However, developing vision systems for biped walking robots is an important and urgent issue, since biped walking robots are ultimately developed not only for research but to be used in real life. Methods for moving object detection have been developed for task assignment and execution by biped robots as well as for human-robot interaction (HRI) systems, but these methods are not suitable for a biped walking robot. We therefore suggest an improved method suited to the biped walking robot platform. To carry out certain tasks, an object detection system using a modified optical flow algorithm with a wireless vision camera is implemented on a biped walking robot.
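The core difficulty the abstract names is that the camera itself moves as the robot walks. A generic sketch of the usual remedy (not the paper's exact modification): treat the dominant flow as ego-motion, subtract it, and threshold the residual to flag independently moving objects:

```python
import numpy as np

def moving_object_mask(flow, thresh=1.0):
    """Flag pixels whose flow departs from the camera's ego-motion.

    flow: (H, W, 2) dense optical-flow field. The per-component median
    flow is taken as the camera's ego-motion (a common robust choice);
    residual motion with magnitude above `thresh` marks moving objects.
    """
    ego = np.median(flow.reshape(-1, 2), axis=0)  # dominant camera motion
    residual = flow - ego
    return np.hypot(residual[..., 0], residual[..., 1]) > thresh
```

On a frame where the whole scene drifts with the walking motion, only regions moving relative to that drift survive the threshold.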


A new motion-based segmentation algorithm in image sequences (연속영상에서 motion 기반의 새로운 분할 알고리즘)

  • 정철곤;김중규
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.3A
    • /
    • pp.240-248
    • /
    • 2002
  • This paper presents a new motion-based segmentation algorithm for moving objects in image sequences. The procedure toward complete segmentation consists of two steps: pixel labeling and motion segmentation. In the first step, we assign a label to each pixel according to the magnitude of its velocity vector, where the velocity vectors are generated by optical flow. In the second step, we model the motion field as a Markov random field for noise cancellation and segment the motion through energy minimization. We demonstrate the efficiency of the presented method through experimental results.
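The first step, labeling pixels by velocity magnitude, can be sketched directly; the bin edges below are illustrative assumptions, and the second-stage MRF energy minimization is omitted:

```python
import numpy as np

def label_by_magnitude(flow, edges=(0.5, 2.0)):
    """Assign each pixel a label from the magnitude of its velocity
    vector: 0 = static, 1 = slow, 2 = fast (bin edges are assumed).

    flow: (H, W, 2) optical-flow field of (dx, dy) per pixel."""
    mag = np.hypot(flow[..., 0], flow[..., 1])
    return np.digitize(mag, edges)
```

This raw label map is noisy at motion boundaries, which is exactly why the paper follows it with MRF-based energy minimization to smooth the segmentation.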

Simple Image-Separation Method for Measuring Two-Phase Flow of Freely Rising Single Bubble (상승하는 단일 버블 이상유동의 PIV 계측을 위한 영상분리기법)

  • Park Sang-min;Jin Song-wan;Kim Won-tae;Sung Jae-yong;Yoo Jung-Yul
Proceedings of the Korean Society of Visualization (한국가시화정보학회 학술대회논문집)
    • /
    • 2002.11a
    • /
    • pp.7-10
    • /
    • 2002
  • A novel two-phase PIV algorithm using a single camera has been proposed, which introduces a method of separating the image into respective phase images and is applied to a freely rising single bubble. The gas bubble, tracer particles, and background each occupy different gray-intensity ranges in the same image frame once reflection and dispersion at the phase interface are eliminated by optical filters and fluorescent material; thus the signals of the two phases do not interfere with each other. Gas-phase velocities are obtained from the separated bubble image by applying two-frame PTV, while liquid-phase velocities are obtained from the tracer-particle image by applying a cross-correlation algorithm. Moreover, to increase the SNR (signal-to-noise ratio) of the cross-correlation of the tracer-particle image, image enhancement is employed.
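The separation step the abstract describes is a gray-level range split of a single frame into a bubble image and a tracer-particle image. A minimal sketch, with illustrative threshold ranges rather than the paper's calibrated values:

```python
import numpy as np

def separate_phases(img, bubble_range=(200, 255), particle_range=(80, 180)):
    """Split one grayscale frame into bubble and tracer-particle images
    by gray-intensity range. Pixels outside each range are zeroed, so
    the two outputs can feed PTV and cross-correlation independently."""
    bubble = np.where((img >= bubble_range[0]) & (img <= bubble_range[1]), img, 0)
    tracer = np.where((img >= particle_range[0]) & (img <= particle_range[1]), img, 0)
    return bubble, tracer
```

Because the optical filters and fluorescent dye push the two phases into disjoint intensity bands, a simple range test suffices; no spatial segmentation is needed.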


Large-Scale Phase Retrieval via Stochastic Reweighted Amplitude Flow

  • Xiao, Zhuolei;Zhang, Yerong;Yang, Jie
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.11
    • /
    • pp.4355-4371
    • /
    • 2020
  • Phase retrieval, recovering a signal from phaseless measurements, is generally considered to be an NP-hard problem. This paper adopts an amplitude-based nonconvex optimization cost function to develop a new stochastic gradient algorithm, named stochastic reweighted phase retrieval (SRPR). SRPR is a stochastic gradient iteration algorithm that runs in two stages: first, a truncated-sample stochastic variance reduction algorithm initializes the objective function; the second, gradient refinement stage continuously updates the amplitude-based stochastic weighted gradient to improve the initial estimate. Because of the stochastic method, each iteration of the two stages of SRPR involves only one equation, so SRPR is simple, scalable, and fast. Compared with state-of-the-art phase retrieval algorithms, simulation results show that SRPR has a faster convergence speed and requires fewer magnitude-only measurements to reconstruct the signal, in both the real- and complex-valued cases.
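The "one equation per iteration" structure can be sketched as a plain stochastic amplitude-flow update: pick a single measurement i and descend the gradient of (|aᵢᵀz| − yᵢ)². This is the generic amplitude-flow step, without SRPR's truncation and reweighting refinements:

```python
import numpy as np

def srpr_style_step(z, A, y, i, lr=0.1):
    """One stochastic update of the amplitude-based loss
    (|a_i^T z| - y_i)^2 for the single measurement index i.

    z: current estimate (n,), A: (m, n) sensing matrix,
    y: (m,) magnitude-only measurements y_i = |a_i^T x|."""
    a = A[i]
    inner = a @ z
    # gradient of (|inner| - y_i)^2 w.r.t. z (real-valued case)
    grad = (np.abs(inner) - y[i]) * np.sign(inner) * a
    return z - lr * grad
```

Touching one row of A per update is what makes the method cheap per iteration and scalable to large m, at the cost of needing many passes over the measurements.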

Development of a noncontact velocity tracking algorithm for 3-dimensional high-speed flows using digital image processing technique (디지털 화상처리를 이용한 유동장의 비접촉 3차원 고속류 계측법의 개발)

  • 도덕희
    • Journal of Advanced Marine Engineering and Technology
    • /
    • v.23 no.2
    • /
    • pp.259-269
    • /
    • 1999
  • A new algorithm for measuring the 3-D velocity components of high-speed flows was developed using a digital image processing technique. The measuring system consists of three CCD cameras, an optical instrument called an AOM, a digital image grabber, and a host computer. Images of moving particles arranged spatially on a rotating plate are taken by two or three CCD cameras and recorded onto the image grabber or a video tape recorder. The three-dimensional velocity components of the particles are obtained automatically by the developed algorithm. To verify the validity of this technique, three-dimensional velocity data sets obtained from a computer simulation of a backward-facing step flow were used as test data for the algorithm. An uncertainty analysis associated with the present algorithm is systematically evaluated. The present technique is shown to serve as a tool for the measurement of unsteady three-dimensional fluid flows.


Optical Flow Based Vehicle Counting and Speed Estimation in CCTV Videos (Optical Flow 기반 CCTV 영상에서의 차량 통행량 및 통행 속도 추정에 관한 연구)

  • Kim, Jihae;Shin, Dokyung;Kim, Jaekyung;Kwon, Cheolhee;Byun, Hyeran
    • Journal of Broadcast Engineering
    • /
    • v.22 no.4
    • /
    • pp.448-461
    • /
    • 2017
  • This paper proposes a vehicle counting and speed estimation method for traffic situation analysis in road CCTV videos. The proposed method removes distortion in the images using Inverse Perspective Mapping and obtains a specific region for vehicle counting and speed estimation using a lane detection algorithm. Vehicle counts and speed estimates are then obtained by applying optical flow within that region. The proposed method achieves a stable accuracy of 88.94% on CCTV videos from several regional groups, applied to a total of 106,993 frames, about 3 hours of video.
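Once Inverse Perspective Mapping has rectified the road plane, pixel displacements scale linearly with ground distance, so the flow inside the counting region converts to speed with two camera-dependent constants. A minimal sketch; `m_per_px` and `fps` are assumptions about the calibration, not values from the paper:

```python
def speed_kmh(mean_flow_px, m_per_px, fps):
    """Convert the mean optical-flow displacement inside the counting
    region (pixels per frame, measured on the IPM-rectified image)
    into a vehicle speed estimate in km/h.

    mean_flow_px: average flow magnitude along the lane, px/frame
    m_per_px:     ground meters covered by one rectified pixel
    fps:          video frame rate
    """
    return mean_flow_px * m_per_px * fps * 3.6  # m/s -> km/h
```

For example, 2 px/frame at 0.05 m/px and 30 fps corresponds to 3 m/s, i.e. 10.8 km/h.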