• Title/Summary/Keyword: object motion


A Hierarchical Semantic Video Object Tracking Algorithm Using Watershed Algorithm (Watershed 알고리즘을 사용한 계층적 이동체 추적 알고리즘)

  • 이재연;박현상;나종범
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.24 no.10B
    • /
    • pp.1986-1994
    • /
    • 1999
  • In this paper, a semi-automatic approach is adopted to extract a semantic object from real-world video sequences: human-aided segmentation for the first frame and automatic tracking for the remaining frames. The proposed algorithm has a hierarchical structure using the watershed algorithm. Each hierarchy consists of 3 basic steps: First, seeds are extracted from the simplified current frame. Second, region growing by a modified watershed algorithm is performed to get over-segmented regions. Finally, the segmented regions are classified into 3 categories, i.e., inside, outside, or uncertain regions, according to region probability values, which are acquired from the probability map calculated from an estimated motion-vector field. Then, for the remaining uncertain regions, the above 3 steps are repeated at lower hierarchies with less simplified frames until every region is classified into a certain region. The proposed algorithm provides promising results on studio-quality sequences such as 'Claire', 'Miss America', 'Akiyo', and 'Mother and Daughter'.
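The final classification step above can be sketched as follows; the probability thresholds and the region structure are illustrative assumptions, not values from the paper.

```python
# Sketch of the region-classification step: each over-segmented region
# carries a probability (from the motion-based probability map) of
# belonging to the tracked object. Thresholds are illustrative assumptions.

def classify_regions(region_probs, hi=0.8, lo=0.2):
    """Map region id -> 'inside' / 'outside' / 'uncertain'."""
    labels = {}
    for rid, p in region_probs.items():
        if p >= hi:
            labels[rid] = "inside"      # confidently part of the object
        elif p <= lo:
            labels[rid] = "outside"     # confidently background
        else:
            labels[rid] = "uncertain"   # re-process at a lower hierarchy
    return labels

# Uncertain regions would be fed to the next, less-simplified hierarchy level.
labels = classify_regions({0: 0.95, 1: 0.05, 2: 0.5})
```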


Risk free zone study for cylindrical objects dropped into the water

  • Xiang, Gong;Birk, Lothar;Li, Linxiong;Yu, Xiaochuan;Luo, Yong
    • Ocean Systems Engineering
    • /
    • v.6 no.4
    • /
    • pp.377-400
    • /
    • 2016
  • Dropped objects are among the top ten causes of fatalities and serious injuries in the oil and gas industry (DORIS, 2016). Objects may accidentally fall from platforms or vessels during lifting or any other offshore operation. Proper planning of lifting operations requires knowledge of the risk-free zone on the sea bed to protect underwater structures and equipment. To this end, a three-dimensional (3D) theory of the dynamic motion of a dropped cylindrical object is expanded to also consider ocean currents. The expanded theory is integrated into the authors' Dropped Objects Simulator (DROBS). DROBS is utilized to simulate the trajectories of dropped cylinders falling through uniform currents originating from different directions (incoming angle at $0^{\circ}$, $90^{\circ}$, $180^{\circ}$, and $270^{\circ}$). It is found that the trajectories and landing points of dropped cylinders are greatly influenced by the direction of the current. The initial conditions after the cylinders have fallen into the water are treated as random variables. It is assumed that the corresponding parameters, i.e., orientation angle, translational velocity, and rotational velocity, follow normal distributions. The paper presents results of DROBS simulations for the case of a dropped cylinder with an initial drop angle of $60^{\circ}$ through air-water columns without current. Then Monte Carlo simulations are used for predicting the landing-point distributions of dropped cylinders with varying drop angles under current. The resulting landing-point distribution plots may be used to identify risk-free zones for offshore lifting operations.
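The Monte Carlo procedure can be illustrated with a minimal sketch: sample normally distributed initial conditions, propagate each sample through a descent model, and collect landing points. The toy kinematics below (straight-line descent deflected by drop angle plus a uniform current) and all distribution parameters are illustrative assumptions, not the 3D DROBS model.

```python
import math
import random

# Monte Carlo sketch of landing-point scatter for a dropped cylinder.
# The descent model and the N(mean, std) parameters are assumptions.

def landing_point(depth, drop_angle_deg, sink_speed, current_speed, current_dir_deg):
    """Return (x, y) landing offset on the sea bed for one sample."""
    t = depth / sink_speed                      # time to reach the bottom
    # horizontal drift from the initial inclination of the trajectory
    drift = depth * math.tan(math.radians(drop_angle_deg))
    # drift from the uniform current during the descent
    cx = current_speed * t * math.cos(math.radians(current_dir_deg))
    cy = current_speed * t * math.sin(math.radians(current_dir_deg))
    return drift + cx, cy

def monte_carlo_landings(n, depth=100.0, current_speed=0.5,
                         current_dir_deg=0.0, seed=1):
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        angle = rng.gauss(60.0, 5.0)            # drop angle ~ N(60, 5) deg (assumed)
        speed = max(0.1, rng.gauss(2.0, 0.3))   # sink speed ~ N(2.0, 0.3) m/s (assumed)
        pts.append(landing_point(depth, angle, speed, current_speed, current_dir_deg))
    return pts

pts = monte_carlo_landings(1000)
radius = max(math.hypot(x, y) for x, y in pts)  # crude risk-zone radius
```

The scatter of `pts` plays the role of the landing-point distribution plots; a risk-free zone would lie outside the sampled radius.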

Design and Implementation of Flying-object Tracking Management System by using Radar Data (레이더 자료를 이용한 항적추적관리시스템 설계 및 구현)

  • Lee Moo-Eun;Ryu Keun-Ho
    • The KIPS Transactions:PartD
    • /
    • v.13D no.2 s.105
    • /
    • pp.175-182
    • /
    • 2006
  • Radars are used in the military to detect the motion of low-flying enemy planes. Radar-detected raw data are first processed and then inserted into the ground tactical C4I system. Next, these data are analyzed and broadcast to the shooter system in real time. But the accuracy of the information and the time spent on display and graphical computation depend on the operator's capability. In this paper, we propose the Flying Object Tracking Management System, which displays the objects' trails in real time using data received from the radars. We apply a coordinate-system translation algorithm, improvements to the existing communication protocol and equipment, and a signal and information computation process. In particular, a radar signal duplication computation and synchronization algorithm is developed to display the objects' coordinates, and thus we can improve the Tactical Air Control System's reliability, efficiency, and ease of use.
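A coordinate-system translation of the kind applied above can be sketched as follows; the azimuth convention (clockwise from north) and the radar-site offset are illustrative assumptions, not the paper's actual transform.

```python
import math

# Sketch of translating a radar plot (range, azimuth) into a common
# Cartesian display frame. Azimuth is measured clockwise from north;
# the radar-site offset parameters are illustrative assumptions.

def radar_to_display(range_m, azimuth_deg, site_x=0.0, site_y=0.0):
    """Convert one radar detection to (x, y) in the shared display frame."""
    az = math.radians(azimuth_deg)
    x = site_x + range_m * math.sin(az)   # east component
    y = site_y + range_m * math.cos(az)   # north component
    return x, y

# A displayed trail is just the sequence of translated plots over time.
track = [radar_to_display(r, a) for r, a in [(1000, 0), (1000, 90)]]
```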

Virtual Block Game Interface based on the Hand Gesture Recognition (손 제스처 인식에 기반한 Virtual Block 게임 인터페이스)

  • Yoon, Min-Ho;Kim, Yoon-Jae;Kim, Tae-Young
    • Journal of Korea Game Society
    • /
    • v.17 no.6
    • /
    • pp.113-120
    • /
    • 2017
  • With the development of virtual reality technology, user-friendly hand gesture interfaces have been increasingly studied in recent years for natural interaction with virtual 3D objects. Most earlier studies on the hand-gesture interface use relatively simple hand gestures. In this paper, we suggest an intuitive hand gesture interface for interaction with 3D objects in virtual reality applications. For hand gesture recognition, first of all, we preprocess various hand data and classify the data through a binary decision tree. The classified data are re-sampled and converted to a chain code, and then the hand feature data are constructed from histograms of the chain code. Finally, the input gesture is recognized from the feature data by MCSVM-based machine learning. To test our proposed hand gesture interface we implemented a 'Virtual Block' game. Our experiments showed a recognition ratio of about 99.2% for 16 kinds of command gestures, and the interface proved more intuitive and user-friendly than a conventional mouse interface.
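The chain-code feature step can be sketched as follows: a re-sampled contour is encoded as 8-direction Freeman chain codes, whose normalized histogram serves as the feature vector. The tiny contour and the exact encoding details are illustrative assumptions.

```python
# Sketch of the chain-code feature: encode a re-sampled contour as
# 8-direction Freeman chain codes, then histogram them. The contour
# below and the direction table are illustrative assumptions.

DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Encode consecutive 8-connected contour points as Freeman codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        codes.append(DIRS[(x1 - x0, y1 - y0)])
    return codes

def chain_histogram(codes):
    """Normalized 8-bin histogram of chain codes, used as the feature vector."""
    hist = [0.0] * 8
    for c in codes:
        hist[c] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]  # tiny closed contour
feat = chain_histogram(chain_code(square))
```

A feature vector like `feat` would then be fed to the MCSVM classifier.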

A study on an error recovery expert system in the advanced teleoperator system (지적 원격조작시스템의 일환으로서 에러회복 전문가 시스템에 관한 연구)

  • 이순요;염준규;오제상;이창민
    • Journal of the Ergonomics Society of Korea
    • /
    • v.6 no.2
    • /
    • pp.19-28
    • /
    • 1987
  • If an error occurs in the automatic mode while the advanced teleoperator system performs a task in a hostile environment, the mode changes into the manual mode. The operation by program and the operation by human recover the error in the manual mode. The system then resumes the automatic mode and continues the given task. In order to utilize the inverse kinematics as a means of the operation by program in the manual mode, Lee and Nagamachi determined the end point of the robot trajectory planning, which varied with the height of the task object recognized by a TV monitor, solved the end point by the fuzzy set theory, and controlled the position of the robot hand by the inverse kinematics and the posture of the robot hand by the operation by human. But the operation by human took a lot of task time because the position and the posture of the robot hand were controlled separately. To reduce the task time by human, this paper develops an error recovery expert system (ERES). The position of the robot hand is controlled by the inverse kinematics of the Cartesian coordinate system to the end point, which is determined by the fuzzy set theory. The posture of the robot hand is controlled by the modality of the robot hand's motion, which is determined by the posture of the task object. The knowledge base and the inference engine of the ERES are developed using the muLISP-86 language. The experimental results show that the average task time by human with the ERES, which integrates the position and the posture control of the robot hand, is shorter than that of the preliminary experiment, in which the position and the posture of the robot hand were controlled separately. A further study is likely to research an even more intelligent robot system control using a superimposed display and digitizer, which can present two-dimensional coordinates of the work space for the convenience of human interaction.
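The position control via inverse kinematics can be illustrated with a minimal two-link planar arm solved in closed form; the link lengths and the elbow-down solver are illustrative assumptions, not the paper's robot.

```python
import math

# Minimal 2-link planar inverse kinematics: given an end point (x, y)
# (in the paper, chosen via the fuzzy set theory), solve the joint angles.
# Link lengths are illustrative assumptions.

def ik_2link(x, y, l1=1.0, l2=1.0):
    """Return (theta1, theta2) in radians for the elbow-down solution."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    t2 = math.acos(c2)
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2), l1 + l2 * math.cos(t2))
    return t1, t2

def fk_2link(t1, t2, l1=1.0, l2=1.0):
    """Forward kinematics, used here to check the IK solution."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

t1, t2 = ik_2link(1.2, 0.5)
```

Running the forward kinematics on the returned angles reproduces the commanded end point, which is the round-trip check one would use in practice.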


Gait Recognition Using Multiple Feature detection (다중 특징점 검출을 이용한 보행인식)

  • Cho, Woon;Kim, Dong-Hyeon;Paik, Joon-Ki
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.6
    • /
    • pp.84-92
    • /
    • 2007
  • Gait recognition is presented for human identification from a sequence of noisy silhouettes segmented from video captured at a distance. The proposed gait recognition algorithm gives better performance than the baseline algorithm because it segments the object using multiple modules: i) motion detection, ii) object region detection, iii) head detection, and iv) active shape models, which solve the baseline algorithm's problems in background modeling and shadow removal and yield better recognition rates. For the experiment, we used the HumanID Gait Challenge data set, which is the largest gait benchmarking data set, with 122 subjects. For realistic simulation we vary the following parameters: i) viewpoint, ii) shoe, iii) surface, iv) carrying condition, and v) time.
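The motion-detection module can be sketched with simple frame differencing, which is a common way to obtain silhouette candidates; the plain-list frame representation and the threshold value are illustrative assumptions.

```python
# Sketch of a motion-detection module: silhouette candidates come from
# thresholded frame differencing. Frames are plain 2D grayscale lists;
# the threshold value is an illustrative assumption.

def motion_mask(prev, curr, thresh=20):
    """Binary mask (1 = moving pixel) from two same-sized grayscale frames."""
    return [[1 if abs(c - p) > thresh else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

prev = [[10, 10, 10],
        [10, 10, 10]]
curr = [[10, 90, 10],
        [10, 95, 10]]
mask = motion_mask(prev, curr)
```

The mask would then be refined by the object-region, head-detection, and active-shape-model modules before recognition.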

Security Framework for Intelligent Predictive Surveillance Systems (지능형 예측감시 시스템을 위한 보안 프레임워크)

  • Park, Jeonghun;Park, Namje
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.3
    • /
    • pp.77-83
    • /
    • 2020
  • Recently, intelligent predictive surveillance systems have emerged. Such a system can probabilistically predict future situations and events from existing data, going beyond current object detection, object motion, and situation recognition. Since an intelligent predictive surveillance system is highly likely to handle personal information, security consideration is essential for protecting that information. The existing video surveillance framework has limitations in terms of privacy. In this paper, we propose a security framework for intelligent predictive surveillance systems. In the proposed method, detailed components for each unit are specified, divided into terminal, transmission, and monitoring layers. In particular, it supports active personal information protection in the video surveillance process by supporting detailed access control and de-identification.

Denoise of Astronomical Images with Deep Learning

  • Park, Youngjun;Choi, Yun-Young;Moon, Yong-Jae;Park, Eunsu;Lim, Beomdu;Kim, Taeyoung
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.44 no.1
    • /
    • pp.54.2-54.2
    • /
    • 2019
  • Removing noise, which occurs inevitably when taking image data, has been a big concern. Image stacking, averaging or simply adding the pixel values of multiple pictures taken of a specific area, is regarded as the only way to raise the signal-to-noise ratio. Its performance and reliability are unquestioned, but its weaknesses are also evident: objects with fast proper motion can vanish, and most of all, it takes too much time. So if we can handle a single-shot image well and achieve similar performance, we can overcome those weaknesses. Recent developments in deep learning have enabled things that were not possible with former algorithm-based programming. One of those things is generating data with more information from data with less information. As a part of that, we reproduced stacked images from single-shot images using a kind of deep learning, the conditional generative adversarial network (cGAN). r-band camcol2 south data were used from the SDSS Stripe 82 data. From all fields, image data stacked from only 22 individual images and, as a pair for each stacked image, single-pass data included in the stacked image were used. All used fields are cut into $128{\times}128$ pixel size, so the total number of images is 17930. 14234 pairs of images were used for training the cGAN and 3696 pairs were used to verify the result. As a result, the RMS error of pixel values between the data generated under the best condition and the target data was $7.67{\times}10^{-4}$, compared to $1.24{\times}10^{-3}$ for the original input data. We also applied the network to a few test galaxy images, and the generated images were qualitatively similar to the stacked images compared with other de-noising methods. In addition, with photometry, the number count of stacked-cGAN matched sources is larger than that of the single pass-stacked one, especially for fainter objects. Magnitude completeness also became better for fainter objects. With this work, it is possible to reliably observe objects 1 magnitude fainter.
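The image-stacking baseline the network imitates can be sketched in a few lines: averaging N exposures of the same field reduces the noise level roughly as 1/sqrt(N). The flat scene and noise level below are illustrative assumptions, not SDSS data.

```python
import random
import statistics

# Sketch of the image-stacking baseline: pixel-wise averaging of N noisy
# exposures lowers the noise roughly as 1/sqrt(N). Scene and noise values
# are illustrative assumptions.

def stack(images):
    """Pixel-wise average of equally sized 1D 'images'."""
    n = len(images)
    return [sum(px) / n for px in zip(*images)]

rng = random.Random(0)
true_scene = [100.0] * 1000                       # flat field for simplicity
noisy = [[v + rng.gauss(0.0, 5.0) for v in true_scene] for _ in range(22)]

single_err = statistics.pstdev(p - t for p, t in zip(noisy[0], true_scene))
stacked_err = statistics.pstdev(p - t for p, t in zip(stack(noisy), true_scene))
# stacked_err should be roughly single_err / sqrt(22)
```

The cGAN in the paper aims to recover a stacked-quality image from just one of the 22 noisy exposures.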


Manipulator with Camera for Mobile Robots (모바일 로봇을 위한 카메라 탑재 매니퓰레이터)

  • Lee Jun-Woo;Choe, Kyoung-Geun;Cho, Hun-Hee;Jeong, Seong-Kyun;Bong, Jae-Hwan
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.17 no.3
    • /
    • pp.507-514
    • /
    • 2022
  • Mobile manipulators are in the limelight in the field of home automation due to their mobility and manipulation capabilities. In this paper, we developed a small-size manipulator system that can be mounted on a mobile robot, as a preliminary study toward developing a mobile manipulator. The developed manipulator has four degrees of freedom. At the end-effector of the manipulator, there are a camera and a gripper to recognize and manipulate the object. One of the four degrees of freedom is linear motion in the vertical direction, for better interaction with human hands, which are located higher than the mobile manipulator. The developed manipulator was designed to place the four actuators close to its base to reduce its rotational inertia, which improves the stability of manipulation and reduces the risk of rollover. The developed manipulator repeatedly performed a pick-and-place task and successfully manipulated objects within its workspace.
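A 4-DOF layout with one vertical linear joint can be sketched with simple forward kinematics; the joint arrangement (vertical prismatic joint plus base yaw and two pitch joints) and the link lengths are illustrative assumptions, not the paper's design.

```python
import math

# Forward-kinematics sketch of an assumed 4-DOF arm: one vertical
# prismatic joint (d) plus a base yaw and two planar pitch joints.
# Link lengths and the joint arrangement are illustrative assumptions.

def fk(d, yaw, t1, t2, l1=0.15, l2=0.12):
    """End-effector (x, y, z) for the assumed joint layout (angles in rad)."""
    r = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)   # planar reach
    z = d + l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return r * math.cos(yaw), r * math.sin(yaw), z

# Fully extended straight out, with the prismatic joint lifted to 0.3 m:
x, y, z = fk(0.3, 0.0, 0.0, 0.0)
```

The prismatic joint only shifts `z`, which mirrors the paper's use of a vertical linear axis to reach hand height without tilting the arm.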

LiDAR Static Obstacle Map based Vehicle Dynamic State Estimation Algorithm for Urban Autonomous Driving (도심자율주행을 위한 라이다 정지 장애물 지도 기반 차량 동적 상태 추정 알고리즘)

  • Kim, Jongho;Lee, Hojoon;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.13 no.4
    • /
    • pp.14-19
    • /
    • 2021
  • This paper presents a LiDAR static obstacle map based vehicle dynamic state estimation algorithm for urban autonomous driving. In autonomous driving, state estimation of the host vehicle is important for accurate prediction of ego motion and of perceived objects. Therefore, in a situation in which noise exists in the control input of the vehicle, state estimation using sensors such as LiDAR and vision is required. However, it is difficult to obtain a measurement for the vehicle state because the perception sensors of an autonomous vehicle also observe dynamic objects. The proposed algorithm consists of two parts. First, a Bayesian rule-based static obstacle map is constructed using the continuous LiDAR point cloud input. Second, vehicle odometry over the time interval is calculated by matching against the static obstacle map using the Normal Distributions Transform (NDT) method. The velocity and yaw rate of the vehicle are then estimated by an Extended Kalman Filter (EKF) using the vehicle odometry as the measurement. The proposed algorithm is implemented in the Linux Robot Operating System (ROS) environment and is verified with data obtained from actual driving on urban roads. The test results show a more robust and accurate dynamic state estimation result when there is a bias in the chassis IMU sensor.
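The filtering step can be illustrated with a minimal scalar Kalman filter that fuses noisy map-matched odometry velocity; with a constant-velocity model the update is linear, so this is a simplification of the paper's EKF, and all noise values are assumptions.

```python
import random

# Minimal Kalman filter fusing noisy odometry-derived velocity, as a
# simplified stand-in for the paper's EKF (a constant-velocity model
# makes the update linear). Noise levels are illustrative assumptions.

def kf_velocity(measurements, q=0.01, r=0.25, v0=0.0, p0=1.0):
    """Filter a sequence of velocity measurements; return the estimates."""
    v, p = v0, p0
    estimates = []
    for z in measurements:
        p += q                       # predict: constant-velocity model
        k = p / (p + r)              # Kalman gain
        v += k * (z - v)             # update with the odometry measurement
        p *= (1.0 - k)
        estimates.append(v)
    return estimates

rng = random.Random(42)
true_v = 10.0                                     # m/s, assumed constant
meas = [true_v + rng.gauss(0.0, 0.5) for _ in range(200)]
est = kf_velocity(meas)
```

The estimate converges to the true velocity even though each NDT-matched odometry measurement is noisy, which is the property the paper relies on when the chassis IMU is biased.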