• Title/Summary/Keyword: space target detection


2D Human Pose Estimation based on Object Detection using RGB-D information

  • Park, Seohee;Ji, Myunggeun;Chun, Junchul
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.2 / pp.800-816 / 2018
  • In recent years, video surveillance research has combined image analysis technology with deep learning methods to recognize various pedestrian behaviors and analyze the overall situation of objects. Human Activity Recognition (HAR), an important issue in video surveillance research, aims to detect abnormal pedestrian behavior in CCTV environments. To recognize human behavior, it is necessary to detect the human in the image and then estimate the pose of the detected human. In this paper, we propose a novel approach for 2D human pose estimation based on object detection using RGB-D information. By adding depth information to RGB information, which on its own is limited in detecting objects due to the lack of topological information, we can improve detection accuracy. Subsequently, the rescaled region of the detected object is fed to Convolutional Pose Machines (CPM), a sequential prediction structure based on convolutional neural networks. We utilize CPM to generate belief maps that predict the positions of keypoints representing human body parts and to estimate the human pose by detecting 14 key body points. The experimental results show that the proposed method detects target objects robustly under occlusion. It is also possible to perform 2D human pose estimation by providing an accurately detected region as the input to the CPM. As future work, we will estimate the 3D human pose by mapping the 2D coordinates of the body parts onto 3D space. Consequently, we can provide useful human behavior information for HAR research.
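
As a companion to the abstract above, here is a minimal sketch of how 2D keypoints could be read off CPM-style belief maps; the array shape, channel count, and function name are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def keypoints_from_belief_maps(belief_maps):
    """Extract 2D keypoint locations from CPM-style belief maps.

    belief_maps: array of shape (H, W, 14), one channel per body part
                 (shape and channel count are assumptions for illustration).
    Returns a list of (x, y, confidence) tuples, one per keypoint.
    """
    h, w, num_parts = belief_maps.shape
    keypoints = []
    for p in range(num_parts):
        idx = np.argmax(belief_maps[:, :, p])      # flat index of the peak response
        y, x = np.unravel_index(idx, (h, w))       # convert to 2D coordinates
        keypoints.append((int(x), int(y), float(belief_maps[y, x, p])))
    return keypoints

# Example with a random map (stands in for real CPM output)
maps = np.random.rand(368, 368, 14)
print(keypoints_from_belief_maps(maps)[:3])
```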

Autonomous Surveillance-tracking System for Workers Monitoring (작업자 모니터링을 위한 자동 감시추적 시스템)

  • Ko, Jung-Hwan;Lee, Jung-Suk;An, Young-Hwan
    • 전자공학회논문지 IE / v.47 no.2 / pp.38-46 / 2010
  • In this paper, an autonomous surveillance-tracking system for worker monitoring based on a stereo vision scheme is proposed. By analyzing the characteristics of a cross-axis camera system through experiments, an optimized stereo vision system is constructed, and with it an intelligent worker surveillance-tracking system is implemented, in which a target worker moving through the environment can be detected and tracked, and the worker's stereo location coordinates and moving trajectory in world space can be extracted. Experiments on moving-target surveillance-tracking show that the tracked target's center location maintains very low error ratios of 1.82% and 1.11% on average in the horizontal and vertical directions, respectively. The error ratio between the calculated and measured 3D location coordinates of the target person is also very low, 2.5% on average for the test scenario. Accordingly, this paper suggests the practical feasibility of an intelligent stereo surveillance system for real-time tracking of a target worker moving through the environment and for robust detection of the target's 3D location coordinates and moving trajectory in the real world.
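
The abstract above reports 3D location coordinates recovered from a stereo camera pair. Below is a minimal sketch of standard parallel-axis stereo triangulation, which conveys the idea; the paper's cross-axis geometry and calibration values are not reproduced, and all parameter values are assumptions.

```python
def stereo_triangulate(xl, xr, y, focal_px, baseline_m, cx, cy):
    """Recover 3D coordinates from a matched point in a rectified stereo pair.

    xl, xr : column of the target centre in the left/right images (pixels)
    y      : row of the target centre (pixels, same in both images after rectification)
    focal_px, baseline_m, cx, cy : camera intrinsics/extrinsics (assumed values)
    """
    disparity = xl - xr
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at infinity or bad match")
    Z = focal_px * baseline_m / disparity        # depth
    X = (xl - cx) * Z / focal_px                 # lateral offset
    Y = (y - cy) * Z / focal_px                  # vertical offset
    return X, Y, Z

# e.g. 640x480 cameras, ~800 px focal length, 12 cm baseline (assumed values)
print(stereo_triangulate(352.0, 331.0, 248.0, focal_px=800.0,
                         baseline_m=0.12, cx=320.0, cy=240.0))
```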

Drone Obstacle Avoidance Algorithm using Camera-based Reinforcement Learning (카메라 기반 강화학습을 이용한 드론 장애물 회피 알고리즘)

  • Jo, Si-hun;Kim, Tae-Young
    • Journal of the Korea Computer Graphics Society / v.27 no.5 / pp.63-71 / 2021
  • Among drone autonomous flight technologies, obstacle avoidance is a very important technology that can prevent damage to the drone or its surroundings and avert danger. Although LiDAR sensor-based obstacle avoidance shows relatively high accuracy and is widely used in recent studies, it has the disadvantages of a high unit price and limited capacity for processing visual information. Therefore, this paper proposes an obstacle avoidance algorithm for drones using camera-based PPO (Proximal Policy Optimization) reinforcement learning, which is relatively inexpensive and highly scalable with visual information. The drone, obstacles, target points, etc. are randomly placed in a three-dimensional learning environment; stereo images are obtained using a Unity camera, and YOLOv4-Tiny object detection is performed. Next, the distance between the drone and each detected object is measured through triangulation with the stereo camera. Based on this distance, the presence or absence of an obstacle is determined: penalties are given for obstacles and rewards for target points. Experiments show that the camera-based obstacle avoidance algorithm achieves accuracy and average target-point arrival times similar to a LiDAR-based algorithm, so it is highly likely to be usable in practice.
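
The reward shaping described above (penalties near obstacles, rewards at target points) can be illustrated with a toy function; the thresholds and magnitudes below are assumptions, not the paper's settings.

```python
def step_reward(distance_to_obstacle, distance_to_target,
                obstacle_threshold=2.0, target_threshold=0.5):
    """Toy reward shaping for camera-based obstacle avoidance.

    distance_to_obstacle: metres to the nearest detected obstacle
                          (e.g. from stereo triangulation of a detected box)
    distance_to_target:   metres to the goal point
    Thresholds and magnitudes are illustrative assumptions, not the paper's values.
    """
    if distance_to_obstacle < obstacle_threshold:
        return -1.0                    # penalty: too close to an obstacle
    if distance_to_target < target_threshold:
        return +10.0                   # large reward: target point reached
    return -0.01                       # small step cost to encourage progress

print(step_reward(1.5, 8.0), step_reward(5.0, 0.3), step_reward(5.0, 8.0))
```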

Development of Hardware Design Process Enhancement Tool for Flight Control Computer using Modeling and Simulation (M&S 기반의 비행조종컴퓨터 하드웨어 설계 프로세스 개선을 위한 툴 개발)

  • Kwon, Jong-Kwang;Ahn, Jong-Min;Ko, Joon-Soo;Seung, Dae-Beom;Kim, Whan-Woo
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.35 no.11 / pp.1036-1042 / 2007
  • It is rather difficult to improve the flight control computer (FLCC) hardware (H/W) development schedule due to the lack of commercial off-the-shelf (COTS) tools or target-specific tools. Thus, an enhanced process utilizing modeling, simulation, and virtual reality tools is suggested. This paper presents a H/W design process enhancement tool (PET) for FLCC design requirements such as FLCC input/output (I/O) signal flow, I/O fault detection, failure management algorithms, circuit logic, PCB assembly configuration, and installation, utilizing simulation and visualization in a virtual space. The new tool will provide the capability to simulate various FLCC design configurations, including shop replaceable unit (SRU) level assembly/disassembly, utilizing OpenFlight-format 3-D modeling data.
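
The abstract mentions simulating I/O signal flow and fault detection. The sketch below shows one hypothetical way a simulated FLCC input channel could be range-checked and latched as failed; the channel name, limits, and persistence count are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class AnalogChannel:
    """Hypothetical FLCC analog input channel with plausibility limits."""
    name: str
    lower: float
    upper: float
    consecutive_limit: int = 3          # out-of-range samples tolerated before latching
    _fault_count: int = 0
    latched: bool = False

    def monitor(self, sample: float) -> bool:
        """Return True if the channel is currently considered healthy."""
        if self.latched:
            return False
        if not (self.lower <= sample <= self.upper):
            self._fault_count += 1
            if self._fault_count >= self.consecutive_limit:
                self.latched = True      # persistent out-of-range: declare failure
        else:
            self._fault_count = 0        # transient glitch cleared
        return not self.latched

aoa = AnalogChannel("angle_of_attack_deg", lower=-30.0, upper=40.0)
for v in [5.0, 55.0, 57.0, 60.0, 4.0]:
    print(v, aoa.monitor(v))
```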

A Study on the Synthetic Aperture Radar Processor using AOD/CCD (AOD/CCD를 이용한 합성개구면 레이다 처리기에 관한 연구)

  • 박기환;이영훈;이영국;은재정;박한규
    • The Journal of Korean Institute of Communications and Information Sciences / v.19 no.10 / pp.1957-1964 / 1994
  • In this thesis, a Synthetic Aperture Radar processor capable of real-time processing is implemented using a CW (Continuous Wave) laser as the light source, a CCD (Charge Coupled Device) as the time integrator, and an AOD (Acousto-Optic Device) as the space integrator. One advantage of the proposed system is that it does not require driving circuits for the light source. To implement the system, the linear frequency modulation (chirp) technique was used for the radar signal. The received data for the unit target were processed using a 7.80 board and accompanying electronic circuits. In order to reduce the smear effect that occurs in the focused chirp signal, the Bragg diffraction angle of the AOD was utilized to make sharp pulses from the laser source, and the pulses were synchronized with the chirp signal. Experiments and analysis of the data and images detected from the CCD of the proposed SAR system demonstrate that the detection effect degrades as the distance to the unit target increases, and that the resolving power improves as the bandwidth of the chirp signal increases. Also, as the pulse width of the light source decreases, the smear effect is reduced. The experimental results confirm that the system proposed in this paper can be used as a real-time SAR processor.
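
The chirp (linear FM) signal and its compression, which the abstract links to resolving power, can be sketched numerically; the sample rate, pulse width, bandwidth, and target delay below are assumed values, and the optical AOD/CCD correlator is replaced by a plain digital matched filter.

```python
import numpy as np

# Linear-FM (chirp) pulse: bandwidth B over duration T; range resolution scales as c / (2B)
fs, T, B = 10e6, 100e-6, 2e6                 # sample rate, pulse width, bandwidth (assumed)
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)  # complex LFM waveform

# Simulated echo from a single point target delayed by 30 us
delay = int(30e-6 * fs)
echo = np.zeros(len(t) + delay, dtype=complex)
echo[delay:delay + len(chirp)] = chirp

# Matched filtering (pulse compression): correlate the echo with the transmitted chirp
compressed = np.abs(np.correlate(echo, chirp, mode="valid"))
print("peak at sample", int(np.argmax(compressed)), "(expected", delay, ")")
```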


Loitering Behavior Detection Using Shadow Removal and Chromaticity Histogram Matching (그림자 제거와 색도 히스토그램 비교를 이용한 배회행위 검출)

  • Park, Eun-Soo;Lee, Hyung-Ho;Yun, Myoung-Kyu;Kim, Min-Gyu;Kwak, Jong-Hoon;Kim, Hak-Il
    • Journal of the Korea Institute of Information Security & Cryptology / v.21 no.6 / pp.171-181 / 2011
  • Proposed in this paper is an intelligent video surveillance system that effectively detects multiple loitering objects, even those that leave the camera's field of view and later return to the target zone. After the background and foreground are segmented using a Gaussian mixture model and shadows are removed, objects returning to the target zone are recognized using chromaticity histograms, and their loitering duration is preserved. For more accurate measurement of loitering behavior, camera calibration is also applied to map the image plane to the real-world ground plane. Hence, loitering behavior can be detected by considering how long an object remains in the real-world space. The experiment was performed using loitering videos, and all of the loitering behaviors were accurately detected.
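
A minimal sketch of the chromaticity-histogram matching idea (comparing an object's intensity-invariant colour distribution when it re-enters the zone) follows; the bin count, the similarity measure, and the random stand-in pixels are assumptions.

```python
import numpy as np

def chromaticity_histogram(bgr_pixels, bins=16):
    """2D histogram over normalised chromaticity (r, g) of a foreground object.

    bgr_pixels: (N, 3) array of the object's pixels (B, G, R), e.g. from a
                foreground mask after GMM background subtraction and shadow removal.
    """
    bgr = bgr_pixels.astype(np.float64)
    s = bgr.sum(axis=1) + 1e-6                   # avoid division by zero
    r = bgr[:, 2] / s                            # chromaticity is intensity-invariant,
    g = bgr[:, 1] / s                            # which makes it robust to shadows
    hist, _, _ = np.histogram2d(r, g, bins=bins, range=[[0, 1], [0, 1]])
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; higher means the two objects look alike."""
    return float(np.minimum(h1, h2).sum())

a = chromaticity_histogram(np.random.randint(0, 256, (5000, 3)))
b = chromaticity_histogram(np.random.randint(0, 256, (5000, 3)))
print(histogram_intersection(a, b))
```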

Hiding Shellcode in the 24Bit BMP Image (24Bit BMP 이미지를 이용한 쉘코드 은닉 기법)

  • Kum, Young-Jun;Choi, Hwa-Jae;Kim, Huy-Kang
    • Journal of the Korea Institute of Information Security & Cryptology / v.22 no.3 / pp.691-705 / 2012
  • Buffer overflow is the most representative vulnerability for which attack methods and countermeasures are frequently developed and revised. It has remained one of the most critical threats since it was first introduced in the mid-1990s. Shellcode is machine code that can be used in a buffer overflow attack. Attackers craft shellcode for their own purposes, insert it into the target host's memory space, and then manipulate the EIP (Extended Instruction Pointer) to hijack the control flow of the target host system. Therefore, much research on defenses has been conducted, and attackers have likewise studied ways to bypass security measures designed to defend against shellcode. In this paper, we briefly review shellcode defense and attack techniques and propose a new methodology that can hide shellcode in a 24-bit BMP image. With the proposed technique, any executable shellcode can easily be hidden, bypassing current detection and prevention techniques.

Target-free vision-based approach for vibration measurement and damage identification of truss bridges

  • Dong Tan;Zhenghao Ding;Jun Li;Hong Hao
    • Smart Structures and Systems / v.31 no.4 / pp.421-436 / 2023
  • This paper presents a vibration displacement measurement and damage identification method for a space truss structure based on its vibration videos. The Features from Accelerated Segment Test (FAST) algorithm is combined with an adaptive threshold strategy to detect high-quality feature points within the Region of Interest (ROI) around each node of the truss structure. These points are then tracked with the Kanade-Lucas-Tomasi (KLT) algorithm along the video frame sequence to obtain the vibration displacement time histories. For cases in which the image plane is not parallel to the truss structural plane, the scale factors cannot be applied directly, so these videos are processed with a homography transformation. After scale factor adaptation, the tracking results are expressed in physical units and compared with ground truth data. The main operational frequencies and corresponding mode shapes are identified using Stochastic Subspace Identification (SSI) from the obtained vibration displacement responses and compared with ground truth data. Structural damage is quantified by elemental stiffness reductions. A Bayesian inference-based objective function is constructed from the natural frequencies to identify damage through model updating. Success-History based Adaptive Differential Evolution with Linear Population Size Reduction (L-SHADE) is applied to minimise the objective function by tuning the damage parameter of each element. The locations and severities of damage in each case are then identified, and the accuracy and effectiveness of the approach are verified by comparing the identified results with the ground truth data.
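
A minimal sketch of the FAST-plus-KLT tracking stage using OpenCV is given below; the video file name, ROI, and detector threshold are hypothetical, and the homography, scale-factor, SSI, and L-SHADE steps of the paper are not reproduced.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("truss_vibration.mp4")       # hypothetical video file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

# FAST corners inside a (hypothetical) ROI around one truss node
x, y, w, h = 200, 150, 80, 80
fast = cv2.FastFeatureDetector_create(threshold=25)
kps = fast.detect(prev_gray[y:y + h, x:x + w], None)
pts = np.float32([[kp.pt[0] + x, kp.pt[1] + y] for kp in kps]).reshape(-1, 1, 2)

init = pts.copy()                                   # reference positions (first frame)
displacements = []                                  # pixel displacement time history
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # KLT (pyramidal Lucas-Kanade) tracking of the FAST points
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    keep = status.ravel() == 1
    pts, init = nxt[keep].reshape(-1, 1, 2), init[keep].reshape(-1, 1, 2)
    displacements.append(np.mean(pts - init, axis=(0, 1)))   # mean (dx, dy) vs first frame
    prev_gray = gray

print("frames tracked:", len(displacements))
```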

Assessment of a smartphone-based monitoring system and its application

  • Ahn, Hoyong;Choi, Chuluong;Yu, Yeon
    • Korean Journal of Remote Sensing / v.30 no.3 / pp.383-397 / 2014
  • Advances in information technology are allowing conventional surveillance systems to be combined with mobile communication technologies, creating ubiquitous monitoring systems. This paper proposes a monitoring system that uses smart camera technology. We discuss the dependence of the interior orientation parameters on the calibration target sheets and assess the accuracy of a three-dimensional monitoring system whose camera location is calculated by space resection, using a Digital Surface Model (DSM) generated from stereo images. A monitoring housing is designed to protect the camera from various weather conditions and to supply it with power generated from a solar panel. A smart camera is installed in the housing and is operated and controlled through an Android application. Finally, the accuracy of the three-dimensional monitoring system is evaluated using a DSM: the proposed system is tested against a DSM created from ground control points determined by Global Positioning System (GPS) measurements and light detection and ranging data. The standard deviation of the differences between the DSMs is less than 0.12 m, so the monitoring system is appropriate for extracting the position and deformation of objects as well as for monitoring them. Through the incorporation of components such as the camera housing, a solar power supply, and the smart camera, the system can be used as a ubiquitous monitoring system.
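
The reported 0.12 m figure is a standard deviation of DSM elevation differences; a minimal sketch of that comparison is shown below, with random stand-in grids and an assumed nodata convention.

```python
import numpy as np

def dsm_difference_stats(dsm_a, dsm_b, nodata=-9999.0):
    """Mean and standard deviation of elevation differences between two
    co-registered DSM grids of the same shape; nodata cells are ignored.
    """
    diff = dsm_a - dsm_b
    valid = (dsm_a != nodata) & (dsm_b != nodata)
    return float(diff[valid].mean()), float(diff[valid].std())

# Stand-in grids; real DSMs would come from the stereo smartphone images and GPS/LiDAR data
reference = np.random.normal(100.0, 5.0, (500, 500))
estimated = reference + np.random.normal(0.0, 0.1, (500, 500))
mean_diff, std_diff = dsm_difference_stats(estimated, reference)
print(f"mean diff {mean_diff:.3f} m, std {std_diff:.3f} m")
```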

Distributed Search of Swarm Robots Using Tree Structure in Unknown Environment (미지의 환경에서 트리구조를 이용한 군집로봇의 분산 탐색)

  • Lee, Gi Su;Joo, Young Hoon
    • The Transactions of The Korean Institute of Electrical Engineers / v.67 no.2 / pp.285-292 / 2018
  • In this paper, we propose a distributed search method for swarm robots using a tree structure in an unknown environment. In the proposed method, each robot divides the unknown environment into four regions using LRF (Laser Range Finder) sensor information, divides the maximum detection distance into four regions, and detects feature points of obstacles. The detected feature points are defined as the Voronoi generators of a Voronoi diagram, and the Voronoi diagram is then applied, producing its components: the Voronoi space, the Voronoi partitions, and the Voronoi vertices. The generated Voronoi partitions form the robot's paths. Each Voronoi vertex is defined as a node, and the nodes make up the proposed tree structure. The root of the tree is the starting point, and a node with the least significant bit and no children is the target point. Finally, we demonstrate the superiority of the proposed method through several simulations.
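
A minimal sketch of the Voronoi construction described above, using SciPy, is given below; the generator points, the choice of root, and the breadth-first tree construction are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np
from scipy.spatial import Voronoi
from collections import defaultdict

# Hypothetical obstacle feature points detected from LRF scans (Voronoi generators)
generators = np.array([[0.0, 0.0], [4.0, 0.5], [2.0, 3.5], [5.0, 4.0], [1.0, 5.0]])
vor = Voronoi(generators)

# Voronoi vertices become tree nodes; finite ridges between them become edges
graph = defaultdict(set)
for v1, v2 in vor.ridge_vertices:
    if v1 != -1 and v2 != -1:                 # skip ridges that extend to infinity
        graph[v1].add(v2)
        graph[v2].add(v1)

# Build a tree by breadth-first search from the vertex nearest the robot's start point
start = np.array([0.5, 0.5])
root = int(np.argmin(np.linalg.norm(vor.vertices - start, axis=1)))
parent, queue, visited = {root: None}, [root], {root}
while queue:
    node = queue.pop(0)
    for nb in graph[node]:
        if nb not in visited:
            visited.add(nb)
            parent[nb] = node
            queue.append(nb)

print("Voronoi vertices:\n", vor.vertices.round(2))
print("tree edges:", [(c, p) for c, p in parent.items() if p is not None])
```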