• Title/Summary/Keyword: Depth of information


Obstacle Avoidance Method for UAVs using Polar Grid

  • Pant, Sudarshan;Lee, Sangdon
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.8
    • /
    • pp.1088-1098
    • /
    • 2020
  • This paper proposes an obstacle avoidance method using a depth polar grid. Depth information is a crucial factor in determining a safe path for collision-free navigation of unmanned aerial vehicles (UAVs), as it conveys the distance to obstacles effectively. However, existing depth-camera-based approaches to obstacle avoidance require computationally expensive path-planning algorithms. We propose a simple navigation method using a polar grid of the depth information obtained from a camera with a narrow field of view (FOV). The effectiveness of the approach was validated by a series of experiments using software-in-the-loop simulation in a realistic outdoor environment. The experimental results show that the proposed approach successfully avoids obstacles using a single depth camera with limited FOV.
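The polar-grid idea can be sketched in a few lines. This is an illustrative toy, not the authors' code; function names and parameters such as `n_sectors` are assumptions: bin the depths of one scan row into angular sectors, then steer toward the sector with the most clearance.

```python
def polar_grid(depth_row, fov_deg=60.0, n_sectors=3):
    """Bin one row of depth readings (meters) into angular sectors,
    keeping the minimum (closest-obstacle) depth per sector."""
    sector_width = fov_deg / n_sectors
    grid = [float("inf")] * n_sectors
    for i, depth in enumerate(depth_row):
        # Angle of this pixel relative to the optical axis
        angle = -fov_deg / 2 + (i + 0.5) * fov_deg / len(depth_row)
        sector = min(int((angle + fov_deg / 2) // sector_width), n_sectors - 1)
        grid[sector] = min(grid[sector], depth)
    return grid


def pick_heading(grid, fov_deg=60.0):
    """Steer toward the center of the sector with the most clearance."""
    best = max(range(len(grid)), key=lambda s: grid[s])
    return -fov_deg / 2 + (best + 0.5) * fov_deg / len(grid)
```

For the row `[5, 5, 1, 1, 8, 8]` and three sectors, the grid becomes `[5, 1, 8]` and the vehicle would steer +20 degrees toward the freest (rightmost) sector.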

3D Face Recognition using Local Depth Information

  • 이영학;심재창;이태홍
    • Journal of KIISE:Software and Applications
    • /
    • v.29 no.11
    • /
    • pp.818-825
    • /
    • 2002
  • Depth information is one of the most important factors for the recognition of a digital face image. Range images are very useful when comparing one face with others, because they contain depth information. As processing the whole face produces a large amount of computation and data, face images can be represented as a vector of feature descriptors for local areas. In this paper, depth areas of a three-dimensional (3D) face image were extracted by the contour line at a given depth value. These were resampled and stored in consecutive locations in the feature vector using a multiple-feature method. A comparison between two faces was made based on their distance in the feature space, using the Euclidean distance. This paper reduced the number of index data in the database and used fewer feature vectors than other methods. The proposed algorithm achieves high recognition performance by using local depth information and fewer feature vectors for the face.
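The matching step described above, comparing local-depth feature vectors by Euclidean distance, can be sketched as follows (a minimal illustration; the gallery layout and names are assumptions, not the paper's implementation):

```python
import math


def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def nearest_face(query, gallery):
    """Return the id of the gallery face whose local-depth feature
    vector lies closest to the query vector in feature space."""
    return min(gallery, key=lambda face_id: euclidean(query, gallery[face_id]))
```

A query is recognized as the enrolled identity with the smallest distance; shorter feature vectors (as the paper advocates) make both the distance computation and the index smaller.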

Statistical Model of Effective Impact Speed based on Vehicle Damages in Case of Rear-End Collisions

  • Kang, Sung-Mo;Kim, Joo-Hwan
    • Journal of the Korean Data and Information Science Society
    • /
    • v.19 no.2
    • /
    • pp.463-473
    • /
    • 2008
  • In this study, we measure damage depth and calculate effective impact speed in rear-end collisions using real car-insurance data. We study the relationship between damage depth and effective impact speed and present a statistical model for these two variables. In our real-data study, a third-degree polynomial model fits the relationship between effective impact speed and damage depth better than the simple linear models estimated in previous studies. Damage depth is a major factor in assessing the extent of impact in a car collision, and by using this equation it is possible to evaluate the severity of the driver's injury.
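Fitting a third-degree polynomial of effective impact speed against damage depth is a standard least-squares problem. A self-contained sketch (toy data and names; the paper's coefficients are not reproduced here):

```python
def fit_polynomial(xs, ys, degree=3):
    """Least-squares polynomial fit via the normal equations,
    solved by Gaussian elimination with partial pivoting.
    Coefficients are returned lowest order first."""
    n = degree + 1
    # Normal-equation matrix A = V^T V and right-hand side b = V^T y
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in reversed(range(n)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef


def predict(coef, x):
    """Evaluate the fitted polynomial at x."""
    return sum(c * x ** i for i, c in enumerate(coef))
```

With `xs` as damage depths and `ys` as effective impact speeds, `predict(coef, depth)` gives the modeled speed for a measured crush depth.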


Depth-fused-type Three-dimensional Near-eye Display Using a Birefringent Lens Set

  • Baek, Hogil;Min, Sung-Wook
    • Current Optics and Photonics
    • /
    • v.4 no.6
    • /
    • pp.524-529
    • /
    • 2020
  • We propose a depth-fused-type three-dimensional (3D) near-eye display implemented using a birefringent lens set made of calcite. Using a birefringent lens and an image source (28.70 mm × 21.52 mm) with different focal lengths according to the polarization state of the incident light, the proposed system can present depth-fused 3D images over a 4.6-degree field of view (FOV) within 1.6 diopters (D) to 0.4 D, depending on the polarization-distributed depth map. The proposed method can be applied to near-eye displays such as head-mounted display systems, for a more natural 3D image without vergence-accommodation conflict.

Detection of Moving Objects using Depth Frame Data of 3D Sensor (3D센서의 Depth frame 데이터를 이용한 이동물체 감지)

  • Lee, Seong-Ho;Han, Kyong-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.14 no.5
    • /
    • pp.243-248
    • /
    • 2014
  • This study investigates ways to detect areas of object movement with the Kinect's depth frames, which provide 3D information regardless of external light sources. To remove noise along object boundaries in the depth information received from the sensor, a blurring technique was applied to the x and y pixel coordinates and a frequency filter to the z coordinate. In addition, a clustering filter was applied according to the amount of change in adjacent pixels to extract the areas of moving objects. The system was also designed to detect movements faster than a threshold set in the filter settings, making it applicable to mobile robots. Detected movements can be used in security systems when delivered to remote sites over a network and can also be extended to large-scale data together with related information.
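The core of the pipeline, flagging depth changes between frames and clustering adjacent changed pixels into moving regions, can be sketched like this (an illustrative simplification with assumed thresholds; the paper's blurring and frequency filtering are omitted):

```python
def moving_regions(prev, curr, depth_thresh=50, min_pixels=3):
    """Flag pixels whose depth (mm) changed by more than depth_thresh
    between two frames, then keep 4-connected clusters of at least
    min_pixels pixels as moving-object regions."""
    h, w = len(curr), len(curr[0])
    changed = {(r, c) for r in range(h) for c in range(w)
               if abs(curr[r][c] - prev[r][c]) > depth_thresh}
    regions, seen = [], set()
    for seed in changed:
        if seed in seen:
            continue
        # Depth-first flood fill over 4-connected changed pixels
        stack, region = [seed], []
        seen.add(seed)
        while stack:
            r, c = stack.pop()
            region.append((r, c))
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in changed and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        if len(region) >= min_pixels:
            regions.append(sorted(region))
    return regions
```

The `min_pixels` cut plays the role of the clustering filter: isolated noisy pixels are discarded, while coherent blobs survive as detections.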

Optical Resonance-based Three Dimensional Sensing Device and its Signal Processing (광공진 현상을 이용한 입체 영상센서 및 신호처리 기법)

  • Park, Yong-Hwa;You, Jang-Woo;Park, Chang-Young;Yoon, Heesun
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2013.10a
    • /
    • pp.763-764
    • /
    • 2013
  • A three-dimensional image capturing device and its signal-processing algorithm and apparatus are presented. Three-dimensional information is one of the emerging differentiators that provide consumers with more realistic and immersive experiences in user interfaces, games, 3D virtual reality, and 3D displays. It carries the depth information of a scene together with the conventional color image, so that the full information of a real scene as human eyes experience it can be captured, recorded, and reproduced. A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented [1,2]. For 3D image capturing, the system uses the time-of-flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, the so-called 'optical resonator'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure having diffractive mirrors and an optical resonance cavity that maximizes the magnitude of the optical modulation [3,4]. The optical resonator is specially designed and fabricated to realize low resistance-capacitance cell structures with a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image (Figure 1). The proposed optical resonator enables the capture of a full-HD depth image with mm-scale depth accuracy, the largest depth-image resolution among state-of-the-art systems, which have been limited to VGA. The 3D camera prototype realizes a color/depth concurrent-sensing optical architecture to capture 14 Mp color and full-HD depth images simultaneously (Figures 2 and 3). The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as 3D image sensing for 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical resonator design, fabrication, the 3D camera system prototype, and the signal-processing algorithms.
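The TOF principle behind such modulated-light cameras reduces to one relation: depth is recovered from the phase shift of the modulated signal, d = c·φ / (4π·f_mod). A minimal sketch (helper names are illustrative, not the authors' processing chain):

```python
import math

C = 299_792_458.0  # speed of light, m/s


def tof_depth(phase_rad, f_mod=20e6):
    """Time-of-flight depth from the measured phase shift of the
    modulated light: d = c * phi / (4 * pi * f_mod)."""
    return C * phase_rad / (4 * math.pi * f_mod)


def unambiguous_range(f_mod=20e6):
    """Maximum depth before the phase wraps past 2*pi."""
    return C / (2 * f_mod)
```

At the paper's 20 MHz modulation frequency the unambiguous range is about 7.5 m, which is why the modulation speed of the shutter directly shapes both range and depth accuracy.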


Layered Depth Image Representation And H.264 Encoding of Multi-view video For Free viewpoint TV (자유시점 TV를 위한 다시점 비디오의 계층적 깊이 영상 표현과 H.264 부호화)

  • Shin, Jong Hong
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.7 no.2
    • /
    • pp.91-100
    • /
    • 2011
  • Free-viewpoint TV can provide images from many viewing angles to meet viewer needs. In the real world, however, not every viewpoint can be captured: only a limited set of viewpoint images is captured, one per camera, and the group of captured images is called a multi-view image. Free-viewpoint TV therefore requires the synthesis of virtual intermediate viewpoints from the captured viewpoint images, and interpolation methods are the general solution to this problem. Producing an interpolated image at the correct angle requires the depth images of the multi-view image. Unfortunately, multi-view video including depth images involves a huge amount of data, so a new compression encoding technique is necessary for storage and transmission. The layered depth image is an efficient representation of multi-view video data: it builds a data structure that combines the multi-view color and depth images. This paper proposes an enhanced compression method using the layered-depth-image representation and H.264/AVC video coding. Experimental results confirm high compression performance and good reconstructed image quality.
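A layered depth image stores, for each pixel of a reference view, several color/depth samples stacked along the viewing ray. A minimal sketch of that data structure (names are illustrative; the paper's actual representation and coding are more elaborate):

```python
from dataclasses import dataclass, field


@dataclass
class LayeredDepthPixel:
    """One LDI pixel: (depth, color) samples from several views
    stacked along the ray, kept sorted front to back."""
    samples: list = field(default_factory=list)

    def insert(self, depth, color):
        """Add a sample from one view and keep front-to-back order."""
        self.samples.append((depth, color))
        self.samples.sort(key=lambda s: s[0])

    def front(self):
        """The sample visible from the reference viewpoint."""
        return self.samples[0] if self.samples else None
```

Rendering a virtual viewpoint walks each pixel's layer list: near layers supply visible surfaces, while deeper layers fill the disocclusions that appear when the viewpoint shifts.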

Real-time Depth Estimation for Visual Servoing with Eye-in-Hand Robot (아이인핸드로봇의 영상 추적을 위한 실시간 거리측정)

  • Park, Jong-Cheol;Bien, Zeung-Nam;Ro, Cheol-Rae
    • Proceedings of the KIEE Conference
    • /
    • 1996.07b
    • /
    • pp.1122-1124
    • /
    • 1996
  • Depth between the robot and the target is essential information for robot control. However, in the case of an eye-in-hand robot with one camera, it is not easy to obtain accurate depth information in real time. In this paper, the techniques of depth-from-motion and depth-from-focus are combined to meet the real-time requirement. Integration of the two approaches is accomplished by appropriate use of confidence factors, which are evaluated by fuzzy rules. A fuzzy-logic-based calibration technique is also proposed.
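The fusion step can be sketched as a confidence-weighted average of the two estimates, with the weights standing in for the paper's fuzzy-rule outputs (an assumed simplification, not the authors' rule base):

```python
def fuse_depth(d_motion, conf_motion, d_focus, conf_focus):
    """Confidence-weighted fusion of depth-from-motion and
    depth-from-focus estimates. Confidences lie in [0, 1] and play
    the role of fuzzy-rule confidence factors."""
    total = conf_motion + conf_focus
    if total == 0:
        raise ValueError("no reliable depth estimate available")
    return (d_motion * conf_motion + d_focus * conf_focus) / total
```

When one cue degrades (e.g., depth-from-motion during slow camera movement), its confidence drops and the fused estimate leans on the other cue, which is what makes the combination robust in real time.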


Human Action Recognition via Depth Maps Body Parts of Action

  • Farooq, Adnan;Farooq, Faisal;Le, Anh Vu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.5
    • /
    • pp.2327-2347
    • /
    • 2018
  • Human actions can be recognized from depth sequences. In the proposed algorithm, we initially construct depth motion maps (DMMs) by projecting each depth frame onto three orthogonal Cartesian planes and accumulating the motion energy for each view. The body part of the action (BPoA) is calculated using a bounding box with an optimal window size based on the maximum spatial and temporal changes for each DMM. A feature vector is then constructed from the BPoA for each view of the human action. We employ an ensemble learning approach called Rotation Forest to recognize different actions. Experimental results show that the proposed method significantly outperforms state-of-the-art methods on the Microsoft Research (MSR) Action3D and MSR DailyActivity3D datasets.
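The DMM construction amounts to summing absolute depth differences between consecutive frames. A minimal sketch for a single (front) view; the paper additionally projects onto the side and top planes before accumulating:

```python
def depth_motion_map(frames):
    """Accumulate absolute depth differences between consecutive
    frames into a single motion-energy map (front view only)."""
    h, w = len(frames[0]), len(frames[0][0])
    dmm = [[0.0] * w for _ in range(h)]
    for prev, curr in zip(frames, frames[1:]):
        for r in range(h):
            for c in range(w):
                dmm[r][c] += abs(curr[r][c] - prev[r][c])
    return dmm
```

Pixels that move throughout the sequence accumulate high energy, so the bounding box of the high-energy region is what the BPoA step crops before building the feature vector.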

GPU-Accelerated Single Image Depth Estimation with Color-Filtered Aperture

  • Hsu, Yueh-Teng;Chen, Chun-Chieh;Tseng, Shu-Ming
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.3
    • /
    • pp.1058-1070
    • /
    • 2014
  • There are two major approaches to depth estimation: multiple-image depth estimation and single-image depth estimation. The former has a high hardware cost because it uses multiple cameras, but its software algorithm is simple. Conversely, the latter has a low hardware cost, but its software algorithm is complex. One recent trend in this field is to make systems compact, or even portable, and to simplify the optical elements attached to a conventional camera. In this paper, we present an implementation of single-image depth estimation using a graphics processing unit (GPU) in a desktop PC, and achieve real-time performance via our evolutionary algorithm and a parallel processing technique employing a compute shader. The method greatly accelerates the compute-intensive depth estimation from a single view image, from 0.003 frames per second (fps) (implemented in MATLAB) to 53 fps, almost twice the real-time standard of 30 fps. To the best of our knowledge, no previous work discusses the optimization of single-image depth estimation, and the frame rate of our final result is better than that of previous studies using multiple images, whose frame rate is about 20 fps.