• Title/Summary/Keyword: Depth Map Extraction


A study on correspondence problem of stereo vision system using self-organized neural network

  • Cho, Y.B.;Gweon, D.G.
    • Journal of the Korean Society for Precision Engineering / v.10 no.4 / pp.170-179 / 1993
  • In this study, a self-organizing neural network is used to solve the correspondence problem of the axial stereo image. Edge points are extracted from a pair of stereo images, and the edge points of the rear image are assigned to the output nodes of the neural network. In the matching process, the two input nodes of the network are supplied with the coordinates of an edge point selected randomly from the front image. This input activates the optimal output node and its neighboring nodes, whose coordinates are taken to be the correspondence point for the present input, and their weights are then allowed to be updated. After several iterations of updating, the weights representing each rear edge point converge to the coordinates of its correspondence point in the front image. Because of the feature-map properties of the self-organizing neural network, noise-free and smoothed depth data can be achieved.
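The matching process described in this abstract, where randomly chosen front-image edge points drive a winner-plus-neighbors weight update, can be sketched with a minimal Kohonen-style rule. The function name, the 1-D index neighborhood, and the decay schedules below are illustrative assumptions, not the paper's implementation:

```python
import math
import random

def som_match(front_pts, rear_pts, iters=2000, lr0=0.5, radius0=2.0):
    """Minimal self-organizing-map matcher (illustrative sketch).

    Each output node corresponds to one rear-image edge point; its weight
    vector (wx, wy) is trained toward front-image coordinates, so after
    convergence node i's weight approximates the front point matching
    rear point i.
    """
    random.seed(0)
    weights = [[random.random(), random.random()] for _ in rear_pts]
    for t in range(iters):
        x, y = random.choice(front_pts)          # random front edge point
        lr = lr0 * (1.0 - t / iters)             # decaying learning rate
        radius = max(radius0 * (1.0 - t / iters), 0.5)
        # winner = output node whose weight is closest to the input
        win = min(range(len(weights)),
                  key=lambda i: (weights[i][0] - x) ** 2
                              + (weights[i][1] - y) ** 2)
        for i, w in enumerate(weights):
            h = math.exp(-((i - win) ** 2) / (2 * radius ** 2))  # neighborhood
            w[0] += lr * h * (x - w[0])
            w[1] += lr * h * (y - w[1])
    return weights
```

With a single repeated input, all node weights contract onto that point, which is the feature-map smoothing property the abstract credits for noise-free depth data.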


Entropy-Based 6 Degrees of Freedom Extraction for the W-band Synthetic Aperture Radar Image Reconstruction (W-band Synthetic Aperture Radar 영상 복원을 위한 엔트로피 기반의 6 Degrees of Freedom 추출)

  • Hyokbeen Lee;Duk-jin Kim;Junwoo Kim;Juyoung Song
    • Korean Journal of Remote Sensing / v.39 no.6_1 / pp.1245-1254 / 2023
  • Significant research has been conducted on the W-band synthetic aperture radar (SAR) system that utilizes the 77 GHz frequency-modulated continuous-wave (FMCW) radar. To reconstruct a high-resolution W-band SAR image, it is necessary to transform the point cloud acquired from stereo cameras or LiDAR along 6 degrees of freedom (DOF) and apply it to the SAR signal processing. However, matching images is difficult due to the different geometric structures of images acquired from different sensors. In this study, we present a method to extract an optimized depth map by obtaining the 6 DOF of the point cloud using a gradient descent method based on the entropy of the SAR image. An experiment was conducted to reconstruct a tree, a major road-environment object, using the constructed W-band SAR system. The SAR image reconstructed using the entropy-based gradient descent method showed a decrease of 53.2828 in mean square error and an increase of 0.5529 in the structural similarity index, compared to SAR images reconstructed from radar coordinates.
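The entropy that drives the gradient-descent search can be computed from an intensity histogram. Below is a generic Shannon image entropy, assuming the paper's cost is of this general form; the bin count is arbitrary:

```python
import math

def image_entropy(img, bins=16):
    """Shannon entropy of a 2-D intensity image over a histogram of its
    values (illustrative; the paper's exact entropy definition may differ)."""
    flat = [v for row in img for v in row]
    lo, hi = min(flat), max(flat)
    width = (hi - lo) / bins or 1.0       # avoid zero width for flat images
    hist = [0] * bins
    for v in flat:
        idx = min(int((v - lo) / width), bins - 1)
        hist[idx] += 1
    n = len(flat)
    return -sum((c / n) * math.log(c / n) for c in hist if c)
```

A well-focused reconstruction concentrates energy into few intensity levels and so scores a lower entropy, which is what the gradient descent over the 6 DOF minimizes.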

An Efficient Object Extraction Scheme for Low Depth-of-Field Images (낮은 피사계 심도 영상에서 관심 물체의 효율적인 추출 방법)

  • Park Jung-Woo;Lee Jae-Ho;Kim Chang-Ick
    • Journal of Korea Multimedia Society / v.9 no.9 / pp.1139-1149 / 2006
  • This paper describes a novel and efficient algorithm that extracts focused objects from still images with low depth-of-field (DOF). The algorithm unfolds into four modules. In the first module, a higher-order statistics (HOS) map, which represents the spatial distribution of the high-frequency components, is obtained from an input low-DOF image [1]. The second module finds the object-of-interest (OOI) candidate by using characteristics of the HOS map. Since the candidate region may contain holes, the third module detects and fills them. To obtain the OOI, the last module removes background pixels from the OOI candidate. The experimental results show that the proposed method is highly useful in various applications, such as image indexing for content-based retrieval from large image databases, image analysis for digital cameras, and video analysis for virtual reality, immersive video systems, photo-realistic video scene generation, and video indexing systems.
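A HOS map of the kind the first module builds can be approximated by a local higher-order moment: focused regions carry high-frequency detail and therefore large local moments. The sketch below uses the fourth-order central moment in a small window as a stand-in; the window size and moment order are assumptions of ours, not the paper's definition:

```python
def hos_map(img, win=1):
    """Per-pixel fourth-order central moment over a (2*win+1)^2 window,
    a simple stand-in for a higher-order-statistics focus map."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - win), min(h, y + win + 1))
                    for xx in range(max(0, x - win), min(w, x + win + 1))]
            m = sum(vals) / len(vals)
            out[y][x] = sum((v - m) ** 4 for v in vals) / len(vals)
    return out
```

Defocused (smooth) regions give values near zero, while sharp edges inside the focused object give large responses, which is what lets the second module threshold out an OOI candidate.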


Rotation Invariant 3D Star Skeleton Feature Extraction (회전무관 3D Star Skeleton 특징 추출)

  • Chun, Sung-Kuk;Hong, Kwang-Jin;Jung, Kee-Chul
    • Journal of KIISE:Software and Applications / v.36 no.10 / pp.836-850 / 2009
  • Human posture recognition has attracted tremendous attention in ubiquitous environments, performing arts, and robot control, so many researchers in pattern recognition and computer vision have recently been working to build efficient posture recognition systems. However, most existing studies are very sensitive to human variations such as rotation or translation of the body, because the feature extracted in the feature extraction part, the first step of a general posture recognition system, is influenced by these variations. To alleviate these variations and improve posture recognition results, this paper presents feature extraction methods based on the 3D Star Skeleton and Principal Component Analysis (PCA) in a multi-view environment. The proposed system uses eight projection maps, a kind of depth map, as input data; the projection maps are extracted from the visual hull generation process. From these data, the system constructs the 3D Star Skeleton and extracts a rotation-invariant feature using PCA. In the experiments, we extract the feature from the 3D Star Skeleton and recognize human postures using it, showing that the proposed method is robust to human variations.
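The PCA step that removes rotation sensitivity can be illustrated in two dimensions: express the points in their principal-axis frame, so the resulting feature no longer depends on the body's orientation. This closed-form 2x2 version is a simplified stand-in for the paper's PCA on 3D Star Skeleton vectors:

```python
import math

def pca_align(points):
    """Align 2-D points to their principal axes (closed-form 2x2 PCA).

    Centering plus rotation into the eigenbasis of the covariance matrix
    makes the representation invariant to translation and rotation.
    """
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    theta = 0.5 * math.atan2(2 * cxy, cxx - cyy)   # principal-axis angle
    c, s = math.cos(theta), math.sin(theta)
    return [((p[0] - mx) * c + (p[1] - my) * s,
             -(p[0] - mx) * s + (p[1] - my) * c)
            for p in points]
```

Rotating the input point set changes only theta, not the aligned coordinates, which is the invariance property the feature relies on.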

An Efficient Pedestrian Recognition Method based on PCA Reconstruction and HOG Feature Descriptor (PCA 복원과 HOG 특징 기술자 기반의 효율적인 보행자 인식 방법)

  • Kim, Cheol-Mun;Baek, Yeul-Min;Kim, Whoi-Yul
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.10 / pp.162-170 / 2013
  • In recent years, interest in and the need for Pedestrian Protection Systems (PPS), mounted on vehicles to improve traffic safety, have been increasing. In this paper, we propose a pedestrian candidate window extraction method and a unit-cell-histogram-based HOG descriptor calculation method. At the candidate window extraction stage, the brightness ratio of the pedestrian and its surrounding region, vertical edge projection, an edge factor, and a PCA reconstruction image are used. Dalal's HOG requires pixel-based histogram calculation with Gaussian weights and trilinear interpolation on overlapping blocks, but our method applies Gaussian down-weighting and computes the histogram on a per-cell basis, then combines each histogram with those of adjacent cells, so it can be calculated faster than Dalal's method. Our PCA reconstruction-error-based candidate window extraction method efficiently rejects background based on the difference between the pedestrian's head and shoulder area. The proposed method improves detection speed over conventional HOG using only the image, without any prior information from camera calibration or a depth map obtained from stereo cameras.
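The per-cell histogram computation the paper contrasts with Dalal's block-wise scheme can be sketched as follows: each pixel's gradient magnitude is accumulated into an orientation bin of the single cell containing it, with no trilinear interpolation. The cell size, bin count, and the omission of the Gaussian weighting are simplifications of ours:

```python
import math

def cell_hog(img, cell=4, nbins=9):
    """Unit-cell gradient-orientation histograms (a simplified sketch of
    per-cell HOG, without block normalization or interpolation)."""
    h, w = len(img), len(img[0])
    cells = {}
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]      # central differences
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned
            b = min(int(ang / (180.0 / nbins)), nbins - 1)
            key = (y // cell, x // cell)             # owning cell only
            hist = cells.setdefault(key, [0.0] * nbins)
            hist[b] += mag
    return cells
```

Because each pixel touches exactly one cell and one bin, the inner loop is far cheaper than Dalal's overlapping-block scheme; adjacent-cell combination can then be done once per cell.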

Extraction of Seafloor Topographic Information Using Multi-Beam Echo Sounder (다중빔 음향측심기를 이용한 해저 지형정보 추출)

  • Yong Jin CHOI;Jae Bin LEE;Jin Duk LEE
    • Journal of the Korean Association of Geographic Information Studies / v.27 no.3 / pp.30-42 / 2024
  • In this paper, we present the processing workflow of a seafloor mapping system based on multi-beam echo-sounding data, together with the results of processing data acquired by surveying parts of the waters of Yeosu Bay. The workflow simultaneously and continuously observes position and water depth using GNSS and a multi-beam echo sounder, synchronizes the two data streams, applies a depth correction that accounts for the tide level at the time of observation, and produces a 3D model of the seafloor, a contour map, and longitudinal and cross-sectional profiles of the seafloor topography. In addition, by efficiently extracting the dredging volume from the dredging area and the planned water depth required for dredging construction management of submarine projects, the results can be used for the maintenance and management of marine construction sites and ports.
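The dredging-volume extraction mentioned at the end reduces, for gridded bathymetry, to summing the shortfall between the measured depth and the planned depth over the dredging area. A minimal sketch, with the function name and grid-of-depths representation assumed:

```python
def dredge_volume(depths, planned_depth, cell_area):
    """Dredging volume from gridded bathymetry: for each grid cell
    shallower than the planned depth, accumulate the depth shortfall
    times the cell's horizontal area (illustrative sketch).

    depths        -- 2-D grid of measured (tide-corrected) depths, metres
    planned_depth -- target depth after dredging, metres
    cell_area     -- horizontal area of one grid cell, square metres
    """
    return sum(max(planned_depth - d, 0.0) * cell_area
               for row in depths for d in row)
```

Cells already deeper than the plan contribute nothing, so only the area actually requiring dredging enters the total.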

Water Quality Elements Extraction of Lake by the Landsat TM Images (Landsat TM 영상에 의한 호수의 수질인자 추출)

  • 최승필;양인태
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.16 no.2 / pp.225-233 / 1998
  • It is necessary to check the water quality of a lake on a continuous basis to detect the appearance of water pollution; however, doing so is not only time-consuming and expensive but also considerably difficult over a wide area. If we use remote sensing via satellites, the status of water quality can be checked over many wide areas simultaneously, and because the same area can be measured periodically, it is extremely effective for investigating water quality. Furthermore, as some of the Landsat sensors sense objects according to wavelength, the distribution of water quality can be checked relatively accurately within a short period of time, and its image can be displayed in color. Hence, this research has attempted to extract water quality elements, such as transparency, water depth, and surface water temperature, from the satellite data, and has prepared a water quality distribution image map of Lake Hwajinpo by presenting the related empirical formulas for the water quality elements. If the water quality distribution image map is prepared by extracting the water quality elements from the DN values of the Landsat TM image and then carrying out TIN analysis using GIS, a relatively more accurate pattern can be obtained over a wide range of the area than the pattern based on values obtained from direct observation.
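An empirical formula relating a band's DN values to a measured water-quality element is typically a least-squares fit against in-situ samples. The sketch below shows the generic ordinary-least-squares version; the paper's actual regression form and coefficients are not reproduced here:

```python
def fit_linear(dn, meas):
    """Ordinary least-squares fit meas ~ a*dn + b: the kind of empirical
    relation used to map Landsat TM DN values to a water-quality element
    such as transparency or surface temperature (illustrative)."""
    n = len(dn)
    mx = sum(dn) / n
    my = sum(meas) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(dn, meas))
         / sum((x - mx) ** 2 for x in dn))
    return a, my - a * mx      # slope, intercept
```

Once a and b are fitted from ground-truth stations, applying a*dn + b to every pixel yields the distribution map that is then interpolated by TIN analysis in the GIS.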


A Study on the Optimization of Convolution Operation Speed through FFT Algorithm (FFT 적용을 통한 Convolution 연산속도 향상에 관한 연구)

  • Lim, Su-Chang;Kim, Jong-Chan
    • Journal of Korea Multimedia Society / v.24 no.11 / pp.1552-1559 / 2021
  • Convolutional neural networks (CNNs) show notable performance in image processing and are used as representative core models. CNNs extract and learn features from large training datasets. In general, a CNN has a structure in which convolution layers and fully connected layers are stacked, and the core of the CNN is the convolution layer. The size of the kernel used for feature extraction and the number of kernels, which determines the depth of the feature map, decide the amount of learnable weight parameters of the CNN. These parameters are the main causes of the increased computational complexity and memory usage of the entire neural network. The most computationally expensive components in CNNs are the fully connected and spatial convolution computations. In this paper, we propose a Fourier convolutional neural network that performs the operation of the convolution layer in the Fourier domain, modifying and reducing the amount of computation by applying the fast Fourier transform. Using the MNIST dataset, the accuracy was similar to that of a general CNN, while the operation speed was 7.2% faster. An average of 19% faster speed was achieved in experiments using 1024x1024 images and kernels of various sizes.
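The convolution theorem behind the proposed layer states that linear convolution equals pointwise multiplication of zero-padded spectra. A self-contained sketch (recursive radix-2 FFT on real inputs; not the paper's implementation):

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 Cooley-Tukey FFT; len(a) must be a power of two.
    The inverse transform is unnormalized (caller divides by n)."""
    n = len(a)
    if n == 1:
        return a[:]
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

def conv_fft(x, h):
    """Linear convolution via FFT: zero-pad both signals to a power of
    two, multiply their spectra, and inverse-transform."""
    n = 1
    while n < len(x) + len(h) - 1:
        n *= 2
    X = fft([complex(v) for v in x] + [0j] * (n - len(x)))
    H = fft([complex(v) for v in h] + [0j] * (n - len(h)))
    y = fft([a * b for a, b in zip(X, H)], invert=True)
    return [v.real / n for v in y[:len(x) + len(h) - 1]]
```

Both transforms cost O(n log n) versus O(n*k) for direct convolution, which is why the reported speedup grows with image and kernel size.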

3D Model Reconstruction Algorithm Using a Focus Measure Based on Higher Order Statistics (고차 통계 초점 척도를 이용한 3D 모델 복원 알고리즘)

  • Lee, Joo-Hyun;Yoon, Hyeon-Ju;Han, Kyu-Phil
    • Journal of Korea Multimedia Society / v.16 no.1 / pp.11-18 / 2013
  • This paper presents a shape-from-focus (SFF) algorithm using a new focus measure based on higher-order statistics for exact depth estimation. Since conventional SFF-based 3D depth reconstruction algorithms use the sum of modified Laplacian (SML) as the focus measure, their performance depends strongly on the image characteristics: they are efficient only for well-focused images with rich texture. Therefore, this paper adopts a new focus measure using higher-order statistics (HOS) in order to extract the focus value for images with relatively poor texture and focus. An initial best-focus area map is generated by the measure. Thereafter, area refinement, thinning, and corner detection are successively applied to extract the locally best focus points. Finally, a 3D model is reconstructed from the carefully selected points by Delaunay triangulation.
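The conventional SML measure the paper replaces evaluates, at each pixel, the absolute second differences in x and y (the "modified" Laplacian takes their absolute values separately so opposing gradients do not cancel). A single-pixel sketch, with the step size assumed:

```python
def sml(img, y, x, step=1):
    """Modified-Laplacian focus value at one interior pixel: the sum of
    absolute second differences along x and y (the per-pixel term that
    conventional SFF sums over a window)."""
    return (abs(2 * img[y][x] - img[y][x - step] - img[y][x + step]) +
            abs(2 * img[y][x] - img[y - step][x] - img[y + step][x]))
```

Flat or weakly textured regions score near zero under this measure regardless of focus, which is the failure mode motivating the HOS-based replacement.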

Extraction of Water Depth in Coastal Area Using EO-1 Hyperion Imagery (EO-1 Hyperion 영상을 이용한 연안해역의 수심 추출)

  • Seo, Dong-Ju;Kim, Jin-Soo
    • Journal of the Korea Institute of Information and Communication Engineering / v.12 no.4 / pp.716-723 / 2008
  • With rapid development of science and technology and recent widening of mankind's range of activities, development of coastal waters and the environment have emerged as global issues. In relation to this, to allow more extensive analyses, the use of satellite images has been on the increase. This study aims at utilizing hyperspectral satellite images in determining the depth of coastal waters more efficiently. For this purpose, a partial image of the research subject was first extracted from an EO-1 Hyperion satellite image, and atmospheric and geometric corrections were made. Minimum noise fraction (MNF) transformation was then performed to compress the bands, and the band most suitable for analyzing the characteristics of the water body was selected. Within the chosen band, the diffuse attenuation coefficient Kd was determined. By deciding the end-member of pixels with pure spectral properties and conducting mapping based on the linear spectral unmixing method, the depth of water at the coastal area in question was ultimately determined. The research findings showed the calculated depth of water differed by an average of 1.2 m from that given on the digital sea map; the errors grew larger when the water to be measured was deeper. If accuracy in atmospheric correction, end-member determination, and Kd calculation is enhanced in the future, it will likely be possible to determine water depths more economically and efficiently.
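Single-band depth retrieval with a diffuse attenuation coefficient Kd rests on the two-way exponential decay of bottom-reflected radiance with depth. The textbook inversion below is a sketch of that idea only; the paper's end-member and linear spectral unmixing formulation, and all parameter names, are not reproduced:

```python
import math

def depth_from_radiance(L, L_deep, L_bottom0, Kd):
    """Invert L = L_deep + (L_bottom0 - L_deep) * exp(-2 * Kd * z) for z.

    L         -- observed water-leaving radiance at the pixel
    L_deep    -- deep-water (optically infinite) radiance
    L_bottom0 -- radiance the bottom would give at zero depth
    Kd        -- diffuse attenuation coefficient (per metre)
    The factor 2 accounts for the downward and upward light paths.
    """
    return math.log((L_bottom0 - L_deep) / (L - L_deep)) / (2.0 * Kd)
```

Because the measured signal approaches L_deep exponentially, small radiance errors translate into large depth errors in deep water, consistent with the abstract's observation that errors grew with depth.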