• Title/Summary/Keyword: Feature Acquisition


Analysis of Relationships between Features Extracted from SAR Data and Land-cover Classes (SAR 자료에서 추출한 특징들과 토지 피복 항목 사이의 연관성 분석)

  • Park, No-Wook;Chi, Kwang-Hoon;Lee, Hoon-Yol
    • Korean Journal of Remote Sensing
    • /
    • v.23 no.4
    • /
    • pp.257-272
    • /
    • 2007
  • This paper analyzed the relationships between land-cover classes and various features extracted from SAR data with multiple acquisition dates and modes (frequency, polarization and incidence angle). Two typical types of features were extracted by considering the acquisition conditions of currently available SAR data. First, coherence, temporal variability and principal component transform-based features were extracted from multi-temporal, single-mode SAR data. C-band ERS-1/2, ENVISAT ASAR and Radarsat-1, and L-band JERS-1 SAR data were used for those features, and the characteristics of the different SAR sensor data were discussed in terms of land-cover discrimination capability. Overall, tandem coherence showed the best discrimination capability among the various features. Long-term coherence from C-band SAR data provided useful information for discriminating urban areas from other classes. Paddy fields showed the highest temporal variability values in all SAR sensor data. Features from the principal component transform contained information relevant to specific land-cover classes. For multi-mode SAR data acquired on similar dates, the polarization ratio and multi-channel variability were also considered. The VH/VV polarization ratio was a useful feature for discriminating forest from dry fields, for which the distributions of coherence and temporal variability overlapped significantly. The case study results are expected to be useful for improving classification accuracy in land-cover classification with SAR data, provided that the main findings of this paper are confirmed by extensive case studies based on multi-temporal SAR data with various modes and ground-based SAR experiments.
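
A minimal NumPy sketch of two of the features discussed above, temporal variability and the VH/VV polarization ratio. The exact definitions used in the paper are not reproduced here; the formulas, parameter values and synthetic data below are illustrative assumptions.

```python
import numpy as np

def temporal_variability(stack):
    """Per-pixel temporal variability of a (T, H, W) backscatter stack,
    taken here as the ratio of temporal standard deviation to temporal mean."""
    return stack.std(axis=0) / (stack.mean(axis=0) + 1e-12)

def polarization_ratio_db(vh, vv):
    """Per-pixel VH/VV ratio in dB for a dual-polarization acquisition."""
    return 10.0 * np.log10((vh + 1e-12) / (vv + 1e-12))

# Synthetic stand-ins for calibrated backscatter (linear power): 6 dates, 100x100 px
stack = np.random.gamma(shape=4.0, scale=0.05, size=(6, 100, 100))
vh = np.random.gamma(shape=4.0, scale=0.02, size=(100, 100))
vv = np.random.gamma(shape=4.0, scale=0.06, size=(100, 100))

tv = temporal_variability(stack)      # paddy fields would show the highest values
pr = polarization_ratio_db(vh, vv)    # helps separate forest from dry fields
```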

Analysis of Three Dimensional Positioning Accuracy of Vectorization Using UAV-Photogrammetry (무인항공사진측량을 이용한 벡터화의 3차원 위치정확도 분석)

  • Lee, Jae One;Kim, Doo Pyo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.37 no.6
    • /
    • pp.525-533
    • /
    • 2019
  • There are two feature collection methods in digital mapping using UAV (Unmanned Aerial Vehicle) photogrammetry: vectorization and stereo plotting. In vectorization, planar information is extracted from orthomosaics and elevation values are obtained from a DSM (Digital Surface Model) or a DEM (Digital Elevation Model). However, the exact positional accuracy of 3D features such as ground facilities and buildings is difficult to determine, because the accuracy of vectorization results has mainly been analyzed using only check points placed on the ground. Thus, this study reviews the possibility of 3D spatial information acquisition and digital map production by vectorization, analyzing the corner point coordinates of different layers as well as check points. To this end, images were taken by a Phantom 4 (DJI) with a GSD (Ground Sample Distance) of 3.6 cm at an altitude of 90 m. The outcomes indicate that the horizontal RMSE (Root Mean Square Error) of the vectorization method is 0.045 m, calculated from residuals at check points compared with the field survey results. It is therefore possible to produce a 1:1,000-scale digital topographic (planimetric) map using ortho images. On the other hand, the three-dimensional accuracy of vectorization was 0.068~0.162 m in horizontal and 0.090~1.840 m in vertical RMSE. It is thus difficult to obtain 3D spatial information and produce a 1:1,000 digital map using vectorization, due to the large error in elevation.
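
For illustration, a short sketch of the accuracy check described above: horizontal and vertical RMSE computed from residuals between vectorized corner points and field-surveyed coordinates. The arrays are hypothetical placeholders, not the paper's data.

```python
import numpy as np

def rmse(residuals):
    """Root mean square of a 1-D array of residuals."""
    return float(np.sqrt(np.mean(np.square(residuals))))

# Hypothetical (N, 3) arrays of X, Y, Z coordinates in metres
surveyed = np.array([[0.00, 0.00, 10.00], [5.02, 3.98, 12.10]])
vectorized = np.array([[0.04, -0.03, 10.15], [5.08, 4.03, 12.40]])

diff = vectorized - surveyed
rmse_xy = rmse(np.hypot(diff[:, 0], diff[:, 1]))  # horizontal residual per point
rmse_z = rmse(diff[:, 2])                         # vertical residual per point
print(f"horizontal RMSE = {rmse_xy:.3f} m, vertical RMSE = {rmse_z:.3f} m")
```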

Learning-based Detection of License Plate using SIFT and Neural Network (SIFT와 신경망을 이용한 학습 기반 차량 번호판 검출)

  • Hong, Won Ju;Kim, Min Woo;Oh, Il-Seok
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.8
    • /
    • pp.187-195
    • /
    • 2013
  • Most previous studies on car license plate detection restrict the image acquisition environment. The aim of this research is to reduce these restrictions by proposing a new method using SIFT and a neural network. SIFT can be used in diverse situations with fewer restrictions because it provides scale and rotation invariance and large discriminating power. SIFT keypoints extracted from license plate images are divided into internal (inside class) and external (outside class) ones, and a classifier is trained using them. In the proposed method, simply by providing various types of license plates for training, the trained neural network classifier can handle all of the types. Although the classification performance is not high, inside-class keypoints appear densely over the plate region and sparsely over non-plate regions. These characteristics produce a local feature map, from which the location with the global maximum value can be identified as a license plate region candidate. We collected an image database with much less restriction than conventional studies, and the experiments and evaluation were done using this database. In terms of classification accuracy of SIFT keypoints, the correct recognition rate was 97.1%, the precision was 62.0% and the recall was 50.2%. In terms of license plate detection rate, the correct recognition rate was 98.6%.
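
A rough Python/OpenCV sketch of the detection pipeline described above: classify SIFT keypoints as inside/outside the plate, accumulate inside-class votes into a smoothed local feature map, and take the global maximum as the plate candidate. The classifier `clf` (e.g. a neural network trained offline) and the smoothing parameter are assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def plate_candidate(image_bgr, clf, sigma=15):
    """Return an (x, y) license-plate candidate location, or None if no keypoints."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    if descriptors is None:
        return None
    votes = np.zeros(gray.shape, dtype=np.float32)
    labels = clf.predict(descriptors)          # 1 = inside (plate), 0 = outside
    for kp, label in zip(keypoints, labels):
        if label == 1:
            x, y = map(int, kp.pt)
            votes[y, x] += 1.0                 # vote at the keypoint position
    feature_map = cv2.GaussianBlur(votes, (0, 0), sigma)  # local density of inside votes
    y, x = np.unravel_index(np.argmax(feature_map), feature_map.shape)
    return (x, y)                              # global maximum = plate region candidate
```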

The Design of Feature Selecting Algorithm for Sleep Stage Analysis (수면단계 분석을 위한 특징 선택 알고리즘 설계)

  • Lee, JeeEun;Yoo, Sun K.
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.10
    • /
    • pp.207-216
    • /
    • 2013
  • The aim of this study is to design a classifier for sleep stage analysis and to select an important feature set that represents sleep stages well, based on physiological signals recorded during sleep. Sleep has a significant effect on the quality of human life. When people suffer from a lack of sleep or a sleep-related disease, they are likely to experience reduced concentration, cognitive impairment, and other effects. Therefore, there has been a great deal of research on sleep stage analysis. In this study, after acquiring physiological signals during sleep, we perform pre-processing such as filtering to extract features. The features are used as input to a new combination algorithm using a genetic algorithm (GA) and neural networks (NN). The algorithm selects the features with high weights for classifying sleep stages. As a result, the accuracy of the algorithm reaches up to 90.26% with the electroencephalography (EEG) and electrocardiography (ECG) signals, and the selected features are the alpha and delta frequency band power of the EEG signal and the standard deviation of all normal RR intervals (SDNN) of the ECG signal. By repeating the algorithm, we confirmed that the selected features carry important information for classifying sleep stages. This research could be used not only to diagnose sleep-related diseases but also to provide a guideline for sleep stage analysis.
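
A compact sketch of one way a GA and an NN can be combined for feature selection, as described above: binary chromosomes select feature columns, and fitness is the cross-validated accuracy of a small neural network. The operators and hyperparameters below are assumptions, not the paper's exact algorithm.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

def fitness(mask, X, y):
    """Cross-validated accuracy of an NN trained on the columns selected by `mask`."""
    if mask.sum() == 0:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def ga_select(X, y, pop=20, gens=10, rng=np.random.default_rng(0)):
    n = X.shape[1]
    population = rng.integers(0, 2, size=(pop, n))           # binary chromosomes
    for _ in range(gens):
        scores = np.array([fitness(m, X, y) for m in population])
        parents = population[np.argsort(scores)[-pop // 2:]]  # keep the fittest half
        children = parents[rng.integers(0, len(parents), pop // 2)].copy()
        flip = rng.random(children.shape) < 0.05               # bit-flip mutation
        children[flip] ^= 1
        population = np.vstack([parents, children])
    best = population[np.argmax([fitness(m, X, y) for m in population])]
    return best  # 1s mark selected features (e.g. EEG band powers, ECG SDNN)
```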

Accurate Camera Calibration Method for Multiview Stereoscopic Image Acquisition (다중 입체 영상 획득을 위한 정밀 카메라 캘리브레이션 기법)

  • Kim, Jung Hee;Yun, Yeohun;Kim, Junsu;Yun, Kugjin;Cheong, Won-Sik;Kang, Suk-Ju
    • Journal of Broadcast Engineering
    • /
    • v.24 no.6
    • /
    • pp.919-927
    • /
    • 2019
  • In this paper, we propose an accurate camera calibration method for acquiring multiview stereoscopic images. Generally, camera calibration is performed using checkerboard-structured patterns. The checkerboard pattern simplifies the feature point extraction process and exploits the previously known lattice structure, which results in an accurate estimation of the relations between points on the 2-dimensional image and points in 3-dimensional space. Since the estimation accuracy of the camera parameters depends on feature matching, accurate detection of the checkerboard corners is crucial. Therefore, in this paper, we propose a method that achieves accurate camera calibration through accurate detection of checkerboard corners. The proposed method detects checkerboard corner candidates using 1-dimensional Gaussian filters, followed by a corner refinement process that removes outliers from the corner candidates and detects the checkerboard corners at sub-pixel precision. To verify the proposed method, we examine reprojection errors and camera location estimation results to confirm the estimation accuracy of the intrinsic and extrinsic camera parameters.
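
For context, a standard OpenCV calibration sketch of the overall workflow the paper refines: checkerboard corner detection, sub-pixel refinement, and estimation of intrinsic/extrinsic parameters with the reprojection RMS error reported by calibrateCamera. The pattern size and file names are hypothetical, and the refinement here is OpenCV's generic cornerSubPix rather than the proposed 1D-Gaussian-based method.

```python
import cv2
import numpy as np

pattern = (9, 6)                      # inner corners per row/column (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in ["view0.png", "view1.png", "view2.png"]:   # hypothetical file names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        continue
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    # refine corner candidates to sub-pixel accuracy
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS error:", rms)   # lower error indicates better calibration
```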

Building Dataset of Sensor-only Facilities for Autonomous Cooperative Driving

  • Hyung Lee;Chulwoo Park;Handong Lee;Junhyuk Lee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.1
    • /
    • pp.21-30
    • /
    • 2024
  • In this paper, we propose a method to build a sample dataset of the features of eight sensor-only facilities built as infrastructure for autonomous cooperative driving. Features are extracted from point cloud data acquired by LiDAR and built into a sample dataset for recognizing the facilities. To build the dataset, eight sensor-only facilities with high-brightness reflector sheets and a sensor acquisition system were developed. To extract the features of facilities located within a certain measurement distance from the acquired point cloud data, the DBSCAN method was first applied to the points and a modified Otsu method to the reflected intensity, after which a cylindrical projection was applied to the extracted points. The 3D point coordinates, the projected 2D coordinates, and the reflected intensity were set as the features of a facility, and the dataset was built along with labels. To check the effectiveness of the facility dataset built from LiDAR data, a common CNN model was selected, trained, and tested, showing an accuracy of about 90% or more and confirming the possibility of facility recognition. Through continuous experiments, we will improve the feature extraction algorithm for building the proposed dataset, improve its performance, and develop a dedicated model for recognizing sensor-only facilities for autonomous cooperative driving.
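
A minimal sketch of the assumed processing order described above: DBSCAN clustering of the LiDAR points, an Otsu-style threshold on reflected intensity, and a cylindrical projection of the remaining points to form the per-facility features. Parameter values are placeholders, not the authors' settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def otsu_threshold(values, bins=256):
    """Threshold maximizing between-class variance of a 1-D intensity distribution."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, bins):
        w0, w1 = p[:i].sum(), p[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (p[:i] * centers[:i]).sum() / w0
        m1 = (p[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

def facility_features(xyz, intensity):
    labels = DBSCAN(eps=0.3, min_samples=10).fit_predict(xyz)   # spatial clustering
    keep = (labels >= 0) & (intensity > otsu_threshold(intensity))
    pts = xyz[keep]
    theta = np.arctan2(pts[:, 1], pts[:, 0])                    # cylindrical projection angle
    # features: 3D coordinates, projected 2D coordinates (theta, z), reflected intensity
    return np.column_stack([pts, theta, pts[:, 2], intensity[keep]])
```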

Enhanced Reconstruction of Heavy Occluded Objects Using Estimation of Variance in Volumetric Integral Imaging (VII) (Volumetric 집적영상에서 분산 추정을 이용한 심하게 은폐된 물체의 향상된 복원)

  • Hwang, Yong-Seok;Kim, Eun-Soo
    • Korean Journal of Optics and Photonics
    • /
    • v.19 no.6
    • /
    • pp.389-393
    • /
    • 2008
  • Enhanced reconstruction of heavily occluded objects is presented using estimation of variance in computational integral imaging. The system is analyzed to extract the information needed for enhanced reconstruction from a set of elemental images. To obtain elemental images with enhanced resolution, low focus error, and large depth of focus, synthetic aperture integral imaging (SAII) utilizing a digital camera was adopted. The focused areas of the reconstructed image vary with the distance of the reconstruction plane. When an object is heavily occluded, it cannot be reconstructed simply by removing the occluding object. To reconstruct the occluded object while remedying the effect of heavy occlusion, a statistical technique is adopted.
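
As a generic illustration only (the paper's variance-estimation step is not reproduced here), the sketch below suppresses high-variance outlier samples among the elemental-image contributions to one reconstruction-plane pixel before averaging, so that the occluding object contributes less to the result.

```python
import numpy as np

def robust_pixel(samples, k=1.0):
    """samples: 1-D array of intensities mapped to one reconstruction-plane pixel."""
    mean, std = samples.mean(), samples.std()
    inliers = samples[np.abs(samples - mean) <= k * std]  # drop occluder-dominated samples
    return inliers.mean() if inliers.size else mean

# Hypothetical: 25 elemental-image contributions to one pixel, 5 of them occluded
samples = np.r_[np.random.normal(120, 5, 20), np.random.normal(30, 5, 5)]
print(robust_pixel(samples))
```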

The Lens Aberration Correction Method for Laser Precision Machining in Machine Vision System (머신비전 시스템에서 레이저 정밀 가공을 위한 렌즈 수차 보정 방법)

  • Park, Yang-Jae
    • Journal of Digital Convergence
    • /
    • v.10 no.10
    • /
    • pp.301-306
    • /
    • 2012
  • We propose a method for accurate image acquisition in a machine vision system in the present study. The most important requirement is that the various lenses play their optical role of forming an image of uniformly high quality that is faithful to the real scene. The input of the machine vision system, however, is distorted due to lens aberration. To solve this problem, a transformation defining the relationship between the real-world coordinate system and the image coordinate system is used: a mapping function, computed through matrix operations on the distances between corresponding coordinates, specifies the exact location. By correcting the focus tolerance of the lens caused by aberration, the precision of galvanometer-based laser machining operations can be improved. The aberration of an aspheric lens follows a two-dimensional curve, but existing linear lens correction methods suffer from time-consuming calibration that examines a large number of points. A method applying bilinear interpolation is proposed in order to reduce the machining error that occurs due to the lens aberration of the processing equipment.
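
A small sketch of how a bilinear correction table of the kind described above might be applied: measured distortion offsets on a sparse calibration grid are interpolated at an arbitrary target position. Grid spacing and offset values are hypothetical.

```python
import numpy as np

def bilinear_correction(x, y, grid_x, grid_y, offsets):
    """offsets[j, i] is the measured (dx, dy) at (grid_x[i], grid_y[j])."""
    i = np.clip(np.searchsorted(grid_x, x) - 1, 0, len(grid_x) - 2)
    j = np.clip(np.searchsorted(grid_y, y) - 1, 0, len(grid_y) - 2)
    tx = (x - grid_x[i]) / (grid_x[i + 1] - grid_x[i])
    ty = (y - grid_y[j]) / (grid_y[j + 1] - grid_y[j])
    top = (1 - tx) * offsets[j, i] + tx * offsets[j, i + 1]
    bottom = (1 - tx) * offsets[j + 1, i] + tx * offsets[j + 1, i + 1]
    return (1 - ty) * top + ty * bottom      # interpolated (dx, dy) correction

grid_x = np.linspace(-50, 50, 11)            # hypothetical 11x11 calibration grid (mm)
grid_y = np.linspace(-50, 50, 11)
offsets = np.random.normal(0, 0.02, (11, 11, 2))
print(bilinear_correction(12.3, -7.8, grid_x, grid_y, offsets))
```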

A Comparison of 3D Reconstruction through the Passive and Pseudo-Active Acquisition of Images (수동 및 반자동 영상획득을 통한 3차원 공간복원의 비교)

  • Jeona, MiJeong;Kim, DuBeom;Chai, YoungHo
    • Journal of Broadcast Engineering
    • /
    • v.21 no.1
    • /
    • pp.3-10
    • /
    • 2016
  • In this paper, two reconstructed point cloud sets containing 3D feature information are analyzed. For the 3D reconstruction of the interior of a building, the first image set was taken through sequential passive camera movement along a regular grid path, and the second set was obtained by applying a laser scanning process. Key points matched over all images are obtained by the SIFT (Scale Invariant Feature Transform) algorithm and are used for the registration of the point cloud data. The evaluated results are the number of points, the average density of the point cloud, and the point cloud generation time. Experimental results show the necessity of images from additional sensors as well as images from the camera for a more accurate 3D reconstruction of the interior of a building.
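
A brief sketch of the kind of point cloud comparison described above: point count and an average-density proxy estimated from the mean nearest-neighbour spacing with a k-d tree. The clouds below are random placeholders, not the reconstructed data sets.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_stats(points, k=8):
    """Point count and mean distance to the k nearest neighbours (density proxy)."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=k + 1)    # first neighbour is the point itself
    mean_spacing = dists[:, 1:].mean()
    return {"points": len(points), "mean_neighbour_spacing_m": float(mean_spacing)}

passive = np.random.rand(50_000, 3) * 10          # hypothetical camera-only cloud
pseudo_active = np.random.rand(120_000, 3) * 10   # hypothetical cloud with laser scanning
print(cloud_stats(passive))
print(cloud_stats(pseudo_active))
```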

Digital Surface Model Generation using Aerial Lidar Data and Ground Control Point Acquisition (항공 라이다 데이터를 이용한 공간해상도별 수치표면모형 제작 및 지상기준점 획득 가능성 분석)

  • Kim Kam-Rae;Hwang Won-Soon;Lee Ho-Nam
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference
    • /
    • 2006.04a
    • /
    • pp.485-490
    • /
    • 2006
  • In this study, Digital Surface Models (DSMs) of various spatial resolutions were constructed from LIDAR point data on a digital photogrammetric system. The accuracy of each DSM was then evaluated using GPS surveying data. In addition, observable features were classified and their accuracies were evaluated to verify their availability as Ground Control Points (GCPs). On SOCET SET, a digital photogrammetric system, five DSMs with spatial resolutions of 0.15 m, 0.5 m, 1.0 m, 2.5 m and 5.0 m were constructed, and the accuracy of each DSM was evaluated in terms of RMSE. The RMSEs of the DSMs were 0.03 m, 0.05 m, 0.08 m, 0.12 m and 0.19 m, respectively. Building features were observable in the DSMs with spatial resolutions of 0.15 m, 0.30 m and 0.50 m; on the contrary, they could hardly be observed at the other spatial resolutions. In comparison with the digital map at the 1:1,000 scale, the DSM at the 0.15 m spatial resolution was shifted horizontally by an RMSE of 0.6 m-0.7 m in each of the X and Y directions. Therefore, GCPs with a horizontal RMSE better than 1 m can be obtained from the DSM at the 0.15 m spatial resolution, whose vertical RMSE is 0.03 m-0.19 m, the same as the RMSE of the DSM. These points cannot be used for aerial triangulation in cartography but can be used as GCPs for modeling moderate-resolution satellite imagery.
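
For illustration, a simple gridding sketch (not the SOCET SET workflow): LiDAR returns are rasterized into DSMs at several resolutions by keeping the highest return per cell, and accuracy would then be evaluated against GPS-surveyed check points. All data and the gridding rule are assumptions.

```python
import numpy as np

def make_dsm(points, resolution):
    """Rasterize (N, 3) LiDAR points into a DSM grid, keeping the highest return per cell."""
    x, y, z = points.T
    xmin, ymin = x.min(), y.min()
    cols = int(np.ceil((x.max() - xmin) / resolution)) + 1
    rows = int(np.ceil((y.max() - ymin) / resolution)) + 1
    dsm = np.full((rows, cols), np.nan)
    ci = ((x - xmin) / resolution).astype(int)
    ri = ((y - ymin) / resolution).astype(int)
    for r, c, h in zip(ri, ci, z):
        if np.isnan(dsm[r, c]) or h > dsm[r, c]:
            dsm[r, c] = h
    return dsm, (xmin, ymin)

def dsm_rmse(dsm, origin, resolution, checkpoints):
    """RMSE of DSM heights against (N, 3) GPS-surveyed check points."""
    xmin, ymin = origin
    ci = ((checkpoints[:, 0] - xmin) / resolution).astype(int)
    ri = ((checkpoints[:, 1] - ymin) / resolution).astype(int)
    diff = dsm[ri, ci] - checkpoints[:, 2]
    return float(np.sqrt(np.nanmean(diff ** 2)))

points = np.random.rand(100_000, 3) * [100, 100, 30]   # hypothetical LiDAR returns (m)
for res in (0.15, 0.5, 1.0, 2.5, 5.0):
    dsm, origin = make_dsm(points, res)
    # dsm_rmse(dsm, origin, res, gps_checkpoints) would give the per-resolution accuracy
```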
