Title/Summary/Keyword: Sparse Data Set

Robust Real-Time Visual Odometry Estimation for 3D Scene Reconstruction

  • Kim, Joo-Hee;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering, v.4 no.4, pp.187-194, 2015
  • In this paper, we present an effective visual odometry estimation system to track the real-time pose of a camera moving in 3D space. To meet the real-time requirement while making full use of the rich information in color and depth images, our system adopts a feature-based sparse odometry estimation method. After matching features extracted across image frames, it alternates additional inlier-set refinement with motion refinement to obtain a more accurate estimate of the camera odometry. Moreover, even when the remaining inlier set is not sufficient, our system computes the final odometry estimate weighted in proportion to the size of the inlier set, which greatly improves the tracking success rate. Through experiments with the TUM benchmark datasets and an implementation of a 3D scene reconstruction application, we confirmed the high performance of the proposed visual odometry estimation method.
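
The alternating refinement loop described in the abstract maps naturally to code. Below is a minimal sketch of that idea, not the authors' implementation: the function names, the inlier threshold, and the Kabsch-based motion re-estimation are all assumptions, and the paper's proportional weighting of the final estimate is only noted in a comment.

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rigid transform mapping point set A onto B (Kabsch)."""
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # repair a reflection if SVD produced one
        Vt[-1] *= -1
        R = Vt.T @ U.T
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, cb - R @ ca
    return T

def refine_odometry(src, dst, T, n_iters=10, thresh=0.05):
    """Alternate inlier-set refinement and motion refinement (sketch).

    src, dst: (N, 3) matched 3D feature points from consecutive RGB-D frames.
    T: initial 4x4 camera-motion estimate (e.g., from RANSAC).
    The paper additionally weights the final estimate in proportion to the
    inlier-set size when few inliers remain; that step is omitted here.
    """
    src_h = np.c_[src, np.ones(len(src))]
    inliers = np.arange(len(src))
    for _ in range(n_iters):
        # Inlier refinement: keep matches with small transfer error under T.
        residuals = np.linalg.norm((src_h @ T.T)[:, :3] - dst, axis=1)
        inliers = np.where(residuals < thresh)[0]
        if len(inliers) < 3:
            break
        # Motion refinement: re-fit the rigid motion to the current inliers.
        T = rigid_transform(src[inliers], dst[inliers])
    return T, inliers
```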

A Hill-Sliding Strategy for Initialization of Gaussian Clusters in the Multidimensional Space

  • Park, J.Kyoungyoon;Chen, Yung-H.;Simons, Daryl-B.;Miller, Lee-D.
    • Korean Journal of Remote Sensing, v.1 no.1, pp.5-27, 1985
  • A hill-sliding technique was devised to extract Gaussian clusters from the multivariate probability density estimates of sample data as the first step of iterative unsupervised classification. The underlying assumption in this approach was that each cluster possesses a unimodal normal distribution. The key idea was that the proposed clustering function could distinguish elements of a cluster under formation from the rest in the feature space. Initial clusters were extracted one by one according to the hill-sliding tactics. A dimensionless cluster compactness parameter was proposed as a universal measure of cluster goodness and used satisfactorily in test runs with Landsat multispectral scanner (MSS) data. The normalized divergence, defined as the cluster divergence divided by the entropy of the entire sample data, was utilized as a general separability measure between clusters. An overall clustering objective function was set forth in terms of cluster covariance matrices, from which the cluster compactness measure could be deduced. Minimal improvement of the initial data partitioning was evaluated by this objective function in eliminating scattered sparse data points. The hill-sliding clustering technique developed herein is potentially applicable to decomposing any multivariate mixture distribution into a number of unimodal distributions when an appropriate distribution function for the data set is employed.
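
For concreteness, here is a hedged sketch of the separability measure named in the abstract: the classical symmetric (Jeffreys) divergence between two Gaussian clusters, normalized by a Gaussian-approximated entropy of the whole sample. The exact definitions used in the 1985 paper may differ; treat this as one plausible reading.

```python
import numpy as np

def gaussian_divergence(mu1, cov1, mu2, cov2):
    """Symmetric (Jeffreys) divergence between two Gaussian clusters,
    the classical cluster-separability measure in remote sensing."""
    inv1, inv2 = np.linalg.inv(cov1), np.linalg.inv(cov2)
    d_mu = (mu1 - mu2).reshape(-1, 1)
    cov_term = 0.5 * np.trace((cov1 - cov2) @ (inv2 - inv1))
    mean_term = 0.5 * np.trace((inv1 + inv2) @ (d_mu @ d_mu.T))
    return cov_term + mean_term

def normalized_divergence(mu1, cov1, mu2, cov2, sample):
    """Divergence divided by the entropy of the entire sample, here
    approximated with the differential entropy of a fitted Gaussian."""
    d = sample.shape[1]
    cov_all = np.cov(sample, rowvar=False)
    entropy = 0.5 * np.log((2 * np.pi * np.e) ** d * np.linalg.det(cov_all))
    return gaussian_divergence(mu1, cov1, mu2, cov2) / entropy
```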

K-means Clustering using a Center Of Gravity for grid-based sample

  • Park, Hee-Chang;Lee, Sun-Myung
    • Proceedings of the Korean Data and Information Science Society Conference, 2004.04a, pp.51-60, 2004
  • K-means clustering is an iterative algorithm in which items are moved among sets of clusters until the desired set is reached. K-means clustering has been widely used in many applications, such as market research, pattern analysis or recognition, and image processing. It can identify dense and sparse regions among data attributes or object attributes. However, the k-means algorithm can require many hours to obtain the desired k clusters, because it is primitive and exploratory. In this paper we propose a new method of k-means clustering using a center of gravity for a grid-based sample. It is faster than traditional clustering methods while maintaining accuracy.
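
The grid-based speedup lends itself to a short sketch: partition the feature space into cells, replace each occupied cell by its center of gravity, and run ordinary k-means on those few centroids. This is a minimal reading of the abstract, not the authors' code; the grid resolution and the final nearest-center assignment step are assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def grid_sample_kmeans(X, k, n_bins=20):
    """K-means on the centers of gravity of occupied grid cells (sketch)."""
    lo, hi = X.min(0), X.max(0)
    # Map every point to the index of its grid cell.
    cell = np.floor((X - lo) / (hi - lo + 1e-12) * n_bins).astype(int)
    cell = np.minimum(cell, n_bins - 1)
    keys = np.ravel_multi_index(cell.T, (n_bins,) * X.shape[1])
    # Center of gravity of the points falling in each occupied cell.
    uniq, inv, counts = np.unique(keys, return_inverse=True, return_counts=True)
    cog = np.zeros((len(uniq), X.shape[1]))
    np.add.at(cog, inv, X)
    cog /= counts[:, None]
    # Cluster the (much smaller) centroid set instead of all of X.
    centers, _ = kmeans2(cog, k, minit='++', seed=0)
    # Assign the original points to the nearest final center.
    labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
    return centers, labels
```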

Multiview-based Spectral Weighted and Low-Rank for Row-sparsity Hyperspectral Unmixing

  • Zhang, Shuaiyang;Hua, Wenshen;Liu, Jie;Li, Gang;Wang, Qianghui
    • Current Optics and Photonics, v.5 no.4, pp.431-443, 2021
  • Sparse unmixing has been proven to be an effective method for hyperspectral unmixing. Hyperspectral images contain rich spectral and spatial information, and making full use of spectral information and spatial information, together with enhanced sparsity constraints, is the main research direction for improving the accuracy of sparse unmixing. However, many algorithms focus on only one or two of these factors, because it is difficult to construct an unmixing model that considers all three. To address this issue, a novel algorithm called multiview-based spectral weighted and low-rank row-sparsity unmixing is proposed. A multiview data set is generated through spectral partitioning, and spectral weighting is then imposed on it to exploit the abundant spectral information. The row-sparsity approach, which controls sparsity through the l2,0 norm, outperforms the single-sparsity approach in many scenarios. Many algorithms use convex relaxation to handle the l2,0 norm and avoid the NP-hard problem, but this reduces sparsity and unmixing accuracy. In this paper, a row-hard-threshold function is introduced to solve the l2,0 norm directly, which guarantees the sparsity of the results. The high spatial correlation of hyperspectral images is associated with low column rank; therefore, a low-rank constraint is adopted to utilize spatial information. Experiments with simulated and real data show that the proposed algorithm obtains better unmixing results.
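
The row-hard-threshold idea is easy to illustrate. The sketch below is an assumption about the operator's form rather than the paper's exact definition: it enforces an l2,0 row-sparsity budget by keeping only the k rows of the abundance matrix with the largest l2 norms.

```python
import numpy as np

def row_hard_threshold(X, k):
    """Keep the k rows of X with the largest l2 norms and zero the rest.

    ||X||_{2,0} counts nonzero rows; hard thresholding enforces
    ||X||_{2,0} <= k exactly, avoiding the sparsity loss that a convex
    (e.g., l2,1) relaxation introduces.
    """
    row_norms = np.linalg.norm(X, axis=1)
    keep = np.argsort(row_norms)[-k:]   # indices of the k strongest rows
    out = np.zeros_like(X)
    out[keep] = X[keep]
    return out

# Example: 5 candidate endmembers (rows), at most 2 allowed to be active.
A = np.abs(np.random.randn(5, 100))
print(np.count_nonzero(np.linalg.norm(row_hard_threshold(A, 2), axis=1)))  # 2
```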

Cluster Feature Selection using Entropy Weighting and SVD

  • Lee, Young-Seok;Lee, Soo-Won
    • Journal of KIISE: Software and Applications, v.29 no.4, pp.248-257, 2002
  • Clustering is a method for grouping objects with similar properties into the same cluster. SVD (Singular Value Decomposition) is known to be an efficient preprocessing method for clustering because it reduces dimensionality and eliminates noise in high-dimensional, sparse data sets such as e-commerce data. However, it is hard to evaluate the worth of the original attributes because of the information lost when the data set is transformed by SVD. This research proposes a cluster feature selection method, called ENTROPY-SVD, to find important attributes for each cluster based on entropy weighting and SVD. Using SVD, one can take advantage of the latent structures in the association of attributes with similar objects; using entropy weighting, one can find highly dense attributes for each cluster. This paper also proposes a model-based collaborative filtering recommendation system built on ENTROPY-SVD, called CFS-CF, and evaluates its efficiency and utility.
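
As a rough illustration of the two ingredients, here is a sketch, under stated assumptions, of an entropy-based attribute weight within a cluster (low entropy = concentrated = characteristic) and a rank-r SVD projection. The actual ENTROPY-SVD formulas are defined in the paper; the discretization into levels below is an assumption.

```python
import numpy as np

def attribute_entropy_weights(cluster_rows, n_levels=10, eps=1e-12):
    """Per-attribute weight inside one cluster: attributes whose values
    concentrate in few levels (low entropy) are deemed characteristic."""
    n, d = cluster_rows.shape
    weights = np.zeros(d)
    for j in range(d):
        hist, _ = np.histogram(cluster_rows[:, j], bins=n_levels)
        p = hist / max(hist.sum(), 1)
        entropy = -np.sum(p * np.log(p + eps))
        weights[j] = 1.0 - entropy / np.log(n_levels)  # 1 = fully concentrated
    return weights

def svd_reduce(X, rank):
    """Project objects into a rank-r latent space before clustering."""
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :rank] * s[:rank]
```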

K-means clustering using a center of gravity for grid-based sample

  • Lee, Sun-Myung;Park, Hee-Chang
    • Journal of the Korean Data and Information Science Society, v.21 no.1, pp.121-128, 2010
  • K-means clustering is an iterative algorithm in which items are moved among sets of clusters until the desired set is reached. K-means clustering has been widely used in many applications, such as market research, pattern analysis or recognition, and image processing. It can identify dense and sparse regions among data attributes or object attributes. However, the k-means algorithm can require many hours to obtain the desired k clusters, because it is primitive and exploratory. In this paper we propose a new method of k-means clustering using a center of gravity for a grid-based sample. It is faster than traditional clustering methods while maintaining accuracy.

Multi-Description Image Compression Coding Algorithm Based on Depth Learning

  • Yong Zhang;Guoteng Hui;Lei Zhang
    • Journal of Information Processing Systems, v.19 no.2, pp.232-239, 2023
  • Aiming at the poor compression quality of the traditional image compression coding (ICC) algorithm, a multi-description ICC algorithm based on deep learning is put forward in this study. First, an image compression algorithm was designed based on multi-description coding theory. Image compression samples were collected, and the measurement matrix was calculated. Then, the multi-description ICC sample set was processed with a convolutional self-encoding (autoencoder) neural network from deep learning. Compressing the coded wavelet coefficients and synthesizing the multi-description image band sparse matrix yielded the multi-description ICC sequence. Finally, averaging the multi-description image coding data according to the positions of the effective single points realized the compression coding of multi-description images. According to the experimental results, the designed algorithm consumes less time for image compression and exhibits better image compression quality and a better image reconstruction effect.
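
The core multi-description principle, independently decodable descriptions that can be combined when all of them arrive, can be sketched without any of the paper's machinery. None of this is from the paper: there is no autoencoder, wavelet transform, or measurement matrix here, and the even/odd row split is purely illustrative.

```python
import numpy as np

def split_descriptions(img):
    """Two descriptions via even/odd row polyphase subsampling (sketch).
    Assumes an even number of rows."""
    return img[0::2], img[1::2]

def reconstruct(d0=None, d1=None):
    """Central reconstruction when both descriptions arrive; a crude
    side reconstruction (row duplication) when only one does."""
    if d0 is not None and d1 is not None:
        out = np.empty((d0.shape[0] + d1.shape[0],) + d0.shape[1:], d0.dtype)
        out[0::2], out[1::2] = d0, d1   # re-interleave the rows
        return out
    d = d0 if d0 is not None else d1
    return np.repeat(d, 2, axis=0)      # estimate missing rows by duplication
```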

ACA: Automatic search strategy for radioactive source

  • Jianwen Huo;Xulin Hu;Junling Wang;Li Hu
    • Nuclear Engineering and Technology, v.55 no.8, pp.3030-3038, 2023
  • Nowadays, mobile robots are used to search for uncontrolled radioactive sources in indoor environments to avoid radiation exposure for technicians. However, in indoor environments, especially in the presence of obstacles, making robots with limited sensing capabilities search for the radioactive source automatically remains a major challenge. The source search efficiency of robots also needs to be improved to meet practical constraints such as limited exploration time. This paper proposes an automatic source search strategy, abbreviated as ACA: the location of the source is estimated by a convolutional neural network (CNN), and the path is planned by the A-star algorithm. First, the search area is represented as an occupancy grid map. Then, the radiation dose distribution of the radioactive source in the occupancy grid map is obtained by Monte Carlo (MC) simulation, and multiple sets of radiation data are collected through the eight-neighborhood self-avoiding random walk (ENSAW) algorithm as the radiation data set. This data set is fed into the designed CNN architecture to train the network model in advance. When the searcher enters a search area containing a radioactive source, the location of the source is estimated by the network model and the search path is planned by the A-star algorithm; this process iterates until the searcher reaches the source. The experimental results show that the average number of radiometric measurements and the average number of moving steps of the ACA algorithm are only 2.1% and 33.2% of those of the gradient search (GS) algorithm in an obstacle-free indoor environment. In an indoor environment shielded by concrete walls, the GS algorithm fails to find the source, while the ACA algorithm succeeds with fewer moving steps and sparse radiometric data.
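
The CNN location estimator is specific to the paper, but the planning stage it feeds is the standard A-star algorithm on an occupancy grid, sketched below under the usual assumptions (4-connected unit-cost grid, Manhattan heuristic).

```python
import heapq, itertools

def a_star(grid, start, goal):
    """A* on a 2D occupancy grid; grid[r][c] == 1 marks an obstacle.
    Returns a list of (row, col) cells from start to goal, or None."""
    def h(a, b):                        # Manhattan-distance heuristic
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    rows, cols = len(grid), len(grid[0])
    tie = itertools.count()             # tie-breaker so the heap never compares cells
    frontier = [(h(start, goal), 0, next(tie), start, None)]
    parent, g_best = {}, {start: 0}
    while frontier:
        _, g, _, cur, par = heapq.heappop(frontier)
        if cur in parent:
            continue                    # already expanded at a better cost
        parent[cur] = par
        if cur == goal:                 # walk the parent chain back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < g_best.get(nxt, float('inf')):
                    g_best[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt, goal), ng, next(tie), nxt, cur))
    return None
```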

High Data Rate Ultra Wideband Space Time Coded OFDM

  • Lee Kwang-Jae;Chen Hsiao-Hwa;Lee Moon-Ho
    • Journal of the Institute of Electronics Engineers of Korea TC, v.43 no.7 s.349, pp.132-142, 2006
  • In this paper, we present a candidate high-data-rate space-time coded OFDM system for short-range personal networking. The system transmits complex space-time coded signals with a hybrid orthogonal frequency division multiplexing (OFDM) scheme based on ultra wideband (UWB) pulses. The transmitted signals are sparse pulse trains modulated by a frequency selected from a properly designed set of frequencies. Additionally, a widely linear (WL) receive filter and a space-time-frequency transmission are designed using two simple parallel linear detectors. To overcome deep fades in the propagation channel, beamforming combined with space-time block codes is also briefly discussed.
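
The paper's specific code design is not reproduced here, but the canonical complex space-time block code, the Alamouti scheme, shows the transmit/combine structure such systems build on; per OFDM subcarrier, the flat channel gains assumed below are reasonable. A minimal sketch with hypothetical helper names:

```python
import numpy as np

def alamouti_encode(s):
    """Alamouti space-time block encoding of a symbol stream (sketch).

    Pairs of symbols (s1, s2) are sent over two antennas in two slots:
        slot 1: antenna 1 sends s1,   antenna 2 sends s2
        slot 2: antenna 1 sends -s2*, antenna 2 sends s1*
    Assumes an even number of symbols.
    """
    s = np.asarray(s).reshape(-1, 2)
    s1, s2 = s[:, 0], s[:, 1]
    ant1 = np.column_stack([s1, -np.conj(s2)]).ravel()
    ant2 = np.column_stack([s2,  np.conj(s1)]).ravel()
    return ant1, ant2

def alamouti_decode(r, h1, h2):
    """Linear combining at one receive antenna with flat channel gains
    h1, h2 assumed constant over each two-slot block."""
    r = np.asarray(r).reshape(-1, 2)
    r1, r2 = r[:, 0], r[:, 1]
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return np.column_stack([s1_hat, s2_hat]).ravel() / (abs(h1)**2 + abs(h2)**2)
```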

Structural identification of Humber Bridge for performance prognosis

  • Rahbari, R.;Niu, J.;Brownjohn, J.M.W.;Koo, K.Y.
    • Smart Structures and Systems, v.15 no.3, pp.665-682, 2015
  • Structural identification, or St-Id, is 'the parametric correlation of structural response characteristics predicted by a mathematical model with analogous characteristics derived from experimental measurements'. This paper describes a St-Id exercise on the Humber Bridge that adopted a novel two-stage approach to first calibrate and then validate a mathematical model. This model was then used to predict the effects of wind and temperature loads on global static deformation that would be practically impossible to observe. The first stage of the process was an ambient vibration survey in 2008 that used operational modal analysis to estimate a set of modes classified as vertical, torsional or lateral. In the more recent second stage a finite element model (FEM) was developed with an appropriate level of refinement to provide a corresponding set of modal properties. A series of manual adjustments to modal parameters such as cable tension and bearing stiffness resulted in a FEM that produced excellent correspondence for vertical and torsional modes, along with correspondence for the lower-frequency lateral modes. In the third stage, traffic, wind and temperature data, along with deformation measurements from a sparse structural health monitoring system installed in 2011, were compared with equivalent predictions from the partially validated FEM. The match of static response between FEM and SHM data proved good enough for the FEM to be used to predict the un-measurable global deformed shape of the bridge due to vehicle and temperature effects, but the FEM had limited capability to reproduce the static effects of wind. In addition, the FEM was used to show internal forces due to a heavy vehicle and to estimate the worst-case bearing movements under extreme combinations of wind, traffic and temperature loads. The paper shows that in this case, with some limitations, such a two-stage FEM calibration/validation process can be an effective tool for performance prognosis.
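
The abstract speaks of 'parametric correlation' between measured and FEM modal properties; the standard tool for pairing mode shapes in St-Id work of this kind is the Modal Assurance Criterion (MAC). The sketch below is an assumption about, not a quotation of, the authors' procedure.

```python
import numpy as np

def mac(phi_test, phi_fem):
    """MAC matrix between measured and FEM mode shapes (one mode per
    column). Entries near 1 flag a matched experimental/FEM mode pair."""
    num = np.abs(phi_test.conj().T @ phi_fem) ** 2
    den = np.outer((np.abs(phi_test) ** 2).sum(0), (np.abs(phi_fem) ** 2).sum(0))
    return num / den
```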