• Title/Summary/Keyword: Normal Vector Computation

GPU-Based ECC Decode Unit for Efficient Massive Data Reception Acceleration

  • Kwon, Jisu;Seok, Moon Gi;Park, Daejin
    • Journal of Information Processing Systems
    • /
    • v.16 no.6
    • /
    • pp.1359-1371
    • /
    • 2020
  • In transmitting and receiving such large amounts of data, reliable communication is crucial for the normal operation of a device and for preventing abnormal operations caused by errors. This paper therefore assumes an environment in which massive data is received sequentially and protected by an error correction code (ECC) that can detect and correct errors by itself. Because an embedded system has limited resources, such as a low-performance processor or a small memory, its applications must operate efficiently. We propose accelerating ECC decoding with a graphics processing unit (GPU) built into the embedded system when a large amount of data is received. In the matrix-vector multiplication underlying the Hamming code used for the ECC operation, the matrix is expressed in compressed sparse row (CSR) format and a sparse matrix-vector product is used. The multiplication is performed in a GPU kernel, and the Hamming code computation is likewise accelerated so that the ECC operation can be performed in parallel. The proposed technique is implemented with CUDA on a GPU-embedded target board, the NVIDIA Jetson TX2, and its execution time is compared with that of the CPU.
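
A minimal sketch of the kind of computation the abstract describes, assuming a (7,4) Hamming code; this is plain NumPy/SciPy on the CPU, not the paper's CUDA kernel, which parallelizes the same sparse product over many received words:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Parity-check matrix H of the (7,4) Hamming code: column j is the binary
# representation of j + 1, so the syndrome read as binary gives the error position.
H = csr_matrix(np.array([
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
], dtype=np.uint8))

# All-zero codeword with bit 5 flipped (a single-bit error).
received = np.array([0, 0, 0, 0, 1, 0, 0], dtype=np.uint8)

syndrome = H.dot(received) % 2            # sparse matrix-vector product over GF(2)
error_pos = int("".join(str(b) for b in syndrome[::-1]), 2)
print(syndrome, error_pos)                # [1 0 1] 5 -> flip bit 5 to correct
```

The CSR layout stores only the nonzero entries of H, which is what keeps the memory footprint and the per-row work small when the same product is launched in parallel over many received words.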

Restricted maximum likelihood estimation of a censored random effects panel regression model

  • Lee, Minah;Lee, Seung-Chun
    • Communications for Statistical Applications and Methods
    • /
    • v.26 no.4
    • /
    • pp.371-383
    • /
    • 2019
  • Panel data sets have been developed in various areas, and many recent studies have analyzed panel, or longitudinal, data sets. Maximum likelihood (ML) may be the most common statistical method for analyzing panel data models; however, inference based on the ML estimate will have an inflated Type I error because the ML method tends to give a downwardly biased estimate of the variance components when the sample size is small. The underestimation can be severe when data are incomplete. This paper proposes the restricted maximum likelihood (REML) method for a random effects panel data model with a censored dependent variable. Note that the likelihood function of the model is complex in that it includes a multidimensional integral. Many authors have proposed integral approximation methods for computing the likelihood function; however, it is well known that such methods are inadequate for high-dimensional integrals in practice. This paper introduces the use of the moments of a truncated multivariate normal random vector for calculating the multidimensional integral. In addition, a proper asymptotic standard error of the REML estimate is given.
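
The downward bias of ML variance estimates that motivates REML can be seen in the simplest Gaussian case; the sketch below illustrates that bias only, not the paper's censored random effects panel model:

```python
# ML divides by n and is biased downward; REML divides by n - p (here n - 1)
# and is unbiased -- exactly the bias REML is designed to remove.
import numpy as np

rng = np.random.default_rng(0)
n, true_var, reps = 10, 4.0, 20000
samples = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))

ml_var = samples.var(axis=1, ddof=0).mean()    # ML: divide by n
reml_var = samples.var(axis=1, ddof=1).mean()  # REML-style: divide by n - 1
print(f"ML:   {ml_var:.3f} (biased low)")      # ~3.6 for n = 10
print(f"REML: {reml_var:.3f} (unbiased)")      # ~4.0
```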

Motion Direction Oriented Fast Block Matching Algorithm

  • Oh, Jeong-Su
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.15 no.9
    • /
    • pp.2007-2012
    • /
    • 2011
  • To reduce the huge computation of block matching, this paper proposes a fast block matching algorithm that limits the search points in the search area. On the basis of two facts, that most motion vectors are located in the central part of the search area and that the matching error decreases monotonically toward the best similar block, the proposed algorithm moves the matching pattern between steps by one pixel, predicts the motion direction toward the best similar block from the similar blocks decided in previous steps, and limits the movement of search points to ${\pm}45^{\circ}$ around that direction. As a result, it removes needless search points and reduces the block matching computation. In comparison with conventional similar algorithms, the proposed algorithm causes trivial image degradation in images with fast motion but keeps equivalent image quality in images with normal motion, while reducing their block matching computation by about 20% to over 67%.
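
A rough sketch of the directional restriction with hypothetical parameters (the step pattern and stopping rule are simplified, not the paper's exact algorithm): among the eight one-pixel moves, only those within ${\pm}45^{\circ}$ of the previously observed motion direction are kept as candidates.

```python
import numpy as np

def sad(block, ref, y, x):
    """Sum of absolute differences at candidate (y, x); bounds checks omitted."""
    h, w = block.shape
    return np.abs(ref[y:y+h, x:x+w].astype(int) - block.astype(int)).sum()

def directed_search(block, ref, y0, x0, steps=8):
    moves = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]
    y, x = y0, x0
    prev = None                                  # direction from the previous step
    for _ in range(steps):
        cands = moves if prev is None else [
            m for m in moves if m[0] * prev[0] + m[1] * prev[1] > 0
        ]                                        # keep moves within +/-45 degrees
        best = min([(y, x)] + [(y + dy, x + dx) for dy, dx in cands],
                   key=lambda p: sad(block, ref, p[0], p[1]))
        if best == (y, x):                       # matching error stopped decreasing
            break
        prev = (best[0] - y, best[1] - x)
        y, x = best
    return y - y0, x - x0                        # estimated motion vector
```

For these eight unit moves the dot-product test `m[0]*prev[0] + m[1]*prev[1] > 0` keeps exactly the three neighbors within ${\pm}45^{\circ}$ of the previous direction.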

Study on the direct approach to reinitialization in using level set method for simulating incompressible two-phase flows

  • Cho, Myung-H.;Choi, Hyoung-G.;Yoo, Jung-Y.
    • Proceedings of the Korean Society of Computational Fluids Engineering Conference
    • /
    • 2008.03b
    • /
    • pp.568-571
    • /
    • 2008
  • The computation of a moving interface by the level set method typically requires reinitialization of the level set function. An inaccurate estimation of the level set function ${\phi}$ results in incorrect free-surface capturing and thus in errors such as mass gain/loss. Therefore, an accurate and robust reinitialization process is essential for free-surface flows. In the present paper, we pursue further development of the reinitialization process, which evaluates the level set function ${\phi}$ directly using a normal vector on the interface, without solving the re-distancing equation of hyperbolic type. The Taylor-Galerkin approximation and the P1P1 splitting FEM are adopted to discretize the advection equation of the level set function and the Navier-Stokes equations, respectively. The advection equation of the free surface and the re-initialization process are validated with benchmark problems, i.e., a broken dam flow and a time-reversed single vortex flow. The simulation results are in good agreement with existing results.
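
A minimal 2-D sketch of the direct idea, not the authors' FEM formulation: near the interface, dividing ${\phi}$ by $|\nabla\phi|$ approximates the signed distance measured along the interface normal $\nabla\phi/|\nabla\phi|$, with no hyperbolic re-distancing equation to integrate.

```python
import numpy as np

def direct_reinit(phi, dx):
    gy, gx = np.gradient(phi, dx)             # grad(phi) on the grid
    grad_norm = np.sqrt(gx**2 + gy**2)
    grad_norm = np.maximum(grad_norm, 1e-12)  # guard against division by zero
    return phi / grad_norm                    # first-order signed distance

# Example: an ellipse-like level set whose gradient magnitude is far from 1.
n, dx = 101, 0.02
y, x = np.mgrid[0:n, 0:n] * dx - 1.0
phi0 = 4.0 * x**2 + y**2 - 0.25               # zero set: an ellipse
phi1 = direct_reinit(phi0, dx)                # |grad(phi1)| ~ 1 near the interface
```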

Study on the Solution of Reinitialization Equation for Level Set Method in the Simulation of Incompressible Two-Phase Flows

  • Cho, Myung-Hwan;Choi, Hyoung-Gwon;Yoo, Jung-Yul
    • Transactions of the Korean Society of Mechanical Engineers B
    • /
    • v.32 no.10
    • /
    • pp.754-760
    • /
    • 2008
  • The computation of a moving interface by the level set method typically requires reinitialization of the level set function. An inaccurate estimation of the level set function $\phi$ results in incorrect free-surface capturing and thus in errors such as mass gain/loss. Therefore, an accurate and robust reinitialization process is essential to the simulation of free-surface flows. In the present paper, we pursue further development of the reinitialization process, which evaluates the level set function directly using a normal vector on the interface, without solving the re-distancing equation of hyperbolic type. The Taylor-Galerkin approximation and the P1P1 splitting/SUPG (Streamline Upwind Petrov-Galerkin) FEM are adopted to discretize the advection equation of the level set function and the incompressible Navier-Stokes equations, respectively. The advection equation and the re-initialization process of free-surface capturing are validated with benchmark problems, i.e., a broken dam flow and a time-reversed single vortex flow. The simulation results are in good agreement with existing results.
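
For contrast with the direct approach, a minimal sketch of the classical hyperbolic re-distancing equation that it avoids, $\partial\phi/\partial\tau = \mathrm{sign}(\phi_0)(1 - |\nabla\phi|)$, iterated in pseudo-time until $|\nabla\phi| \approx 1$ (plain central differences for brevity; a production solver would use upwind differencing):

```python
import numpy as np

def redistance_pde(phi0, dx, dtau=0.005, iters=50):
    phi = phi0.copy()
    sgn = phi0 / np.sqrt(phi0**2 + dx**2)     # smoothed sign(phi0)
    for _ in range(iters):
        gy, gx = np.gradient(phi, dx)         # central differences, for brevity
        phi += dtau * sgn * (1.0 - np.sqrt(gx**2 + gy**2))
    return phi                                # |grad(phi)| driven toward 1
```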

Development of an Efficient Algorithm for the Minimum Distance Calculation between two Polyhedra in Three-Dimensional Space

  • Oh, Jae-Yoon;Kim, Ki-Ho
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.15 no.11
    • /
    • pp.130-136
    • /
    • 1998
  • This paper develops an efficient algorithm for calculating the minimum distance between two general polyhedra (convex and/or concave) in three-dimensional space. The polyhedra approximate objects using flat polygons composed of three or more vertices. The algorithm basically computes the minimum distance between two polygons (one polygon per object) and finds the pair of polygons that yields the global minimum distance. Its advantage is that the global minimum distance can be computed in all cases; its big disadvantage is that the computing time increases rapidly with the number of polygons used to approximate an object. To compensate for this inherent disadvantage, this paper develops a method to eliminate pairs of polygons that cannot contain the minimum distance, together with an efficient algorithm to compute the minimum distance between two polygons. The correctness of the algorithm is verified not only by comparing the analytically calculated exact minimum distance with the one calculated by the developed algorithm, but also by inspecting the line that connects the two points realizing the global minimum distance for a convex and/or a concave object. The algorithm efficiently finds the minimum distance between two convex objects made of 224 polygons each, with a computation time of about 0.1 second.
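
A simplified sketch of the pruning idea described above, not the paper's algorithm: polygon pairs whose bounding spheres are already farther apart than the best distance found so far are skipped, and the remaining pairs fall back to an exact test (here only edge-edge distances, which is incomplete in general since vertex-face cases are omitted):

```python
import numpy as np
from itertools import product

def seg_seg_dist(p1, q1, p2, q2):
    """Minimum distance between 3-D segments p1-q1 and p2-q2 (non-degenerate)."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e = d1 @ d1, d2 @ d2
    b, c, f = d1 @ d2, d1 @ r, d2 @ r
    denom = a * e - b * b
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > 1e-12 else 0.0
    t = np.clip((b * s + f) / e, 0.0, 1.0)
    s = np.clip((b * t - c) / a, 0.0, 1.0)        # re-clamp s for the clamped t
    return np.linalg.norm((p1 + s * d1) - (p2 + t * d2))

def poly_pair_dist(P, Q):
    """Edge-edge minimum only; vertex-face cases are omitted in this sketch."""
    edges = lambda V: [(V[i], V[(i + 1) % len(V)]) for i in range(len(V))]
    return min(seg_seg_dist(a, b, c, d)
               for (a, b), (c, d) in product(edges(P), edges(Q)))

def min_dist(polys_a, polys_b):
    """Global minimum with bounding-sphere pruning of hopeless polygon pairs."""
    sphere = lambda P: (P.mean(axis=0),
                        np.linalg.norm(P - P.mean(axis=0), axis=1).max())
    sa = [sphere(P) for P in polys_a]
    sb = [sphere(Q) for Q in polys_b]
    best = np.inf
    for (P, (cp, rp)), (Q, (cq, rq)) in product(zip(polys_a, sa),
                                                zip(polys_b, sb)):
        if np.linalg.norm(cp - cq) - rp - rq >= best:
            continue                              # pruned: cannot improve best
        best = min(best, poly_pair_dist(P, Q))
    return best
```

The pruning test costs one norm per pair, so pairs far from the current best are rejected without the expensive edge-pair enumeration.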

Lossless Compression for Hyperspectral Images based on Adaptive Band Selection and Adaptive Predictor Selection

  • Zhu, Fuquan;Wang, Huajun;Yang, Liping;Li, Changguo;Wang, Sen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.8
    • /
    • pp.3295-3311
    • /
    • 2020
  • With the wide application of hyperspectral images, compressing them becomes more and more important. The conventional recursive least squares (CRLS) algorithm has great potential for lossless compression of hyperspectral images. The prediction accuracy of CRLS is closely related to the correlations between the reference bands and the current band, and to the similarity between pixels in the prediction context. Based on this characteristic, we present an improved CRLS with adaptive band selection and adaptive predictor selection (CRLS-ABS-APS). First, a spectral vector correlation coefficient-based k-means clustering algorithm is employed to generate a clustering map. Afterwards, an adaptive band selection strategy based on the inter-spectral correlation coefficient is adopted to select the reference bands for each band. Then, an adaptive predictor selection strategy based on the clustering map is adopted to select the optimal CRLS predictor for each pixel. In addition, a double snake scan mode is used to further improve the similarity of the prediction context, and a recursive average estimation method is used to accelerate the local average calculation. Finally, the prediction residuals are entropy encoded by an arithmetic encoder. Experiments on the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) 2006 data set show that CRLS-ABS-APS achieves average bit rates of 3.28 bpp, 5.55 bpp and 2.39 bpp on the three subsets, respectively. The results indicate that CRLS-ABS-APS effectively improves the compression performance with lower computational complexity, and outperforms the current state-of-the-art methods.
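
A minimal recursive least squares (RLS) predictor sketch with hypothetical parameters, not the paper's CRLS-ABS-APS pipeline: the current band's pixels are predicted from co-located pixels of a few reference bands, the weights are updated recursively pixel by pixel, and the residuals would then be entropy coded.

```python
import numpy as np

def rls_predict(ref, cur, lam=0.999, delta=1e3):
    """ref: (num_pixels, k) reference-band context; cur: (num_pixels,) band."""
    k = ref.shape[1]
    w = np.zeros(k)                        # predictor weights
    P = np.eye(k) * delta                  # inverse correlation estimate
    residuals = np.empty(cur.shape, dtype=float)
    for i, (x, d) in enumerate(zip(ref, cur)):
        residuals[i] = d - w @ x           # prediction error
        g = P @ x / (lam + x @ P @ x)      # gain vector
        w = w + g * residuals[i]           # weight update
        P = (P - np.outer(g, x @ P)) / lam # exponentially weighted update
    return residuals                       # small residuals -> low bit rate
```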

ECG-based Biometric Authentication Using Random Forest

  • Kim, JeongKyun;Lee, Kang Bok;Hong, Sang Gi
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.6
    • /
    • pp.100-105
    • /
    • 2017
  • This work presents an ECG biometric recognition system for biometric authentication. ECG biometric approaches are divided into two major categories: fiducial-based and non-fiducial-based methods. This paper proposes a new non-fiducial framework using the discrete cosine transform (DCT) and a Random Forest (RF) classifier. Under the DCT, most of the signal information tends to be concentrated in a few low-frequency components, so the feature vector for the Random Forest is constructed from the first 40 DCT coefficients of each ECG heartbeat. RF is based on the computation of a large number of decision trees; it is relatively fast, robust, and inherently suitable for multi-class problems. Furthermore, a threshold inside the RF classifier trades off admission against rejection of an ID. As a result, the proposed method achieves a 99.9% recognition rate when tested on the MIT-BIH NSRDB.
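
A minimal sketch of this style of pipeline on synthetic data; the truncation to 40 DCT coefficients follows the abstract, while everything else (segment length, labels, forest size) is assumed:

```python
import numpy as np
from scipy.fft import dct
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
beats = rng.normal(size=(200, 256))        # stand-in for segmented heartbeats
ids = rng.integers(0, 5, size=200)         # stand-in subject labels

features = dct(beats, norm="ortho", axis=1)[:, :40]   # first 40 DCT coefficients
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, ids)

# Thresholding these class probabilities is where admission of an ID is
# traded off against rejection inside the classifier.
probs = clf.predict_proba(features[:1])
```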

Landslide susceptibility assessment using feature selection-based machine learning models

  • Liu, Lei-Lei;Yang, Can;Wang, Xiao-Mi
    • Geomechanics and Engineering
    • /
    • v.25 no.1
    • /
    • pp.1-16
    • /
    • 2021
  • Machine learning models have been widely used for landslide susceptibility assessment (LSA) in recent years. The large number of inputs or conditioning factors for these models, however, can reduce computational efficiency and increase the difficulty of collecting data. Feature selection is a good tool to address this problem by selecting the most important features among all factors to reduce the size of the input variables. However, two important questions need to be answered: (1) how do feature selection methods affect the performance of machine learning models? and (2) which feature selection method is the most suitable for a given machine learning model? This paper aims to address these two questions by comparing the predictive performance of 13 feature selection-based machine learning (FS-ML) models and 5 ordinary machine learning models on LSA. First, five commonly used machine learning models (i.e., logistic regression, support vector machine, artificial neural network, Gaussian process and random forest) and six typical feature selection methods from the literature are adopted to constitute the proposed models. Then, fifteen conditioning factors are chosen as input variables and 1,017 landslides are used as recorded data. Next, the feature selection methods are used to obtain the importance of the conditioning factors and to create feature subsets, based on which 13 FS-ML models are constructed. For each machine learning model, the best optimized FS-ML model is selected according to the area under curve (AUC) value. Finally, five optimal FS-ML models are obtained and applied to the LSA of the studied area. The predictive abilities of the FS-ML models on LSA are verified and compared through the receiver operating characteristic curve and statistical indicators such as sensitivity, specificity and accuracy. The results show that different feature selection methods have different effects on the performance of LSA machine learning models. FS-ML models generally outperform the ordinary machine learning models. The best FS-ML model is the recursive feature elimination (RFE) optimized RF, and RFE is an optimal method for feature selection.
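
A minimal sketch of the winning combination named above, an RFE-optimized random forest, on synthetic data; the factor count of fifteen follows the abstract, but the data and all other parameters are placeholders:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=1000, n_features=15, n_informative=8,
                           random_state=0)       # 15 stand-in conditioning factors
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rfe = RFE(rf, n_features_to_select=8).fit(X_tr, y_tr)   # drop weak factors
auc = roc_auc_score(y_te, rfe.predict_proba(X_te)[:, 1])
print(f"AUC with RFE-selected features: {auc:.3f}")     # model-selection criterion
```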