• Title/Summary/Keyword: vector computer


Modified Component Mode Synthesis Method Using Ritz Vectors (Ritz 벡터를 이용한 수정 분할구조해석법)

  • 이인원;김동옥
    • Journal of KSNVE
    • /
    • v.3 no.1
    • /
    • pp.77-82
    • /
    • 1993
  • In general, dynamic analysis of large structures with the FEM (Finite Element Method) requires large computer memory and long computation times. For economical dynamic analysis of large structures, most engineers want an efficient solution algorithm. This paper reports a modified CMS (Component Mode Synthesis) method that uses a more efficient algorithm than the classical CMS method. In this paper, it is shown that Ritz vector sets can play the role of the normal-mode vector sets of the substructures in the CMS algorithm. The modified CMS method shows better convergence than the classical CMS method.

  • PDF
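
The abstract above replaces substructure normal modes with Ritz vectors. As a minimal sketch of how load-dependent Ritz vectors are typically generated (a Wilson/WYD-style recurrence; the toy 4-DOF system and all names are illustrative, not taken from the paper):

```python
import numpy as np

def ritz_vectors(K, M, f, n_vec):
    """Load-dependent Ritz vectors (Wilson/WYD-style recurrence, sketch):
    start from the static response to the load f, then repeatedly apply
    K^-1 M and M-orthonormalize against the vectors found so far."""
    n = K.shape[0]
    X = np.zeros((n, n_vec))
    x = np.linalg.solve(K, f)              # static response to the applied load
    X[:, 0] = x / np.sqrt(x @ M @ x)       # M-normalize
    for i in range(1, n_vec):
        x = np.linalg.solve(K, M @ X[:, i - 1])   # inertia-load recurrence
        for j in range(i):                        # Gram-Schmidt in the M inner product
            x -= (X[:, j] @ M @ x) * X[:, j]
        X[:, i] = x / np.sqrt(x @ M @ x)
    return X

# Toy 4-DOF spring-mass chain (illustrative only)
K = 2.0 * np.eye(4) - np.diag([1.0, 1.0, 1.0], 1) - np.diag([1.0, 1.0, 1.0], -1)
M = np.eye(4)
f = np.array([0.0, 0.0, 0.0, 1.0])
X = ritz_vectors(K, M, f, 2)
print(np.round(X.T @ M @ X, 6))   # identity up to rounding: M-orthonormal basis
```

Because each vector is driven by the actual load, a few Ritz vectors often capture the response that would otherwise require many exact normal modes, which is the efficiency argument the abstract makes.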

Bayes Estimation in a Hierarchical Linear Model

  • Park, Kuey-Chung;Chang, In-Hong;Kim, Byung-Hwee
    • Journal of the Korean Statistical Society
    • /
    • v.27 no.1
    • /
    • pp.1-10
    • /
    • 1998
  • In the problem of estimating a vector of unknown regression coefficients under sum-of-squared-error loss in a hierarchical linear model, we propose a hierarchical Bayes estimator of the coefficient vector and then prove the admissibility of this estimator using Blyth's (1951) method.

  • PDF

A compensated algorithm for direction-of-arrival estimation of the linear array with faulty sensors (결함센서를 갖는 선형 어레이의 방향 추정을 위한 보정 알고리듬)

  • 김기만;윤대희
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.7
    • /
    • pp.1574-1578
    • /
    • 1997
  • In this paper, problems caused by faulty elements in a direction-finding system composed of a linear array are studied, and a method that improves the performance is proposed. A faulty element is a sensor that has no output or a much lower gain than the other, normal sensors. In the presented method, a correcting vector is calculated by maximizing the spatial spectrum subject to a constraint, and the compensated spatial spectrum is obtained from this vector. Computer simulations have been performed to study the performance of the proposed method.

  • PDF
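
The abstract gives no formulas, but the effect of masking a faulty channel out of a spatial spectrum can be sketched. The following uses a conventional Bartlett spectrum with a per-sensor weight vector as a crude stand-in for the paper's constrained correcting vector; the 8-sensor scenario and all names are assumptions:

```python
import numpy as np

def bartlett_spectrum(R, angles, d=0.5, w=None):
    """Conventional (Bartlett) spatial spectrum of a uniform linear array.
    `w` is a per-sensor weight; zeroing a faulty channel is a crude
    stand-in for the paper's constrained correcting vector (assumption)."""
    n = R.shape[0]
    if w is None:
        w = np.ones(n)
    spec = []
    for th in angles:
        a = np.exp(2j * np.pi * d * np.arange(n) * np.sin(np.radians(th))) * w
        spec.append(np.real(a.conj() @ R @ a) / (np.abs(w).sum() ** 2))
    return np.array(spec)

# One source at 20 degrees on an 8-sensor ULA; sensor 3 is dead (zero output)
n, theta = 8, 20.0
a = np.exp(2j * np.pi * 0.5 * np.arange(n) * np.sin(np.radians(theta)))
fault = np.ones(n); fault[3] = 0.0            # faulty-element gain pattern
R = np.outer(fault * a, (fault * a).conj()) + 0.01 * np.eye(n)

angles = np.arange(-90, 90.5, 0.5)
p_plain = bartlett_spectrum(R, angles)
p_comp = bartlett_spectrum(R, angles, w=fault)  # exclude the dead channel
print(angles[np.argmax(p_comp)])                # peak near the true 20 degrees
```

Steering the remaining seven live sensors keeps the spectrum peak at the true bearing, which illustrates why compensating for the faulty element matters.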

A stereo matching method using minimum feature vector distance and disparity map (최소 특징 벡터 거리와 변이지도를 이용한 스테레오 정합 기법)

  • Ye, Chul-Soo
    • Proceedings of the IEEK Conference
    • /
    • 2006.06a
    • /
    • pp.403-404
    • /
    • 2006
  • In this paper, we propose a multi-dimensional feature-vector matching method combined with a disparity smoothness constraint. The smoothness constraint is calculated from the differences between the disparity of the center pixel and those of its 4-neighbor pixels. By applying the proposed algorithm to IKONOS satellite stereo imagery, we obtained robust stereo matching results in urban areas.

  • PDF
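
The smoothness term is described concretely enough to sketch: the difference between the centre pixel's disparity and those of its 4-neighbours. A minimal illustration (the weight `lam` and the combined cost are assumptions, not the paper's exact formulation):

```python
import numpy as np

def smoothness_cost(disp, y, x):
    """Smoothness term from the abstract: sum of absolute differences
    between the centre pixel's disparity and its 4-neighbours'."""
    h, w = disp.shape
    cost = 0.0
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w:
            cost += abs(disp[y, x] - disp[ny, nx])
    return cost

def matching_cost(fl, fr, lam, disp, y, x):
    """Combined cost (hypothetical weighting): feature-vector distance
    plus lam times the smoothness penalty."""
    return np.linalg.norm(fl - fr) + lam * smoothness_cost(disp, y, x)

disp = np.array([[1, 1, 1],
                 [1, 5, 1],
                 [1, 1, 1]], dtype=float)
print(smoothness_cost(disp, 1, 1))   # 16.0: centre disagrees with all 4 neighbours
print(matching_cost(np.array([1.0, 0.0]), np.array([0.0, 0.0]), 0.1, disp, 1, 1))
```

A candidate disparity that breaks smoothness with its neighbourhood is penalised, which pushes the matcher toward piecewise-smooth disparity maps.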

A Character Recognition System for Gerber File through Modularized Neural Network (모듈화된 신경회로망을 이용한 거버 문자 인식 시스템 구현)

  • Oh, Hye-Won;Park, Tae-Hyong
    • Proceedings of the KIEE Conference
    • /
    • 2003.07d
    • /
    • pp.2549-2551
    • /
    • 2003
  • We propose a character recognition system for Gerber files. A Gerber file is a vector-formatted drawing file for PCB manufacturing. To handle the special vector format and rotated characters, we develop segmentation and feature extraction methods. A modularized neural network is then applied as the recognition algorithm. Finally, comparative simulation results are presented to verify the usefulness of the proposed method.

  • PDF

Improvement of the Modified James-Stein Estimator with Shrinkage Point and Constraints on the Norm

  • Kim, Jae Hyun;Baek, Hoh Yoo
    • Journal of Integrative Natural Science
    • /
    • v.6 no.4
    • /
    • pp.251-255
    • /
    • 2013
  • For the mean vector of a p-variate normal distribution ($p{\geq}4$), the optimal estimator within the class of modified James-Stein type decision rules under quadratic loss is given when the underlying distribution is a variance mixture of normals and the norm ${\parallel}{\theta}-\bar{\theta}1{\parallel}$ is known.
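
For orientation, a textbook James-Stein rule that shrinks toward the common-mean point xbar·1 (Lindley's variant, with its positive-part correction) can be sketched as follows; this is a stand-in for the general idea, not the paper's modified rule:

```python
import numpy as np

def js_shrink_to_mean(x, sigma2=1.0):
    """Textbook James-Stein estimator shrinking toward xbar*1
    (Lindley's variant with the positive-part correction); a sketch
    of the idea, not the paper's exact modified decision rule."""
    p = x.size
    xbar = x.mean()
    dev = x - xbar
    ss = dev @ dev
    factor = max(0.0, 1.0 - (p - 3) * sigma2 / ss) if ss > 0 else 0.0
    return xbar + factor * dev

x = np.array([2.0, 0.5, -1.0, 3.0, 1.5])
est = js_shrink_to_mean(x)
print(est.mean() - x.mean())   # ~0: shrinkage preserves the sample mean
```

Every component is pulled toward the shrinkage point, and the shrinkage factor grows as the observations cluster more tightly around it.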

Speaker verification system combining attention-long short term memory based speaker embedding and I-vector in far-field and noisy environments (Attention-long short term memory 기반의 화자 임베딩과 I-vector를 결합한 원거리 및 잡음 환경에서의 화자 검증 알고리즘)

  • Bae, Ara;Kim, Wooil
    • The Journal of the Acoustical Society of Korea
    • /
    • v.39 no.2
    • /
    • pp.137-142
    • /
    • 2020
  • Many studies based on the I-vector have been conducted in a variety of environments, from text-dependent short utterances to text-independent long utterances. In this paper, we propose a speaker verification system for far-field and noisy environments that combines an I-vector with Probabilistic Linear Discriminant Analysis (PLDA) and a speaker embedding from a Long Short-Term Memory (LSTM) network with an attention mechanism. The LSTM model's Equal Error Rate (EER) is 15.52 % and the attention-LSTM model's is 8.46 %, a 7.06 % improvement. We show that the proposed method addresses the problem of the existing extraction process, which defines the embedding heuristically. The EER of the I-vector/PLDA system alone is 6.18 %, the best single-system result; combined with the attention-LSTM based embedding, it is 2.57 %, which is 3.61 % lower than the baseline system, a 58.41 % relative improvement.
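
The EERs quoted above are computed in the standard way: sweep the decision threshold until the false-accept and false-reject rates cross. A minimal sketch on synthetic scores (the Gaussian score model is an assumption, not the paper's data):

```python
import numpy as np

def equal_error_rate(target_scores, impostor_scores):
    """Equal Error Rate: sweep the decision threshold over all observed
    scores and return the point where the false-accept rate (FAR) and
    false-reject rate (FRR) are closest."""
    thresholds = np.sort(np.concatenate([target_scores, impostor_scores]))
    best = (1.0, 0.0)
    for t in thresholds:
        far = np.mean(impostor_scores >= t)   # impostors wrongly accepted
        frr = np.mean(target_scores < t)      # targets wrongly rejected
        if abs(far - frr) < abs(best[0] - best[1]):
            best = (far, frr)
    return (best[0] + best[1]) / 2

rng = np.random.default_rng(0)
tgt = rng.normal(2.0, 1.0, 1000)   # genuine-speaker scores (synthetic)
imp = rng.normal(0.0, 1.0, 1000)   # impostor scores (synthetic)
eer = equal_error_rate(tgt, imp)
print(round(eer, 3))
```

A lower EER means the two score distributions are better separated, which is how the combined system's 2.57 % improves on the 6.18 % baseline.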

A Design of a Robust Vector Quantizer for Wavelet Transformed Images (웨이브렛변환 영상 부호화용 범용 벡터양자화기의 설계)

  • Do, Jae-Su;Cho, Young-Suk
    • Convergence Security Journal
    • /
    • v.6 no.4
    • /
    • pp.83-90
    • /
    • 2006
  • In this paper, we propose a new design method for a robust vector quantizer for wavelet-transformed image coding that is independent of the statistical characteristics of the input images. Conventional vector quantizers fail to produce good coding results because of the mismatch between the statistical properties of the image to be quantized and those of the training sequence used to build the quantizer's codebook. To solve this problem, we use a pseudo image as the training sequence for generating the codebook; the pseudo image is created by adding a correlation component and edge components to uniformly distributed random numbers. Through computer simulation, we clearly define the problem of conventional vector quantizers, which use real images as the training sequence, by comparing them with the proposed method, and we show that the proposed vector quantizer yields better coding results.

  • PDF
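
The pseudo training image is described as uniform random numbers plus correlation and edge components, from which the codebook is designed. A rough sketch, assuming a first-order recursive filter for the correlation, a single step edge, and plain k-means standing in for LBG codebook design (all of these are assumptions):

```python
import numpy as np

def pseudo_image(h, w, rho=0.9, rng=None):
    """Pseudo training image: uniform noise, a first-order recursive
    filter as the correlation component, and one step edge (all
    assumptions about the abstract's construction)."""
    if rng is None:
        rng = np.random.default_rng(0)
    img = rng.uniform(-1, 1, (h, w))
    for y in range(h):
        for x in range(1, w):
            img[y, x] = rho * img[y, x - 1] + (1 - rho) * img[y, x]
    img[h // 2:, :] += 0.5          # crude stand-in for the edge components
    return img

def train_codebook(vectors, k, iters=20, rng=None):
    """Plain k-means as a stand-in for LBG codebook design."""
    if rng is None:
        rng = np.random.default_rng(1)
    cb = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        idx = np.argmin(((vectors[:, None] - cb[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(idx == j):    # keep old codeword if cluster is empty
                cb[j] = vectors[idx == j].mean(axis=0)
    return cb

img = pseudo_image(32, 32)
blocks = img.reshape(8, 4, 8, 4).transpose(0, 2, 1, 3).reshape(-1, 16)  # 4x4 blocks
cb = train_codebook(blocks, 8)
print(cb.shape)   # (8, 16): 8 codewords, each a flattened 4x4 block
```

Because the training material is synthetic, the resulting codebook does not depend on any particular real image, which is the robustness the abstract claims.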

Multi-Vector Document Embedding Using Semantic Decomposition of Complex Documents (복합 문서의 의미적 분해를 통한 다중 벡터 문서 임베딩 방법론)

  • Park, Jongin;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.3
    • /
    • pp.19-41
    • /
    • 2019
  • According to the rapidly increasing demand for text data analysis, research and investment in text mining are being actively conducted not only in academia but also in various industries. Text mining is generally conducted in two steps. In the first step, the text of the collected documents is tokenized and structured to convert the original documents into a computer-readable form. In the second step, tasks such as document classification, clustering, and topic modeling are conducted according to the purpose of the analysis. Until recently, text mining studies focused on the second step. However, with the discovery that the text structuring process substantially influences the quality of the analysis results, various embedding methods have been actively studied to improve the quality of the results by preserving the meaning of words and documents when representing text data as vectors. Unlike structured data, which can be directly fed to a variety of operations and traditional analysis techniques, unstructured text must first be transformed into a form the computer can understand. Mapping arbitrary objects into a space of a given dimension while maintaining their algebraic properties, in order to structure the text data, is called "embedding." Recently, attempts have been made to embed not only words but also sentences, paragraphs, and entire documents. In particular, as the demand for document embedding increases rapidly, many algorithms have been developed to support it. Among them, doc2Vec, which extends word2Vec and embeds each document into one vector, is the most widely used. However, the traditional document embedding method represented by doc2Vec generates a vector for each document from all of the words the document contains.
As a result, the document vector is affected not only by core words but also by miscellaneous words. Additionally, traditional document embedding schemes usually map each document to a single vector, so a complex document with multiple subjects is difficult to represent accurately with a single vector. In this paper, we propose a new multi-vector document embedding method to overcome these limitations. This study targets documents that explicitly separate body content and keywords; for a document without keywords, the method can be applied after extracting keywords through various analysis methods, but since keyword extraction is not the core subject of the proposed method, we describe the process for documents whose keywords are predefined in the text. The proposed method consists of (1) Parsing, (2) Word Embedding, (3) Keyword Vector Extraction, (4) Keyword Clustering, and (5) Multiple-Vector Generation. The specific process is as follows. All text in a document is tokenized, and each token is represented as an N-dimensional real-valued vector through word embedding. Then, to avoid the influence of miscellaneous words, the vectors corresponding to the keywords of each document are extracted to form a keyword-vector set for each document. Next, clustering is conducted on each document's keyword set to identify the multiple subjects included in the document. Finally, a multi-vector is generated from the keyword vectors constituting each cluster. Experiments on 3,147 academic papers revealed that the traditional single-vector approach cannot properly map complex documents because of interference among subjects within each vector.
With the proposed multi-vector based method, we ascertained that complex documents can be vectorized more accurately by eliminating this interference.
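
Steps (3)-(5) of the pipeline can be sketched on a toy vocabulary; the two-dimensional word vectors and plain k-means below are stand-ins for a trained word2vec model and the paper's clustering step:

```python
import numpy as np

# Toy word-embedding lookup standing in for a trained word2vec model (assumption)
emb = {
    "neural": np.array([1.0, 0.1]), "network": np.array([0.9, 0.2]),
    "training": np.array([0.8, 0.0]),
    "stock": np.array([0.0, 1.0]), "market": np.array([0.1, 0.9]),
    "price": np.array([0.2, 1.1]),
}

def multi_vector_embedding(keywords, k):
    """Steps (3)-(5): look up the keyword vectors, cluster them with
    plain k-means, and return one document vector per cluster."""
    vecs = np.array([emb[w] for w in keywords])   # (3) keyword vector set
    centers = vecs[:k].copy()                     # deterministic init
    for _ in range(10):                           # (4) k-means clustering
        idx = np.argmin(((vecs[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([vecs[idx == j].mean(axis=0) if np.any(idx == j)
                            else centers[j] for j in range(k)])
    return centers                                # (5) one vector per subject

doc_keywords = ["neural", "network", "training", "stock", "market", "price"]
doc_vectors = multi_vector_embedding(doc_keywords, k=2)
print(doc_vectors.shape)   # (2, 2): two subject vectors for one complex document
```

A document that mixes two subjects ends up with one vector per subject cluster rather than a single averaged vector, which is exactly the interference the paper sets out to eliminate.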

Comparison between the Vector and Tensor Approaches for the 3-dimensional Electro-optical Simulations of Liquid Crystal Displays (액정의 3차원적 전기광학 시뮬레이션에서 vector 와 tensor 모델링방법의 비교)

  • Jung, Sung-Min;Park, Woo-Sang
    • Proceedings of the KIEE Conference
    • /
    • 2001.11a
    • /
    • pp.32-34
    • /
    • 2001
  • In this study, the two modeling approaches used in analyzing the electro-optical characteristics of liquid crystal displays, the vector approach and the tensor approach, were compared and analyzed by simulating the governing equations of each. To this end, in addition to one-dimensional simulations, the three-dimensional liquid crystal director distribution was simulated under identical conditions so as to account for lateral field effects and disclination lines. The dynamic characteristics of the two approaches were found to differ considerably at local points within a unit pixel; accordingly, the tensor approach, which takes the nematic symmetry of the nematic liquid crystal director into account, was confirmed to be physically meaningful and to explain the actual phenomena clearly.

  • PDF