• Title/Summary/Keyword: Vector Approximation (벡터 근사)


Design of adaptive array antenna for tracking the source of maximum power and its application to CDMA mobile communication (최대 고유치 문제의 해를 이용한 적응 안테나 어레이와 CDMA 이동통신에의 응용)

  • 오정호;윤동운;최승원
    • The Journal of Korean Institute of Communications and Information Sciences / v.22 no.11 / pp.2594-2603 / 1997
  • A novel method of adaptive beam forming is presented in this paper. The proposed technique provides a suboptimal beam pattern that increases the Signal-to-Noise/Interference Ratio (SNR/SIR), and thus eventually the capacity of the communication channel, under the assumption that the desired signal is dominant over each interference component at the receiver, a condition achieved beforehand in Code Division Multiple Access (CDMA) mobile communications by the chip correlator. The main advantages of the new technique are: (1) the procedure requires neither reference signals nor a training period; (2) signal intercoherency affects neither the performance nor the complexity of the procedure; (3) the number of antennas need not exceed the number of signals of distinct arrival angles; (4) the entire procedure is iterative, so that a new suboptimal beam pattern is generated upon the arrival of each new data snapshot whose arrival angle keeps changing due to the mobility of the signal source; (5) the total amount of computation is reduced tremendously compared to most conventional beam-forming techniques, so that the suboptimal beam pattern is produced at every snapshot on a real-time basis. The total computational load for generating a new set of weights, including the update of an N-by-N (N is the number of antenna elements) autocovariance matrix, is $O(3N^2 + 12N)$. It can be reduced further, down to $O(11N)$, by approximating the matrix with the instantaneous signal vector.

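The $O(11N)$ variant described above replaces the full autocovariance update with the instantaneous outer product $xx^H$. A minimal power-iteration-style sketch of that idea (illustrative only; the forgetting factor and update form are assumptions, not the authors' exact recursion):

```python
import numpy as np

def update_weights(w, x, forgetting=0.9):
    """One power-iteration-style weight update toward the dominant
    eigenvector, approximating the autocovariance matrix by the
    instantaneous outer product x x^H (rank-1). The product
    (x x^H) w is computed as x * (x^H w) with two O(N) operations,
    avoiding any N-by-N matrix."""
    w_new = forgetting * w + (1 - forgetting) * x * np.vdot(x, w)
    return w_new / np.linalg.norm(w_new)
```

Fed snapshots dominated by one source, repeated updates align the weight vector with that source's steering vector, which is the suboptimal beam pattern the abstract refers to.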

Experimental Validation of Isogeometric Optimal Design (아이소-지오메트릭 형상 최적설계의 실험적 검증)

  • Choi, Myung-Jin;Yoon, Min-Ho;Cho, Seonho
    • Journal of the Computational Structural Engineering Institute of Korea / v.27 no.5 / pp.345-352 / 2014
  • In this paper, the CAD data for the optimal shape obtained by isogeometric shape optimization is used directly to fabricate the specimen with a 3D printer for experimental validation. In the conventional finite element method, the geometric approximation inherent in the mesh leads to accuracy issues in response analysis and design sensitivity analysis. Furthermore, in finite element based shape optimization, subsequent communication with the CAD description is required during the design optimization process, which results in the loss of optimal design information. The isogeometric analysis method employs the same NURBS basis functions and control points used in CAD systems, which makes it possible to use exact geometric properties such as normal vectors and curvature in the response analysis and design sensitivity analysis procedures. It also vastly simplifies the modification of complex geometries, without communicating with the CAD description of the geometry during the design optimization process. Therefore, the optimal design and material volume information is reflected exactly in the specimen fabricated for experimental validation. Through design optimization examples of elasticity problems, it is shown experimentally that the optimal design has higher stiffness than the initial design, and the experimental results match the numerical results very well. Using a non-contact optical 3D deformation measuring system to measure strain distributions, it is shown that stress concentration is significantly alleviated in the optimal design compared with the initial design.
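Isogeometric analysis builds the response analysis on the same basis functions the CAD system uses. As a rough illustration, the non-rational part of a NURBS basis follows the Cox-de Boor recursion (a textbook sketch, not the paper's implementation):

```python
def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of
    degree p at parameter u. NURBS bases, shared by CAD models and
    isogeometric analysis, are rational combinations of these."""
    if p == 0:
        # half-open interval convention: 1 on [knots[i], knots[i+1])
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) \
            * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) \
            * bspline_basis(i + 1, p - 1, u, knots)
    return left + right
```

On the open knot vector [0, 0, 0, 1, 1, 1] this reproduces the quadratic Bernstein polynomials, and at interior points the bases sum to one (partition of unity); evaluation exactly at the final knot needs special handling under this half-open convention.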

Modeling of Magnetotelluric Data Based on Finite Element Method: Calculation of Auxiliary Fields (유한요소법을 이용한 MT 탐사 자료의 모델링: 보조장 계산의 고찰)

  • Nam, Myung-Jin;Han, Nu-Ree;Kim, Hee-Joon;Song, Yoon-Ho
    • Geophysics and Geophysical Exploration / v.14 no.2 / pp.164-175 / 2011
  • Using natural electromagnetic (EM) fields at low frequencies, magnetotelluric (MT) surveys can investigate conductivity structures of the deep subsurface and are thus used to explore geothermal energy resources and to evaluate sites not only for geological $CO_2$ sequestration but also for enhanced geothermal systems (EGS). Moreover, marine MT data can support better interpretation of marine controlled-source EM data. In the interpretation of MT data, MT modeling schemes are important. This study improves a three-dimensional (3D) MT modeling algorithm that uses edge finite elements. The original algorithm computes magnetic fields by solving an integral form of Faraday's law of induction based on a finite difference (FD) strategy. However, the FD strategy limits the algorithm in computing vertical magnetic fields for a topographic model. The improved algorithm solves the differential form of Faraday's law of induction by differentiating the electric fields, which are represented as a sum of basis functions multiplied by corresponding weights. In numerical tests, vertical magnetic fields for topographic models computed with the improved algorithm overcome the limitation of the old algorithm. This study also recomputes induction vectors and tippers for the 3D hill and valley models whose responses had previously been computed with the old algorithm.
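The differential form of Faraday's law gives the vertical magnetic field from the curl of the horizontal electric fields. A simplified stand-in on a uniform grid (the paper instead differentiates the edge-element basis expansion; the $e^{i\omega t}$ time convention and free-space permeability are assumptions here):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # free-space magnetic permeability

def vertical_magnetic_field(Ex, Ey, dx, dy, omega):
    """Vertical magnetic field from the differential form of Faraday's
    law: with an e^{i*omega*t} convention, curl(E) = -i*omega*mu*H, so
    Hz = -(dEy/dx - dEx/dy) / (i*omega*mu0). Derivatives are taken by
    finite differences of grid-sampled fields; axis 0 is x, axis 1 is y."""
    dEy_dx = np.gradient(Ey, dx, axis=0)
    dEx_dy = np.gradient(Ex, dy, axis=1)
    return -(dEy_dx - dEx_dy) / (1j * omega * MU0)
```

For a field with a spatially constant curl the result is constant, which gives a quick sanity check on the sign convention.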

Fast GPU Implementation for the Solution of Tridiagonal Matrix Systems (삼중대각행렬 시스템 풀이의 빠른 GPU 구현)

  • Kim, Yong-Hee;Lee, Sung-Kee
    • Journal of KIISE:Computer Systems and Theory / v.32 no.11_12 / pp.692-704 / 2005
  • With the improvement of computer hardware, GPUs (Graphics Processing Units) have acquired tremendous memory bandwidth and computational power, which has led to their use in general-purpose computation. In particular, GPU implementations of compute-intensive physics-based simulations are actively studied. In solving the differential equations that underlie physics simulations, tridiagonal matrix systems arise repeatedly from finite-difference approximation, so their fast solution is an important research area for physics-based simulation. We propose a fast GPU implementation for the solution of tridiagonal matrix systems. In this paper, we implement the cyclic reduction (also known as odd-even reduction) algorithm, a popular choice for vector processors. We obtained a considerable performance improvement in solving tridiagonal matrix systems over the Thomas method and the conjugate gradient method; the Thomas method is the well-known CPU solver for tridiagonal systems, and the conjugate gradient method has shown good results on GPUs. We evaluated the proposed method by applying it to heat conduction, advection-diffusion, and shallow water simulations, which ran at a remarkable rate of over 35 frames per second on a 1024x1024 grid.
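A serial reference sketch of cyclic (odd-even) reduction may clarify why it maps well to GPUs: within each sweep, every retained equation is updated independently of the others, so all updates can run in parallel. This is a CPU illustration for systems of size $n = 2^k - 1$, not the paper's GPU kernel:

```python
import numpy as np

def cyclic_reduction_solve(a, b, c, d):
    """Cyclic (odd-even) reduction for a tridiagonal system of size
    n = 2**k - 1, with sub-diagonal a (a[0] unused), diagonal b,
    super-diagonal c (c[-1] unused), and right-hand side d. Each
    forward sweep eliminates every other unknown, halving the system;
    back substitution then fills the unknowns in reverse order."""
    n = len(d)
    # pad with identity rows so stencil neighbours never fall outside
    a = np.concatenate(([0.0], a, [0.0]))
    b = np.concatenate(([1.0], b, [1.0]))
    c = np.concatenate(([0.0], c, [0.0]))
    d = np.concatenate(([0.0], d, [0.0]))
    x = np.zeros(n + 2)
    stride = 1
    # forward reduction: rows at even multiples of the stride absorb
    # their two neighbours, doubling their coupling distance each sweep
    while stride < n:
        s2 = 2 * stride
        for i in range(s2, n + 1, s2):  # independent: parallel on a GPU
            al = -a[i] / b[i - stride]
            be = -c[i] / b[i + stride]
            d[i] += al * d[i - stride] + be * d[i + stride]
            b[i] += al * c[i - stride] + be * a[i + stride]
            a[i] = al * a[i - stride]
            c[i] = be * c[i + stride]
        stride = s2
    # back substitution, from the coarsest level down to stride 1
    while stride >= 1:
        for i in range(stride, n + 1, 2 * stride):
            x[i] = (d[i] - a[i] * x[i - stride] - c[i] * x[i + stride]) / b[i]
        stride //= 2
    return x[1:n + 1]
```

The Thomas algorithm does the same elimination strictly sequentially, which is why it stays on the CPU while cyclic reduction's sweep-parallel structure suits fragment programs.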

Multiple Cause Model-based Topic Extraction and Semantic Kernel Construction from Text Documents (다중요인모델에 기반한 텍스트 문서에서의 토픽 추출 및 의미 커널 구축)

  • 장정호;장병탁
    • Journal of KIISE:Software and Applications / v.31 no.5 / pp.595-604 / 2004
  • Automatic analysis of concepts or semantic relations in text documents enables not only efficient acquisition of relevant information but also comparison of documents at the concept level. We present a multiple cause model-based approach to text analysis, in which latent topics are extracted automatically from document sets and similarity between documents is measured by semantic kernels constructed from the extracted topics. In our approach, a document is assumed to be generated by various combinations of underlying topics. A topic is defined by a set of words that are related to one another or co-occur frequently within a document. In a network representing a multiple-cause model, each topic is identified with a group of words having high connection weights from a latent node. To facilitate learning and inference in multiple-cause models, some approximation is required, and we use the approximation given by Helmholtz machines. In experiments on the TDT-2 data set, we extract sets of meaningful words in which each set contains theme-specific terms. Using semantic kernels constructed from the latent topics extracted by multiple cause models, we also achieve significant improvements over the basic vector space model in terms of retrieval effectiveness.
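As a toy illustration of the kernel construction, suppose the trained network yields a topics-by-vocabulary weight matrix W. One common form of semantic kernel, K = WᵀW, then scores two documents as similar when they use words tied to the same latent topic, even when they share no terms; the hand-set weights below stand in for the learned multiple-cause network:

```python
import numpy as np

def semantic_kernel(topic_word):
    """Semantic kernel K = W^T W from a (topics x vocab) weight matrix W,
    so that the document similarity d1^T K d2 is the inner product of the
    documents' topic-space projections rather than raw term overlap."""
    return topic_word.T @ topic_word

# hypothetical 2 topics over a 4-word vocabulary (hand-set, not learned)
W = np.array([[1.0, 1.0, 0.0, 0.0],   # topic A: words 0 and 1
              [0.0, 0.0, 1.0, 1.0]])  # topic B: words 2 and 3
K = semantic_kernel(W)
d1 = np.array([1.0, 0.0, 0.0, 0.0])   # document using word 0
d2 = np.array([0.0, 1.0, 0.0, 0.0])   # word 1: same topic, no term overlap
d3 = np.array([0.0, 0.0, 1.0, 0.0])   # word 2: different topic
```

Under the basic vector space model d1 and d2 have zero similarity; under the semantic kernel they score positively because their words load on the same topic.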

Domain-Specific Terminology Mapping Methodology Using Supervised Autoencoders (지도학습 오토인코더를 이용한 전문어의 범용어 공간 매핑 방법론)

  • Byung Ho Yoon;Junwoo Kim;Namgyu Kim
    • Information Systems Review / v.25 no.1 / pp.93-110 / 2023
  • Recently, attempts have been made to convert unstructured text into vectors and to analyze vast amounts of natural language for various purposes. In particular, demand for analyzing texts in specialized domains is rapidly increasing, and studies are being conducted to analyze specialized and general-purpose documents simultaneously. To analyze specialized terms together with general terms, it is necessary to align the embedding space of the specialized terms with that of the general terms. So far, attempts have been made to map the embeddings of specialized terms into the embedding space of general terms through a transformation matrix or mapping function. However, linear transformation based on a transformation matrix works well only in a local range. To overcome this limitation, various nonlinear vector alignment methods have recently been proposed. We propose a vector alignment model that matches the embedding space of specialized terms to the embedding space of general terms through end-to-end learning that trains an autoencoder and a regression model simultaneously. In experiments with R&D documents in the "Healthcare" field, we confirmed that the proposed methodology shows superior accuracy compared to the traditional model.
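A minimal sketch of the two branches such a supervised autoencoder combines, with hypothetical weight matrices and loss weighting (the paper learns the weights end to end; no training loop is shown):

```python
import numpy as np

def forward(x, W_enc, W_dec, W_reg):
    """Forward pass of a supervised autoencoder: encode the specialized-
    term embedding x into a nonlinear bottleneck, decode it for
    reconstruction, and regress the bottleneck onto the general-term
    embedding space. All weight matrices here are illustrative."""
    h = np.tanh(W_enc @ x)   # nonlinear bottleneck
    x_rec = W_dec @ h        # reconstruction branch (autoencoder)
    y_hat = W_reg @ h        # alignment branch into the general space
    return x_rec, y_hat

def joint_loss(x, y, x_rec, y_hat, lam=0.5):
    """Combined objective: reconstruction error keeps the encoding
    faithful to the specialized embedding, while regression error pulls
    it toward the general-space target y; lam is an assumed trade-off."""
    return np.mean((x - x_rec) ** 2) + lam * np.mean((y - y_hat) ** 2)
```

Training both terms jointly is what makes the mapping nonlinear end to end, in contrast to fitting a single transformation matrix after the fact.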