• Title/Summary/Keyword: 코드벡터 (codevector)

Search Results: 265

Advanced CBS (Cost Breakdown Structure) Code Search Technology Applying NLP (Natural Language Processing) of Artificial Intelligence (인공지능 자연어 처리 기법을 이용한 개선된 내역코드 탐색방법)

  • Kim, HanDo;Nam, JeongYong
    • KSCE Journal of Civil and Environmental Engineering Research / v.44 no.5 / pp.719-731 / 2024
  • For efficient construction management, linking BIM with schedule and cost is essential, but the difficulty of decomposing thousands of WBS and CBS items limits the application of 5D BIM. To solve this problem, a standardized WBS-CBS set is configured in advance; when a new construction project arises, each CBS item in the BOQ is automatically linked to a WBS item by finding the most similar text among the standard CBS (Public Procurement Service standard construction codes) of the already linked set. Artificial-intelligence natural language processing techniques were used to compare the text similarity of CBS items more efficiently. First, a civil term dictionary (CTD) was created that organizes the words used in civil projects and assigns them numerical values; the text of every CBS item was tokenized into words defined in the dictionary, converted into TF-IDF vectors, and matched by cosine similarity. The threshold for judging similarity was 0.62 (1: perfect match, 0: no match). By additionally considering the hierarchical structure of the CBS and changing keywords, the search success rate increased to nearly 70%.
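
As a rough illustration of the matching step this abstract describes, the sketch below builds TF-IDF vectors and links a new BOQ entry to the most similar standard CBS text when the cosine similarity clears the reported 0.62 threshold. The CBS strings and the default English tokenizer are hypothetical stand-ins for the paper's civil term dictionary (CTD).

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical standard CBS texts, assumed already linked to WBS items.
standard_cbs = [
    "concrete pouring foundation slab",
    "rebar fabrication and placement",
    "formwork installation retaining wall",
]
new_cbs = "placing rebar for foundation"   # hypothetical BOQ entry

vectorizer = TfidfVectorizer()
standard_vecs = vectorizer.fit_transform(standard_cbs)
query_vec = vectorizer.transform([new_cbs])

# Cosine similarity against every standard CBS; link only above the
# 0.62 threshold reported in the abstract.
sims = cosine_similarity(query_vec, standard_vecs)[0]
best = sims.argmax()
if sims[best] >= 0.62:
    print(f"link to: {standard_cbs[best]} (similarity {sims[best]:.2f})")
else:
    print("no sufficiently similar standard CBS found")
```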

A study on the development of severity-adjusted mortality prediction model for discharged patient with acute stroke using machine learning (머신러닝을 이용한 급성 뇌졸중 퇴원 환자의 중증도 보정 사망 예측 모형 개발에 관한 연구)

  • Baek, Seol-Kyung;Park, Jong-Ho;Kang, Sung-Hong;Park, Hye-Jin
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.11 / pp.126-136 / 2018
  • The purpose of this study was to develop a severity-adjusted model for predicting mortality in acute stroke patients using machine learning. Using the Korean National Hospital Discharge In-depth Injury Survey from 2006 to 2015, patients with disease codes I60-I63 (KCD 7) were extracted for analysis. Three tools were used for severity adjustment of comorbidity: the Charlson Comorbidity Index (CCI), the Elixhauser Comorbidity Index (ECI), and the Clinical Classifications Software (CCS). Severity-adjusted mortality prediction models for patients with acute stroke were developed using logistic regression, decision tree, neural network, and support vector machine methods. The most common comorbidities in stroke patients were uncomplicated hypertension (43.8%) under the ECI and essential hypertension (43.9%) under the CCS. Among the CCI, ECI, and CCS, the CCS had the highest AUC value and was confirmed as the best severity-adjustment tool. With the CCS variables (principal diagnosis, sex, age, admission route, and whether surgery was performed), the AUC was 0.808 for logistic regression, 0.785 for the decision tree, 0.809 for the neural network, and 0.830 for the support vector machine; the best predictive power was therefore achieved by the support vector machine. The results of this study can be used in establishing future health policy.
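
A hedged sketch of the model-comparison step described above: the same four classifier families, each scored by AUC on a held-out set. The synthetic data stands in for the discharge-survey features (diagnosis, sex, age, admission route, surgery), which are not public in this form.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary-outcome data standing in for the survey features.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=5),
    "neural network": MLPClassifier(max_iter=1000),
    "support vector machine": SVC(probability=True),  # enables predict_proba for AUC
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```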

A Block Disassembly Technique using Vectorized Edges for Synthesizing Mask Layouts (마스크 레이아웃 합성을 위한 벡터화한 변을 사용한 블록 분할 기법)

  • Son, Yeong-Chan;Ju, Ri-A;Yu, Sang-Dae
    • Journal of the Institute of Electronics Engineers of Korea SD / v.38 no.12 / pp.75-84 / 2001
  • Due to the high integration density of current integrated-circuit layouts, circuit elements must be designed to minimize the effect of parasitic elements, which can degrade circuit performance. Thus, before fabricating a chip, circuit designers should check whether the extracted netlist is correct and verify by simulation that the circuit performance satisfies the design specifications. In this paper, we propose a new block disassembly technique that can extract the geometric parameters of stacked MOSFETs and the distributed RCs of layout blocks. After applying it to the layout of a folded-cascode CMOS operational amplifier, we verified the connectivity and the effect of the components by simulating the extracted netlist with HSPICE.
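
The sketch below only illustrates the underlying representation the title refers to: a hypothetical rectilinear mask region whose boundary is turned into vectorized edges. It is not the paper's disassembly or parameter-extraction algorithm.

```python
def vectorize_edges(vertices):
    """Turn a closed polygon's vertex list into (dx, dy) edge vectors."""
    edges = []
    for (x0, y0), (x1, y1) in zip(vertices, vertices[1:] + vertices[:1]):
        edges.append((x1 - x0, y1 - y0))
    return edges

# Hypothetical L-shaped mask region (counterclockwise vertex list).
l_shape = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 3), (0, 3)]
for edge in vectorize_edges(l_shape):
    print(edge)   # horizontal edges have dy == 0, vertical edges dx == 0
```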


A Research on the Vector Search Algorithm for the PIV Flow Analysis of image data with large dynamic range (입자의 이동거리가 큰 영상데이터의 PIV 유동 해석을 위한 속도벡터 추적 알고리즘의 연구)

  • Kim Sung Kyun
    • Proceedings of the Korean Society of Computational Fluids Engineering Conference / 1998.11a / pp.13-18 / 1998
  • The practical use of particle image velocimetry (PIV), a whole-field velocity measurement method, requires fast, reliable, computer-based methods for tracking velocity vectors. Full-search block matching, the most widely studied and applied technique in both PIV and image coding and compression, is computationally costly. Many less expensive alternatives have been proposed, mostly in the area of image coding and compression. Among others, TSS, NTSS, and HPM have previously been introduced for PIV analysis and found to be successful. However, these algorithms assume a small dynamic range, with a maximum displacement of 7 pixels/frame. To analyze images with large displacements, this paper introduces even/odd field image separation and a simple version of a multi-resolution hierarchical procedure. Comparisons with other algorithms are summarized, and results of applying the method to turbulent backward-facing step flow show the improvement of the new algorithm.
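
The following sketch shows a generic three-step search (TSS), one of the low-cost block-matching alternatives the abstract mentions: the step size is halved each round while the best-matching candidate becomes the new search center. Image content, block size, and search range are illustrative, not the paper's settings.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return float(np.abs(a - b).sum())

def three_step_search(ref, cur, top, left, block=8, step=4):
    """Estimate the displacement of the block at (top, left) in cur
    relative to ref, halving the search step each iteration."""
    target = cur[top:top + block, left:left + block]
    cy, cx = top, left
    while step >= 1:
        best = (sad(ref[cy:cy + block, cx:cx + block], target), cy, cx)
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                y, x = cy + dy, cx + dx
                if 0 <= y <= ref.shape[0] - block and 0 <= x <= ref.shape[1] - block:
                    best = min(best, (sad(ref[y:y + block, x:x + block], target), y, x))
        _, cy, cx = best
        step //= 2
    return cy - top, cx - left   # (dy, dx) displacement in pixels

yy, xx = np.mgrid[0:64, 0:64]
ref = np.sin(xx / 5.0) + np.cos(yy / 7.0)       # smooth synthetic intensity field
cur = np.roll(ref, (3, -2), axis=(0, 1))        # shift the field down 3, left 2
print(three_step_search(ref, cur, 24, 24))      # expect (-3, 2)
```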


A Rate and Distortion Estimation Scheme for HEVC Hardware Implementation (하드웨어 구현에 적합한 HEVC 의 CU 단위 율 및 왜곡 예측 방법)

  • Lee, Bumshik;Kim, Munchurl
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2014.11a / pp.15-17 / 2014
  • This paper proposes a method that performs high-efficiency coding by estimating rate and distortion values without running the DCT and entropy coding, for implementing an HEVC codec with limited hardware resources. Compared with previous encoders, HEVC repeatedly performs large-block DCT and entropy coding within a hierarchical coding structure, so its complexity grows very large in hardware implementations. First, exploiting the property that the DCT matrix can be expressed as the product of a Hadamard transform matrix and another orthonormal transform matrix, DCT coefficients are generated by multiplying the Hadamard-transform output produced during encoding by a low-complexity orthonormal matrix, after which transform and quantization are performed. For distortion, the difference between the resulting quantized coefficients and the transform coefficients is computed as a sum of squares in the transform domain, so the inverse transform can be skipped and complexity reduced. In addition, the texture bit rate can be estimated without entropy coding by summing the number of quantized coefficients in each CU block, and the non-texture bit rate can be estimated by running a pseudo-CABAC code on the motion-vector bits. By estimating texture and non-texture bits and distortion at this low complexity, a bit-rate reduction of up to 33% was obtained compared with encoding using the Hadamard transform alone.
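
A hedged sketch of the two low-complexity estimates described above, using an illustrative 4x4 Hadamard matrix and quantization step: distortion is taken from the quantization error in the transform domain, skipping the inverse transform, and the texture rate is approximated by the number of nonzero quantized coefficients.

```python
import numpy as np

H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]])

def hadamard_2d(block):
    return H4 @ block @ H4.T

rng = np.random.default_rng(0)
residual = rng.integers(-16, 17, (4, 4))        # hypothetical prediction residual
coeffs = hadamard_2d(residual)

qstep = 8
quantized = np.round(coeffs / qstep)
reconstructed = quantized * qstep

# Distortion estimated in the transform domain: for an orthonormal transform
# this equals the pixel-domain SSD (Parseval), so the inverse transform is
# skipped. H4 is a scaled Hadamard matrix, hence the 1/16 normalization.
distortion = ((coeffs - reconstructed) ** 2).sum() / 16

# Texture-rate proxy: the count of nonzero quantized coefficients in the block.
rate_proxy = np.count_nonzero(quantized)
print(f"estimated distortion = {distortion:.1f}, rate proxy = {rate_proxy} coefficients")
```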


Development of a Real-time Voice Dialing System Using Discrete Hidden Markov Models (이산 HMM을 이용한 실시간 음성인식 다이얼링 시스템 개발)

  • Lee, Se-Woong;Choi, Seung-Ho;Lee, Mi-Suk;Kim, Hong-Kook;Oh, Kwang-Cheol;Kim, Ki-Chul;Lee, Hwang-Soo
    • The Journal of the Acoustical Society of Korea / v.13 no.1E / pp.89-95 / 1994
  • This paper describes the development of a real-time voice dialing system that can recognize a vocabulary of around one hundred words in speaker-independent mode. The voice recognition algorithm is implemented on a DSP board with a telephone interface, plugged into an IBM PC AT/486. On the DSP board, procedures for feature extraction, vector quantization (VQ), and end-point detection are performed simultaneously at every 10 msec frame interval to satisfy real-time constraints after the word starting point is detected. In addition, we optimize the VQ codebook size and the end-point detection procedure to reduce recognition time and memory requirements. The demonstration system was displayed in the MOBILAB of Korea Mobile Telecom at the Taejon EXPO '93.
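
A minimal sketch of the vector-quantization stage of such a front end: each feature frame is mapped to the index of its nearest codevector, producing the discrete observation sequence a discrete HMM consumes. The codebook size and feature dimension below are illustrative, not the system's actual values.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 12))   # hypothetical 64-entry codebook, 12-dim features
frames = rng.normal(size=(30, 12))     # hypothetical feature frames (one per 10 ms)

# Nearest codevector per frame by Euclidean distance.
dists = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
indices = dists.argmin(axis=1)
print(indices)   # discrete observation sequence fed to the HMM
```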


A Hashing Method Using PCA-based Clustering (PCA 기반 군집화를 이용한 해슁 기법)

  • Park, Cheong Hee
    • KIPS Transactions on Software and Data Engineering / v.3 no.6 / pp.215-218 / 2014
  • In hashing-based methods for approximate nearest neighbor (ANN) search, data points are mapped to k-bit binary codes, and nearest neighbors are searched in the binary embedding space. In this paper, we present a hashing method using a PCA-based clustering method, Principal Direction Divisive Partitioning (PDDP). PDDP is a clustering method that repeatedly partitions the cluster with the largest variance into two clusters using the first principal direction. The proposed hashing method utilizes the first principal direction as the projection direction for binary coding. Experimental results demonstrate that the proposed method is competitive with other hashing methods.
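
A sketch, following the abstract's description, of how the first principal direction yields one bit of a binary code: points are centered, projected onto the leading principal direction, and thresholded at zero. Repeating the split on each resulting cluster, as PDDP does, would yield further bits.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(200, 8))   # synthetic points standing in for real features

centered = data - data.mean(axis=0)
# Leading right singular vector of the centered data = first principal direction.
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projections = centered @ vt[0]

bits = (projections >= 0).astype(int)   # one hash bit per point
print(bits[:20])
```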

Joint Optimization of Source Codebooks and Channel Modulation Signal for AWGN Channels (AWGN 채널에서 VQ 부호책과 직교 진폭변조신호 좌표의 공동 최적화)

  • 한종기;박준현
    • The Journal of Korean Institute of Communications and Information Sciences / v.28 no.6C / pp.580-593 / 2003
  • A joint design scheme is proposed to optimize the source encoder and the modulation signal constellation by minimizing the end-to-end distortion, which includes both the quantization error and the channel distortion. The proposed scheme first optimizes the VQ codebook for a fixed modulation signal set, and then the modulation signals for the fixed VQ codebook; these two steps are repeated iteratively until they reach a locally optimal solution. It is shown that the performance of the proposed system can be further enhanced by employing a new, efficient mapping between codevectors and modulation signals. Simulation results show that a system jointly optimized with the proposed algorithms outperforms a conventional system based on a standard QAM signal set and a VQ codebook designed for a noiseless channel.
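
A hedged sketch of one half of the alternation described above: a channel-aware (COVQ-style) codebook update for a fixed channel, with the channel reduced to a simple index-transition matrix on a toy scalar source. The paper's complementary constellation-update step, which would alternate with this one, is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=1000)               # toy scalar source
codebook = np.array([-1.5, -0.5, 0.5, 1.5])   # initial codevectors
K = len(codebook)

eps = 0.05                                    # illustrative index-error rate
P = np.full((K, K), eps / (K - 1))            # P[j, k] = Pr(receive k | send j)
np.fill_diagonal(P, 1 - eps)

for _ in range(20):
    # Encoder step: assign each sample the index minimizing the *expected*
    # end-to-end distortion over channel transitions.
    exp_dist = P @ (samples[None, :] - codebook[:, None]) ** 2   # (K, N)
    assign = exp_dist.argmin(axis=0)
    # Codebook step: centroid update weighted by the receive probabilities.
    W = P[assign]                                                # (N, K)
    codebook = (W * samples[:, None]).sum(axis=0) / W.sum(axis=0)

print(codebook)
```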

Isolated-Word Speech Recognition using Variable-Frame Length Normalization (가변프레임 길이정규화를 이용한 단어음성인식)

  • Sin, Chan-Hu;Lee, Hui-Jeong;Park, Byeong-Cheol
    • The Journal of the Acoustical Society of Korea / v.6 no.4 / pp.21-30 / 1987
  • Length normalization by variable frame size is proposed as a novel approach to solving the problem that variation in the length of spoken words lowers recognition accuracy. Compared with normalization using a fixed frame size, this method reduces the number of frames representing a word and therefore shortens recognition time in the recognition stage. In this paper, variable-frame length normalization is applied to multisection vector quantization, and the efficiency of the method is evaluated in terms of recognition time and accuracy through practical recognition experiments.
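
A minimal sketch of length normalization by variable frame size, under the reading that each word is divided into a fixed number of frames whose size scales with the word's duration. The frame count, sampling assumptions, and mean-energy feature are illustrative.

```python
import numpy as np

def normalize_length(signal, n_frames=16):
    """Split a 1-D signal into n_frames variable-size frames and
    return one mean-energy feature per frame."""
    boundaries = np.linspace(0, len(signal), n_frames + 1).astype(int)
    return np.array([np.mean(signal[b0:b1] ** 2)
                     for b0, b1 in zip(boundaries[:-1], boundaries[1:])])

rng = np.random.default_rng(0)
short_word = rng.normal(size=3000)   # ~0.3 s at a hypothetical 10 kHz
long_word = rng.normal(size=9000)    # ~0.9 s

# Both words yield feature sequences of identical length.
print(normalize_length(short_word).shape, normalize_length(long_word).shape)
```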


A New Unsupervised Learning Network and Competitive Learning Algorithm Using Relative Similarity (상대유사도를 이용한 새로운 무감독학습 신경망 및 경쟁학습 알고리즘)

  • 류영재;임영철
    • Journal of the Korean Institute of Intelligent Systems / v.10 no.3 / pp.203-210 / 2000
  • In this paper, we propose a new unsupervised learning network and competitive learning algorithm for pattern classification. The proposed network is based on relative similarity, a similarity measure between the input data and the cluster groups; accordingly, the proposed network and algorithm are called the relative similarity network (RSN) and its learning algorithm. Based on the definitions of the similarity measure and the learning rule, the structure of the RSN is designed and pseudocode for the algorithm is described. In general pattern classification, the RSN, despite eliminating the learning rate, achieved performance identical to that of WTA and SOM. For patterns with unclear cluster boundaries, or with cluster groups of differing density and size, the RSN produced more effective classification than the other networks.
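
A heavily hedged sketch in the spirit of this abstract: similarity of an input to each cluster is normalized across clusters ("relative similarity"), the winner takes the input, and a running mean replaces the explicit learning rate. The paper's exact similarity definition is not reproduced here; the inverse-distance normalization below is one plausible reading.

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(loc=(0, 0), size=(50, 2)),
                  rng.normal(loc=(4, 4), size=(50, 2))])
rng.shuffle(data)

centroids = data[:2].copy()   # initialize two cluster centroids from the data
counts = np.ones(2)           # each centroid starts with its initializer as one member

for x in data:
    sims = 1.0 / (1e-9 + np.linalg.norm(centroids - x, axis=1))
    rel = sims / sims.sum()   # similarity relative to all cluster groups
    w = rel.argmax()          # winner-take-all competition
    counts[w] += 1
    centroids[w] += (x - centroids[w]) / counts[w]   # running mean, no learning rate

print(centroids)
```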
