• Title/Summary/Keyword: 도로 벡터 (road vector)

Search Results: 1,020

Half-pixel Accuracy Motion Estimation Using the Correlation of Motion Vectors (움직임벡터의 상관성을 이용한 반화소단위 움직임 추정 기법)

  • Lee, Bub-Ki;Lee, Kyeong-Hwan;Choi, Jung-Hyun;Kim, Duk-Gyoo
    • Journal of the Korean Institute of Telematics and Electronics S / v.35S no.6 / pp.119-126 / 1998
  • In this paper, we propose two new methods for half-pel accuracy motion estimation that use the spatial correlation of half-pel accuracy motion vectors and the stochastic relationship between pixel accuracy and half-pel accuracy motion vectors. We confirmed two facts: first, when neighboring blocks have the same pixel accuracy motion vector, the probability that they also have the same half-pel accuracy motion vector is high; second, there is high correlation between neighboring half-pel positions. The new half-pel motion estimation techniques are shown to reduce the bit rate for vector coding and the computational complexity while maintaining similar PSNR.
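
The abstract describes refining an integer-pel motion vector to half-pel precision on an interpolated reference frame, with the candidate half-pel positions restricted via neighboring-vector correlation. Below is a minimal Python sketch of the generic half-pel refinement step only; the bilinear upsampling, SAD cost, `candidates` parameter, and all function names are illustrative assumptions, and the paper's specific candidate-reduction rule and boundary handling are not reproduced.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return float(np.abs(a.astype(np.float64) - b.astype(np.float64)).sum())

def half_pel_refine(cur_block, ref, y, x, mv_int, candidates=None):
    """Refine an integer-pel motion vector (dy, dx) to half-pel precision.

    The reference frame is upsampled by 2 with bilinear interpolation and only
    the half-pel offsets in `candidates` are tested (the full 3x3 neighbourhood
    by default). Boundary handling is omitted for brevity.
    """
    h, w = cur_block.shape
    ref2 = np.kron(ref.astype(np.float64), np.ones((2, 2)))   # duplicate pixels
    ref2[:, 1:-1:2] = 0.5 * (ref2[:, :-2:2] + ref2[:, 2::2])  # horizontal half-pels
    ref2[1:-1:2, :] = 0.5 * (ref2[:-2:2, :] + ref2[2::2, :])  # vertical half-pels

    if candidates is None:
        candidates = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]

    cy, cx = 2 * (y + mv_int[0]), 2 * (x + mv_int[1])          # integer-pel position
    best_cost, best_off = None, (0, 0)
    for dy, dx in candidates:
        cand = ref2[cy + dy:cy + dy + 2 * h:2, cx + dx:cx + dx + 2 * w:2]
        cost = sad(cur_block, cand)
        if best_cost is None or cost < best_cost:
            best_cost, best_off = cost, (dy, dx)
    return (mv_int[0] + best_off[0] / 2.0, mv_int[1] + best_off[1] / 2.0), best_cost
```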

Multivariate Shewhart control charts with variable sampling intervals (가변추출간격을 갖는 다변량 슈하르트 관리도)

  • Cho, Gyo-Young
    • Journal of the Korean Data and Information Science Society / v.21 no.6 / pp.999-1008 / 2010
  • The objective of this paper is to develop variable sampling interval multivariate control charts that can offer significant performance improvements compared to standard fixed sampling rate multivariate control charts. Most research on multivariate control charts has concentrated on the problem of monitoring the process mean, but here we consider the problem of simultaneously monitoring both the mean and variability of the process.
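
As a rough illustration of the variable-sampling-interval idea, the sketch below uses a Hotelling T² Shewhart statistic and switches between a short and a long sampling interval depending on whether the statistic falls in a warning region. The statistic, the chi-square-based limits, and the two interval lengths are assumptions for illustration, not the design developed in the paper.

```python
import numpy as np

def hotelling_t2(x, mu, sigma_inv):
    """Hotelling's T^2 statistic for one multivariate observation."""
    d = x - mu
    return float(d @ sigma_inv @ d)

def vsi_shewhart_step(x, mu, sigma_inv, ucl, warning, d_short=0.1, d_long=1.0):
    """One monitoring step of a variable-sampling-interval Shewhart chart.

    Returns (signal, next_interval): a signal if T^2 exceeds the control limit,
    otherwise the next sample is taken sooner when T^2 is in the warning region.
    """
    t2 = hotelling_t2(x, mu, sigma_inv)
    if t2 > ucl:
        return True, d_short                      # out-of-control signal
    return False, (d_short if t2 > warning else d_long)

# Example: 3-variable process; limits from chi-square quantiles (illustrative).
rng = np.random.default_rng(0)
mu, sigma = np.zeros(3), np.eye(3)
x = rng.multivariate_normal(mu, sigma)
signal, next_dt = vsi_shewhart_step(x, mu, np.linalg.inv(sigma),
                                    ucl=14.16, warning=6.25)
```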

Sentence Interaction-based Document Similarity Models for News Clustering (뉴스 클러스터링을 위한 문장 간 상호 작용 기반 문서 쌍 유사도 측정 모델들)

  • Choi, Seonghwan;Son, Donghyun;Lee, Hochang
    • Annual Conference on Human and Language Technology / 2020.10a / pp.401-407 / 2020
  • In news clustering, the similarity between two documents is one of the key factors that determines the characteristics of a cluster. The traditional word-based approach, TF-IDF vector similarity, fails to reflect the semantic similarity between documents, while existing deep-learning-based sequence similarity models fail to capture the long context that appears at the document level. In this paper, to build a document-pair similarity model suitable for news clustering, we propose four similarity models that measure the similarity of an entire document pair by aggregating the similarity information among the multiple sentence representations generated from the pair. Unlike approaches such as HAN (hierarchical attention network), which compress an entire document representation into a single vector, these approaches estimate the similarity of the whole document pair from the direct similarities between the sentences appearing in the two documents. We compared the four proposed models with the existing approaches, SVM and HAN, in document-pair similarity experiments, and confirmed that two of the proposed approaches outperform the existing ones, showing performance comparable to a graph-based approach while measuring document similarity more efficiently.
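
A minimal sketch of the sentence-interaction idea described above: sentence embeddings of the two documents are compared pairwise, and the interaction matrix is pooled into a single document-pair score. The max-then-mean pooling used here is only one simple aggregation and does not correspond to any of the four models proposed in the paper; all names are illustrative.

```python
import numpy as np

def sentence_interaction_similarity(doc_a, doc_b):
    """Pool sentence-to-sentence similarities into one document-pair score.

    doc_a, doc_b: arrays of shape (num_sentences, dim) holding sentence
    embeddings. Uses the mean of row-wise maxima in both directions, which is
    only one simple way to aggregate the interaction matrix.
    """
    a = doc_a / np.linalg.norm(doc_a, axis=1, keepdims=True)
    b = doc_b / np.linalg.norm(doc_b, axis=1, keepdims=True)
    sim = a @ b.T                          # sentence-pair cosine similarities
    return 0.5 * (sim.max(axis=1).mean() + sim.max(axis=0).mean())
```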

An Adaptive Time Delay Estimation Method Based on Canonical Correlation Analysis (정준형 상관 분석을 이용한 적응 시간 지연 추정에 관한 연구)

  • Lim, Jun-Seok;Hong, Wooyoung
    • The Journal of the Acoustical Society of Korea / v.32 no.6 / pp.548-555 / 2013
  • Source localization has numerous applications. To estimate the position of a source, the relative delay of the direct signal between two or more received signals must be determined. Although the generalized cross-correlation method is the most popular technique, an approach based on eigenvalue decomposition (EVD), which utilizes the eigenvector of the minimum eigenvalue, is also widely used. The performance of the EVD-based method degrades at low SNR and in correlated noise environments because it is difficult to select a single eigenvector for the minimum eigenvalue. In this paper, we propose a new adaptive algorithm based on Canonical Correlation Analysis (CCA) in order to extend the operating range to lower SNR and correlated noise environments. The proposed algorithm uses the eigenvector corresponding to the maximum eigenvalue in the generalized eigenvalue decomposition (GEVD); this eigenvector contains all the information needed for time delay estimation. We performed simulations with uncorrelated and correlated noise at several SNRs, showing that the CCA-based algorithm estimates time delays more accurately than the adaptive EVD algorithm.
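
The sketch below illustrates, in batch (non-adaptive) form, how a CCA-style generalized eigenvalue problem between tapped-delay-line blocks of the two sensor signals can yield a time delay estimate from the dominant generalized eigenvector. The block construction, diagonal loading, and peak-picking rule are assumptions for illustration; the paper's adaptive algorithm is not reproduced.

```python
import numpy as np
from scipy.linalg import eigh

def cca_time_delay(x1, x2, max_lag):
    """Batch (non-adaptive) CCA-style time delay estimate between two sensors.

    Tapped-delay-line blocks of both signals are formed, the generalized
    eigenvalue problem behind canonical correlation analysis is solved, and the
    delay is read from the peaks of the dominant generalized eigenvector.
    Returns a positive value when x2 is a delayed copy of x1.
    """
    L = max_lag + 1
    n = min(len(x1), len(x2)) - L + 1
    Z1 = np.stack([np.asarray(x1)[i:i + L][::-1] for i in range(n)])  # (n, L)
    Z2 = np.stack([np.asarray(x2)[i:i + L][::-1] for i in range(n)])  # (n, L)

    R11, R22 = Z1.T @ Z1 / n, Z2.T @ Z2 / n
    R12 = Z1.T @ Z2 / n
    eps = 1e-6 * np.eye(L)                     # diagonal loading for stability

    A = np.block([[np.zeros((L, L)), R12], [R12.T, np.zeros((L, L))]])
    B = np.block([[R11 + eps, np.zeros((L, L))],
                  [np.zeros((L, L)), R22 + eps]])
    vals, vecs = eigh(A, B)                    # generalized EVD (GEVD)
    w = vecs[:, np.argmax(vals)]               # eigenvector of the max eigenvalue
    w1, w2 = w[:L], w[L:]
    return int(np.argmax(np.abs(w1)) - np.argmax(np.abs(w2)))
```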

Automatic Drawing and Structural Editing of Road Lane Markings for High-Definition Road Maps (정밀도로지도 제작을 위한 도로 노면선 표시의 자동 도화 및 구조화)

  • Choi, In Ha;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.6 / pp.363-369 / 2021
  • High-definition road maps are used as the basic infrastructure for autonomous vehicles, so the latest road information must be reflected quickly. However, the drawing and structural editing of high-definition road maps are currently performed manually, and generating road lane markings, the main construction target, takes the longest time. In this study, a point cloud of road lane markings, whose color types (white, blue, and yellow) were predicted through the PointNet model pre-trained in previous studies, was used as input data. Based on this point cloud, the study proposes a methodology for automatically drawing and structurally editing the road lane marking layer. To verify the usability of the 3D vector data constructed through the proposed methodology, the accuracy was analyzed according to the quality inspection criteria for high-definition road maps. In the positional accuracy test, the RMSE (Root Mean Square Error) of the horizontal and vertical errors was within 0.1 m, confirming suitability. In the structural editing accuracy test, the structural editing accuracy for both the type and the kind of road lane markings was 88.235%, and usability was verified. Therefore, the methodology proposed in this study can efficiently construct vector data of road lanes for high-definition road maps.
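
The positional accuracy check mentioned in the abstract can be expressed as a simple RMSE computation over matched 3D lane vertices, as in the hedged sketch below; the point-matching step and the exact error definitions used in the quality inspection criteria are assumed.

```python
import numpy as np

def lane_vector_rmse(constructed, reference):
    """Horizontal and vertical RMSE between matched 3D lane vertices.

    constructed, reference: (n, 3) arrays of matched (x, y, z) points in metres.
    Returns (horizontal_rmse, vertical_rmse) for the 0.1 m quality check.
    """
    d = constructed - reference
    horizontal = np.sqrt(np.mean(d[:, 0] ** 2 + d[:, 1] ** 2))
    vertical = np.sqrt(np.mean(d[:, 2] ** 2))
    return horizontal, vertical

# Example quality check against the 0.1 m tolerance mentioned in the abstract.
h_rmse, v_rmse = lane_vector_rmse(np.zeros((10, 3)), np.zeros((10, 3)))
passes = (h_rmse <= 0.1) and (v_rmse <= 0.1)
```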

A Design of a Robust Vector Quantizer for Wavelet Transformed Images (웨이브렛변환 영상 부호화용 범용 벡터양자화기의 설계)

  • Do, Jae-Su;Cho, Young-Suk
    • Convergence Security Journal / v.6 no.4 / pp.83-90 / 2006
  • In this paper, we propose a new design method for a robust vector quantizer that is independent of the statistical characteristics of input images in wavelet transformed image coding. Conventional vector quantizers fail to produce good coding results because of the mismatch in statistical properties between the image to be quantized and the training sequence used to build the quantizer's codebook. To solve this problem, we use a pseudo image as the training sequence for codebook generation; the pseudo image is created by adding correlation and edge components to uniformly distributed random numbers. Through computer simulation, we clearly identify the problem of conventional vector quantizers, which use real images as the training sequence for codebook generation, by comparing them with the proposed method, and we show that the proposed vector quantizer yields better coding results.
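
The following sketch imitates the described approach under stated assumptions: a pseudo training image is built from uniform random numbers given spatial correlation (here a first-order autoregressive filter) and synthetic step edges, and a codebook is then trained on its blocks with plain Lloyd (k-means) iterations. The correlation model, edge injection, block size, and codebook size are illustrative choices, not the paper's design.

```python
import numpy as np

def ar1_filter(x, rho):
    """First-order autoregressive smoothing along the last axis."""
    y = np.empty_like(x)
    y[..., 0] = x[..., 0]
    for k in range(1, x.shape[-1]):
        y[..., k] = rho * y[..., k - 1] + (1.0 - rho) * x[..., k]
    return y

def pseudo_training_image(size=256, rho=0.95, n_edges=8, edge_step=64.0, seed=0):
    """Correlated uniform noise plus synthetic step edges (illustrative model)."""
    rng = np.random.default_rng(seed)
    img = rng.uniform(0.0, 255.0, (size, size))
    img = ar1_filter(ar1_filter(img, rho).T, rho).T        # impose correlation
    for col in rng.integers(1, size, n_edges):              # add edge components
        img[:, col:] += edge_step * rng.choice([-1.0, 1.0])
    return np.clip(img, 0.0, 255.0)

def train_codebook(img, block=4, codebook_size=256, iters=10, seed=0):
    """Plain Lloyd (k-means) codebook training on non-overlapping blocks."""
    h, w = img.shape
    vecs = (img[:h - h % block, :w - w % block]
            .reshape(h // block, block, w // block, block)
            .swapaxes(1, 2).reshape(-1, block * block))
    rng = np.random.default_rng(seed)
    code = vecs[rng.choice(len(vecs), codebook_size, replace=False)]
    for _ in range(iters):
        idx = np.argmin(((vecs[:, None, :] - code[None]) ** 2).sum(-1), axis=1)
        for j in range(codebook_size):
            if np.any(idx == j):
                code[j] = vecs[idx == j].mean(axis=0)
    return code

codebook = train_codebook(pseudo_training_image())
```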

Design of High Performance Robust Vector Quantizer for Wavelet Transformed Image Coding (웨이브렛 변환 영상 부호화용 고성능 범용 벡터양자화기의 설계)

  • Jung, Tae-Yeon;Do, Je-Su
    • The Transactions of the Korea Information Processing Society / v.7 no.2 / pp.529-535 / 2000
  • In this paper, we propose a new method of designing a vector quantizer whose coding results are robust and independent of the statistical characteristics of the input image in wavelet transformed image coding. The most critical drawback of a conventional vector quantizer is the degradation of coding performance caused by the mismatch between the statistical characteristics of the image to be quantized and those of the training sequence used to generate the representative vectors. To resolve this problem, we use as the training sequence a pseudo image built from independent random variables to which image correlation and edge components are added. We carried out a computer simulation comparing the coding performance of a vector quantizer designed by the proposed method with that of a conventional quantizer whose codebook is generated from a training sequence consisting of the same real image that is the coding target. The results show that the proposed vector quantizer is superior in coding performance and clarify the problems of the conventional methods.
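
To compare such codebooks, one can encode a target image and report the reconstruction PSNR, as in the short sketch below; the block size and the evaluation protocol are assumptions rather than the paper's simulation setup.

```python
import numpy as np

def vq_psnr(img, codebook, block=4):
    """Encode an 8-bit image with a given VQ codebook and return the PSNR.

    A simple way to compare a pseudo-image codebook with one trained on a
    specific real image; the 255 peak value assumes 8-bit data.
    """
    h, w = (s - s % block for s in img.shape)
    vecs = (img[:h, :w].astype(np.float64)
            .reshape(h // block, block, w // block, block)
            .swapaxes(1, 2).reshape(-1, block * block))
    idx = np.argmin(((vecs[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
    mse = np.mean((vecs - codebook[idx]) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```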

LLL Algorithm Aided Double Sphere MIMO Detection (LLL 알고리즘 기반 이중 스피어 MIMO 수신기)

  • Jeon, Myeongwoon;Lee, Jungwoo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2012.07a / pp.377-380 / 2012
  • A lattice reduction algorithm transforms a given set of basis vectors into a nearly orthogonal basis; the LLL (Lenstra, Lenstra & Lovász) algorithm is the representative example. Lattice reduction can be used to improve the performance of linear detectors in multiple-input multiple-output (MIMO) communication systems. The sphere decoding algorithm has been widely studied because it achieves a BER (bit error rate) performance close to that of the maximum likelihood detector at reduced complexity, and the choice of the sphere radius and the tree search strategy strongly affect this complexity. In this paper, we propose an algorithm that sets the sphere radius and limits the number of visited tree search nodes based on the LLL algorithm, greatly reducing the complexity of sphere decoding compared to existing algorithms while minimizing BER degradation, and we verify it through computer simulations.
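
For reference, a textbook real-valued LLL reduction is sketched below; it is not the proposed detector, only the lattice reduction step the paper builds on. MIMO detection with a complex channel would additionally require a real-valued decomposition or a complex LLL variant, and the radius and node-limit rules proposed in the paper are not shown.

```python
import numpy as np

def lll_reduce(B, delta=0.75):
    """Textbook LLL reduction of a real-valued lattice basis.

    B: (n, m) array whose rows are the basis vectors. Returns a basis whose
    vectors are shorter and closer to orthogonal.
    """
    B = np.array(B, dtype=float)
    n = B.shape[0]

    def gram_schmidt(B):
        Q = np.zeros_like(B)
        mu = np.zeros((n, n))
        for i in range(n):
            Q[i] = B[i]
            for j in range(i):
                mu[i, j] = B[i] @ Q[j] / (Q[j] @ Q[j])
                Q[i] -= mu[i, j] * Q[j]
        return Q, mu

    Q, mu = gram_schmidt(B)
    k = 1
    while k < n:
        # Size-reduce b_k against b_{k-1}, ..., b_0 (updating mu as we go).
        for j in range(k - 1, -1, -1):
            q = int(np.rint(mu[k, j]))
            if q != 0:
                B[k] -= q * B[j]
                mu[k, :j] -= q * mu[j, :j]
                mu[k, j] -= q
        # Lovasz condition decides whether to advance or swap.
        if Q[k] @ Q[k] >= (delta - mu[k, k - 1] ** 2) * (Q[k - 1] @ Q[k - 1]):
            k += 1
        else:
            B[[k, k - 1]] = B[[k - 1, k]]
            Q, mu = gram_schmidt(B)
            k = max(k - 1, 1)
    return B
```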

An effective method for comparing similarity of document with Multi-Level alignment (다단계정렬을 활용한 효율적인 문서 유사도 비교법)

  • Seo, Jong-Kyu;Hwang, Hae-Lyen;Cho, Hwan-Gue
    • Proceedings of the Korea Information Processing Society Conference / 2012.04a / pp.402-405 / 2012
  • Methods for measuring the similarity between documents fall broadly into fingerprint-based methods and sequence-alignment-based methods, which offer speed and accuracy respectively. Multi-Level Alignment (MLA) combines the two so that the user can choose the trade-off between search speed and accuracy [1]. MLA divides two documents into basis blocks and measures similarity by comparing vectors between the blocks. This study describes a method for quickly generating and comparing the basis-block vectors through initial-consonant extraction and stemming, and a multi-level search method that measures similarity quickly while maintaining accuracy. Experimental results show that the proposed method makes large-scale document comparison with multi-level alignment more than twice as fast.
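
A minimal sketch of the basis-block comparison described above: each document is split into fixed-size blocks, a count vector is built per block, and block vectors are compared with cosine similarity before pooling into a document score. Character n-grams stand in for the initial-consonant and stem features of the paper, and the block size, pooling rule, and all names are illustrative.

```python
import numpy as np
from collections import Counter

def block_vectors(text, block_size=200, ngram=2):
    """Split a document into fixed-size basis blocks and build count vectors.

    Character n-gram counts stand in for the initial-consonant / stem features
    used in the paper; block size and features are illustrative choices.
    """
    blocks = [text[i:i + block_size] for i in range(0, len(text), block_size)]
    return [Counter(b[j:j + ngram] for j in range(len(b) - ngram + 1))
            for b in blocks]

def cosine(c1, c2):
    """Cosine similarity between two sparse count vectors (Counters)."""
    dot = sum(c1[t] * c2[t] for t in set(c1) & set(c2))
    n1 = np.sqrt(sum(v * v for v in c1.values()))
    n2 = np.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def document_similarity(text_a, text_b):
    """Score = mean of each block's best cosine match in the other document."""
    va, vb = block_vectors(text_a), block_vectors(text_b)
    if not va or not vb:
        return 0.0
    best_a = [max(cosine(a, b) for b in vb) for a in va]
    best_b = [max(cosine(b, a) for a in va) for b in vb]
    return 0.5 * (float(np.mean(best_a)) + float(np.mean(best_b)))
```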

A Classified Space VQ Design for Text-Independent Speaker Recognition (문맥 독립 화자인식을 위한 공간 분할 벡터 양자기 설계)

  • Lim, Dong-Chul;Lee, Hanig-Sei
    • The KIPS Transactions:PartB / v.10B no.6 / pp.673-680 / 2003
  • In this paper, we study the enhancement of VQ (Vector Quantization) design for text-independent speaker recognition. Specifically, we present a non-iterative method for building a vector quantization codebook; because it requires no iterative learning, the computational complexity is drastically reduced. The proposed Classified Space VQ (CSVQ) design method for text-independent speaker recognition generalizes the semi-noniterative VQ design method for text-dependent speaker recognition, and contrasts with the existing design method, which runs an iterative learning algorithm for every training speaker. The characteristics of the CSVQ design are as follows. First, the proposed method performs non-iterative learning by using a Classified Space Codebook. Second, the quantization region of each speaker is identical to the quantization region of the Classified Space Codebook, and the quantization point of each speaker is the optimal point for that speaker's statistical distribution within a quantization region of the Classified Space Codebook. Third, the Classified Space Codebook (CSC) is constructed through the Sample Vector Formation Method (CSVQ 1, 2) and the Hyper-Lattice Formation Method (CSVQ 3). In the numerical experiments, we use 12th-order mel-cepstrum feature vectors of 10 speakers and compare the proposed method with the existing one, changing the codebook size from 16 to 128 for each Classified Space Codebook. The recognition rate of the proposed method is 100% for CSVQ 1 and 2, equal to that of the existing method. Therefore, the proposed CSVQ design method is a new alternative that reduces computational complexity while maintaining the recognition rate, and CSVQ with a CSC can be applied to general-purpose recognition.
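
As context for the VQ-based recognition rule, the sketch below shows the generic decision step: a test utterance's feature vectors are quantized against each speaker's codebook, and the speaker with the lowest average distortion is chosen. This is the standard VQ speaker identification rule, not the CSVQ/CSC construction itself; all names are illustrative.

```python
import numpy as np

def avg_quantization_distortion(features, codebook):
    """Mean squared distance from each feature vector to its nearest codeword."""
    d = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.min(axis=1).mean()

def identify_speaker(features, codebooks):
    """Pick the speaker whose codebook quantizes the test features best.

    features: (n_frames, dim) cepstral vectors; codebooks: dict mapping a
    speaker id to a (codebook_size, dim) array.
    """
    scores = {spk: avg_quantization_distortion(features, cb)
              for spk, cb in codebooks.items()}
    return min(scores, key=scores.get)
```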