• Title/Summary/Keyword: vector computer


Orthogonal Reference Vectors Selection Method of Subspace Interference Alignment (부분공간 간섭 정렬에서 합용량 향상을 위한 직교 레퍼런스 벡터 선정 방법)

  • Seo, Jong-Pil;Kim, Hyun-Soo;Ahn, Jae-Jin;Chung, Jae-Hak
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.5A
    • /
    • pp.457-463
    • /
    • 2011
  • This paper proposes the orthogonal reference vector selection method for subspace interference alignment. Instead of using a single reference vector for all users at the same time, the proposed method selects multiple orthogonal reference vectors: it first selects a reference vector that maximizes the sum-rate for a certain cell, generates vectors orthogonal to the previously selected vector, and then selects, for each cell, the orthogonal vector whose sum-rate is maximized. This selection degree of freedom yields larger channel gain and sum-rate than the previous method. Computer simulations demonstrate that the proposed method gives a higher sum-rate than the previous reference vector selection method.
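The construction described above can be sketched in a few lines: given a chosen reference vector, the remaining orthogonal candidates can be generated by completing it to an orthonormal basis, here via QR decomposition. The function name and the random completion vectors are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def orthogonal_reference_vectors(v_ref, dim):
    """Build an orthonormal set whose first vector is v_ref / ||v_ref||,
    via QR decomposition (a stand-in for the paper's construction)."""
    v = v_ref / np.linalg.norm(v_ref)
    # Complete v to a full basis with random columns, then orthogonalize.
    rand = np.random.default_rng(0).normal(size=(dim, dim - 1))
    q, _ = np.linalg.qr(np.column_stack([v, rand]))
    # QR may flip signs; align the first column with v.
    if np.dot(q[:, 0], v) < 0:
        q = -q
    return q.T  # rows are orthonormal vectors, row 0 parallel to v_ref
```

A per-cell selector would then evaluate a sum-rate estimate for each row of the returned matrix and keep the maximizer.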

A Study on Optimum Subband Filter Bank Design Using Vector Quantizer (벡터 양자화기를 사용한 최적의 부대역 필터 뱅크 구현에 관한 연구)

  • Jee, Innho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.17 no.1
    • /
    • pp.107-113
    • /
    • 2017
  • This paper provides a new approach to modeling a vector quantizer (VQ), followed by analysis and design of subband codecs with embedded VQs. We compute the mean squared reconstruction error (MSE), which depends on N, the number of entries in each codebook; on k, the length of each codeword; and on the filter bank (FB) coefficients of the subband codecs. We show that the optimum M-band filter bank structure in the presence of a pdf-optimized vector quantizer can be designed by a suitable choice of equivalent scalar quantizer parameters. Specific design examples have been developed for two classes of two-channel filter banks, paraunitary and biorthogonal. These theoretical results are confirmed by Monte Carlo simulation.
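As a rough illustration of how the reconstruction MSE depends on N (codebook entries) and k (codeword length), here is a toy generalized-Lloyd VQ trainer. It is a generic stand-in, not the paper's pdf-optimized design; all names are hypothetical.

```python
import numpy as np

def train_vq(samples, n_entries, iters=20, seed=0):
    """Toy generalized-Lloyd VQ trainer. samples: (num, k) array of
    k-dimensional vectors; returns a codebook of shape (N, k)."""
    rng = np.random.default_rng(seed)
    codebook = samples[rng.choice(len(samples), n_entries, replace=False)]
    for _ in range(iters):
        # Nearest-codeword assignment.
        d = ((samples[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        idx = d.argmin(1)
        # Centroid update per cell.
        for j in range(n_entries):
            sel = samples[idx == j]
            if len(sel):
                codebook[j] = sel.mean(0)
    return codebook

def vq_mse(samples, codebook):
    """Mean squared reconstruction error of samples under the codebook."""
    d = ((samples[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d.min(1).mean()
```

Increasing N shrinks the MSE, which is the trade-off the paper's analysis quantifies jointly with the filter bank coefficients.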

Composite Differential Evolution Aided Channel Allocation in OFDMA Systems with Proportional Rate Constraints

  • Sharma, Nitin;Anpalagan, Alagan
    • Journal of Communications and Networks
    • /
    • v.16 no.5
    • /
    • pp.523-533
    • /
    • 2014
  • Orthogonal frequency division multiple access (OFDMA) is a promising technique that can provide high downlink capacity for future wireless systems. The total capacity of OFDMA can be maximized by adaptively assigning each subchannel to the user with the best gain for that subchannel, with power subsequently distributed by water-filling. In this paper, we propose the use of the composite differential evolution (CoDE) algorithm to allocate the subchannels. The CoDE algorithm is population-based: a set of potential solutions evolves to approach a near-optimal solution for the problem under study. CoDE uses three trial vector generation strategies and three control parameter settings, randomly combining them to generate trial vectors. Unlike other differential evolution (DE) techniques, where only a single trial vector is generated, CoDE generates three trial vectors for each target vector; the best one enters the next generation if it is better than its target vector. It is shown that the proposed method obtains higher sum capacities than those obtained in previous works, with comparable computational complexity.
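A minimal single-generation sketch of the CoDE idea described above, assuming a toy objective in place of the OFDMA sum-rate. The three mutation strategies and the (F, CR) parameter pool follow the standard CoDE description, but details (e.g. applying binomial crossover to all three strategies) are simplifications.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: np.sum(x ** 2)          # toy objective (not the OFDMA sum-rate)

# Pool of (F, CR) control parameter settings, as in CoDE.
STRATEGY_PARAMS = [(1.0, 0.1), (1.0, 0.9), (0.8, 0.2)]

def trial_vectors(pop, i):
    """Generate three trial vectors for target pop[i], one per strategy."""
    n, d = pop.shape
    trials = []
    for F, CR in STRATEGY_PARAMS:
        r = rng.choice([j for j in range(n) if j != i], 5, replace=False)
        a, b, c, dd, e = pop[r]
        mutants = [a + F * (b - c),                          # rand/1
                   a + F * (b - c) + F * (dd - e),           # rand/2
                   pop[i] + F * (a - pop[i]) + F * (b - c)]  # current-to-rand/1
        m = mutants[len(trials)]
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True                        # binomial crossover
        trials.append(np.where(cross, m, pop[i]))
    return trials

def code_step(pop):
    """One CoDE generation: best of three trials replaces the target if better."""
    new = pop.copy()
    for i in range(len(pop)):
        best = min(trial_vectors(pop, i), key=f)
        if f(best) <= f(pop[i]):
            new[i] = best
    return new
```

Because selection is elitist per individual, the best fitness in the population never degrades from one generation to the next.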

Fluency Scoring of English Speaking Tests for Nonnative Speakers Using a Native English Phone Recognizer

  • Jang, Byeong-Yong;Kwon, Oh-Wook
    • Phonetics and Speech Sciences
    • /
    • v.7 no.2
    • /
    • pp.149-156
    • /
    • 2015
  • We propose a new method for automatic fluency scoring of English speaking tests spoken by nonnative speakers in a free-talking style. The proposed method differs from previous methods in that it does not require transcribed texts for the spoken utterances. First, an input utterance is segmented into a phone sequence by a phone recognizer trained on native speech databases. For each utterance, a feature vector with 6 features is extracted by processing the segmentation results of the phone recognizer. Then, the fluency score is computed by applying support vector regression (SVR) to the feature vector. The parameters of the SVR are learned from the rater scores for the utterances. In computer experiments with 3 tests taken by 48 Korean adults, we show that speech rate, phonation time ratio, and smoothed unfilled pause rate are best for fluency scoring. The correlation between the rater score and the SVR score is shown to be 0.84, which is higher than the correlation of 0.78 among raters. Although this is slightly lower than the correlation of 0.90 obtained when transcribed texts are given, it implies that the proposed method can be used as a preprocessing tool for fluency evaluation of speaking tests.
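The scoring pipeline (6-dimensional feature vector → SVR → fluency score) can be sketched as follows. The feature values and rater scores are fabricated for illustration, and `sklearn.svm.SVR` is used as a generic SVR implementation, not the authors' tool.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical 6-dim feature vectors per utterance (speech rate, phonation
# time ratio, smoothed unfilled pause rate, ... per the abstract); the
# values and the synthetic rater scores below are fabricated.
rng = np.random.default_rng(0)
X = rng.random((48, 6))
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(0.0, 0.05, 48)

# Fit SVR on 40 utterances, score the remaining 8.
model = SVR(kernel="rbf", C=10.0).fit(X[:40], y[:40])
pred = model.predict(X[40:])
corr = np.corrcoef(pred, y[40:])[0, 1]
```

In the paper, `corr` is the machine-rater correlation (0.84) evaluated against human rater scores rather than synthetic targets.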

Variable Block Size Motion Estimation Techniques for The Motion Sequence Coding (움직임 영상 부호화를 위한 가변 블록 크기 움직임 추정 기법)

  • Kim, Jong-Won;Lee, Sang-Uk
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.30B no.4
    • /
    • pp.104-115
    • /
    • 1993
  • The motion compensated coding (MCC) technique, which exploits the temporal redundancies in moving images with a motion estimation technique, is one of the most popular techniques currently used. Recently, a variable block size (VBS) motion estimation scheme has been utilized to improve the performance of motion compensated coding. This scheme allows large blocks to be used when smaller blocks provide little gain, saving rate for areas containing more complex motion. Hence, a new VBS motion estimation scheme with a hierarchical structure is proposed in this paper, in order to combine it efficiently with a motion vector coding technique. The topmost-level motion vector, obtained by a gain/cost motion estimation technique with a selective motion prediction method, is always transmitted. Thus, the hierarchical VBS motion estimation scheme can efficiently exploit the redundancies among neighboring motion vectors, providing an efficient motion vector encoding scheme. Also, a restricted search with respect to the topmost-level motion vector enables more flexible and efficient motion estimation for the remaining lower-level blocks. Computer simulations on a high-resolution image sequence show that the VBS motion estimation scheme provides a performance improvement of 0.6~0.7 dB, in terms of PSNR, compared to the fixed block size motion estimation scheme.
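The gain/cost split decision at the heart of VBS motion estimation can be sketched as follows. The full-search matcher, the SAD criterion, and the fixed rate penalty are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two blocks."""
    return np.abs(a.astype(int) - b.astype(int)).sum()

def best_motion(cur, ref, y, x, size, rad=2):
    """Full search within +/-rad pixels; returns (best SAD, motion vector)."""
    h, w = ref.shape
    best = (np.inf, (0, 0))
    for dy in range(-rad, rad + 1):
        for dx in range(-rad, rad + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy and yy + size <= h and 0 <= xx and xx + size <= w:
                cost = sad(cur[y:y+size, x:x+size], ref[yy:yy+size, xx:xx+size])
                if cost < best[0]:
                    best = (cost, (dy, dx))
    return best

def split_decision(cur, ref, y, x, size, rate_cost):
    """Gain/cost rule: split a block into four children only when the SAD
    gain outweighs the extra motion-vector rate (rate_cost is a tunable
    penalty per extra vector; 3 extra vectors must be sent on a split)."""
    parent_sad, _ = best_motion(cur, ref, y, x, size)
    half = size // 2
    child_sad = sum(best_motion(cur, ref, y + oy, x + ox, half)[0]
                    for oy in (0, half) for ox in (0, half))
    return (parent_sad - child_sad) > 3 * rate_cost
```

Applied recursively from the topmost level, this yields the hierarchical quadtree of blocks and the restricted lower-level search described in the abstract.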

XML Documents Clustering Technique Based on Bit Vector (비트벡터에 기반한 XML 문서 군집화 기법)

  • Kim, Woo-Saeng
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.47 no.5
    • /
    • pp.10-16
    • /
    • 2010
  • XML is increasingly important in data exchange and information management. A large amount of effort has been spent on developing efficient techniques for accessing, querying, and storing XML documents. In this paper, we propose a new method to cluster XML documents efficiently. A bit vector representing each XML document is proposed for clustering the XML documents. The similarity between two XML documents is measured by a bit-wise AND operation between the two corresponding bit vectors. Experiments show that clusters are formed well and efficiently when a bit vector is used as the feature of an XML document.
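A minimal sketch of the bit-vector similarity idea: each document is encoded as an integer bit vector, and similarity is computed from the bitwise AND of the two vectors (here normalized by the OR, a Jaccard-style choice that is an assumption; the abstract only specifies the AND operation). The path-based encoding is likewise hypothetical.

```python
def make_bitvector(paths, vocab):
    """Hypothetical encoding: bit i is set iff vocab element/path i
    occurs in the document."""
    bv = 0
    for i, p in enumerate(vocab):
        if p in paths:
            bv |= 1 << i
    return bv

def similarity(bv1, bv2):
    """Similarity of two documents as |AND| / |OR| of their feature bits."""
    inter = bin(bv1 & bv2).count("1")
    union = bin(bv1 | bv2).count("1")
    return inter / union if union else 1.0
```

Because the AND is a single machine operation per word, pairwise similarity over a large collection stays cheap, which is the efficiency claim of the paper.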

Design of Spatial Data Compression Methods for Mobile Vector Map Services (모바일 벡터 지도 서비스를 위한 공간 데이터 압축 기법의 설계)

  • Choi, Jin-Oh
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2004.05b
    • /
    • pp.358-362
    • /
    • 2004
  • With the rapid advance of computer and communication techniques, demand for mobile internet services is increasing rapidly. However, the main obstacles in mobile vector map service environments are large data volumes and narrow wireless bandwidth. Among the possible solutions, spatial data compression techniques may reduce both the bandwidth load and the client response time. This paper proposes two methods for spatial data compression: a relative coordinates transformation method and a client coordinates transformation method. It also proposes a system architecture for experiments, in which the two compression methods can be evaluated in terms of compression effect and response time.
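The relative coordinates transformation can be sketched as simple delta encoding of polyline vertices: the first point is kept absolute, and each following point is stored as an offset from its predecessor, so small offsets fit in fewer bytes. This is a generic illustration under that assumption, not the paper's exact encoding.

```python
def to_relative(coords):
    """Delta-encode a polyline: first absolute point, then offsets."""
    out = [coords[0]]
    for (x, y), (px, py) in zip(coords[1:], coords[:-1]):
        out.append((x - px, y - py))
    return out

def from_relative(rel):
    """Invert the delta encoding back to absolute coordinates."""
    x, y = rel[0]
    out = [(x, y)]
    for dx, dy in rel[1:]:
        x, y = x + dx, y + dy
        out.append((x, y))
    return out
```

A client coordinates transformation would additionally rescale the offsets into the client's screen coordinate system before transmission.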

Post-processing of vector quantized images using the projection onto quantization constraint set (양자화 제약 집합에 투영을 이용한 벡터 양자화된 영상의 후처리)

  • Kim, Dong-Sik;Park, Seop-Hyeong;Lee, Jong-Seok
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.22 no.4
    • /
    • pp.662-674
    • /
    • 1997
  • In order to post-process vector-quantized images employing the theory of projections onto convex sets or the constrained minimization technique, the projector onto the QCS (quantization constraint set), as well as the filter that smoothes the block boundaries, should be investigated theoretically. The basic idea behind the projection onto the QCS is to prevent the processed data from diverging from the original quantization region, in order to reduce the blurring artifacts caused by a filtering operation. However, since the Voronoi regions in vector quantization are arbitrarily shaped unless the vector quantizer has a structured codebook, the implementation of the projection onto the QCS is very complicated. This paper mathematically analyzes the projection onto the QCS from the viewpoint of minimizing the mean square error. Through the analysis, it has been revealed that the projection onto a subset of the QCS yields lower distortion than the projection onto the QCS does. Searching for an optimal constraint set is not easy, and the operation of the projector is complicated, since the shape of the optimal constraint set depends on the statistical characteristics of the filtered and original images. Therefore, we propose a hyper-cube as a constraint set that enables a simple projection. It will also be shown, by theory and experiment, that a proper filtering technique followed by the projection onto the hyper-cube can reduce the quantization distortion.
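The hyper-cube projection reduces to per-component clipping, which is what makes it simple compared to projecting onto an arbitrarily shaped Voronoi cell. The sketch below assumes a cube centred at the codeword with a given half-width; both the centring and the half-width parameter are illustrative assumptions.

```python
import numpy as np

def project_hypercube(x, codeword, half_width):
    """Project a filtered vector back onto a hyper-cube around its codeword:
    clip each component to [codeword - half_width, codeword + half_width].
    The cube is a simple inner approximation of the Voronoi cell."""
    return np.clip(x, codeword - half_width, codeword + half_width)
```

A post-processing loop would alternate a smoothing filter with this projection, so the filtered image never drifts outside the (approximated) quantization region.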

A Tetrahedral Decomposition Method for Computing Tangent Curves of 3D Vector Fields (3차원 벡터필드 탄젠트 곡선 계산을 위한 사면체 분해 방법)

  • Jung, Il-Hong
    • Journal of Digital Contents Society
    • /
    • v.16 no.4
    • /
    • pp.575-581
    • /
    • 2015
  • This paper presents the development of a highly efficient and accurate method for computing tangent curves of three-dimensional vector fields. Unlike conventional methods for computing tangent curves, such as the Runge-Kutta method, which produce only approximations, the method developed herein produces exact values on the tangent curves, based upon piecewise linear variation over a tetrahedral domain in 3D. The new method assumes that the vector field is piecewise linearly defined over each tetrahedron in the 3D domain; each hexahedral cell is therefore decomposed into five or six tetrahedral cells. The critical points can be easily found by solving a simple linear system for each tetrahedron. The method finds exit points by producing a sequence of points on the curve, with each subsequent point computed from the previous one. Because points on the tangent curves are calculated by an explicit solution for each tetrahedron, the new method provides correct topology in visualizing 3D vector fields.
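Under the piecewise-linear assumption, the field inside one tetrahedron is v(x) = Ax + b, so both the exact fit from the four vertex vectors and the critical point reduce to small linear solves. The function names below are illustrative; the exit-point computation is omitted.

```python
import numpy as np

def linear_field_from_tet(verts, vecs):
    """Recover v(x) = A x + b exactly from the four vertex vectors of a
    tetrahedron. verts: (4, 3) vertex positions; vecs: (4, 3) field values.
    Solves [x | 1] [A^T; b^T] = vecs for the 12 unknowns."""
    M = np.hstack([verts, np.ones((4, 1))])   # 4x4 system matrix
    sol = np.linalg.solve(M, vecs)            # 4x3: rows 0..2 = A^T, row 3 = b
    return sol[:3].T, sol[3]

def critical_point(A, b):
    """Zero of the linear field: solve A x = -b (assumes A is nonsingular)."""
    return np.linalg.solve(A, -b)
```

With A and b in hand, the tangent curve through any seed point inside the tetrahedron has a closed-form solution of the linear ODE x'(t) = Ax(t) + b, which is what makes the traced points exact rather than approximate.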

An ICA-Based Subspace Scanning Algorithm to Enhance Spatial Resolution of EEG/MEG Source Localization (뇌파/뇌자도 전류원 국지화의 공간분해능 향상을 위한 독립성분분석 기반의 부분공간 탐색 알고리즘)

  • Jung, Young-Jin;Kwon, Ki-Woon;Im, Chang-Hwan
    • Journal of Biomedical Engineering Research
    • /
    • v.31 no.6
    • /
    • pp.456-463
    • /
    • 2010
  • In the present study, we proposed a new subspace scanning algorithm to enhance the spatial resolution of electroencephalography (EEG) and magnetoencephalography (MEG) source localization. Subspace scanning algorithms, represented by the multiple signal classification (MUSIC) algorithm and the first principal vector (FINE) algorithm, have been widely used to localize asynchronous multiple dipolar sources in the human cerebral cortex. The conventional MUSIC algorithm uses principal component analysis (PCA) to extract the noise vector subspace, and thereby has difficulty discriminating two or more closely spaced cortical sources. The FINE algorithm addressed this problem by using only a part of the noise vector subspace, but there was no golden rule to determine the number of noise vectors. In the present work, we estimated a non-orthogonal signal vector set using independent component analysis (ICA) instead of PCA, and performed the source scanning process in the signal vector subspace rather than in the noise vector subspace. Realistic 2D and 3D computer simulations, which compared the spatial resolutions of various algorithms under different noise levels, showed that the proposed ICA-MUSIC algorithm has the highest spatial resolution, suggesting that it can be a useful tool for practical EEG/MEG source localization.
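For reference, the conventional (PCA-based) MUSIC scan that the proposed ICA variant improves upon projects each candidate lead-field/steering vector onto the noise subspace of the data covariance and peaks where the projection vanishes. The sketch below uses generic synthetic vectors and is not the authors' implementation.

```python
import numpy as np

def music_spectrum(R, steering, n_sources):
    """Classical MUSIC scan: for each candidate steering vector a, compute
    ||a||^2 / ||En^H a||^2, where En spans the noise subspace of R (the
    eigenvectors with the smallest eigenvalues). Peaks mark source locations.
    R: (m, m) covariance; steering: (m, num_candidates)."""
    w, V = np.linalg.eigh(R)                    # eigenvalues ascending
    En = V[:, :R.shape[0] - n_sources]          # noise subspace
    num = np.einsum('ij,ij->j', steering.conj(), steering).real
    den = np.linalg.norm(En.conj().T @ steering, axis=0) ** 2
    return num / den
```

The paper's ICA-MUSIC instead scans within an ICA-estimated signal subspace, avoiding the noise-vector-count choice that limits PCA-based MUSIC and FINE.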