• Title/Summary/Keyword: Feature Vector Selection (특징벡터선택)

Search results: 169 (processing time 0.027 seconds)

On the Microcomputerized Biomedical Signal Processing System (생분신호 처리용 마이크로컴퓨터에 관한 연구)

  • Kim, Deok-Jin;Kim, Nak-Bin;Kim, Yeong-Cheon
    • Journal of the Korean Institute of Telematics and Electronics / v.19 no.4 / pp.31-37 / 1982
  • A microcomputerized biomedical signal processing system has been designed and fabricated. Software for this system has also been developed to record and analyze ECG and EEG waveforms. In this system, the vectorcardiogram of the ECG waveform is formed automatically and displayed on a CRT together with other useful cardiac information. The frequency components of the EEG waveform can also be analyzed, and the resulting spectrum is displayed on the CRT.


Bit-level Simulator for CORDIC Arithmetic based on carry-save adder (CORDIC 연산기 구현을 위한 Bit-level 하드웨어 시뮬레이션)

  • 이성수;이정아
    • Proceedings of the Korea Database Society Conference / 1995.12a / pp.173-176 / 1995
  • This paper studies the CORDIC (COordinate Rotation DIgital Computer) arithmetic unit [1][2], which is useful for the vector transformations and orthogonal transformations required by the signal-processing hardware used in multimedia processing, and which allows many different operations (elementary functions, including trigonometric functions) to be realized with a single unified algorithm. In implementing the CORDIC unit, a CSA (Carry Save Adder) is chosen as the fast adder for high-speed operation. The focus of this paper is to use a bit-level simulator, before realizing the CORDIC unit in hardware, to explain the problems that can arise from the characteristics of the CSA, to confirm that the desired values are approached when the known remedy [3] is applied, and thereby to help realize a CORDIC unit that meets the required error tolerance under various bit widths.

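The unified shift-and-add iteration the abstract refers to can be sketched in floating point. This is a simplification: the paper simulates fixed-point, bit-level arithmetic built from carry-save adders, which this sketch does not model.

```python
import math

def cordic(theta, n_iters=32):
    """Rotation-mode CORDIC: returns (cos(theta), sin(theta)) using only
    shift-and-add style updates; convergent for |theta| <~ 1.743 rad."""
    # Precompute the arctan table and the scaling factor K.
    atans = [math.atan(2.0 ** -i) for i in range(n_iters)]
    K = 1.0
    for i in range(n_iters):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    # Start from (K, 0) so the final vector is already de-scaled.
    x, y, z = K, 0.0, theta
    for i in range(n_iters):
        d = 1.0 if z >= 0.0 else -1.0      # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return x, y
```

In a hardware realization each `* 2.0 ** -i` becomes an arithmetic shift, which is where the choice of adder (ripple-carry vs. carry-save) matters.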

Speech Recognition System for Home Automation Using DSP (DSP를 이용한 홈 오토메이션용 음성인식 시스템의 실시간 구현)

  • Kim I-Jae;Kim Jun-sung;Yang Sung-il;Kwon Y.
    • Proceedings of the Acoustical Society of Korea Conference / autumn / pp.171-174 / 2000
  • In this paper, a home automation system incorporating speech recognition is designed. The study was carried out to implement speech recognition, which demands heavy computation and processing of large amounts of data, on a DSP (Digital Signal Processor). A real-time endpoint detector is used so that no additional input device is required. As feature vectors, a 10th-order cepstrum derived from LPC and log-scale energy are used, and a DHMM (Discrete Hidden Markov Model) whose number of states depends on the number of phonemes is used as the recognizer. Ten words frequently used in home automation were selected, and speaker-independent recognition was performed. When a word is recognized, the system announces the current state associated with the recognized word by voice and executes the corresponding action automatically.

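The LPC-derived cepstrum features mentioned above follow a standard recursion; a minimal sketch, assuming the all-pole model H(z) = 1/(1 − Σ aₖ z⁻ᵏ) (sign conventions vary between texts, so treat the signs as an assumption):

```python
def lpc_to_cepstrum(a, n_ceps):
    # Recursively convert LPC coefficients a[1..p] (a[0] is unused) into
    # cepstral coefficients c[1..n_ceps]:
    #   c_n = a_n + sum_{k=1}^{n-1} (k/n) * c_k * a_{n-k}
    c = [0.0] * (n_ceps + 1)
    for n in range(1, n_ceps + 1):
        acc = a[n] if n <= len(a) - 1 else 0.0
        for k in range(1, n):
            acc += (k / n) * c[k] * (a[n - k] if n - k <= len(a) - 1 else 0.0)
        c[n] = acc
    return c[1:]
```

For a single-pole model with a₁ = r this reproduces the known closed form cₙ = rⁿ/n, which is a convenient sanity check.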

A Vector Perturbation Based User Selection for Multi-antenna Downlink Channels (다중안테나 하향채널에서의 Vector Perturbation 기반 사용자 선택 기법)

  • Lee, Byung-Ju;Lim, Chae-Hee;Shim, Byong-Hyo
    • Journal of Broadcast Engineering / v.16 no.6 / pp.977-985 / 2011
  • Recent works on multiuser transmission techniques have shown that the linear capacity growth of the single-user MIMO system carries over to the multiuser MIMO scenario as well. In this paper, we propose a method for pursuing the performance gain of vector perturbation in multiuser downlink systems. Instead of employing the maximum number of mobile users for communication, we use a small portion of them as virtual users to improve the reliability of the users participating in communication. By controlling the parameters of the virtual users, including their information and perturbation vectors, we obtain a considerable improvement in the effective SNR, resulting in a large gain in bit error rate performance. Simulation results on realistic multiuser downlink systems show that the proposed method brings a substantial performance gain over standard vector perturbation with marginal computational overhead.
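Standard vector perturbation, which the proposed method builds on, can be sketched for a toy 2-user real-valued channel: channel-inversion precoding with an exhaustive search over integer offsets. The channel, symbols, and search range are invented for illustration, and the paper's virtual-user extension is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
# The base station precodes x = H^{-1}(s + tau*l), choosing the integer
# offset vector l that minimizes the transmit power ||x||^2.
H = rng.normal(size=(2, 2))       # channel matrix (assumed known)
s = np.array([1.0, -1.0])         # data symbols for the two users
tau = 4.0                         # perturbation spacing
Hinv = np.linalg.inv(H)

best_l, best_p = None, np.inf
for l0 in range(-2, 3):           # small exhaustive search
    for l1 in range(-2, 3):
        l = np.array([l0, l1], dtype=float)
        p = np.linalg.norm(Hinv @ (s + tau * l)) ** 2
        if p < best_p:
            best_p, best_l = p, l

x = Hinv @ (s + tau * best_l)
# Each receiver strips the perturbation with a modulo-tau operation.
recovered = np.mod(H @ x + tau / 2, tau) - tau / 2
```

Since l = 0 is among the candidates, the perturbed solution never transmits more power than plain channel inversion; the SNR gain comes exactly from this power reduction.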

Positioning Blueprints with Moving Least Squares Optimization (이동최소자승법 최적화를 이용한 도면 배치)

  • Kim, Jong-Hyun
    • Journal of the Korea Computer Graphics Society / v.23 no.4 / pp.1-9 / 2017
  • We propose an efficient method to determine the position of a blueprint by using a vector field computed with optimized MLS (Moving Least Squares). Typically, a professional architectural design office spends a long time, at a high processing cost, because the designer manually determines the locations at which buildings are placed in a specific area. To resolve this inefficiency, we propose a method that automatically determines the location of the blueprint based on the optimized MLS method. In the proposed framework, the designer selects the desired region in actual city data, and the flow of vectors is calculated based on that region. The optimized MLS method is then used to extract a vector field, and the rotation of the drawing is determined from this field. The blueprint locations determined by the proposed method closely match the flow observed where actual buildings are located. As a result, the efficiency of the overall architectural design process is improved by reducing the designer's inefficient manual work.
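A zeroth-order moving least squares (Shepard) estimate illustrates how a flow vector, and hence a blueprint rotation angle, can be read off a sampled vector field. This is a simplified stand-in for the paper's optimized MLS; the sample points, vectors, and Gaussian bandwidth are invented for illustration.

```python
import math

def mls_vector(query, points, vectors, h=1.0):
    # Zeroth-order moving least squares (Shepard) estimate of the flow
    # vector at `query`, with a Gaussian weight of bandwidth h.
    wsum = vx = vy = 0.0
    for (px, py), (ux, uy) in zip(points, vectors):
        w = math.exp(-((query[0] - px) ** 2 + (query[1] - py) ** 2) / (h * h))
        wsum += w
        vx += w * ux
        vy += w * uy
    return vx / wsum, vy / wsum

def rotation_from_field(u):
    # Rotation angle (degrees) that aligns a blueprint with the local flow.
    return math.degrees(math.atan2(u[1], u[0]))
```

A blueprint dropped near a sample flowing east would get a rotation near 0°, and near a sample flowing north, near 90°; higher-order MLS additionally fits a local polynomial instead of a weighted average.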

SNP Marker Selection for Dog Breed Identification from Genotypes of High-density SNP Array and Machine Learning (고밀도 SNP 칩 유전자형 데이터 기계학습 기반 반려견 품종 식별 유전마커 선발)

  • Hyung-Yong Kim;Bong-Hwan Choi;Taeyun Oh;Byeong-Chul Kang
    • Journal of agriculture & life science / v.53 no.4 / pp.93-101 / 2019
  • The dog (Canis lupus familiaris) is a member of the genus Canis, part of the wolf-like canids, and has evolved into diverse domestic breeds over roughly the last 100 thousand years. Practical dog breed identification has emerged as an important part of the pet industry, for example in genealogical certificates. From 11 dog breeds, 226 dogs, and 23K SNP genotypes, we selected a minimal set of SNPs for breed identification using machine learning algorithms, including multiclass classification and feature selection. Over 100 random splits of the data into 70% for training and 30% for testing, we evaluated the accuracies of 9 classifiers and 2 feature selection methods. A linear SVM with PCA-weighted feature selection showed the best classification accuracy. Finally, we selected a set of SNP markers that can identify the 11 breeds with approximately 90% accuracy using 40 SNPs. This marker set is expected to be useful for dog breed and disease management when integrated with disease markers.
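The PCA-weighted selection step can be illustrated on synthetic genotypes. Everything here is a toy stand-in: the "breed signal" columns are invented, two classes replace the 11 breeds, and a nearest-centroid classifier substitutes for the paper's linear SVM.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy genotype matrix: rows are dogs, columns are SNPs coded as 0/1/2
# allele counts; two hypothetical "breeds" differ on the first 10 SNPs.
n, p, informative, k = 120, 200, 10, 40
y = rng.integers(0, 2, n)
X = rng.integers(0, 3, (n, p)).astype(float)
X[:, :informative] += 3.0 * y[:, None]   # injected breed signal

# PCA-weighted feature selection: rank SNPs by the magnitude of their
# loadings on the leading principal components, keep the top-k markers.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = np.abs(Vt[:5]).sum(axis=0)
top = np.argsort(scores)[::-1][:k]

# Nearest-centroid classifier as a lightweight stand-in for the linear SVM.
c0 = X[y == 0][:, top].mean(axis=0)
c1 = X[y == 1][:, top].mean(axis=0)
pred = (np.linalg.norm(X[:, top] - c1, axis=1)
        < np.linalg.norm(X[:, top] - c0, axis=1)).astype(int)
acc = (pred == y).mean()
```

The informative SNPs dominate the leading loadings because the breed signal inflates their variance, so the 40-marker subset retains the discriminative columns.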

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.105-122 / 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification. Higher-dimensional data require heavy computation, which eventually causes high computational cost and overfitting in the model. Thus, a dimension reduction process is necessary to improve the performance of the model. Diverse methods have been proposed, from merely lessening noise in the data, such as misspellings or informal text, to including semantic and syntactic information. In addition, the expression and selection of text features affect the performance of the classifier for sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data from the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, which learn low-dimensional vector space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations. Once the feature selection algorithm selects words that are not important, we assume that words similar to the selected words also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: conducting selective word elimination under specific rules, and constructing word embeddings based on Word2Vec embedding.
To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and form the word embedding. Second, we additionally select words that are similar to the low-information-gain words and build the word embedding. Finally, the filtered text and word embeddings are fed to the deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets, and classifies each dataset using the deep learning models. Reviews that received more than five helpful votes, with a helpful-vote ratio over 70%, were classified as helpful reviews. Yelp shows only the number of helpful votes, so from 750,000 Yelp reviews we extracted 100,000 reviews with more than five helpful votes by random sampling. Minimal preprocessing, such as removing numbers and special characters from the text data, was applied to each dataset. To evaluate the proposed methods, we compared their performance against Word2Vec and GloVe word embeddings that use all the words. We showed that one of the proposed methods outperforms the embeddings with all the words: removing unimportant words improves performance, although removing too many words lowers it again. Future research should consider diverse preprocessing methods and an in-depth analysis of word co-occurrence for measuring similarity among words. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed elimination methods, and the possible combinations of word embedding methods and elimination methods can be explored.
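The two elimination passes described above, information gain screening followed by cosine-similarity expansion, can be sketched in plain Python. The vocabulary, documents, thresholds, and word vectors in any usage are invented for illustration; real embeddings would come from Word2Vec.

```python
import math
from collections import Counter

def entropy(labels):
    # Shannon entropy (bits) of a list of class labels.
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values()) if n else 0.0

def information_gain(docs, labels, word):
    # H(label) minus the conditional entropy given presence/absence of `word`.
    with_w = [y for d, y in zip(docs, labels) if word in d]
    without = [y for d, y in zip(docs, labels) if word not in d]
    n = len(labels)
    return (entropy(labels)
            - (len(with_w) / n) * entropy(with_w)
            - (len(without) / n) * entropy(without))

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def select_removals(vocab, docs, labels, vectors, ig_cut, sim_cut):
    # Pass 1: words whose information gain falls below ig_cut.
    low = {w for w in vocab if information_gain(docs, labels, w) < ig_cut}
    # Pass 2: words whose embedding is close to an already-removed word.
    extra = {w for w in vocab - low
             if any(cosine(vectors[w], vectors[l]) > sim_cut for l in low)}
    return low | extra
```

The second pass is what distinguishes the proposed method: a word can survive the information gain filter yet still be removed because its vector sits close to a removed word.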

A Passport Recognition and Face Verification Using Enhanced Fuzzy ART Based RBF Network and PCA Algorithm (개선된 퍼지 ART 기반 RBF 네트워크와 PCA 알고리즘을 이용한 여권 인식 및 얼굴 인증)

  • Kim Kwang-Baek
    • Journal of Intelligence and Information Systems / v.12 no.1 / pp.17-31 / 2006
  • In this paper, passport recognition and face verification methods are proposed that can automatically recognize passport codes and detect forged passports, in order to improve the efficiency and systematic control of immigration management. Correcting the slant is very important for character recognition and face verification, since slanted passport images cause various unwanted effects in the recognition of individual codes and faces. Therefore, after smearing the passport image, the longest extracted string of characters is selected, and the angle is corrected using the slant of the straight horizontal line that connects the centers of thickness between the left and right parts of the string. Passport codes are extracted with the Sobel operator, horizontal smearing, and an 8-neighborhood contour tracking algorithm. The extracted code-string area is converted into binary form by the repeated binarization method, the code strings are restored by applying a CDM mask to the binary string area, and individual codes are extracted by the 8-neighborhood contour tracking algorithm. In the proposed RBF network, an enhanced fuzzy ART algorithm that dynamically controls the vigilance parameter is applied to the middle layer using a fuzzy logic connection operator. The face is authenticated by measuring the similarity between the feature vector of the facial image from the passport and the feature vectors of facial images in a database constructed with the PCA algorithm. In tests using forged passports and passports with slanted images, the proposed method proved effective in recognizing passport codes and verifying facial images.

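The PCA-based similarity check can be sketched with random vectors standing in for face images: an eigenface-style projection followed by cosine similarity. The gallery size, dimensionality, and noise level are invented, and the passport-specific pipeline (code extraction, fuzzy ART network) is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def pca_basis(faces, k):
    """Eigenface-style basis: mean face and top-k principal directions."""
    mean = faces.mean(axis=0)
    _, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, Vt[:k]

def similarity(a, b, mean, basis):
    # Cosine similarity between the PCA feature vectors of two images.
    fa, fb = basis @ (a - mean), basis @ (b - mean)
    return float(fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb)))

# Toy gallery: 20 "identities" as random 64-dimensional patterns; the
# probe is a noisy copy of one identity (standing in for the passport photo).
gallery = rng.normal(size=(20, 64))
mean, basis = pca_basis(gallery, k=8)
probe = gallery[3] + 0.1 * rng.normal(size=64)
same = similarity(probe, gallery[3], mean, basis)
diff = similarity(probe, gallery[7], mean, basis)
```

Verification then reduces to thresholding the similarity: a matching pair scores near 1, a non-matching pair scores near 0.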

Automatic Augmentation Technique of an Autoencoder-based Numerical Training Data (오토인코더 기반 수치형 학습데이터의 자동 증강 기법)

  • Jeong, Ju-Eun;Kim, Han-Joon;Chun, Jong-Hoon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.5 / pp.75-86 / 2022
  • This study aims to solve the problem of class imbalance in numerical data by using a deep learning-based Variational AutoEncoder, and to improve the performance of the learning model by augmenting the training data. We propose 'D-VAE' to artificially increase the number of records for given table data. The proposed technique applies discretization and feature selection in the preprocessing stage to optimize the data. In the discretization step, K-means clustering is applied to group each numeric feature, and the groups are then converted into one-hot vectors by the one-hot encoding technique. Subsequently, for memory efficiency, sample data are generated with the Variational AutoEncoder using only the features that RFECV, among the feature selection techniques, identifies as helpful for prediction. To verify the performance of the proposed model, we demonstrate its validity by conducting experiments over varying data augmentation ratios.
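The discretization and one-hot steps of the preprocessing can be sketched in plain Python with a tiny 1-D K-means; the VAE itself and RFECV selection are omitted, and the cluster count and data are invented for illustration.

```python
def kmeans_1d(values, k, iters=20):
    # Tiny 1-D K-means used to discretize one numeric column into k groups.
    centers = sorted(values)[:: max(1, len(values) // k)][:k]  # spread init
    for _ in range(iters):
        groups = [[] for _ in centers]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[nearest].append(v)
        # Recompute each center as its group mean (keep old center if empty).
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return centers

def one_hot(value, centers):
    # Map the value to its nearest center and one-hot encode the group index.
    idx = min(range(len(centers)), key=lambda i: abs(value - centers[i]))
    return [1 if i == idx else 0 for i in range(len(centers))]
```

Each numeric column thus becomes a k-length binary vector, which is the categorical form the VAE is trained on.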

A Wavelet-based Profile Classification using Support Vector Machine (SVM을 이용한 웨이블릿 기반 프로파일 분류에 관한 연구)

  • Kim, Seong-Jun
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.5 / pp.718-723 / 2008
  • A bearing is one of the important mechanical elements used in various industrial equipment. Most failures that occur during equipment operation result from bearing defects and breakage. Therefore, monitoring bearings is essential to preventing equipment breakdowns and reducing unexpected losses. The purpose of this paper is to present an online monitoring method that predicts bearing states from vibration signals. Bearing vibrations, collected in the form of profile signals, are first analyzed by a discrete wavelet transform. Next, statistical features are obtained from the resulting wavelet coefficients. To select the significant ones among them, analysis of variance (ANOVA) is employed. The statistical features screened in this way are used as input variables to a support vector machine (SVM). A hierarchical SVM tree is proposed for dealing with multi-class problems. Numerical experiments show that the proposed SVM tree performs competently in classifying bearing fault states.
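The first two stages, wavelet decomposition and ANOVA screening, can be sketched as follows. A single Haar level stands in for the paper's discrete wavelet transform, and the feature groups in any usage would be per-fault-class samples of one statistic.

```python
def haar_level(signal):
    # One level of the Haar discrete wavelet transform: approximation (a)
    # and detail (d) coefficients of an even-length profile signal.
    a = [(signal[2 * i] + signal[2 * i + 1]) / 2 ** 0.5
         for i in range(len(signal) // 2)]
    d = [(signal[2 * i] - signal[2 * i + 1]) / 2 ** 0.5
         for i in range(len(signal) // 2)]
    return a, d

def f_score(groups):
    # One-way ANOVA F statistic used to screen wavelet-domain features:
    # between-group variance over within-group variance.
    n = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))
```

Features whose F statistic exceeds a chosen significance threshold are kept as SVM inputs; the rest are discarded before training.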