• Title/Summary/Keyword: 지지기반벡터 (support vector)

Support Vector Machine based Ballistic Limit Velocity Measurement for Small Caliber Projectile (SVM 기반 소화기 방호한계속도 측정방법 연구)

  • Kim, Jong-Hwan;Baik, Seungwon;Yoon, Byengjo;Jo, Sungsik
    • Journal of the Korea Institute of Military Science and Technology, v.19 no.5, pp.629-637, 2016
  • This paper presents a ballistic limit velocity measurement method using the support vector machine, which separates two classes, partial penetration and complete penetration, by generating a linear separating hyperplane that equally divides them. The previous methods for measuring the ballistic limit velocity (MIL-STD-662F and NIJ-STD-0101.06) required a large number of experiments, resulting in high cost and long testing time. The proposed method is not only flexible, requiring 0.85~4.8 times fewer experiments, but also reliable, differing from the previous methods by less than 2% in the measured results. For validation, live-fire experiments were conducted using SS400 plates of various thicknesses as targets and two types of live bullets, the 5.56 mm M193 and the 7.62 mm M80.
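
The idea in this abstract can be illustrated with a small example: a linear SVM separates complete from partial penetrations in a velocity-thickness space, and the decision boundary yields a ballistic-limit estimate. The shot data, feature choice, and limit-velocity formula below are assumptions for illustration, not the paper's actual data or procedure.

```python
# Minimal sketch: estimate a ballistic limit velocity with a linear SVM.
# The shot data below are invented for illustration only.
import numpy as np
from sklearn.svm import SVC

# Each shot: [impact velocity (m/s), plate thickness (mm)]
X = np.array([
    [930, 9.0], [945, 9.0], [960, 9.0], [975, 9.0],
    [920, 9.0], [955, 9.0], [905, 9.0], [985, 9.0],
])
# 0 = partial penetration, 1 = complete penetration
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])

clf = SVC(kernel="linear")          # linear separating hyperplane
clf.fit(X, y)

# For a fixed thickness t, the boundary w0*v + w1*t + b = 0 gives the
# velocity at which the classifier switches classes: a ballistic-limit estimate.
w = clf.coef_[0]
b = clf.intercept_[0]
t = 9.0
v_limit = -(w[1] * t + b) / w[0]
print(f"Estimated ballistic limit at {t} mm: {v_limit:.1f} m/s")
```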

A MA-plot-based Feature Selection by MRMR in SVM-RFE in RNA-Sequencing Data

  • Kim, Chayoung
    • The Journal of Korean Institute of Information Technology, v.16 no.12, pp.25-30, 2018
  • Methods for constructing a Gene Regulatory Network (GRN) from RNA-Sequencing (RNA-Seq) data are still lacking and urgently needed, since GRNs built from big data have received substantial attention for capturing the interactions among relevant genes and their regulation. We propose a new comparative feature-pattern selection method that implements a minimum-redundancy maximum-relevance (MRMR) filter together with support vector machine-recursive feature elimination (SVM-RFE), using intensity-dependent normalization (DEGSEQ) as a preprocessor to keep the precision of RNA-Seq measurements comparable. We found that the proposed algorithm is more scalable and convenient, because all required libraries are available as R packages, and that it improves both the running time on big data and the minimum-redundancy maximum-relevance of the selected set of feature patterns.
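
A rough sense of the filter-plus-wrapper pipeline the abstract describes (an MRMR-style filter feeding SVM-RFE) can be given with scikit-learn. The paper works in R with DEGSEQ preprocessing and a true MRMR filter; the sketch below substitutes a mutual-information relevance filter and synthetic data, so it is only a structural analogue.

```python
# Minimal sketch of a filter + SVM-RFE pipeline on gene-expression-like data.
# Synthetic data stand in for normalized RNA-Seq counts; a mutual-information
# filter approximates only the "relevance" half of MRMR for brevity.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif, RFE
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=100, n_features=500,
                           n_informative=20, random_state=0)

pipe = Pipeline([
    ("filter", SelectKBest(mutual_info_classif, k=100)),   # coarse relevance filter
    ("rfe", RFE(SVC(kernel="linear"), n_features_to_select=20, step=0.1)),
])
pipe.fit(X, y)
print("selected feature count:", pipe.named_steps["rfe"].n_features_)
```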

Effective Mood Classification Method based on Music Segments (부분 정보에 기반한 효과적인 음악 무드 분류 방법)

  • Park, Gun-Han;Park, Sang-Yong;Kang, Seok-Joong
    • Journal of Korea Multimedia Society, v.10 no.3, pp.391-400, 2007
  • Recent advances in multimedia computing, storage, and search technology have made large volumes of music content prevalent, and there is a growing need for efficient categorization and search techniques for music content management. In this paper, a new classification method using local information of the music content and a music-tone feature is proposed. While conventional classification algorithms are based on the entire music content, the proposed algorithm focuses only on specific local segments, which drastically reduces computing time without losing classification accuracy. To further improve accuracy, it uses a new classification feature based on music tone. The proposed method has been implemented as a part of MuSE (Music Search/Classification Engine), which has been installed on various systems including commercial PDAs and PCs.
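
A hedged sketch of segment-based mood classification: features are computed from one short local window of each track instead of the whole file, then fed to an SVM. The file names, window position, and the use of MFCC/chroma features as a stand-in for the paper's tone feature are illustrative assumptions.

```python
# Minimal sketch: classify mood from a fixed local segment rather than the
# whole track. Paths and the segment window are illustrative; the paper's
# exact segment selection and tone feature are not reproduced here.
import numpy as np
import librosa
from sklearn.svm import SVC

def segment_features(path, offset=30.0, duration=10.0):
    # Load only a short local segment of the track.
    y, sr = librosa.load(path, offset=offset, duration=duration)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)   # rough "tone" descriptor
    return np.concatenate([mfcc.mean(axis=1), chroma.mean(axis=1)])

# train_files / moods would come from a labeled collection (hypothetical here).
train_files = ["calm_01.wav", "sad_01.wav", "exciting_01.wav"]
moods = ["calm", "sad", "exciting"]
X = np.vstack([segment_features(f) for f in train_files])
clf = SVC(kernel="rbf").fit(X, moods)
print(clf.predict([segment_features("unknown.wav")]))
```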

A Study on Automatic Text Categorization of Web-Based Query Using Synonymy List (유사어 사전을 이용한 웹기반 질의문의 자동 범주화에 관한 연구)

  • Nam, Young-Joon;Kim, Gyu-Hwan
    • Journal of Information Management, v.35 no.4, pp.81-105, 2004
  • In this study, automatic text categorization of web-based queries was implemented. Chi-square (χ2) methods based on the Support Vector Machine were used to test the efficiency of categorizing queries, and the test was carried out with a model using the synonym list. 713 synonyms were extracted manually from the tested documents. With synonym assignment, the precision ratio and the recall ratio changed by -0.01% and 8.53%, respectively; the F1 measure increased by 4.58%; and the standard deviation between the recall and precision ratios improved by 18.39%.
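
The described setup (synonym expansion, chi-square feature selection, SVM categorization) can be sketched as follows; the tiny synonym dictionary, training queries, and category labels are invented for illustration.

```python
# Minimal sketch: expand query terms with a synonym list, then categorize
# with chi-square feature selection and a linear SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline

synonyms = {"notebook": "laptop", "cellphone": "mobile phone"}

def expand(query):
    # Append the canonical form of any term found in the synonym list.
    words = query.lower().split()
    return " ".join(words + [synonyms[w] for w in words if w in synonyms])

queries = ["cheap notebook battery", "cellphone camera review",
           "laptop cooling pad", "mobile phone case"]
labels = ["computers", "phones", "computers", "phones"]

pipe = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("chi2", SelectKBest(chi2, k=5)),
    ("svm", LinearSVC()),
])
pipe.fit([expand(q) for q in queries], labels)
print(pipe.predict([expand("notebook keyboard")]))
```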

Two-Phase Shallow Semantic Parsing based on Partial Syntactic Parsing (부분 구문 분석 결과에 기반한 두 단계 부분 의미 분석 시스템)

  • Park, Kyung-Mi;Mun, Young-Song
    • The KIPS Transactions:PartB, v.17B no.1, pp.85-92, 2010
  • A shallow semantic parsing system analyzes the relationship that each syntactic constituent of a sentence has with a predicate, identifying semantic arguments that represent the agent, patient, instrument, etc. of the predicate. In this study, we propose a two-phase shallow semantic parsing model that consists of an identification phase and a classification phase. We first find the boundaries of semantic arguments from partial syntactic parsing results, and then assign appropriate semantic roles to the identified arguments. By taking this sequential two-phase approach, we can alleviate the unbalanced class distribution problem and select features appropriate for each task. Experiments on the test data show the relative contribution of each phase.
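
The two-phase design the abstract describes can be sketched with two separate classifiers: a binary argument identifier followed by a role classifier applied only to identified constituents. The toy features, labels, and the use of linear SVMs are assumptions, not the paper's feature set.

```python
# Minimal sketch of the two-phase idea: phase 1 decides whether a constituent
# is an argument of the predicate; phase 2 assigns a role to identified ones.
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Each candidate constituent: simple categorical features from partial parsing.
candidates = [
    {"phrase": "NP", "position": "before", "head_pos": "NNP"},
    {"phrase": "NP", "position": "after",  "head_pos": "NN"},
    {"phrase": "PP", "position": "after",  "head_pos": "IN"},
    {"phrase": "ADVP", "position": "after", "head_pos": "RB"},
]
is_argument = [1, 1, 1, 0]                      # phase 1 labels
roles = ["AGENT", "PATIENT", "INSTRUMENT"]      # phase 2 labels (identified constituents only)

identifier = make_pipeline(DictVectorizer(), LinearSVC()).fit(candidates, is_argument)
classifier = make_pipeline(DictVectorizer(), LinearSVC()).fit(
    [c for c, a in zip(candidates, is_argument) if a], roles)

test = {"phrase": "NP", "position": "before", "head_pos": "NN"}
if identifier.predict([test])[0]:
    print("role:", classifier.predict([test])[0])
```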

A Document Sentiment Classification System Based on the Feature Weighting Method Improved by Measuring Sentence Sentiment Intensity (문장 감정 강도를 반영한 개선된 자질 가중치 기법 기반의 문서 감정 분류 시스템)

  • Hwang, Jae-Won;Ko, Young-Joong
    • Journal of KIISE:Software and Applications, v.36 no.6, pp.491-497, 2009
  • This paper proposes a new feature weighting method for document sentiment classification that considers the difference in sentiment intensity among the sentences of a document. Sentiment features consist of sentiment vocabulary words, and their sentiment intensity scores are estimated by the chi-square statistic. The sentiment intensity of each sentence is measured from the chi-square values of the sentiment features it contains, and the calculated sentence intensities are finally applied to the TF-IDF weights of all features in the document. We evaluate the proposed method using a support vector machine; experimental results show that it performs about 2.0% better than a baseline that does not consider sentence sentiment intensity.
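
One way to sketch the weighting idea: estimate per-word sentiment intensity with the chi-square statistic, score each sentence by its words' intensities, and scale the sentence-level TF-IDF vectors before aggregating them into a document vector. The exact scaling formula below is a simplified stand-in for the paper's scheme, and the example texts are invented.

```python
# Minimal sketch: chi-square term intensity -> sentence intensity -> scaled TF-IDF.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.feature_selection import chi2

train_docs = ["great movie wonderful acting", "terrible plot boring scenes",
              "wonderful soundtrack", "boring and terrible"]
train_labels = [1, 0, 1, 0]

count_vec = CountVectorizer()
scores, _ = chi2(count_vec.fit_transform(train_docs), train_labels)
intensity = dict(zip(count_vec.get_feature_names_out(), scores))

def sentence_intensity(sentence):
    words = sentence.lower().split()
    return np.mean([intensity.get(w, 0.0) for w in words]) if words else 0.0

def weighted_doc_vector(sentences, tfidf):
    # Scale each sentence's TF-IDF vector by (1 + its sentiment intensity).
    vecs = tfidf.transform(sentences).toarray()
    weights = np.array([1.0 + sentence_intensity(s) for s in sentences])
    return (vecs * weights[:, None]).sum(axis=0)

tfidf = TfidfVectorizer().fit(train_docs)
doc = ["the acting was wonderful", "the plot was a little boring"]
print(weighted_doc_vector(doc, tfidf))
```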

Web Attack Classification Model Based on Payload Embedding Pre-Training (페이로드 임베딩 사전학습 기반의 웹 공격 분류 모델)

  • Kim, Yeonsu;Ko, Younghun;Euom, Ieckchae;Kim, Kyungbaek
    • Journal of the Korea Institute of Information Security & Cryptology, v.30 no.4, pp.669-677, 2020
  • As the number of Internet users has exploded, attacks on the web have increased, and attack patterns have diversified to bypass existing defense techniques. Traditional web firewalls have difficulty detecting attacks with unknown patterns, so detecting abnormal behavior with artificial intelligence has been studied as an alternative. In particular, attempts have been made to apply natural language processing techniques, since the exploited scripts and queries consist of text; however, these scripts and queries contain many unknown words, so a different approach is required. In this paper, we propose a new classification model that uses byte pair encoding (BPE) to learn embedding vectors for the tokens that frequently appear in web attack payloads, and an attention-based Bi-GRU neural network to learn the order and importance of the token sequence. For major web attacks such as SQL injection, cross-site scripting, and command injection, the accuracy of the proposed method is about 0.9990, outperforming the model suggested in the previous study.
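
The architecture described (BPE token embeddings, a bidirectional GRU, attention pooling, and a softmax classifier) could look roughly like the Keras sketch below. BPE tokenization is assumed to be done externally (e.g., with SentencePiece), and the vocabulary size, sequence length, and class count are placeholders rather than values from the paper.

```python
# Minimal sketch of the classifier: embeddings -> Bi-GRU -> additive attention -> softmax.
import tensorflow as tf
from tensorflow.keras import layers, Model

vocab_size, max_len, n_classes = 8000, 256, 4    # assumed values

tokens = layers.Input(shape=(max_len,), dtype="int32")
x = layers.Embedding(vocab_size, 128)(tokens)    # pretrained BPE embeddings could be loaded here
h = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)

# Additive attention: score each time step, normalize, take the weighted sum.
scores = layers.Dense(1, activation="tanh")(h)
weights = layers.Softmax(axis=1)(scores)
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, weights])

outputs = layers.Dense(n_classes, activation="softmax")(context)
model = Model(tokens, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```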

Classification of ratings in online reviews (온라인 리뷰에서 평점의 분류)

  • Choi, Dongjun;Choi, Hosik;Park, Changyi
    • Journal of the Korean Data and Information Science Society, v.27 no.4, pp.845-854, 2016
  • Sentiment analysis, or opinion mining, is a text mining technique for identifying the subjective information or opinions of individuals from documents in blogs, reviews, articles, or social networks. The literature has mostly considered only the binary classification of ratings based on online review texts. However, because there can be neutral reviews as well as positive and negative ones, a multi-class classification is more appropriate than a binary one. To this end, we consider the multi-class classification of ratings based on review texts. In the preprocessing stage, we extract words related to ratings using the chi-square statistic; the extracted words are then used as input variables to multi-class classifiers, such as support vector machines and the proportional odds model, whose predictive performances are compared.
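
A minimal sketch of the SVM side of this comparison: chi-square word extraction followed by a multi-class linear SVM on rating labels. The reviews and ratings below are invented, and the proportional odds model is omitted because it needs an ordinal-regression package outside scikit-learn.

```python
# Minimal sketch: chi-square feature selection + multi-class SVM on ratings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline

reviews = ["terrible, would not buy again", "just okay, nothing special",
           "pretty good overall", "absolutely perfect, love it",
           "awful quality", "great value, very satisfied"]
ratings = [1, 3, 4, 5, 1, 5]     # multi-class, not merely positive/negative

pipe = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("select", SelectKBest(chi2, k=10)),   # keep words most associated with ratings
    ("svm", LinearSVC()),
])
pipe.fit(reviews, ratings)
print(pipe.predict(["good but not perfect"]))
```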

Super Resolution by Learning Sparse-Neighbor Image Representation (Sparse-Neighbor 영상 표현 학습에 의한 초해상도)

  • Eum, Kyoung-Bae;Choi, Young-Hee;Lee, Jong-Chan
    • Journal of the Korea Institute of Information and Communication Engineering, v.18 no.12, pp.2946-2952, 2014
  • Among example-based super resolution (SR) techniques, neighbor embedding (NE) is inspired by manifold learning methods, particularly locally linear embedding. However, the poor generalization of NE degrades its performance, since the local training sets are always too small. To solve this problem, we propose learning a sparse-neighbor image representation based on SVR, which has an excellent generalization ability. Given a low-resolution image, we first use bicubic interpolation to synthesize its high-resolution version. We extract patches from this synthesized image and determine whether each patch corresponds to a region of high or low spatial frequency. After obtaining the weight of each patch with our method, we use the patches to learn separate SVR models, and finally update the pixel values using the learned SVRs. Experimental results quantitatively and qualitatively confirm the improvement of the proposed algorithm over conventional interpolation methods and NE.
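
The pipeline in this abstract (bicubic upsampling, patch extraction, a high/low spatial-frequency split, separate SVR models) can be sketched as below. The variance threshold standing in for the frequency test and the synthetic image are assumptions, and the paper's patch weighting scheme is not reproduced.

```python
# Minimal sketch: upsample, extract patches, split by frequency, fit one SVR per group.
import numpy as np
from scipy.ndimage import zoom
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.svm import SVR

rng = np.random.default_rng(0)
hr = rng.random((32, 32))                      # stand-in high-resolution image
lr = hr[::2, ::2]                              # simulated low-resolution input
up = zoom(lr, 2, order=3)                      # cubic interpolation of the LR image

patch = 5
up_patches = extract_patches_2d(up, (patch, patch))
hr_centers = extract_patches_2d(hr, (patch, patch))[:, patch // 2, patch // 2]

X = up_patches.reshape(len(up_patches), -1)
is_high_freq = X.var(axis=1) > np.median(X.var(axis=1))   # crude frequency test

models = {}
for group in (True, False):                    # separate SVRs per frequency group
    idx = is_high_freq == group
    models[group] = SVR(kernel="rbf").fit(X[idx], hr_centers[idx])

pred = np.where(is_high_freq,
                models[True].predict(X),
                models[False].predict(X))
print("mean absolute error:", np.abs(pred - hr_centers).mean())
```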

Development of Simulation Software for EEG Signal Accuracy Improvement (EEG 신호 정확도 향상을 위한 시뮬레이션 소프트웨어 개발)

  • Jeong, Haesung;Lee, Sangmin;Kwon, Jangwoo
    • Journal of rehabilitation welfare engineering & assistive technology, v.10 no.3, pp.221-228, 2016
  • In this paper, we introduce simulation software for improving EEG signal accuracy. Users can check and train their own EEG signal accuracy with the software. Subjects were given an emotional-imagination condition using landscape photographs and a logical-imagination condition using a mathematical problem. From the recorded EEG data, an Independent Component Analysis algorithm is applied for noise removal, and the beta band (β, 14-30 Hz) is obtained through a band-pass filter. Features are extracted using the root mean square and classified with a support vector machine. The classification accuracy is 78.21% before accuracy-improvement training and 91.67% after successive training, so users can improve their own EEG signal accuracy using our simulation software, and we expect it to support efficient use of EEG-based BCI systems.
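
The processing chain described (ICA denoising, a 14-30 Hz band-pass for the beta band, RMS features, SVM classification) might be sketched as follows with synthetic signals; the sampling rate, channel count, and trial layout are assumed, not taken from the paper.

```python
# Minimal sketch: ICA -> beta band-pass -> RMS features -> SVM.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

fs = 256                                        # assumed sampling rate (Hz)
rng = np.random.default_rng(0)

def beta_rms(trial):
    """trial: (n_samples, n_channels) EEG segment -> per-channel beta RMS."""
    ica = FastICA(n_components=trial.shape[1], random_state=0)
    sources = ica.fit_transform(trial)          # noisy components would be dropped here
    cleaned = ica.inverse_transform(sources)
    b, a = butter(4, [14, 30], btype="bandpass", fs=fs)
    beta = filtfilt(b, a, cleaned, axis=0)
    return np.sqrt(np.mean(beta ** 2, axis=0))  # RMS feature per channel

# Synthetic trials: 20 segments of 2 s, 8 channels, two imagination conditions.
trials = rng.standard_normal((20, 2 * fs, 8))
labels = np.array([0] * 10 + [1] * 10)          # 0 = emotional, 1 = logical

X = np.vstack([beta_rms(t) for t in trials])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```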