• Title/Summary/Keyword: 예제 (example)

Search results: 2,136

A Study on Automatic Expansion of Dialogue Examples Using Logs of a Dialogue System (대화시스템의 로그를 이용한 대화예제의 자동 확충에 관한 연구)

  • Hong, Gum-Won;Lee, Jeong-Hoon;Shin, Jung-Hwi;Lee, Do-Gil;Rim, Hae-Chang
    • 한국HCI학회:학술대회논문집 / 2009.02a / pp.257-262 / 2009
  • This paper studies the automatic expansion of dialogue examples using the logs of an example-based dialogue system. Conventional approaches to example-based dialogue systems construct dialogue examples between humans and a chatbot manually, which is labor intensive and time consuming. The proposed method automatically classifies natural utterance pairs found in the logs and adds them to the dialogue example database. Experimental results show that lexical, POS, and modality features are useful for classifying natural utterance pairs, and demonstrate that dialogue examples can be automatically expanded using the logs of a dialogue system.

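The abstract does not spell out the classifier, but the core step, deciding whether an utterance pair mined from the logs is natural enough to add to the example database, can be pictured as binary classification over simple pair features. Below is a minimal, hypothetical Python sketch; the feature functions (token overlap, length ratio, a crude question/modality cue) and the toy pairs are placeholders, not the paper's actual features or data.

```python
# Hypothetical sketch: classify whether an utterance pair from dialogue logs
# is a "natural" pair worth adding to an example database.
from sklearn.linear_model import LogisticRegression

def pair_features(user_utt: str, resp_utt: str) -> list[float]:
    """Very rough stand-ins for the paper's lexical / POS / modality features."""
    u, r = user_utt.split(), resp_utt.split()
    overlap = len(set(u) & set(r)) / max(len(set(u) | set(r)), 1)  # lexical overlap
    len_ratio = min(len(u), len(r)) / max(len(u), len(r), 1)       # length balance
    question = 1.0 if user_utt.rstrip().endswith("?") else 0.0     # crude modality cue
    return [overlap, len_ratio, question]

# Toy labeled pairs: 1 = natural pair, 0 = not a usable example.
pairs = [
    ("how are you today ?", "i am fine , thanks", 1),
    ("what time is it ?", "the weather is nice", 0),
    ("tell me a joke", "why did the chicken cross the road", 1),
    ("book a table for two", "error code 500", 0),
]
X = [pair_features(u, r) for u, r, _ in pairs]
y = [label for _, _, label in pairs]

clf = LogisticRegression().fit(X, y)

# Score a new pair mined from the logs.
print(clf.predict_proba([pair_features("how are you ?", "pretty good , you ?")])[0, 1])
```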

Cluster-Based Selection of Diverse Query Examples for Active Learning (능동적 학습을 위한 군집화 기반의 다양한 복수 문의 예제 선정 방법)

  • Kang, Jae-Ho;Ryu, Kwang-Ryel;Kwon, Hyuk-Chul
    • Journal of Intelligence and Information Systems / v.11 no.1 / pp.169-189 / 2005
  • In order to derive a better classifier with a limited number of training examples, active learning alternately repeats a querying stage for category labeling and a subsequent learning stage for rebuilding the classifier with the newly expanded training set. To relieve the user from the burden of labeling, especially in an online environment, it is important to minimize the number of querying steps as well as the total number of query examples. We can derive a good classifier in a small number of querying steps, using only a small number of examples, if we can select multiple diverse, representative, and ambiguous examples to present to the user at each querying step. In this paper, we propose a cluster-based batch query selection method which can select diverse, representative, and highly ambiguous examples for efficient active learning. Experiments with various text data sets have shown that our method can derive a better classifier than other methods which only take ambiguity into account as the criterion for selecting multiple query examples.

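One plausible reading of the selection criteria (diverse, representative, ambiguous) is: cluster the unlabeled pool, then take the most ambiguous example from each cluster as the query batch. The sketch below illustrates that reading with scikit-learn; the margin-based ambiguity score and the synthetic data are assumptions, not the authors' exact method.

```python
# Hypothetical sketch: cluster-based selection of a diverse, ambiguous query batch.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(40, 5))
y_labeled = (X_labeled[:, 0] > 0).astype(int)
X_pool = rng.normal(size=(500, 5))           # unlabeled pool

clf = LogisticRegression().fit(X_labeled, y_labeled)
proba = clf.predict_proba(X_pool)
# Ambiguity = small margin between the two most probable classes.
sorted_p = np.sort(proba, axis=1)
margin = sorted_p[:, -1] - sorted_p[:, -2]

batch_size = 5
km = KMeans(n_clusters=batch_size, n_init=10, random_state=0).fit(X_pool)

query_batch = []
for c in range(batch_size):
    members = np.where(km.labels_ == c)[0]
    # Most ambiguous (smallest margin) example within this cluster.
    query_batch.append(int(members[np.argmin(margin[members])]))

print("indices to ask the user to label:", query_batch)
```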

Selection of An Initial Training Set for Active Learning Using Cluster-Based Sampling (능동적 학습을 위한 군집기반 초기훈련집합 선정)

  • 강재호;류광렬;권혁철
    • Journal of KIISE:Software and Applications / v.31 no.7 / pp.859-868 / 2004
  • We propose a method of selecting initial training examples for active learning so that it can reach high accuracy faster with fewer further queries. Our method is based on the assumption that an active learner reaches higher performance when given an initial training set consisting of diverse and typical examples rather than similar and special ones. To obtain a good initial training set, we first cluster the examples with the k-means clustering algorithm to find groups of similar examples. Then a representative example, the one closest to the cluster's centroid, is selected from each cluster. After these representative examples are labeled by querying the user for their categories, they can be used as initial training examples. We also suggest using the centroids themselves as initial training examples by labeling them with the categories of the corresponding representative examples. Experiments with various text data sets have shown that an active learner starting from the initial training set selected by our method reaches high accuracy faster than one starting from a randomly generated initial training set.
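
The selection step is described concretely, cluster with k-means and take the example nearest each centroid, so it can be sketched almost directly. The snippet below is a small illustration under that reading; the TF-IDF features and toy documents are assumptions rather than the paper's setup.

```python
# Hypothetical sketch: pick an initial training set as the examples nearest
# the k-means centroids, then ask the user to label only those.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import pairwise_distances_argmin

docs = [
    "stock markets fell sharply today",
    "the team won the championship game",
    "new vaccine shows promising results",
    "interest rates were raised again",
    "star striker signs a record contract",
    "clinical trial enters its final phase",
]
X = TfidfVectorizer().fit_transform(docs).toarray()

k = 3  # number of initial examples we are willing to label
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)

# Index of the document closest to each centroid (the "representative" examples).
rep_idx = pairwise_distances_argmin(km.cluster_centers_, X)
print("documents to label first:", [docs[i] for i in rep_idx])
```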

Appropriate Use of SAS for an Introductory Statistics Course (통계 입문과정을 위한 SAS의 적절한 활용)

  • 박동준
    • Proceedings of the Korean Operations and Management Science Society Conference / 1994.04a / pp.69-77 / 1994
  • Students in an introductory statistics course encounter many kinds of data and analysis methods. By presenting examples that actually arise in such a course, together with SAS code for those examples, using SAS, one of the most widely used statistical software packages, this paper aims to support introductory instruction taught alongside theory and to make it easier to use SAS later when analyzing large, complex data sets in advanced courses. First, examples of PROC CHART and PROC UNIVARIATE, which are used to organize data and produce various charts in the introductory course, are introduced; then examples of PROC MEANS, PROC SUMMARY, and PROC UNIVARIATE, which compute various descriptive statistics, are presented; and finally examples of PROC CORR, which computes correlation coefficients and covariances, and of PROC TTEST and PROC MEANS, which are used to test population means, are presented.
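
The PROC steps above correspond to familiar operations: frequency charts, descriptive statistics, correlation and covariance, and one-sample t-tests. Purely as a rough modern analogue, not the paper's SAS code, the snippet below runs the same kinds of computations in Python with pandas and SciPy on made-up data.

```python
# Rough Python analogue of the SAS PROCs mentioned above (illustrative only).
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "height": rng.normal(170, 8, size=50),
    "weight": rng.normal(65, 10, size=50),
})

# PROC MEANS / PROC SUMMARY / PROC UNIVARIATE: descriptive statistics.
print(df.describe())

# PROC CHART: a simple frequency table over binned heights.
print(pd.cut(df["height"], bins=5).value_counts().sort_index())

# PROC CORR: correlation and covariance.
print(df.corr(), df.cov(), sep="\n")

# PROC TTEST: test whether the population mean height equals 170.
t_stat, p_value = stats.ttest_1samp(df["height"], popmean=170)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```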

Word Sense Disambiguation From Unlabelled Data (의미 부착이 없는 데이터로부터의 학습을 통한 의미 중의성 해소)

  • 박성배;장병탁;김영택
    • Proceedings of the Korean Information Science Society Conference / 2000.04b / pp.330-332 / 2000
  • Word sense disambiguation, identifying the correct sense of a word in context, is one of the most important problems in most natural language processing applications. Learning a disambiguation method from corpus examples requires a large number of sense-labeled training examples, but obtaining labeled examples is very expensive because it requires human intervention. This paper minimizes human intervention by first learning from a limited set of labeled examples and then supplementing the learning with unlabeled examples. In experiments on sense determination for Korean nouns with decision tree learning, the proposed method improves performance by 33.6% on average over choosing the most frequent sense, a result nearly comparable to the case where the answers of all training examples are known. The proposed method is therefore very effective for languages such as Korean, for which no reliable sense-tagged corpus is available.

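Supplementing a small labeled set with unlabeled examples is commonly realized as self-training: a decision tree trained on the labeled data labels the unlabeled examples it is most confident about and is retrained. The sketch below shows that pattern on synthetic data as one possible reading of the approach; the confidence threshold and the data are arbitrary.

```python
# Hypothetical self-training sketch: a decision tree bootstraps itself
# with confidently-predicted unlabeled examples.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(30, 4))
y_lab = (X_lab[:, 0] + X_lab[:, 1] > 0).astype(int)
X_unlab = rng.normal(size=(300, 4))           # "unlabeled" pool

X_train, y_train = X_lab.copy(), y_lab.copy()
for _ in range(5):                            # a few self-training rounds
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
    if len(X_unlab) == 0:
        break
    proba = tree.predict_proba(X_unlab)
    confident = proba.max(axis=1) >= 0.9      # arbitrary confidence threshold
    if not confident.any():
        break
    X_train = np.vstack([X_train, X_unlab[confident]])
    y_train = np.concatenate([y_train, tree.classes_[proba[confident].argmax(axis=1)]])
    X_unlab = X_unlab[~confident]             # move newly labeled examples out of the pool

print("final training-set size:", len(X_train))
```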

A Machine Learning based Method for Measuring Inter-utterance Similarity for Example-based Chatbot (예제 기반 챗봇을 위한 기계 학습 기반의 발화 간 유사도 측정 방법)

  • Yang, Min-Chul;Lee, Yeon-Su;Rim, Hae-Chang
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.8 / pp.3021-3027 / 2010
  • An example-based chatbot generates a response to a user's utterance by searching for the most similar utterance in a collection of dialogue examples. Although finding an appropriate example is very important, as it is closely related to response quality, few studies have examined which features should be considered and how they should be used for similar-utterance search. In this paper, we propose a machine learning framework which uses various linguistic features. Experimental results show that using semantic and lexical features together significantly improves performance over conventional approaches in terms of 1) utilization of the example database, 2) precision of example matching, and 3) quality of the responses.
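
The framework is described only at the feature level, so the sketch below is a stand-in: it combines one lexical feature (TF-IDF cosine similarity) with one crude "semantic" feature (overlap after mapping through a tiny hand-made synonym table) and trains a similarity classifier on top. The synonym table, feature set, and toy pairs are invented for illustration.

```python
# Hypothetical sketch: learn an utterance-similarity scorer from lexical
# and (very crude) semantic features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

SYNONYMS = {"hi": "hello", "bye": "goodbye", "thanks": "thank"}  # toy semantic map

def normalize(utt: str) -> set[str]:
    return {SYNONYMS.get(w, w) for w in utt.lower().split()}

def features(a: str, b: str, vec: TfidfVectorizer) -> list[float]:
    lexical = cosine_similarity(vec.transform([a]), vec.transform([b]))[0, 0]
    na, nb = normalize(a), normalize(b)
    semantic = len(na & nb) / max(len(na | nb), 1)
    return [lexical, semantic]

pairs = [("hi there", "hello there", 1), ("thanks a lot", "thank you", 1),
         ("bye for now", "what time is it", 0), ("hello", "open the window", 0)]
vec = TfidfVectorizer().fit([a for a, _, _ in pairs] + [b for _, b, _ in pairs])

X = [features(a, b, vec) for a, b, _ in pairs]
y = [label for _, _, label in pairs]
scorer = LogisticRegression().fit(X, y)

# Score a new candidate example against a user utterance.
print(scorer.predict_proba([features("hi friend", "hello friend", vec)])[0, 1])
```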

Towards General Purpose Korean Paraphrase Sentence Recognition Model (범용의 한국어 패러프레이즈 문장 인식 모델을 위한 연구)

  • Kim, Minho;Hur, Jeong;Lim, Joonho
    • Annual Conference on Human and Language Technology / 2021.10a / pp.450-452 / 2021
  • This paper studies the development of a general-purpose Korean paraphrase sentence recognition model. One of the biggest obstacles to general-purpose use is robustness against adversarial examples, because adversarial examples for paraphrase recognition can neutralize a recognition model trained only on a corpus of ordinary sentence pairs. A further difficulty is that adversarial examples come in many types, so the model must be able to cope with all of them. We present a deep neural network model that can recognize whether a sentence pair is a paraphrase for both the various adversarial types and the ordinary type.

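The paper's model is a deep neural network that is not detailed in the abstract, so the snippet below is only a shallow stand-in for the task itself: a character n-gram baseline that classifies whether a sentence pair is a paraphrase. It illustrates the input/output of paraphrase recognition, not the proposed model.

```python
# Shallow stand-in sketch for paraphrase-pair recognition (the paper itself
# uses a deep neural network; this baseline is only for illustration).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

pairs = [
    ("the cat sat on the mat", "a cat is sitting on the mat", 1),
    ("he opened the window", "the window was opened by him", 1),
    ("she plays the piano", "the stock market fell today", 0),
    ("i will call you tomorrow", "the recipe needs two eggs", 0),
]

vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
vec.fit([s for a, b, _ in pairs for s in (a, b)])

def pair_vector(a: str, b: str) -> np.ndarray:
    va = vec.transform([a]).toarray()[0]
    vb = vec.transform([b]).toarray()[0]
    # Symmetric combination: absolute difference and elementwise product.
    return np.concatenate([np.abs(va - vb), va * vb])

X = np.array([pair_vector(a, b) for a, b, _ in pairs])
y = [label for _, _, label in pairs]
clf = LogisticRegression(max_iter=1000).fit(X, y)

print(clf.predict_proba([pair_vector("the cat sat on a mat", "a cat sits on the mat")])[0, 1])
```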

Generation and Selection of Nominal Virtual Examples for Improving the Classifier Performance (분류기 성능 향상을 위한 범주 속성 가상예제의 생성과 선별)

  • Lee, Yu-Jung;Kang, Byoung-Ho;Kang, Jae-Ho;Ryu, Kwang-Ryel
    • Journal of KIISE:Software and Applications / v.33 no.12 / pp.1052-1061 / 2006
  • This paper presents a method of using virtual examples to improve classification accuracy for data with nominal attributes. Most previous research on virtual examples has focused on data with numeric attributes and has used domain-specific knowledge to generate virtual examples useful to a particular target learning algorithm. Instead of using domain-specific knowledge, our method samples virtual examples from a naive Bayesian network constructed from the given training set. A sampled example is considered useful if it increases the network's conditional likelihood when added to the training set. A set of useful virtual examples can be collected by repeating this process of sampling followed by evaluation. Experiments have shown that the virtual examples collected this way can help various learning algorithms derive classifiers of improved accuracy.
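
The generate-then-screen loop described above (sample a candidate from a naive Bayes model fit to the training set, keep it only if the refit model's conditional likelihood improves) can be sketched directly. The code below implements a tiny categorical naive Bayes from scratch on made-up nominal data; it follows the abstract's outline but is an assumed reconstruction, not the authors' implementation.

```python
# Hypothetical sketch: sample nominal virtual examples from a naive Bayes model
# and keep only those that raise the conditional log-likelihood of the data.
import numpy as np

rng = np.random.default_rng(0)
# Toy nominal data: 2 attributes with 3 values each, 2 classes.
X = rng.integers(0, 3, size=(40, 2))
y = (X[:, 0] == X[:, 1]).astype(int)
N_VALUES, N_CLASSES = 3, 2

def fit_nb(X, y):
    prior = np.bincount(y, minlength=N_CLASSES) + 1.0
    cond = np.ones((N_CLASSES, X.shape[1], N_VALUES))      # Laplace smoothing
    for xi, yi in zip(X, y):
        cond[yi, np.arange(X.shape[1]), xi] += 1.0
    return prior / prior.sum(), cond / cond.sum(axis=2, keepdims=True)

def cond_log_likelihood(X, y, prior, cond):
    # sum_i log P(y_i | x_i) under the naive Bayes model
    joint = np.log(prior) + np.array(
        [np.log(cond[:, np.arange(X.shape[1]), xi]).sum(axis=1) for xi in X])
    post = joint - np.logaddexp.reduce(joint, axis=1, keepdims=True)
    return post[np.arange(len(y)), y].sum()

prior, cond = fit_nb(X, y)
base_cll = cond_log_likelihood(X, y, prior, cond)
accepted = []
for _ in range(200):                                        # candidate virtual examples
    c = int(rng.choice(N_CLASSES, p=prior))
    x = np.array([rng.choice(N_VALUES, p=cond[c, a]) for a in range(X.shape[1])])
    prior2, cond2 = fit_nb(np.vstack([X, x]), np.append(y, c))   # refit with the candidate
    if cond_log_likelihood(X, y, prior2, cond2) > base_cll:      # candidate judged useful
        accepted.append((x, c))

print("accepted virtual examples:", len(accepted))
```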

Detecting Adversarial Examples Using Edge-based Classification

  • Jaesung Shim;Kyuri Jo
    • Journal of the Korea Society of Computer and Information / v.28 no.10 / pp.67-76 / 2023
  • Although deep learning models are achieving innovative results in computer vision, their vulnerability to adversarial examples remains a concern. Adversarial examples are attacks that inject fine-grained noise into images to induce misclassification, and they can pose a serious threat to the deployment of deep learning models in the real world. In this paper, we propose a method that detects adversarial examples using the difference in predictions between an edge-trained classification model and the underlying classification model. The simple step of extracting object edges and including them in training increases the robustness of the classification model, and detecting adversarial examples through prediction differences between the models is economical and efficient. In our experiments, the baseline model showed accuracies of {49.9%, 29.84%, 18.46%, 4.95%, 3.36%} on adversarial examples (eps = {0.02, 0.05, 0.1, 0.2, 0.3}), whereas the Canny edge model showed accuracies of {82.58%, 65.96%, 46.71%, 24.94%, 13.41%}, with the other edge models at a similar level, indicating that the edge models are more robust against adversarial examples. In addition, adversarial example detection based on prediction differences between the models achieved detection rates of {85.47%, 84.64%, 91.44%, 95.47%, 87.61%} for the adversarial examples at each epsilon. We expect this study to contribute to improving the reliability of deep learning models in research and application areas such as medicine, autonomous driving, security, and national defense.
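
The detection idea is concrete: train one classifier on raw images and another on edge maps of the same images, then flag an input when the two disagree. The sketch below illustrates this with scikit-learn linear models on tiny synthetic images as stand-ins for the paper's CNNs; the Canny call assumes OpenCV is available, and all data are made up.

```python
# Hypothetical sketch: flag an input as a potential adversarial example when a
# raw-image classifier and an edge-map classifier disagree on its label.
import cv2                      # assumes opencv-python is installed
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_image(label: int) -> np.ndarray:
    """Toy 16x16 grayscale image: class 1 has a bright square, class 0 is noise."""
    img = rng.integers(0, 60, size=(16, 16), dtype=np.uint8)
    if label == 1:
        img[4:12, 4:12] = 200
    return img

def edge_map(img: np.ndarray) -> np.ndarray:
    return cv2.Canny(img, 100, 200)

labels = rng.integers(0, 2, size=200)
images = [make_image(int(lab)) for lab in labels]

raw_clf = LogisticRegression(max_iter=1000).fit([i.ravel() for i in images], labels)
edge_clf = LogisticRegression(max_iter=1000).fit([edge_map(i).ravel() for i in images], labels)

def looks_adversarial(img: np.ndarray) -> bool:
    # Disagreement between the two models is treated as a detection signal.
    return raw_clf.predict([img.ravel()])[0] != edge_clf.predict([edge_map(img).ravel()])[0]

# A crude perturbation that shifts raw pixel intensities while leaving edges similar.
perturbed = np.clip(make_image(1).astype(int) - 120, 0, 255).astype(np.uint8)
print("flagged as adversarial:", looks_adversarial(perturbed))
```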

A Study on a Prototype Learning Model (프로토타입 학습 모델에 관한 연구)

  • 송두헌
    • Journal of the Korea Computer Industry Society / v.2 no.2 / pp.151-156 / 2001
  • We describe a new representation for learning concepts that differs from traditional decision tree and rule induction algorithms. Our algorithm PROLEARN learns one or more prototypes per class and performs instance-based classification with them. A prototype here differs from the psychological term in that a concept can have more than one prototype, and it differs from other instance-based algorithms in that the prototype is a "fictitious ideal example". We show that PROLEARN is as accurate as traditional machine learning algorithms but much more stable in an environment with noise or a changing training set, a property we call "stability".

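The abstract does not say how PROLEARN constructs its "fictitious ideal examples", so the sketch below substitutes the simplest choice, one prototype per class computed as the class mean, followed by nearest-prototype classification. It illustrates the general prototype-learning idea rather than PROLEARN itself.

```python
# Hypothetical nearest-prototype sketch: one "fictitious ideal example" per class
# (here simply the class mean), used for instance-based classification.
import numpy as np

rng = np.random.default_rng(0)
X0 = rng.normal(loc=[0, 0], scale=0.7, size=(50, 2))   # class 0
X1 = rng.normal(loc=[3, 3], scale=0.7, size=(50, 2))   # class 1
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# Build one prototype per class.
prototypes = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def classify(point: np.ndarray) -> int:
    # Assign the class of the nearest prototype (Euclidean distance).
    return int(np.argmin(np.linalg.norm(prototypes - point, axis=1)))

print(classify(np.array([2.5, 2.8])))   # expected: 1
print(classify(np.array([0.2, -0.4])))  # expected: 0
```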