• Title/Summary/Keyword: 대용량 분류 (large-scale classification)

243 search results (processing time: 0.026 seconds)

Face Detection Based on Incremental Learning from Very Large Size Training Data (대용량 훈련 데이타의 점진적 학습에 기반한 얼굴 검출 방법)

  • 박지영;이준호 / Journal of KIISE:Software and Applications / v.31 no.7 / pp.949-958 / 2004
  • Face detection using a boosting-based algorithm requires a very large set of face and non-face data. In addition, the fact that additional training data are always needed for better detection rates demands an efficient incremental learning algorithm. In the design of incrementally learned classifiers, the final classifier should represent the characteristics of the entire training dataset. Conventional methods have a critical problem in combining intermediate classifiers, in that weight updates depend solely on the performance on individual datasets. In this paper, for the purpose of application to face detection, we present a new method to combine an intermediate classifier with previously acquired ones in an optimal manner. Our algorithm creates a validation set by incrementally adding sampled instances from each dataset to represent the entire training data. The weight of each classifier is determined based on its performance on the validation set. This approach guarantees that the resulting final classifier is learned from the entire training dataset. Experimental results show that a classifier trained by the proposed algorithm performs better than one trained by AdaBoost operating in batch mode, as well as by ${Learn}^{++}$.
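The combination rule sketched in the abstract can be illustrated as follows. This is a hypothetical simplification (the exact weight formula is not given in the abstract): each intermediate classifier is scored on the shared validation set, weighted by an AdaBoost-style log-odds term, and combined by weighted vote over ±1 predictions.

```python
import numpy as np

def combine_classifiers(classifiers, X_val, y_val):
    """Weight each intermediate classifier by its accuracy on a validation
    set sampled from all datasets seen so far (illustrative sketch)."""
    weights = []
    for clf in classifiers:
        acc = np.mean([clf(x) == y for x, y in zip(X_val, y_val)])
        # AdaBoost-style log-odds weight; clip error away from 0 and 1
        err = min(max(1.0 - acc, 1e-6), 1.0 - 1e-6)
        weights.append(0.5 * np.log((1.0 - err) / err))

    def final(x):
        # weighted vote over {-1, +1} predictions
        score = sum(w * clf(x) for w, clf in zip(weights, classifiers))
        return 1 if score >= 0 else -1

    return final
```

A classifier that performs at chance on the validation set receives weight zero, so the combined classifier is dominated by members that generalize across the accumulated data.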

Application of Gene Algorithm for the development of efficient clustering system (효율적인 군집화 시스템의 개발을 위해 유전자 알고리즘의 적용)

  • Hong, Gil-Dong;Kim, Cheol-Soo / Proceedings of the Korea Information Processing Society Conference / 2003.05a / pp.277-280 / 2003
  • Data mining, currently a subject of much interest, is the task of classifying regular patterns in large databases and extracting them in the form of knowledge. Clustering, a representative data mining technique, partitions a dataset so as to maximize similarity within clusters and minimize similarity between clusters. Because clustering in data mining deals with large volumes of data, techniques that reduce the number of accesses to the raw data and shrink the data structures the algorithm must handle are widely used. Existing clustering algorithms, however, are very sensitive to noise and prone to local minima; in addition, the number of clusters must be fixed in advance, and performance depends heavily on the initialization. This study proposes a clustering algorithm that determines the number of clusters automatically using a genetic algorithm; by finding clusters that optimize the proposed fitness function, we aim to build a more efficient algorithm and apply it to data mining over large datasets.
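The idea of letting a genetic-style search choose the number of clusters can be sketched in a toy form. Both the fitness function (within-cluster distance plus a complexity penalty on k) and `evolve_k` are hypothetical; the paper's actual fitness function is not given in the abstract, and selection plus random restarts stand in for crossover and mutation.

```python
import random
import numpy as np

def fitness(points, centers):
    """Higher is better: reward tight clusters, penalize many clusters
    (a hypothetical fitness; the paper's exact function is not given)."""
    d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
    within = d.min(axis=1).sum()
    return -within - 0.5 * len(centers)  # complexity penalty on k

def evolve_k(points, max_k=6, pop=8, gens=20, seed=0):
    """Toy GA-style search: a chromosome is a set of k cluster centers
    sampled from the data; the fitter half survives each generation."""
    rng = random.Random(seed)

    def rand_chrom():
        k = rng.randint(1, max_k)
        return points[rng.sample(range(len(points)), k)]

    popn = [rand_chrom() for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda c: fitness(points, c), reverse=True)
        # elitist selection: keep the best half, refill with new randoms
        popn = popn[:pop // 2] + [rand_chrom() for _ in range(pop - pop // 2)]
    best = max(popn, key=lambda c: fitness(points, c))
    return len(best)  # number of clusters chosen automatically
```

Because k is part of the chromosome and the penalty term discourages extra centers, the search settles on a cluster count without the user fixing it in advance.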


Design of Moa Contents Curation Service System Based on Incremental Learning Technology (점진적 학습 기반 모아 콘텐츠 큐레이션 서비스 시스템 설계)

  • Lee, Jeong-won;Min, Byung-Won;Oh, Yong-Sun / Proceedings of the Korea Contents Association Conference / 2018.05a / pp.401-402 / 2018
  • To resolve the memory-shortage and long-training-time problems that arise when learning large volumes of data for a contents curation service, an "incremental learning model for big-data mining, part of a dynamic learning-pipeline generation technology for large-scale document learning" is required. The contents curation service proposed in this paper collects, organizes, and edits related contents from the vast number available online according to an individual's perspective, and delivers contents that are relevant to the user or likely to appeal to them. To overcome the memory-shortage and training-time problems in learning large document collections, the Moa curation service designed in this paper learns documents freely without any limit on the size of the training data, and we present a general-purpose classifier architecture that reflects only the changed elements when features are partially added or modified.


Migration Policies of a Unified Index for Moving Objects Databases (이동체 데이터베이스를 위한 통합 색인의 이주 정책)

  • 정지원;안경환;서영덕;홍봉희 / Proceedings of the Korean Information Science Society Conference / 2004.04b / pp.112-114 / 2004
  • Advances in wireless communication have given rise to new moving-object services such as LBS (Location Based System). To process in real time the location data that moving-object clients report periodically, the server needs to maintain a main-memory DBMS; because the volume of data grows continuously, the system must be designed to migrate data to disk when main-memory space runs short. Previous work, however, did not integrate the node-selection policy for index migration in large moving-object environments with the disk-placement policy for the selected nodes, so migration policies suited to large moving-object database systems need to be studied. This paper classifies node-selection and disk-placement policies for large moving-object database environments and proposes new migration policies. For node selection, we propose a modified LRU (Least Recently Used) policy based on the cache's LRU policy for query performance, and an insertion-first policy that inverts the insertion algorithm of the R-tree moving-object index. Because the disk-page placement of migrated nodes affects the system's query performance, we also propose a disk-placement policy that takes this into account.
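The LRU-based node-selection side of such a policy can be illustrated with a minimal sketch. `LRUNodeSelector` and its interface are hypothetical, not the paper's actual structure: index nodes touched by queries are refreshed, and when memory exceeds capacity the least-recently-used node is picked for migration to disk.

```python
from collections import OrderedDict

class LRUNodeSelector:
    """Sketch of a modified-LRU node-selection policy for index migration:
    recently queried nodes stay in main memory; cold nodes are victims."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.nodes = OrderedDict()  # node_id -> node payload

    def access(self, node_id, payload=None):
        # A query touching the node refreshes its recency.
        if node_id in self.nodes:
            self.nodes.move_to_end(node_id)
        else:
            self.nodes[node_id] = payload

    def select_victim(self):
        # Choose the least recently used node for disk migration.
        if len(self.nodes) > self.capacity:
            node_id, _ = self.nodes.popitem(last=False)
            return node_id
        return None  # memory not yet full; nothing migrates
```

For example, with capacity 2, accessing nodes a, b, a, c in order makes b the coldest node and therefore the migration victim.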


A Study on Drainage Capability of Large Capacity Outlet and Spillway of Dams in Korea (한국댐의 대용량 배수시설 및 Spillway 배수능력에 관한 조사연구)

  • 이원환 / Water for future / v.11 no.2 / pp.43-53 / 1978
  • Synopsis: This study has systematized the results of construction and classification of the 656 large dams in Korea that meet the ICOLD definition. In particular, by checking the drainage capability of large-capacity outlets and spillways, this paper suggests how outflow discharge through large-capacity outlets and spillways should be planned in the future. The results of this study are as follows: 1. Classification by purpose shows that irrigation dams account for 94% (607 dams), hydropower and multipurpose dams for 2% (14 dams), and municipal and industrial water supply dams for 4% (26 dams). 2. In the design of proposed outflow discharge, spillways of irrigation dams were designed for a 100-year return period, those of municipal and industrial water supply dams for 200 years, and those of hydropower and multipurpose dams for 500 or 1000 years. 3. Emergency spillways should be considered from the standpoint of disaster prevention engineering, and a ranking of return periods for the emergency design outflow discharge was suggested. 4. Some problems are suggested for future work on this subject.


A Text Mining-based Intrusion Log Recommendation in Digital Forensics (디지털 포렌식에서 텍스트 마이닝 기반 침입 흔적 로그 추천)

  • Ko, Sujeong / KIPS Transactions on Computer and Communication Systems / v.2 no.6 / pp.279-290 / 2013
  • In digital forensics, log files are stored as large datasets for the purpose of tracing users' past behaviors, and it is difficult for investigators to manually analyze such large log data without clues. In this paper, we propose a text mining technique for extracting intrusion logs from a large log set to recommend reliable evidence to investigators. In the training stage, the proposed method extracts intrusion association words from a training log set using the Apriori algorithm after preprocessing, and the probability of intrusion for each association word is computed by combining support and confidence. Robinson's method of computing confidences for filtering spam mails is applied to extracting intrusion logs. As a result, the association-word knowledge base is constructed to include the weighted probability of intrusion for each association word, improving accuracy. In the test stage, the probability that a log in a test log set is an intrusion log and the probability that it is a normal log are each computed by Fisher's inverse chi-square classification algorithm based on the association-word knowledge base, and intrusion logs are extracted by combining the results. The intrusion logs are then recommended to investigators. The proposed method trains on a clear analysis of the meaning of data drawn from unstructured large log data, compensating for the loss of accuracy caused by data ambiguity. In addition, because it recommends intrusion logs using Fisher's inverse chi-square classification algorithm, it reduces the false positive (FP) rate and the laborious effort of extracting evidence manually.
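The Fisher's inverse chi-square combination step named above can be sketched in its standard form, as used in Robinson-style spam filtering: combine per-word probabilities p via the chi-square survival function of -2·Σ ln p with 2n degrees of freedom. The per-word probabilities are assumed given, and `fisher_combine` is an illustrative name, not the paper's API.

```python
import math

def fisher_combine(probs):
    """Fisher's method: a small result means the probabilities are
    collectively far lower than chance would produce."""
    n = len(probs)
    chi2 = -2.0 * sum(math.log(p) for p in probs)
    # Survival function of chi-square with 2n degrees of freedom,
    # via the closed-form series for even degrees of freedom.
    m = chi2 / 2.0
    term = math.exp(-m)
    s = term
    for i in range(1, n):
        term *= m / i
        s += term
    return min(s, 1.0)
```

With a single probability of 0.5 the combined score is exactly 0.5; a run of very low per-word probabilities drives the score toward 0, flagging the log line as anomalous.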

Statistical Approach to Sentiment Classification using MapReduce (맵리듀스를 이용한 통계적 접근의 감성 분류)

  • Kang, Mun-Su;Baek, Seung-Hee;Choi, Young-Sik / Science of Emotion and Sensibility / v.15 no.4 / pp.425-440 / 2012
  • As the scale of the internet grows, the amount of subjective data increases, and a need arises to classify subjective data automatically. Sentiment classification is the classification of subjective data by various types of sentiment. Research on sentiment classification has focused on NLP (Natural Language Processing) and sentiment word dictionaries, and this earlier work has two critical problems. First, the performance of morpheme analysis in NLP has fallen short of expectations. Second, it is not easy to choose sentiment words or to determine how much sentiment a word carries. To solve these problems, this paper suggests combining web-scale data with a statistical approach to sentiment classification. The proposed method uses word statistics drawn from web-scale data rather than trying to determine a word's meaning. This approach differs from earlier research that depended on NLP algorithms: it focuses on data. Hadoop and MapReduce are used to handle the web-scale data.


Large-Scale Text Classification with Deep Neural Networks (깊은 신경망 기반 대용량 텍스트 데이터 분류 기술)

  • Jo, Hwiyeol;Kim, Jin-Hwa;Kim, Kyung-Min;Chang, Jeong-Ho;Eom, Jae-Hong;Zhang, Byoung-Tak / KIISE Transactions on Computing Practices / v.23 no.5 / pp.322-327 / 2017
  • The classification problem in the field of Natural Language Processing has been studied for a long time. Continuing our previous research, which classified large-scale text using Convolutional Neural Networks (CNN), we implemented Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Units (GRU). The experimental results revealed that the performance of the classification algorithms, in increasing order, was Multinomial Naïve Bayesian Classifier < Support Vector Machine (SVM) < LSTM < CNN < GRU. The result can be interpreted as follows. First, CNN outperformed LSTM, so the text classification problem may be related more to feature extraction than to natural language understanding. Second, judging from the results, GRU extracted features better than LSTM. Finally, the result that GRU outperformed CNN implies that text classification algorithms should consider both feature extraction and sequential information. We present the results of fine-tuning the deep neural networks to provide future researchers with some intuition regarding natural language processing.
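For readers unfamiliar with the gating mechanism the abstract compares, a single GRU step in its standard textbook formulation can be sketched as follows. The weights here are random placeholders, not trained parameters, and the function is an illustration rather than the paper's implementation.

```python
import numpy as np

def gru_cell(x, h, W, U, b):
    """One GRU step. W, U, b each stack the update (z), reset (r),
    and candidate (n) parameters, in that order."""
    Wz, Wr, Wn = W
    Uz, Ur, Un = U
    bz, br, bn = b
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(x @ Wz + h @ Uz + bz)        # update gate
    r = sigmoid(x @ Wr + h @ Ur + br)        # reset gate
    n = np.tanh(x @ Wn + (r * h) @ Un + bn)  # candidate state
    return (1 - z) * n + z * h               # new hidden state
```

The update gate z interpolates between the old state and the candidate, which is how the GRU carries sequential information forward while still acting as a learned feature extractor over the input.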

Parallel Processing of K-means Clustering Algorithm for Unsupervised Classification of Large Satellite Imagery (대용량 위성영상의 무감독 분류를 위한 K-means 군집화 알고리즘의 병렬처리)

  • Han, Soohee / Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.35 no.3 / pp.187-194 / 2017
  • The present study introduces a method to parallelize the k-means clustering algorithm for fast unsupervised classification of large satellite imagery. A representative algorithm for unsupervised classification, k-means clustering is usually applied as a preprocessing step before supervised classification, but it can show clear advantages under parallel processing because of its high computational intensity and limited need for human intervention. Parallel processing code was developed using multi-threading based on OpenMP. The experiments involved a PC with an 8-core integrated CPU, and tested a 7-band, 30 m resolution image from LANDSAT 8 OLI and an 8-band, 10 m resolution image from Sentinel-2A. Parallel processing was six times faster than sequential processing when using 10 classes. To check the consistency of parallel and sequential processing, the cluster centers, the numbers of pixels classified into each class, and the classified images were mutually compared, yielding identical results. The present study is meaningful because it proves that the performance of large satellite image processing can be significantly improved by parallel processing. It also shows that while parallel processing is easy to implement with multi-threading based on OpenMP, it must be carefully designed to prevent false sharing.