• Title/Summary/Keyword: representative vector (대표 벡터)

Search Results: 300 (processing time: 0.032 seconds)

Effective Drought Prediction Based on Machine Learning (머신러닝 기반 효과적인 가뭄예측)

  • Kim, Kyosik;Yoo, Jae Hwan;Kim, Byunghyun;Han, Kun-Yeun
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.326-326 / 2021
  • Many technical and academic attempts have been made to predict droughts, which occur over wide areas for long periods. In this study, future droughts were forecast using both scenario-based drought outlook methods and non-scenario-based methods that predict drought in real time, among the approaches for forecasting droughts with complex time series. As a scenario-based method, the 2009 PDSI (Palmer Drought Severity Index) was computed from three-month GCM (General Circulation Model) forecasts to produce short-term predictions of drought severity. Non-scenario-based prediction was performed using statistical methods and deterministic numerical methods based on physical models. As a representative statistical approach, to overcome the predictive limitations of the ARIMA (Autoregressive Integrated Moving Average) model, the SPI was estimated using support vector regression (SVR) and a wavelet neural network. The optimal model structure was selected using RMSE (root mean square error), MAE (mean absolute error), and R (correlation coefficient), and droughts were forecast with lead times of 1-6 months. Using the SPI, Markov chain and log-linear models were applied to verify the accuracy of SPI-based drought prediction, and a neuro-fuzzy model was applied to the Anatolia region of Turkey to predict droughts from monthly mean precipitation and SPI over 1964-2006. As drought frequency and patterns change irregularly and the regional polarization of precipitation intensifies, the demand for more accurate drought prediction is growing. In this study, we aim to develop an improved prediction model by applying the monthly and daily SPEI (Standardized Precipitation Evapotranspiration Index), a measure of meteorological drought, to machine learning models for complex, nonlinear drought patterns.

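
The SVR-based drought-index forecasting summarized above can be sketched in a few lines. This is a minimal illustration on a synthetic monthly series, not the study's actual model; the series, the 6-month lag window, and the 3-month lead time are assumed for illustration, using scikit-learn's SVR:

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic monthly drought-index series (a stand-in for SPEI; real data not included)
rng = np.random.default_rng(0)
t = np.arange(240)
spei = np.sin(2 * np.pi * t / 12) + 0.3 * rng.standard_normal(240)

# Supervised framing: predict the index `lead` months ahead from the last `lags` values
lead, lags = 3, 6
X = np.array([spei[i:i + lags] for i in range(len(spei) - lags - lead)])
y = spei[lags + lead:]

split = 180
model = SVR(kernel="rbf", C=1.0).fit(X[:split], y[:split])
pred = model.predict(X[split:])

# Skill measures named in the abstract
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
mae = np.mean(np.abs(pred - y[split:]))
print(f"RMSE={rmse:.3f}, MAE={mae:.3f}")
```

Selecting a model structure by RMSE/MAE/R, as the abstract describes, would amount to repeating this fit over different lag and lead settings and keeping the best-scoring one.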

Performance Evaluation of Multilinear Regression Empirical Formula and Machine Learning Model for Prediction of Two-dimensional Transverse Dispersion Coefficient (다중선형회귀경험식과 머신러닝모델의 2차원 횡 분산계수 예측성능 평가)

  • Lee, Sun Mi;Park, Inhwan
    • Proceedings of the Korea Water Resources Association Conference / 2022.05a / pp.172-172 / 2022
  • The dispersion coefficient is a representative parameter for characterizing the mixing capacity of pollutants in rivers. In particular, when transverse mixing is important, as in predicting the mixing of effluent from wastewater treatment plants, the two-dimensional transverse dispersion coefficient must be determined with the river's geometric and hydraulic characteristics taken into account. Previous studies derived empirical formulas from tracer test results to estimate this coefficient. Deriving an empirical formula by regression requires sufficient data, but the number of two-dimensional tracer experiments is limited, so reliable formulas are hard to obtain. In this study, we therefore amplify the transverse dispersion coefficient experimental data using the SMOTE technique and derive an empirical formula from the augmented data. We also apply various machine learning techniques to complement the limitations of the formula derived by multiple linear regression, and propose a machine learning method suited to estimating the transverse dispersion coefficient. From existing tracer test data, we collected a data set of width-to-depth ratio, velocity-to-shear-velocity ratio, and transverse dispersion coefficient, and generated the data groups needed for regression and machine learning by applying the SMOTE algorithm. Including the newly generated data, an empirical formula for the transverse dispersion coefficient was determined by multiple linear regression, and its accuracy was compared with existing formulas. Because the regression-derived formula showed limits in its prediction range, machine learning techniques were applied and their prediction performance evaluated against multiple linear regression, using support vector regression (SVR), K-nearest-neighbor regression (KNN-R), and random forest regression (RFR). The transverse dispersion coefficients obtained from the three machine learning techniques were compared with those from the empirical formulas to assess prediction performance. From this, we derived a data preprocessing technique for estimating the two-dimensional transverse dispersion coefficient from a limited experimental data set, together with a suitable machine learning procedure and an optimal learning method.

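
The augment-then-compare workflow described above can be sketched roughly as follows. The data are synthetic stand-ins for the tracer-experiment variables, and the interpolation helper is a simplified SMOTE-style step written for illustration, not the exact algorithm used in the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Small "tracer experiment" stand-in: features are width-to-depth ratio and
# velocity-to-shear-velocity ratio; target is a dimensionless dispersion coefficient
X = rng.uniform([10.0, 5.0], [60.0, 20.0], size=(30, 2))
y = 0.02 * X[:, 0] ** 0.6 * X[:, 1] ** 0.3 + 0.05 * rng.standard_normal(30)

def smote_like(X, y, n_new, k=3, rng=rng):
    """SMOTE-style interpolation between each sample and a random near neighbor."""
    X_new, y_new = [], []
    for _ in range(n_new):
        i = rng.integers(len(X))
        d = np.linalg.norm(X - X[i], axis=1)
        j = rng.choice(np.argsort(d)[1:k + 1])  # a random one of the k nearest
        a = rng.uniform()
        X_new.append(X[i] + a * (X[j] - X[i]))
        y_new.append(y[i] + a * (y[j] - y[i]))
    return np.vstack([X, X_new]), np.concatenate([y, y_new])

X_aug, y_aug = smote_like(X, y, n_new=60)
for name, m in [("SVR", SVR()),
                ("KNN-R", KNeighborsRegressor(n_neighbors=3)),
                ("RFR", RandomForestRegressor(n_estimators=50, random_state=0))]:
    m.fit(X_aug, y_aug)
    print(name, "R^2 on original samples:", round(m.score(X, y), 3))
```

In practice a regression-aware variant (e.g. SMOGN) or the imbalanced-learn SMOTE implementation would replace the toy helper above.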

A Study of BWE-Prediction-Based Split-Band Coding Scheme (BWE 예측기반 대역분할 부호화기에 대한 연구)

  • Song, Geun-Bae;Kim, Austin
    • The Journal of the Acoustical Society of Korea / v.27 no.6 / pp.309-318 / 2008
  • In this paper, we discuss a method for efficiently coding the high-band signal in the split-band coding approach, where an input signal is divided into two bands and each band may then be encoded separately. It is well known, especially from research on artificial bandwidth extension (BWE), that the two bands are correlated to some degree, so some coding gain could be achieved by exploiting this correlation. In the BWE-prediction-based coding approach, a simple linear BWE function may not yield optimal results because the correlation is non-linear in character. In this paper, we investigate the new coding scheme in more detail. A few representative BWE functions, both linear and non-linear, are investigated and compared to find one suitable for coding. In addition, we discuss whether additional gains can be obtained by combining the BWE coder with a predictive vector quantizer that exploits temporal correlation.
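
The point about linear versus non-linear BWE functions can be illustrated numerically: a least-squares linear map from low-band features to a high-band envelope value, compared with a quadratic feature expansion. The data and feature choices below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in data: low-band feature vectors and a high-band envelope value that
# depends on them non-linearly (the abstract's point: the cross-band
# correlation is not purely linear)
X = rng.uniform(-1, 1, size=(200, 4))
y = np.tanh(X @ np.array([0.8, -0.5, 0.3, 0.1])) + 0.05 * rng.standard_normal(200)

# Linear BWE function: least-squares fit with an intercept column
A = np.hstack([X, np.ones((200, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
err_lin = np.mean((A @ w - y) ** 2)

# A simple non-linear alternative: quadratic feature expansion
A2 = np.hstack([A, X ** 2, X[:, :1] * X[:, 1:2]])
w2, *_ = np.linalg.lstsq(A2, y, rcond=None)
err_quad = np.mean((A2 @ w2 - y) ** 2)

print(f"linear MSE={err_lin:.4f}, quadratic MSE={err_quad:.4f}")
```

Since the quadratic design matrix nests the linear one, its training error can only be equal or lower; the size of the gap is a rough measure of how non-linear the cross-band relation is.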

Psychology analyzing system using spectrum component density ratio of EEG based on BCI-TAT (EEG 대역별 스펙트럼 활성 비를 활용한 BCI-TAT 기반 심리 분석 시스템)

  • Shin, Jeon-Hoon;Jin, Sang-Hyeon
    • Journal of the Institute of Convergence Signal Processing / v.11 no.2 / pp.112-124 / 2010
  • Studies that can resolve the problems of subjective psychiatric analysis must be performed, and indeed they are in progress. However, many problems remain in research on mentality examination, which should be the foundation for finding such resolutions. One of the biggest problems of the conventional system is that psychiatrists' opinions differ depending on their education and experience: there is no objective standard for the subjects and no effective method of psychiatric analysis. Another characteristic of such examinations is that they cannot be applied to people with disabilities, foreigners, or infants, since the examination is based on conversation. The objective of this study is to standardize TAT (Thematic Apperception Test) analysis with the Ballken index method so that subjective data can be excluded and the examination thereby objectified. Furthermore, the objective results and the patient's brain-wave patterns are read in a BCI (Brain Computer Interface) examination environment and compared with brain-wave characteristic vectors to build a brain-wave characteristic vector DB. Such a DB can then be utilized during examination and diagnosis to create an objective examination method and a standardized diagnosis system. Thus, mentality examination can be performed using only brain-wave scans with the BCI-based TAT system.
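
The band-wise spectral activity ratios the title refers to can be computed along these lines; the sampling rate, band edges, and synthetic "EEG" are illustrative assumptions:

```python
import numpy as np

fs = 256  # sampling rate in Hz (an assumed value)
rng = np.random.default_rng(3)
t = np.arange(fs * 4) / fs

# Synthetic EEG: a 10 Hz (alpha) rhythm plus broadband noise
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Band power from the FFT power spectrum
spec = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)

def band_power(lo, hi):
    return spec[(freqs >= lo) & (freqs < hi)].sum()

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
power = {name: band_power(lo, hi) for name, (lo, hi) in bands.items()}
total = sum(power.values())
ratios = {name: p / total for name, p in power.items()}
print(ratios)  # the alpha band dominates for this synthetic signal
```

A vector of such ratios per recording is one plausible form for the "brain-wave characteristic vectors" stored in the DB the abstract describes.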

A New Memory-based Learning using Dynamic Partition Averaging (동적 분할 평균을 이용한 새로운 메모리 기반 학습기법)

  • Yih, Hyeong-Il
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.4 / pp.456-462 / 2008
  • Classification, in which a new data item is assigned to one of the given classes, is one of the most widely used data mining techniques. Memory-Based Reasoning (MBR) is a reasoning method for classification problems. MBR simply keeps many patterns, represented by their original feature vectors, in memory without reasoning rules, and uses a distance function to classify a test pattern. As the number of training patterns grows in MBR, both the memory required and the amount of computation needed for reasoning increase. NGE, FPA, and RPA are well-known MBR algorithms proven to show satisfactory performance, but they have serious problems with memory usage and lengthy computation. In this paper, we propose the DPA (Dynamic Partition Averaging) algorithm. It chooses partition points by calculating the GINI index over the entire pattern space and partitions the space dynamically. If the classes included in a partition are unique, it generates a representative pattern from the partition; otherwise, it partitions the relevant partitions repeatedly by the same method. The proposed method exhibits performance comparable to k-NN with far fewer patterns, and better results than the EACH system, which implements the NGE theory, as well as FPA and RPA.
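
The partitioning idea can be sketched as follows: compute the Gini impurity over candidate splits, split until partitions are class-pure, and average each pure partition into one representative pattern. This is a toy reconstruction from the abstract, not the paper's DPA implementation:

```python
import numpy as np

def gini(labels):
    """Gini impurity of a label array."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def dpa(X, y, depth=0, max_depth=8):
    """Recursively split at the best Gini split; emit one class-averaged
    representative pattern per pure (or depth-limited) partition."""
    if gini(y) == 0.0 or depth == max_depth:
        return [(X.mean(axis=0), np.bincount(y).argmax())]
    best = None
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f])[:-1]:
            mask = X[:, f] <= thr
            score = mask.mean() * gini(y[mask]) + (~mask).mean() * gini(y[~mask])
            if best is None or score < best[0]:
                best = (score, f, thr)
    _, f, thr = best
    mask = X[:, f] <= thr
    return (dpa(X[mask], y[mask], depth + 1, max_depth)
            + dpa(X[~mask], y[~mask], depth + 1, max_depth))

# Two well-separated clusters: DPA-style averaging compresses them to few prototypes
X = np.vstack([np.random.default_rng(4).normal(0, 0.3, (20, 2)),
               np.random.default_rng(5).normal(3, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
prototypes = dpa(X, y)
print(len(prototypes), "representative patterns for", len(X), "training patterns")
```

A test pattern would then be classified by the label of its nearest prototype, as in ordinary MBR but over a much smaller memory.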

A Semantic Text Model with Wikipedia-based Concept Space (위키피디어 기반 개념 공간을 가지는 시멘틱 텍스트 모델)

  • Kim, Han-Joon;Chang, Jae-Young
    • The Journal of Society for e-Business Studies / v.19 no.3 / pp.107-123 / 2014
  • Current text mining techniques suffer from the problem that conventional text representation models cannot express the semantic or conceptual information of documents written in natural language. Conventional models, including the vector space model, the Boolean model, statistical models, and the tensor space model, represent documents as bags of words: they express documents only with term literals for indexing and frequency-based weights for those terms, ignoring the semantic, sequential-order, and structural information of terms. Most text mining techniques have been developed assuming that documents are represented in such 'bag-of-words' models. Confronting the big data era, however, a new text representation paradigm is required that can analyse huge numbers of documents more precisely. Our text model regards the 'concept' as an independent space on a par with the 'term' and 'document' spaces used in the vector space model, and expresses the relatedness among the three spaces. To develop the concept space, we use Wikipedia data, where each article defines a single concept. Consequently, a document collection is represented as a 3-order tensor with semantic information, and we call the proposed model the text cuboid model. Through experiments on the popular 20NewsGroup document corpus, we demonstrate the superiority of the proposed text model in document clustering and concept clustering.
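
A 3-order term x document x concept tensor of the kind described can be built as follows, with a toy corpus and hand-made "Wikipedia" concepts standing in for the real data:

```python
import numpy as np

# Toy corpus, vocabulary, and Wikipedia-like concepts (all illustrative)
docs = [["apple", "fruit"], ["apple", "iphone"], ["banana", "fruit"]]
terms = sorted({t for d in docs for t in d})
concepts = {"Fruit": {"apple", "banana", "fruit"}, "Apple_Inc.": {"apple", "iphone"}}
cnames = list(concepts)

# 3-order tensor: term x document x concept; an entry counts a term occurrence
# in a document when that term also belongs to the concept's article
T = np.zeros((len(terms), len(docs), len(cnames)))
for j, doc in enumerate(docs):
    for t in doc:
        i = terms.index(t)
        for k, c in enumerate(cnames):
            if t in concepts[c]:
                T[i, j, k] += 1

print("tensor shape:", T.shape)  # (terms, documents, concepts)
```

Slicing this tensor along the concept axis gives a concept-weighted view of each document, which is the extra information a plain term-document matrix lacks.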

Recognition of Superimposed Patterns with Selective Attention based on SVM (SVM기반의 선택적 주의집중을 이용한 중첩 패턴 인식)

  • Bae, Kyu-Chan;Park, Hyung-Min;Oh, Sang-Hoon;Choi, Youg-Sun;Lee, Soo-Young
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.5 s.305 / pp.123-136 / 2005
  • We propose a recognition system for superimposed patterns based on a selective attention model and an SVM, which outperforms an artificial neural network. The proposed selective attention model places an attention layer in front of the SVM, modifying the SVM's inputs and acting as a selective filter. The philosophy behind the model is to find a criterion for stopping training and to define a confidence measure for the selective attention's outcome. A support vector represents the surrounding sample vectors, and the support vector closest to the initial input vector under consideration is chosen. The minimal Euclidean distance between the attention-modified input vector and the chosen support vector defines the stopping criterion. The confidence measure of selective attention is difficult to define with the common selective attention model; a new way of defining it can be set under the constraint that each modified input pixel does not cross the boundary of the original input pixel, which increases the range of applicable information. This method uses the following information: the Euclidean distance between the input pattern and the modified pattern, the output of the SVM, and the support-vector output of the hidden neuron closest to the initial input pattern. For the recognition experiment, 45 different combinations of USPS digit data are used. Better recognition performance is obtained when selective attention is applied together with the SVM than with the SVM alone, and the proposed selective attention also outperforms common selective attention.
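
The distance-to-nearest-support-vector quantity underlying the stopping criterion can be sketched with scikit-learn's SVC on toy two-class data (a stand-in for the USPS digits; the attention layer itself is omitted):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)

# Two-class toy data in place of USPS digit patterns
X = np.vstack([rng.normal(-1, 0.5, (30, 2)), rng.normal(1, 0.5, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
clf = SVC(kernel="rbf").fit(X, y)

# For an input, pick the closest support vector; the Euclidean distance between
# the (attention-modified) input and that support vector is the quantity the
# abstract uses as a stopping criterion
x = np.array([0.2, -0.1])
d = np.linalg.norm(clf.support_vectors_ - x, axis=1)
nearest_sv = clf.support_vectors_[d.argmin()]
print("distance to nearest support vector:", round(float(d.min()), 3))
```

In the full system this distance would be re-evaluated as the attention layer iteratively modifies the input, and training would stop once it is minimized.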

Abnormal Crowd Behavior Detection via H.264 Compression and SVDD in Video Surveillance System (H.264 압축과 SVDD를 이용한 영상 감시 시스템에서의 비정상 집단행동 탐지)

  • Oh, Seung-Geun;Lee, Jong-Uk;Chung, Yongw-Ha;Park, Dai-Hee
    • Journal of the Korea Institute of Information Security & Cryptology / v.21 no.6 / pp.183-190 / 2011
  • In this paper, we propose a prototype system for abnormal sound detection and identification that detects and recognizes abnormal situations by analyzing audio information arriving in real time from CCTV cameras in a surveillance environment. The proposed system is composed of two layers. The first layer is a one-class support vector machine, i.e., support vector data description (SVDD), which rapidly detects abnormal situations and alerts the manager. The second layer classifies the detected abnormal sound into predefined classes such as 'gun', 'scream', 'siren', 'crash', and 'bomb' via a sparse representation classifier (SRC) to cope with emergency situations. The system is designed hierarchically as a combination of SVDD and SRC, which has the following desired characteristics: 1) by quickly detecting abnormal sound using an SVDD trained only on normal sound, it avoids unnecessary classification of normal sound; 2) it ensures reliable performance via SRC, which has been successfully applied in face recognition; and 3) with the intrinsic incremental learning capability of SRC, it can actively adapt to changes in the sound database. The experimental results, together with qualitative analysis, illustrate the efficiency of the proposed method.
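
The first layer, a one-class detector trained only on normal data, can be sketched with scikit-learn's OneClassSVM standing in for SVDD (the two are closely related but not identical); the feature vectors here are synthetic:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(7)

# Layer 1: fit a one-class boundary around "normal" sound feature vectors only
normal = rng.normal(0, 1, (200, 4))
detector = OneClassSVM(nu=0.05, gamma="scale").fit(normal)

# At run time: +1 = normal (no further work), -1 = abnormal (hand off to the
# second-layer classifier for 'gun'/'scream'/... identification)
test = np.vstack([rng.normal(0, 1, (5, 4)), rng.normal(6, 1, (5, 4))])
flags = detector.predict(test)
print(flags)
```

Only the samples flagged -1 would be passed to the SRC layer, which is what spares the system from classifying every normal frame.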

A Vision Transformer Based Recommender System Using Side Information (부가 정보를 활용한 비전 트랜스포머 기반의 추천시스템)

  • Kwon, Yujin;Choi, Minseok;Cho, Yoonho
    • Journal of Intelligence and Information Systems / v.28 no.3 / pp.119-137 / 2022
  • Recent recommender system studies apply various deep learning models to better represent user-item interactions. One noteworthy study is ONCF (Outer product-based Neural Collaborative Filtering), which builds a two-dimensional interaction map via the outer product and employs CNNs (Convolutional Neural Networks) to learn high-order correlations from the map. However, ONCF's recommendation performance is limited by problems with CNNs and by the absence of side information. ONCF's CNN has an inductive-bias problem that causes poor performance on data whose distribution does not appear in the training data. This paper proposes employing a Vision Transformer (ViT) instead of the vanilla CNN used in ONCF, since ViT has outperformed state-of-the-art CNNs in many image classification settings. In addition, we propose a new architecture to reflect the side information that ONCF did not consider. Unlike previous studies that inject side information into a neural network through simple input combinations, this study uses an independent auxiliary classifier to reflect side information more effectively. ONCF used a single latent vector for each user and item; in this study, a channel is constructed from multiple vectors so the model can learn more diverse representations and obtain an ensemble effect. Experiments showed that our deep learning model improved recommendation performance compared to ONCF.
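
The ONCF-style interaction map, and the multi-vector channel variant proposed here, reduce to outer products, which can be sketched as follows (embedding and channel sizes are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(8)
d = 8  # embedding size (illustrative)

# ONCF-style interaction map: outer product of a user and an item embedding,
# giving a 2-D "image" for a CNN or ViT to consume
user = rng.standard_normal(d)
item = rng.standard_normal(d)
interaction_map = np.outer(user, item)          # shape (d, d)

# The multi-vector variant sketched in the abstract: several embeddings per
# side stacked into channels, so the downstream model sees a (c, d, d) input
c = 3
users = rng.standard_normal((c, d))
items = rng.standard_normal((c, d))
channels = np.einsum("ci,cj->cij", users, items)
print(interaction_map.shape, channels.shape)
```

Each channel is an independent interaction map, which is what lets the model learn several views of the same user-item pair and average them like an ensemble.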

Registration Method between High Resolution Optical and SAR Images (고해상도 광학영상과 SAR 영상 간 정합 기법)

  • Jeon, Hyeongju;Kim, Yongil
    • Korean Journal of Remote Sensing / v.34 no.5 / pp.739-747 / 2018
  • Integration analysis of multi-sensor satellite images is becoming increasingly important, and its first step is image registration between the sensors. SIFT (Scale Invariant Feature Transform) is a representative image registration method. However, optical and SAR (Synthetic Aperture Radar) images differ in sensor attitude and radiometric characteristics at acquisition, and the radiometric relationship between the images is nonlinear, making conventional methods such as SIFT difficult to apply. To overcome this limitation, we propose a modified method that combines SAR-SIFT with the shape descriptor DLSS (Dense Local Self-Similarity). We conducted an experiment using two pairs of Cosmo-SkyMed and KOMPSAT-2 images collected over Daejeon, Korea, an area with a high density of buildings. Compared with conventional methods such as SIFT and SAR-SIFT, the proposed method extracted correct matching points, and it gave quantitatively reasonable results, with RMSEs of 1.66 m and 2.45 m over the two image pairs.
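
A minimal local self-similarity descriptor in the spirit of DLSS can be sketched as follows; the patch size, offset grid, and exponential normalization are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def local_self_similarity(img, r, c, patch=3, radius=6, step=3):
    """Minimal local self-similarity descriptor (an LSS/DLSS-style stand-in):
    SSD between the patch at (r, c) and patches at a grid of offsets around it,
    mapped through exp so that 1.0 means an identical patch."""
    h = patch // 2
    center = img[r - h:r + h + 1, c - h:c + h + 1]
    ssd = []
    for dr in range(-radius, radius + 1, step):
        for dc in range(-radius, radius + 1, step):
            p = img[r + dr - h:r + dr + h + 1, c + dc - h:c + dc + h + 1]
            ssd.append(np.sum((p - center) ** 2))
    ssd = np.array(ssd)
    return np.exp(-ssd / max(ssd.max(), 1e-9))

rng = np.random.default_rng(9)
img = rng.random((40, 40))
v = local_self_similarity(img, 20, 20)
print(v.shape)  # 5x5 grid of offsets -> 25-dimensional descriptor
```

Because the descriptor encodes only how a patch resembles its own neighborhood, not raw intensities, it is comparatively insensitive to the nonlinear radiometric differences between optical and SAR images, which is why a DLSS-style descriptor helps here.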