• Title/Summary/Keyword: Distributed Machine Learning (분산 기계 학습)


K-Means Clustering in the PCA Subspace using an Unified Measure (통합 측도를 사용한 주성분해석 부공간에서의 k-평균 군집화 방법)

  • Yoo, Jae-Hung
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.4 / pp.703-708 / 2022
  • K-means clustering is a representative clustering technique. However, it lacks a unified way to combine a performance evaluation measure with a method for determining the minimum number of clusters. In this paper, a method for numerically determining the minimum number of clusters is introduced, with the explained variance presented as the unified measure. We propose performing k-means clustering in the PCA subspace so that the minimum number of clusters and the explained-variance threshold are satisfied simultaneously. The paper also aims to explain, in principle, why principal component analysis and k-means clustering are performed sequentially in pattern recognition and machine learning.
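
A minimal sketch of the sequence described above, using scikit-learn: choose the number of principal components by an explained-variance threshold, then run k-means in that PCA subspace. The 0.9 threshold, the Iris data, and k = 3 are placeholders, not values from the paper.

```python
# Sketch: k-means clustering in the PCA subspace, with the number of components
# chosen so that cumulative explained variance reaches an assumed threshold.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

X = load_iris().data          # example data only
threshold = 0.9               # assumed explained-variance threshold

# Smallest number of components whose cumulative explained variance >= threshold.
pca = PCA().fit(X)
n_components = int(np.searchsorted(np.cumsum(pca.explained_variance_ratio_), threshold)) + 1
Z = PCA(n_components=n_components).fit_transform(X)

# k-means in the reduced subspace; the number of clusters is likewise assumed.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
print(n_components, np.bincount(labels))
```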

A study on identifying factors of poultry complex odor using machine learning models (기계학습 모형을 이용한 양계 복합 악취의 요인 파악에 대한 연구)

  • Doyun Kim;Jaehoon Kim;Junsu Park;Siyoung Seo;Jaeeun Kim;Byeong-jun Yang;Tae-Young Heo
    • The Korean Journal of Applied Statistics / v.37 no.4 / pp.485-497 / 2024
  • As modern society develops, the number of livestock is increasing, and the resulting odor is recognized as a serious social problem. In particular, consumption of poultry meat such as chicken, duck, and turkey is expected to rise steeply, making odor near poultry farms an increasingly pressing issue. To address the problem, it is important to understand how individual odor components influence the complex odor. In this study, odor data obtained from poultry farms were used to predict the complex odor with machine learning models and to analyze the influence of each component. Furthermore, we analyzed differences in the amounts of the odor components at the site boundary, the compost site, and inside and outside the farm using analysis of variance. The analysis showed that ammonia, trimethylamine, dimethyl disulfide, and acetaldehyde have a strong effect on the complex odor. In particular, the measured amounts of ammonia, trimethylamine, and acetaldehyde differ by location.
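
As a rough illustration of the two analyses mentioned in the abstract, the sketch below fits a machine learning model to rank odor components by their influence on the complex odor and then runs a one-way ANOVA across sampling locations. The random forest choice, file name, and column names are assumptions, not the paper's actual models or data.

```python
# Sketch only: rank odor components by importance for predicting the complex
# odor, then test for location differences with one-way ANOVA. Names invented.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from scipy.stats import f_oneway

df = pd.read_csv("poultry_odor.csv")      # hypothetical data file
components = ["ammonia", "trimethylamine", "dimethyl_disulfide", "acetaldehyde"]

model = RandomForestRegressor(n_estimators=300, random_state=0)
model.fit(df[components], df["complex_odor"])
print(dict(zip(components, model.feature_importances_.round(3))))

# One-way ANOVA: does the ammonia level differ across sampling locations?
groups = [g["ammonia"].values for _, g in df.groupby("location")]
print(f_oneway(*groups))
```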

Predicting Forest Gross Primary Production Using Machine Learning Algorithms (머신러닝 기법의 산림 총일차생산성 예측 모델 비교)

  • Lee, Bora;Jang, Keunchang;Kim, Eunsook;Kang, Minseok;Chun, Jung-Hwa;Lim, Jong-Hwan
    • Korean Journal of Agricultural and Forest Meteorology / v.21 no.1 / pp.29-41 / 2019
  • Terrestrial Gross Primary Production (GPP) is the largest global carbon flux, and forest ecosystems are important because of their ability to store significantly larger amounts of carbon than other terrestrial ecosystems. There have been several attempts to estimate GPP using mechanism-based models. However, mechanism-based models that include biological, chemical, and physical processes are limited by their lack of flexibility in predicting non-stationary ecological processes caused by local and global change. Instead, mechanism-free methods are strongly recommended for estimating nonlinear dynamics that occur in nature, such as GPP. Therefore, we used mechanism-free machine learning techniques to estimate daily GPP. In this study, support vector machine (SVM), random forest (RF), and artificial neural network (ANN) models were used and compared with a traditional multiple linear regression model (LM). MODIS products and meteorological parameters from eddy covariance data were employed to train the machine learning and LM models from 2006 to 2013. The GPP prediction models were compared with daily GPP from eddy covariance measurements in a deciduous forest in South Korea in 2014 and 2015. Statistical measures including the correlation coefficient (R), root mean square error (RMSE), and mean squared error (MSE) were used to evaluate model performance. In general, the machine-learning models (R = 0.85 - 0.93, MSE = 1.00 - 2.05, p < 0.001) showed better performance than the linear regression model (R = 0.82 - 0.92, MSE = 1.24 - 2.45, p < 0.001). These results provide insight into the high predictability of mechanism-free machine-learning models combined with remote sensing, and their potential for wider use in predicting non-stationary ecological processes such as seasonal GPP.
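
A compact sketch of the model comparison described above: SVM, random forest, an artificial neural network, and multiple linear regression trained on 2006-2013 data and evaluated on later years with R, MSE, and RMSE. The predictor columns and the CSV file are placeholders, not the paper's MODIS and eddy covariance inputs.

```python
# Sketch: compare SVM, RF, ANN, and linear regression for daily GPP prediction.
import numpy as np
import pandas as pd
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

df = pd.read_csv("gpp_daily.csv")                    # hypothetical file
features = ["ndvi", "lai", "air_temp", "solar_rad"]  # assumed predictors
train, test = df[df.year <= 2013], df[df.year >= 2014]

models = {
    "LM": LinearRegression(),
    "SVM": SVR(),
    "RF": RandomForestRegressor(n_estimators=500, random_state=0),
    "ANN": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
}
for name, m in models.items():
    m.fit(train[features], train["gpp"])
    pred = m.predict(test[features])
    r = np.corrcoef(test["gpp"], pred)[0, 1]
    mse = mean_squared_error(test["gpp"], pred)
    print(f"{name}: R={r:.2f}, MSE={mse:.2f}, RMSE={np.sqrt(mse):.2f}")
```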

k-NN Query Optimization Scheme Based on Machine Learning Using a DNN Model (DNN 모델을 이용한 기계 학습 기반 k-최근접 질의 처리 최적화 기법)

  • We, Ji-Won;Choi, Do-Jin;Lee, Hyeon-Byeong;Lim, Jong-Tae;Lim, Hun-Jin;Bok, Kyoung-Soo;Yoo, Jae-Soo
    • The Journal of the Korea Contents Association / v.20 no.10 / pp.715-725 / 2020
  • In this paper, we propose an optimization scheme for the k-Nearest Neighbor (k-NN) query, which finds the k objects closest to a query in a space of high-dimensional feature vectors. The k-NN query is converted into, and processed as, a range query over a range that is likely to contain the k results. We propose an optimization scheme that uses a DNN model to derive an optimal range, reducing processing cost and accelerating search speed. The overall system consists of online and offline modules. In the online module, a query is processed when it is issued by a client. In the offline module, an optimal range is derived for the query using the DNN model and delivered to the online module. Various performance evaluations show that the proposed scheme outperforms existing schemes.
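
The offline/online split described above might look roughly like the following sketch: offline, a regressor learns to map a query vector to a radius likely to contain its k nearest neighbors; online, the k-NN query is answered as a range query with that predicted radius. The MLP regressor stands in for the paper's DNN, and the data, padding factor, and sizes are all assumptions.

```python
# Sketch: learn a radius that should contain the k nearest neighbors, then
# answer the k-NN query as a range query. Everything here is synthetic.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
data = rng.normal(size=(10_000, 16))          # indexed feature vectors
k = 10
index = NearestNeighbors().fit(data)

# Offline: record the true distance to the k-th neighbor for sample queries
# and train a regressor that predicts this radius from the query vector.
sample_q = rng.normal(size=(2_000, 16))
dists, _ = index.kneighbors(sample_q, n_neighbors=k)
radius_model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500,
                            random_state=0).fit(sample_q, dists[:, -1])

# Online: predict a slightly padded radius and run the cheaper range query;
# a real system would fall back to a larger radius if fewer than k are found.
q = rng.normal(size=(1, 16))
r = float(radius_model.predict(q)[0]) * 1.1   # padding factor assumed
_, idx = index.radius_neighbors(q, radius=r, sort_results=True)
print(idx[0][:k])
```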

Machine Learning Based Prediction of Bitcoin Mining Difficulty (기계학습 기반 비트코인 채굴 난이도 예측 연구)

  • Lee, Joon-won;Kwon, Taekyoung
    • Journal of the Korea Institute of Information Security & Cryptology / v.29 no.1 / pp.225-234 / 2019
  • Bitcoin is a cryptocurrency with characteristics such as decentralization and a distributed ledger, and these features are maintained through a mining system called "proof of work". In the mining system, the mining difficulty is adjusted to keep the block generation time constant. However, Bitcoin's current method of updating the mining difficulty does not reflect future hash power, so the block generation time cannot be kept constant and an error arises between the designed time and the actual time. This increases the inconsistency between block generation and the real world and causes problems such as missed transaction deadlines and exposure to coin-hopping attacks. Previous studies aimed at keeping the block generation time constant still suffer from this error. In this paper, we propose a machine-learning-based method to reduce the error. By training on past hash power, we predict future hash power and adjust the mining difficulty accordingly. Our experimental results show that the error rate can be reduced by about 36% compared with the current method.
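
A toy sketch of the idea in the abstract: predict the next period's hash power from recent history and fold the prediction into the difficulty update so that the expected block interval stays near the 10-minute target. The linear-trend predictor, window length, and synthetic data are assumptions; the paper's model and Bitcoin's actual retargeting rules are more involved.

```python
# Toy sketch: predict next-epoch hash power and set the difficulty so that the
# expected block time stays near the target. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

TARGET_BLOCK_TIME = 600.0                             # Bitcoin's 10-minute target (s)
rng = np.random.default_rng(0)
hash_power = 1e18 * 1.02 ** np.arange(60) * rng.normal(1.0, 0.02, 60)  # synthetic H/s

def predict_next_hash_power(history, window=8):
    """Extrapolate one epoch ahead from a linear trend over the last `window` epochs."""
    y = history[-window:]
    x = np.arange(window).reshape(-1, 1)
    return float(LinearRegression().fit(x, y).predict([[window]])[0])

# Expected block time is roughly difficulty * 2**32 / hash_power, so choosing
# difficulty proportional to the predicted hash power keeps the interval on target.
predicted = predict_next_hash_power(hash_power)
next_difficulty = predicted * TARGET_BLOCK_TIME / 2**32
print(f"predicted hash power: {predicted:.3e} H/s, next difficulty: {next_difficulty:.3e}")
```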

Prediction of Soil Moisture with Open Source Weather Data and Machine Learning Algorithms (공공 기상데이터와 기계학습 모델을 이용한 토양수분 예측)

  • Jang, Young-bin;Jang, Ik-hoon;Choe, Young-chan
    • Korean Journal of Agricultural and Forest Meteorology / v.22 no.1 / pp.1-12 / 2020
  • As one of the essential resources in the agricultural process, soil moisture has been carefully managed by predicting future changes and deficits. In recent years, statistics- and machine-learning-based approaches to predicting soil moisture have been preferred in academia for their generalizability and ease of use in the field. However, little is known about whether machine-learning-based soil moisture prediction is applicable to conditions in South Korea. In this sense, this paper aims to examine 1) whether publicly available weather data generated in South Korea are of sufficient quality to predict soil moisture, 2) which machine learning algorithm performs best under South Korean conditions, and 3) whether a single machine learning model can be applied generally across regions. We used various machine learning methods, including Support Vector Machines (SVM), Random Forest (RF), Extremely Randomized Trees (ET), Gradient Boosting Machines (GBM), and a Deep Feedforward Network (DFN), to predict future soil moisture in the Andong, Boseong, Cheolwon, and Suncheon regions with open source weather data. As a result, the GBM model showed the lowest prediction error on every data set we used (R squared: 0.96, RMSE: 1.8). Furthermore, GBM showed the lowest variance of prediction error between regions, indicating that it has the highest generalizability.
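
The regional comparison described above could be sketched as below: a gradient boosting model is trained per region on public weather data, and the spread of errors across regions is taken as a rough indicator of generalizability. File names, feature columns, and the split are assumptions.

```python
# Sketch: fit a GBM per region and compare errors across regions. The region
# names come from the abstract; files and columns are assumed.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

features = ["precip", "air_temp", "humidity", "wind_speed", "solar_rad"]  # assumed
rmse = {}
for region in ["Andong", "Boseong", "Cheolwon", "Suncheon"]:
    df = pd.read_csv(f"{region.lower()}_weather.csv")        # hypothetical files
    X_tr, X_te, y_tr, y_te = train_test_split(
        df[features], df["soil_moisture"], test_size=0.2, random_state=0)
    gbm = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
    pred = gbm.predict(X_te)
    rmse[region] = np.sqrt(mean_squared_error(y_te, pred))
    print(region, f"R2={r2_score(y_te, pred):.2f}", f"RMSE={rmse[region]:.2f}")

# A small spread of RMSE across regions suggests better generalizability.
print("RMSE variance across regions:", np.var(list(rmse.values())))
```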

Management Automation Technique for Maintaining Performance of Machine Learning-Based Power Grid Condition Prediction Model (기계학습 기반 전력망 상태예측 모델 성능 유지관리 자동화 기법)

  • Lee, Haesung;Lee, Byunsung;Moon, Sangun;Kim, Junhyuk;Lee, Heysun
    • KEPCO Journal on Electric Power and Energy / v.6 no.4 / pp.413-418 / 2020
  • To prevent performance degradation of a power grid condition prediction model caused by overfitting to the initial training data, and to keep the model usable in the field, its prediction accuracy must be managed continuously. In this paper, we propose an automation technique for maintaining model performance that increases the accuracy and reliability of the prediction model by taking into account power grid state data that change constantly due to various factors, enabling quality maintenance at a level applicable to the field. The proposed technique models the series of tasks required to maintain the performance of the power grid condition prediction model as a workflow, applying workflow management technology, and then automates it to make the work more efficient. In addition, the reliability of the performance results is secured by evaluating the prediction model with respect to both the degree of change in the statistical characteristics of the data and the level of generalization of the predictions, which has not been attempted in existing techniques. In this way, the accuracy of the prediction model is maintained at a certain level, and new prediction models with excellent performance can be developed. As a result, the proposed technique not only solves the problem of performance degradation of the prediction model but also improves the field utilization of condition prediction models in complex power grid systems.
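
One way to picture the two checks the abstract combines, namely change in the statistical characteristics of the data and the generalization level of the predictions, is the hypothetical monitoring step below. The KS-test choice and the thresholds are assumptions, not the paper's workflow.

```python
# Hypothetical sketch of one workflow step: flag retraining when feature
# distributions drift (KS test) or when held-out error degrades.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.metrics import mean_absolute_error

def needs_retraining(model, X_ref, X_new, y_new, drift_p=0.01, error_limit=0.15):
    """Return True if data drift or prediction degradation is detected."""
    # 1) Statistical change: two-sample KS test on each feature column.
    drifted = any(
        ks_2samp(X_ref[:, j], X_new[:, j]).pvalue < drift_p
        for j in range(X_ref.shape[1])
    )
    # 2) Generalization level: error of the current model on recent labeled data.
    degraded = mean_absolute_error(y_new, model.predict(X_new)) > error_limit
    return drifted or degraded

# In a workflow engine, this check would gate an automated retrain-and-redeploy task.
```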

A Study on the Application of Machine Learning in Literary Texts - Focusing on Rule Selection for Speaker Directive Analysis - (문학 텍스트의 머신러닝 활용방안 연구 - 화자 지시어 분석을 위한 규칙 선별을 중심으로 -)

  • Kwon, Kyoungah;Ko, Ilju;Lee, Insung
    • The Journal of the Convergence on Culture Technology / v.7 no.4 / pp.313-323 / 2021
  • The purpose of this study is to propose rules that can identify the speaker referred to by a speaker directive in a text, for the realization of a machine-learning-based virtual character built from a literary text. Through previous studies, we found that when literary texts are applied to machine learning without specific rules for analyzing speaker directives such as alternate names, nicknames, and pronouns, the machine does not properly identify the speaker. To solve this problem, this study proposes nine rules for finding the speaker indicated by a speaker directive (including pronouns): location, distance, pronouns, preparatory subject/preparatory object, quotations, number of speakers, non-character directives, word compound form, and dispersion of speaker names. In order to use characters from a literary text as virtual characters, the training text must be presented in a machine-comprehensible way. We expect that the rules suggested in this study will reduce the trial and error that can occur when using literary texts for machine learning, and enable smooth learning that produces high-quality results.
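
Purely as an illustration, rules like the ones listed (location, distance, quotations, and so on) could be encoded as scoring functions over candidate speakers. The two toy rules and the data structure below are assumptions, not the paper's nine rules.

```python
# Illustrative only: resolve a speaker directive by scoring candidates with a
# set of rules; two toy rules stand in for the paper's nine.
def rule_distance(directive_pos, candidate):
    """Prefer the candidate mentioned most recently before the directive."""
    gap = directive_pos - candidate["last_mention_pos"]
    return 1.0 / (1 + gap) if gap > 0 else 0.0

def rule_quotation(directive_in_quote, candidate):
    """Inside quoted speech, slightly prefer characters other than the quoter."""
    return 0.5 if directive_in_quote and not candidate["is_quoter"] else 0.0

def resolve(directive_pos, directive_in_quote, candidates):
    scores = {c["name"]: rule_distance(directive_pos, c)
              + rule_quotation(directive_in_quote, c) for c in candidates}
    return max(scores, key=scores.get)

candidates = [
    {"name": "Younghee", "last_mention_pos": 120, "is_quoter": True},
    {"name": "Cheolsu", "last_mention_pos": 140, "is_quoter": False},
]
print(resolve(directive_pos=150, directive_in_quote=True, candidates=candidates))
```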

Prediction of Cryptocurrency Price Trend Using Gradient Boosting (그래디언트 부스팅을 활용한 암호화폐 가격동향 예측)

  • Heo, Joo-Seong;Kwon, Do-Hyung;Kim, Ju-Bong;Han, Youn-Hee;An, Chae-Hun
    • KIPS Transactions on Software and Data Engineering / v.7 no.10 / pp.387-396 / 2018
  • Stock price prediction has long been a difficult problem. There have been many studies attempting to predict stock prices scientifically, but it is still impossible to predict exact prices. Recently, a variety of cryptocurrencies have been developed, beginning with Bitcoin, which is technically implemented on the concept of a distributed ledger. Various approaches have been attempted to predict cryptocurrency prices, ranging from prediction techniques used in the traditional stock market to applications of deep learning and reinforcement learning. Since the cryptocurrency market has many new features not present in the traditional stock market, there is a growing demand for analytical techniques suited to it. In this study, we first collect and process price data for seven cryptocurrencies through Bithumb's API. Then we use a gradient boosting model, a data-driven machine learning model, to learn the price changes of the cryptocurrencies. We find the optimal model parameters in the validation step and finally evaluate how well the model predicts cryptocurrency price trends.
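
The pipeline outlined above (collect price data, build features from price changes, tune a gradient boosting model on a validation split, and evaluate trend prediction) might be sketched as follows. The lagged-return features, labels, and parameter grid are assumptions, and the Bithumb API collection step is replaced by a placeholder CSV.

```python
# Sketch: classify next-day price direction with gradient boosting plus a small
# time-series-aware grid search. Features and labels are assumed.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

prices = pd.read_csv("btc_daily.csv")["close"]        # hypothetical file
returns = prices.pct_change()
future = returns.shift(-1)

df = pd.DataFrame({f"ret_lag_{i}": returns.shift(i) for i in range(1, 8)})
df["up"] = (future > 0).astype(int)                   # 1 if the next day is up
df = df[future.notna()].dropna()
X, y = df.drop(columns="up"), df["up"]

param_grid = {"n_estimators": [100, 300], "learning_rate": [0.05, 0.1], "max_depth": [2, 3]}
search = GridSearchCV(GradientBoostingClassifier(random_state=0), param_grid,
                      cv=TimeSeriesSplit(n_splits=5), scoring="accuracy")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```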

Optimal Sensor Location in Water Distribution Network using XGBoost Model (XGBoost 기반 상수도관망 센서 위치 최적화)

  • Hyewoon Jang;Donghwi Jung
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.217-217 / 2023
  • A water distribution network aims to supply high-quality water to users reliably, and pressure is one of the indicators used to evaluate this. With the recent expansion of smart sensor installations, real-time data-driven analysis using machine learning techniques has become active, so deciding where to collect data, that is, where to place sensors, is important. This study proposes a methodology that uses an eXtreme Gradient Boosting (XGBoost) model to optimize sensor locations in a large-scale water distribution network. XGBoost is an ensemble model built from multiple decision trees that uses boosting, improving performance by weighting errors. It supports distributed and parallel processing, so it uses memory resources efficiently, trains quickly, and handles missing values within the model. To determine the independent variables for the model, critical nodes that represent the network are selected by considering the variability and the mean of the pressure data. An XGBoost model that predicts the pressure values at the critical nodes is then built, and the optimal sensor locations are selected by considering the model's performance and its feature importance values. Based on this methodology, results are analyzed while varying the network layout (for example, looped or branched) and the number of nodes, in order to identify trends according to network characteristics. The XGBoost model built in this study minimizes additional preprocessing and can be applied easily to large-scale networks, so we expect that, beyond sensor placement, it can be used for other performance optimization tasks in water distribution networks through various combinations of input and output data.
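
The core of the methodology translated above (build an XGBoost model that predicts pressure at a critical node from pressures at candidate nodes, then rank candidates by feature importance to pick sensor locations) could be sketched as below. Node names, the snapshot file, and hyperparameters are assumptions.

```python
# Sketch: rank candidate sensor nodes by how much their pressures help an
# XGBoost model predict a critical node's pressure. All names are assumed.
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import train_test_split

# Rows: hydraulic snapshots; columns: pressures at candidate nodes plus the
# critical node chosen from pressure variability and mean (as in the abstract).
df = pd.read_csv("pressure_snapshots.csv")            # hypothetical file
critical_node = "J_critical"
candidates = [c for c in df.columns if c != critical_node]

X_tr, X_te, y_tr, y_te = train_test_split(df[candidates], df[critical_node],
                                          test_size=0.2, random_state=0)
model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)

print("test R2:", round(model.score(X_te, y_te), 3))
ranking = sorted(zip(candidates, model.feature_importances_),
                 key=lambda p: p[1], reverse=True)
print("top sensor candidates:", ranking[:5])
```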
