• Title/Summary/Keyword: Multiple Machine Learning

Dynamic Nonlinear Prediction Model of Univariate Hydrologic Time Series Using the Support Vector Machine and State-Space Model (Support Vector Machine과 상태공간모형을 이용한 단변량 수문 시계열의 동역학적 비선형 예측모형)

  • Kwon, Hyun-Han; Moon, Young-Il
    • KSCE Journal of Civil and Environmental Engineering Research / v.26 no.3B / pp.279-289 / 2006
  • The reconstruction of low-dimensional nonlinear behavior from hydrologic time series has been an active area of research over the last decade. In this study, we present the application of a powerful state-space reconstruction methodology using Support Vector Machines (SVM) to the Great Salt Lake (GSL) volume. SVMs are machine learning systems that use a hypothesis space of linear functions in a kernel-induced higher-dimensional feature space. SVMs are optimized by minimizing a bound on a generalized error (risk) measure, rather than just the mean square error over a training set. The utility of this SVM regression approach is demonstrated through applications to short-term forecasts of the biweekly GSL volume. The SVM-based reconstruction is used to develop time series forecasts for multiple lead times ranging from two weeks to several months. The reliability of the algorithm in learning and forecasting the dynamics is tested using split-sample sensitivity analyses, with particular interest in forecasting extreme states. Unlike previously reported methodologies, SVMs are able to extract the dynamics using only a few of the past observed data points (support vectors, SV) from the training examples. In terms of statistical measures, the SVM-based prediction model demonstrated encouraging and promising results for short-term prediction. Thus, the SVM method presented in this study offers a competitive methodology for forecasting hydrologic time series.
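
As a rough, hedged illustration of the approach described above (not the authors' exact setup), the sketch below embeds a univariate series into lagged state vectors and trains a kernel SVM regressor for a fixed lead time; the synthetic series, embedding dimension, and lead time are assumptions.

```python
# Minimal sketch: lag-embedding a univariate series and forecasting with SVR.
# The synthetic series, embedding dimension, and lead time are illustrative assumptions.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20 * np.pi, 1000)) + 0.1 * rng.standard_normal(1000)

def embed(x, dim, lead):
    """Build (lagged state vector, value `lead` steps ahead) pairs."""
    X, y = [], []
    for t in range(dim, len(x) - lead):
        X.append(x[t - dim:t])
        y.append(x[t + lead])
    return np.array(X), np.array(y)

X, y = embed(series, dim=6, lead=4)          # state dimension 6, 4-step-ahead target
split = int(0.8 * len(X))                    # split-sample test, as in the paper's setup
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:split], y[:split])
print("test RMSE:", np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2)))
print("support vectors used:", len(model.support_))
```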

Compromising Multiple Objectives in Production Scheduling: A Data Mining Approach

  • Hwang, Wook-Yeon; Lee, Jong-Seok
    • Management Science and Financial Engineering / v.20 no.1 / pp.1-9 / 2014
  • In multi-objective scheduling problems, the objectives are usually in conflict. To obtain a satisfactory compromise and cope with NP-hardness, most existing works have suggested employing meta-heuristic methods such as genetic algorithms. In this research, we propose a novel data-driven approach for generating a single solution that compromises among multiple rules pursuing different objectives. The proposed method uses a data mining technique, namely random forests, to extract the logic of several historic schedules and aggregate them. Since it involves learning predictive models, future schedules pursuing the same objectives can be obtained easily and quickly by applying new production data to the models. The proposed approach is illustrated with a simulation study, where it successfully produces a new solution with balanced scheduling performance.
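
The sketch below illustrates the general idea under stated assumptions: a random forest is trained on the decisions of each historic rule, and their predicted probabilities are averaged into a single compromise dispatching score. The job features and the two stand-in rules (SPT-like and EDD-like) are illustrative, not the paper's formulation.

```python
# Sketch: learn a compromise dispatching rule from historic schedules with random forests.
# Features, rules, and data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 2000
# Job attributes at each dispatching decision: processing time and slack to due date.
proc_time = rng.uniform(1, 10, n)
slack = rng.uniform(0, 20, n)
X = np.column_stack([proc_time, slack])

# Two historic schedules pursuing different objectives:
#   rule A (SPT-like, minimizes flow time) dispatches jobs with short processing times,
#   rule B (EDD-like, minimizes tardiness) dispatches jobs with little slack.
y_rule_a = (proc_time < 5).astype(int)
y_rule_b = (slack < 8).astype(int)

# Train one forest per historic rule, then aggregate their predicted probabilities
# into a single compromise decision for new production data.
rf_a = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y_rule_a)
rf_b = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y_rule_b)

new_jobs = np.array([[2.0, 15.0], [9.0, 1.0]])
compromise = 0.5 * rf_a.predict_proba(new_jobs)[:, 1] + 0.5 * rf_b.predict_proba(new_jobs)[:, 1]
print("dispatch scores:", compromise)
```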

Sparse Data Cleaning using Multiple Imputations

  • Jun, Sung-Hae; Lee, Seung-Joo; Oh, Kyung-Whan
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.1 / pp.119-124 / 2004
  • Real-world data such as web log files tend to be incomplete, yet useful knowledge must be extracted from them for optimal decision making. Web log data contain much useful information, such as hyperlink structure and the usage patterns of connected users. However, web data are too large for effective knowledge discovery and, to make matters worse, very sparse. We address this sparseness problem using a Markov chain Monte Carlo (MCMC) method for multiple imputation. This missing-value imputation turns sparse web data into complete data. Our approach may therefore be a useful tool for discovering knowledge from sparse data sets; the sparser the data, the greater the benefit of MCMC imputation. We verified our work through experiments using data from the UCI machine learning repository.
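
A minimal multiple-imputation sketch, using scikit-learn's IterativeImputer with posterior sampling as a chained-equations stand-in for the MCMC procedure described above; the toy sparse matrix and the number of imputations are assumptions.

```python
# Sketch: multiple imputation of a sparse matrix via chained equations with posterior sampling.
# This approximates MCMC-style multiple imputation; the toy data are an assumption.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(2)
X_full = rng.normal(size=(200, 5))
X_full[:, 4] = X_full[:, :4].sum(axis=1) + 0.1 * rng.normal(size=200)

# Make the matrix sparse/incomplete: drop 40% of the entries at random.
X = X_full.copy()
X[rng.random(X.shape) < 0.4] = np.nan

# Draw m completed data sets; downstream analyses run on each and are pooled afterwards.
m = 5
completed = []
for seed in range(m):
    imp = IterativeImputer(estimator=BayesianRidge(), sample_posterior=True,
                           random_state=seed, max_iter=10)
    completed.append(imp.fit_transform(X))

print("pooled column means:", np.mean([c.mean(axis=0) for c in completed], axis=0))
```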

Semi-supervised Multi-view Manifold Discriminant Intact Space Learning

  • Han, Lu; Wu, Fei; Jing, Xiao-Yuan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.9 / pp.4317-4335 / 2018
  • Semi-supervised multi-view latent space learning has recently gained considerable popularity in many machine learning applications because of the high cost and difficulty of obtaining large amounts of labeled data. Although several semi-supervised multi-view latent space learning methods have been presented, there is still much room for improvement: 1) how to learn latent discriminant intact feature representations from data of multiple views; 2) how to effectively exploit the manifold structure of both labeled and unlabeled points in the learned latent intact space. To address these issues, we propose an approach called semi-supervised multi-view manifold discriminant intact space learning ($SM^2DIS$) for image classification. $SM^2DIS$ seeks a manifold discriminant intact space for data of different views by making use of both the discriminant information of labeled data and the manifold structure of labeled and unlabeled data. Experimental results on the MNIST, COIL-20, Multi-PIE, and Caltech-101 databases demonstrate the effectiveness and robustness of the proposed approach.
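
$SM^2DIS$ itself is not reproduced here; as minimal context for the problem setting only, the sketch below runs a generic semi-supervised baseline (label spreading) on naively concatenated features from two synthetic views. All data, the second-view construction, and the kernel parameters are assumptions.

```python
# Sketch of the semi-supervised multi-view setting only (not SM^2DIS itself):
# two feature views are concatenated and labels are propagated from the few labeled points.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(3)
# Two synthetic "views" of the same 300 samples (assumed data).
X_view1, y = make_classification(n_samples=300, n_features=10, n_informative=5, random_state=0)
X_view2 = X_view1 @ rng.normal(size=(10, 8))       # a second, linearly related view
X = np.hstack([X_view1, X_view2])                  # naive multi-view fusion by concatenation

# Keep labels for only about 10% of the samples; mark the rest as unlabeled (-1).
y_semi = y.copy()
unlabeled = rng.random(len(y)) > 0.1
y_semi[unlabeled] = -1

model = LabelSpreading(kernel="rbf", gamma=0.05).fit(X, y_semi)
acc = (model.transduction_[unlabeled] == y[unlabeled]).mean()
print("transductive accuracy on unlabeled points:", round(acc, 3))
```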

Extrapolation of wind pressure for low-rise buildings at different scales using few-shot learning

  • Yanmo Weng; Stephanie G. Paal
    • Wind and Structures / v.36 no.6 / pp.367-377 / 2023
  • This study proposes a few-shot learning model for extrapolating the wind pressure of scaled experiments to full-scale measurements. The proposed ML model can use scaled experimental data and a few full-scale tests to accurately predict the remaining full-scale data points (for new specimens). The model focuses on extrapolating predictions across scales, whereas existing approaches in the wind engineering domain cannot accurately extrapolate from scaled data to full-scale data. The scaling issue observed in wind tunnel tests can also be partially resolved via the proposed approach. The proposed model obtained a low mean squared error and a high coefficient of determination for the mean and standard deviation wind pressure coefficients of the full-scale dataset. A parametric study is carried out to investigate the influence of the number of selected shots. This technique is the first of its kind, as it is the first time an ML model has been used in the wind engineering field to deal with extrapolation in wind performance prediction. With the advantages of the few-shot learning model, physical wind tunnel experiments can be reduced to a great extent. The few-shot learning model thus provides a robust, efficient, and accurate alternative for extrapolating the predicted performance of structures from various model scales to full scale.
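
As a loose analogue of the scaled-to-full-scale adaptation described above (not the authors' few-shot architecture), the sketch below pre-trains a small neural regressor on synthetic "scaled" data and adapts it with a handful of "full-scale" shots; the data, network size, and adaptation loop are all assumptions.

```python
# Sketch: pre-train a regressor on scaled-experiment data, then adapt it with a few
# full-scale "shots". A loose analogue of few-shot adaptation; data are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
# Synthetic "scaled" data: pressure coefficient as a function of tap location / wind angle.
X_scaled = rng.uniform(0, 1, size=(500, 2))
y_scaled = np.sin(3 * X_scaled[:, 0]) + 0.5 * X_scaled[:, 1]

# Full-scale data follow a shifted/rescaled relationship; only a few shots are available.
X_full = rng.uniform(0, 1, size=(60, 2))
y_full = 1.2 * (np.sin(3 * X_full[:, 0]) + 0.5 * X_full[:, 1]) - 0.1
shots = 8

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X_scaled, y_scaled)                       # pre-train on scaled experiments

for _ in range(200):                                # fine-tune on the few full-scale shots
    model.partial_fit(X_full[:shots], y_full[:shots])

pred = model.predict(X_full[shots:])                # extrapolate to remaining full-scale points
print("full-scale RMSE:", np.sqrt(np.mean((pred - y_full[shots:]) ** 2)))
```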

Win/Lose Prediction System: Predicting Baseball Game Results using a Hybrid Machine Learning Model (혼합형 기계 학습 모델을 이용한 프로야구 승패 예측 시스템)

  • 홍석미; 정경숙; 정태충
    • Journal of KIISE: Computing Practices and Letters / v.9 no.6 / pp.693-698 / 2003
  • Every baseball game generates various records, and on the basis of those records, win/lose prediction for the next game can be carried out. Research on win/lose prediction for professional baseball games has been conducted, but the results have not been satisfactory. Win/lose prediction is very difficult because choosing predictive features from among the many available records is hard, and because overlapping factors among the data used for prediction increase the complexity of the learning model. In this paper, learning features were chosen based on the opinions of baseball experts, and a heuristic function was formed from the chosen features. We propose a hybrid model that combines multiple features into a new value that can affect predictions, thereby reducing the dimension of the input used by the backpropagation learning algorithm. The experimental results show that the complexity of backpropagation was reduced and the accuracy of win/lose prediction for professional baseball games was improved.
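
A hedged sketch of the hybrid idea: several expert-chosen features are collapsed into one heuristic value, which becomes the reduced input to a backpropagation classifier. The features, heuristic weights, and synthetic outcomes are illustrative assumptions.

```python
# Sketch: compress several expert-chosen features into one heuristic value, then feed the
# reduced input to a backpropagation classifier. Features, weights, and data are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(5)
n = 1000
batting_avg_diff = rng.normal(0, 0.03, n)     # home minus away team batting average
era_diff = rng.normal(0, 1.0, n)              # home minus away starter ERA
recent_wins_diff = rng.integers(-5, 6, n)     # recent win-count difference

# Heuristic function suggested by "experts": one combined value replaces three raw inputs.
heuristic = 10 * batting_avg_diff - 0.3 * era_diff + 0.1 * recent_wins_diff

# Synthetic outcome loosely driven by the same signal plus noise (assumed).
win = (heuristic + rng.normal(0, 0.5, n) > 0).astype(int)

X = heuristic.reshape(-1, 1)                  # reduced one-dimensional input
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, win)
print("training accuracy:", round(model.score(X, win), 3))
```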

Neural-network based Computerized Emotion Analysis using Multiple Biological Signals (다중 생체신호를 이용한 신경망 기반 전산화 감정해석)

  • Lee, Jee-Eun; Kim, Byeong-Nam; Yoo, Sun-Kook
    • Science of Emotion and Sensibility / v.20 no.2 / pp.161-170 / 2017
  • Emotion affects many parts of human life, such as learning ability, behavior, and judgment, so understanding it is important to understanding human nature. What a person actually feels can only be inferred from outward signs such as facial expressions or gestures. Emotion is particularly difficult to classify, not only because individuals experience emotion differently but also because visually induced emotion is not sustained throughout the testing period. To address this problem, we acquired biological signals and extracted features from them, which provide objective information about the response to an emotional stimulus. The emotion pattern classifier combined an unsupervised learning algorithm with hidden nodes and the extracted feature vectors. A restricted Boltzmann machine (RBM), based on probability estimation, was used for the unsupervised learning stage and maps the emotion features into a transformed feature space. Emotion was then characterized by the non-linear classification of a multi-layer neural network with hidden nodes, namely a deep belief network (DBN). The accuracy of the DBN (about 94%) was far better than that of a back-propagation neural network (about 40%). The DBN therefore showed good performance as an emotion pattern classifier.
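
scikit-learn's BernoulliRBM stacked with a logistic-regression classifier gives a shallow, hedged approximation of the RBM feature learning plus supervised classification described above; the paper's multi-layer DBN is not reproduced, and the synthetic stand-in for the bio-signal features is an assumption.

```python
# Sketch: unsupervised RBM feature learning followed by a supervised classifier,
# a shallow stand-in for the RBM/DBN pipeline. Synthetic features are assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Stand-in for extracted bio-signal features (e.g., heart-rate and skin-conductance statistics).
X, y = make_classification(n_samples=400, n_features=20, n_informative=8, random_state=0)

model = Pipeline([
    ("scale", MinMaxScaler()),                       # RBM expects inputs in [0, 1]
    ("rbm", BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=30, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),      # supervised layer on top of RBM features
])
model.fit(X, y)
print("training accuracy:", round(model.score(X, y), 3))
```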

The World as Seen from Venice (1205-1533) as a Case Study of Scalable Web-Based Automatic Narratives for Interactive Global Histories

  • NANETTI, Andrea; CHEONG, Siew Ann
    • Asian Review of World Histories / v.4 no.1 / pp.3-34 / 2016
  • This introduction is both a statement of a research problem and an account of the first research results toward its solution. As more historical databases come online and overlap in coverage, we need to discuss the two main issues that have so far prevented 'big' results from emerging. First, historical data are seen by computer scientists as unstructured; that is, historical records cannot be easily decomposed into unambiguous fields, as population (birth and death) records or taxation data can. Second, machine learning tools developed for structured data cannot be applied as they are to historical research. We propose a complex-network, narrative-driven approach to mining historical databases. In such a time-integrated network obtained by overlaying records from historical databases, the nodes are actors, while the links are actions. In the case study we present (the world as seen from Venice, 1205-1533), the actors are governments, while the actions are limited to war, trade, and treaty to keep the case study tractable. We then identify key periods, key events, and hence key actors and key locations through a time-resolved examination of the actions. This tool allows historians to deal with historical-data issues (e.g., source provenance identification, event validation, and trade-conflict-diplomacy relationships). On a higher level, this automatic extraction of key narratives from a historical database allows historians to formulate hypotheses on the courses of history and to test those hypotheses against other actions or additional data sets. Our vision is that this narrative-driven analysis of historical data can lead to the development of multiple-scale agent-based models, which can be simulated on a computer to generate ensembles of counterfactual histories that would deepen our understanding of how our actual history developed the way it did. The generation of such narratives, automatically and in a scalable way, will revolutionize the practice of history as a discipline, because historical knowledge, that is, the treasure of human experiences (the heritage of the world), will become something that can be inherited by machine learning algorithms and used in smart cities to highlight and explain present ties and illustrate potential future scenarios and visionarios.
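
The time-integrated network described above (governments as nodes, dated war/trade/treaty actions as links) maps naturally onto an ordinary multigraph; the sketch below is a minimal, assumed illustration of that data structure using networkx, not the project's actual database.

```python
# Sketch of the time-integrated network data structure: governments as nodes,
# dated actions (war, trade, treaty) as links. The records are illustrative assumptions.
from collections import Counter

import networkx as nx

G = nx.MultiDiGraph()
records = [
    ("Venice", "Byzantine Empire", "treaty", 1205),
    ("Venice", "Genoa", "war", 1256),
    ("Venice", "Mamluk Sultanate", "trade", 1302),
    ("Venice", "Ottoman Empire", "treaty", 1479),
]
for actor, counterpart, action, year in records:
    G.add_edge(actor, counterpart, action=action, year=year)

# Time-resolved examination: count action types per 50-year window to surface key periods.
by_period = Counter((data["year"] // 50 * 50, data["action"])
                    for _, _, data in G.edges(data=True))
print(sorted(by_period.items()))
```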

Applications of Machine Learning Models for the Estimation of Reservoir CO2 Emissions (저수지 CO2 배출량 산정을 위한 기계학습 모델의 적용)

  • Yoo, Jisu; Chung, Se-Woong; Park, Hyung-Seok
    • Journal of Korean Society on Water Environment / v.33 no.3 / pp.326-333 / 2017
  • Lakes and reservoirs have been reported as important sources of carbon emissions to the atmosphere in many countries. Although field experiments and theoretical investigations based on fundamental gas exchange theory have provided quantitative estimates of the Net Atmospheric Flux (NAF) in various climate regions, large uncertainties remain in global-scale estimation. Mechanistic models can be used to understand and estimate the temporal and spatial variations of NAFs, considering the complicated hydrodynamic and biogeochemical processes in a reservoir, but they require extensive and expensive datasets and model parameters. On the other hand, data-driven machine learning (ML) algorithms are likely to be alternative tools for estimating NAFs in response to independent environmental variables. The objective of this study was to develop random forest (RF) and multi-layer artificial neural network (ANN) models for the estimation of the daily $CO_2$ NAFs in Daecheong Reservoir, located on the Geum River in Korea, and to compare their performance against the multiple linear regression (MLR) model proposed in a previous study (Chung et al., 2016). The RF and ANN models showed much better performance in estimating high NAF values, while the MLR model significantly underestimated them. A cross-validation with 10-fold random sampling was applied to evaluate the performance of the three models and indicated that the ANN model performed best, followed by the RF and MLR models.
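
A hedged sketch of the model comparison described above: a random forest, a multi-layer ANN, and multiple linear regression evaluated with 10-fold cross-validation; the synthetic predictors standing in for the environmental variables, and the model hyperparameters, are assumptions.

```python
# Sketch: compare RF, ANN, and MLR regressors with 10-fold cross-validation,
# mirroring the evaluation described above. Synthetic environmental data are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(6)
n = 300
X = rng.normal(size=(n, 4))                       # e.g., water temperature, chlorophyll, wind, pH
y = np.exp(0.8 * X[:, 0]) + X[:, 1] ** 2 + 0.2 * rng.normal(size=n)   # nonlinear NAF-like target

models = {
    "MLR": LinearRegression(),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)),
}
cv = KFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```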

Development of Artificial Intelligence Education Contents based on TensorFlow for Reinforcement of SW Convergence Gifted Teacher Competency (SW융합영재 담당교원 역량 강화를 위한 텐서플로우 기반 인공지능 교육 콘텐츠 개발)

  • Jang, Eunsill; Kim, Jaehyoun
    • Journal of Internet Computing and Services / v.20 no.6 / pp.167-177 / 2019
  • Enhancing national competitiveness in the future society depends on discovering and training gifted students in SW convergence. To cultivate such students, the competence of the teachers in charge must be reinforced first. Therefore, in this paper, educational content on artificial intelligence, one of the core technologies of the 4th Industrial Revolution era, was developed to reinforce the competence of teachers of the SW convergence gifted. After setting the direction of the artificial intelligence education content, we constructed content suitable for secondary SW convergence gifted education and designed and developed it in detail. The content consists of understanding machine learning and TensorFlow, implementing linear regression machine learning for numerical prediction, and implementing multiple linear regression-based machine learning for price prediction. The developed educational content was qualitatively verified by experts. We expect that the artificial intelligence educational content proposed in this paper will be useful for strengthening the competence of teachers of the SW convergence gifted.
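
A minimal sketch of the kind of TensorFlow-based multiple-linear-regression exercise the content covers (price prediction from several inputs); the synthetic housing-style data and training settings are assumptions, not the developed teaching material.

```python
# Sketch: multiple linear regression for price prediction in TensorFlow/Keras,
# the kind of exercise the education content covers. Data are assumptions.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(7)
n = 500
area = rng.uniform(20, 200, n)          # floor area (m^2)
rooms = rng.integers(1, 6, n)           # number of rooms
age = rng.uniform(0, 30, n)             # building age (years)
X = np.column_stack([area, rooms, age]).astype("float32")
price = (3.0 * area + 15.0 * rooms - 2.0 * age + rng.normal(0, 10, n)).astype("float32")

norm = tf.keras.layers.Normalization()
norm.adapt(X)                           # learn feature means/variances

# A single Dense unit with no activation on normalized inputs is multiple linear regression.
model = tf.keras.Sequential([norm, tf.keras.layers.Dense(1)])
model.compile(optimizer=tf.keras.optimizers.Adam(0.1), loss="mse")
model.fit(X, price, epochs=200, verbose=0)

query = np.array([[100.0, 3.0, 10.0]], dtype="float32")
print("predicted price for 100 m^2, 3 rooms, 10 years:",
      float(model.predict(query, verbose=0)[0, 0]))
```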