• Title/Summary/Keyword: Multiple Machine Learning

Search results: 356 items (processing time: 0.025 seconds)

머신러닝을 통한 잉크 필요량 예측 알고리즘 (Machine Learning Algorithm for Estimating Ink Usage)

  • 권세욱;현영주;태현철
    • 산업경영시스템학회지
    • /
    • Vol. 46, No. 1
    • /
    • pp.23-31
    • /
    • 2023
  • Research on and interest in sustainable printing are increasing in the packaging printing industry. Currently, the amount of ink required for each job is predicted from the experience and intuition of field workers. If more ink is produced than necessary, the remainder cannot be reused and is discarded, adversely affecting the company's productivity and the environment. Machine learning models can now be used to address this problem. This study compares machine learning models for predicting ink usage. A simple linear model, multiple regression analysis, cannot reflect the nonlinear relationships among the variables involved in packaging printing, so its accuracy in predicting the amount of ink needed is limited. This study therefore establishes several prediction models based on CART (Classification and Regression Trees), including Decision Tree, Random Forest, Gradient Boosting Machine, and XGBoost. Model accuracy is assessed with K-fold cross-validation, using root mean squared error, mean absolute error, and R-squared as evaluation metrics. Among these models, XGBoost has the highest prediction accuracy and can reduce wasted ink by 2,134 g per job. This study thus demonstrates machine learning's potential to improve productivity and protect the environment.
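The evaluation protocol the abstract describes (K-fold cross-validation scored with RMSE, MAE, and R-squared) can be sketched in plain Python. This is a minimal sketch of the scoring machinery only; the fold count, seed, and data are illustrative, not the paper's.

```python
import math
import random

def kfold_indices(n, k, seed=0):
    # Shuffle indices 0..n-1 and deal them into k roughly equal folds.
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def regression_metrics(y_true, y_pred):
    # The three scores used to compare the models: RMSE, MAE, R-squared.
    n = len(y_true)
    sse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mean = sum(y_true) / n
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return math.sqrt(sse / n), mae, 1.0 - sse / ss_tot
```

Each candidate model (Decision Tree, Random Forest, etc.) would be trained on k-1 folds and scored on the held-out fold with `regression_metrics`, and the scores averaged across folds.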

천문학에서의 대용량 자료 분석 (Analysis of massive data in astronomy)

  • 신민수
    • 응용통계연구
    • /
    • Vol. 29, No. 6
    • /
    • pp.1107-1116
    • /
    • 2016
  • The massive data sets produced by recent astronomical survey observations have substantially changed routine data-analysis practices. Machine learning methods, alongside classical statistical inference, have been used throughout the entire analysis process, from data standardization to the inference of physical models. As large detectors have become available at low cost and high-speed computer networks make it easy to share massive data, applying machine learning to a wide range of traditional astronomical data-analysis problems has become commonplace. In general, the analysis of massive astronomical data must account for effects arising from the inhomogeneous temporal and spatial distribution of the data. The growing scale of today's data naturally calls for parallel and distributed computing in addition to machine learning; however, such parallel distributed environments are not yet widely used in routine data analysis. In applying machine learning to astronomy, it is difficult to obtain sufficient training data from observations alone, so training sets are typically assembled from data of heterogeneous origin. Methods such as semi-supervised learning and ensemble learning are therefore expected to play an increasingly important role.

특허분석을 위한 빅 데이터학습 (A Big Data Learning for Patent Analysis)

  • 전성해
    • 한국지능시스템학회논문지
    • /
    • Vol. 23, No. 5
    • /
    • pp.406-411
    • /
    • 2013
  • Big data is used with various meanings across fields. For example, computer science and sociology approach big data differently, but from a data-analysis perspective they share common ground: whether in engineering or the social sciences, big data must be analyzed. Statistics and machine learning are the representative tools for analyzing big data. This paper reviews learning tools for big data analysis and proposes an efficient big-data learning procedure covering the entire process, from retrieved big data sources through analysis to the final use of the results. In particular, we apply big-data learning to patent documents, which have a typical big data structure, to perform patent analysis, and study how to apply the results to technology forecasting. For a practical application of the proposed method, we conducted a case study in which big data-related patent documents were retrieved from patent offices worldwide and analyzed with text mining preprocessing and multiple linear regression from statistics.
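The pipeline sketched in that abstract, text preprocessed into term-frequency features and then fit with multiple linear regression, can be illustrated in a few lines. The toy "patent abstracts" and the response values below are made up for illustration; the paper's corpus and target variable are not reproduced.

```python
import numpy as np

# Hypothetical stand-in for preprocessed patent text.
docs = ["big data storage system",
        "machine learning data model",
        "data analysis method for big data"]
y = np.array([3.0, 5.0, 4.0])  # hypothetical response, e.g. a technology score

# Text-mining preprocessing: bag-of-words term-frequency matrix.
vocab = sorted({w for d in docs for w in d.split()})
X = np.array([[d.split().count(w) for w in vocab] for d in docs], dtype=float)
X = np.hstack([np.ones((len(docs), 1)), X])  # intercept column

# Multiple linear regression fit by least squares.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ beta
```

With more features than documents the least-squares fit is exact here; a real study would regularize or reduce the vocabulary first.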

Enhancing Performance with a Learnable Strategy for Multiple Question Answering Modules

  • Oh, Hyo-Jung;Myaeng, Sung-Hyon;Jang, Myung-Gil
    • ETRI Journal
    • /
    • Vol. 31, No. 4
    • /
    • pp.419-428
    • /
    • 2009
  • A question answering (QA) system can be built using multiple QA modules that can individually serve as a QA system in and of themselves. This paper proposes a learnable, strategy-driven QA model that aims at enhancing both efficiency and effectiveness. A strategy is learned using a learning-based classification algorithm that determines the sequence of QA modules to be invoked and decides when to stop invoking additional modules. The learned strategy invokes the most suitable QA module for a given question and attempts to verify the answer by consulting other modules until the level of confidence reaches a threshold. In our experiments, our strategy learning approach obtained improvement over a simple routing approach by 10.5% in effectiveness and 27.2% in efficiency.
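The stopping rule described in the abstract, invoke modules in a learned order and stop once confidence clears a threshold, can be sketched as follows. The module names, confidence values, and threshold are hypothetical, and the learned classifier is replaced here by a fixed ordering function.

```python
def answer(question, modules, order_for, threshold=0.8):
    # Invoke QA modules in the order chosen by the (learned) strategy and
    # stop early once the best answer's confidence reaches the threshold.
    best_answer, best_conf = None, 0.0
    for name in order_for(question):
        ans, conf = modules[name](question)
        if conf > best_conf:
            best_answer, best_conf = ans, conf
        if best_conf >= threshold:
            break  # skip the remaining, costlier modules
    return best_answer, best_conf

# Toy modules that record when they are invoked.
calls = []
modules = {
    "kb":  lambda q: (calls.append("kb"),  ("Seoul", 0.9))[1],
    "web": lambda q: (calls.append("web"), ("Busan", 0.6))[1],
}
result = answer("What is the capital of Korea?", modules,
                order_for=lambda q: ["kb", "web"])
```

Because the first module already clears the threshold, the second is never invoked, which is the source of the efficiency gain the paper reports.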

RNN을 이용한 태양광 에너지 생산 예측 (Solar Energy Prediction using Environmental Data via Recurrent Neural Network)

  • 리아크 무사다르;변영철;이상준
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2019 Fall Conference of the Korea Information Processing Society
    • /
    • pp.1023-1025
    • /
    • 2019
  • Coal and natural gas are the two biggest contributors to energy generation throughout the world. Most of these resources create environmental pollution while producing energy, harming natural habitats. Many alternatives to these sources have been proposed; one of the leading alternatives is solar energy, which is usually harnessed in solar farms. In artificial intelligence, the most actively researched area in recent times is machine learning, with which many tasks previously thought to be doable only by humans are now done by machines. Neural networks have two major subtypes: convolutional neural networks (CNNs), used primarily for classification, and recurrent neural networks (RNNs), used for time-series prediction. In this paper, we predict the energy generated by solar farms, and the optimal angles for their solar panels, for the upcoming seven days using environmental and historical data. We experiment with multiple RNN configurations using vanilla and LSTM (Long Short-Term Memory) RNNs, achieving an RMSE of 0.20739 with LSTMs.

Automatic Classification of Drone Images Using Deep Learning and SVM with Multiple Grid Sizes

  • Kim, Sun Woong;Kang, Min Soo;Song, Junyoung;Park, Wan Yong;Eo, Yang Dam;Pyeon, Mu Wook
    • 한국측량학회지
    • /
    • Vol. 38, No. 5
    • /
    • pp.407-414
    • /
    • 2020
  • SVM (support vector machine) analysis was performed after applying a deep learning technique based on an Inception model (GoogLeNet). The accuracy of automatic image classification was analyzed using an SVM with multiple virtual grid sizes. Six classes were selected from a standard land cover map, and cars were added as a separate item to increase the classification accuracy of roads. The virtual grid size was 2-5 m for natural areas, 5-10 m for traffic areas, and 10-15 m for building areas, based on the size of the items and the resolution of the input images. The results demonstrate that automatic classification accuracy can be increased by an integrated approach that uses weighted virtual grid sizes for different classes.
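The grid-based idea can be illustrated with a small sketch: per-pixel class labels are smoothed by majority vote inside square virtual grid cells. The labels and cell size below are hypothetical, and the paper's deep-learning and SVM stages are not reproduced; this shows only the aggregation step that a class-dependent grid size would feed into.

```python
from collections import Counter

def grid_majority(labels, cell):
    # Replace each pixel label with the majority label of its virtual grid cell.
    h, w = len(labels), len(labels[0])
    out = [row[:] for row in labels]
    for gy in range(0, h, cell):
        for gx in range(0, w, cell):
            ys = range(gy, min(gy + cell, h))
            xs = range(gx, min(gx + cell, w))
            winner = Counter(labels[y][x] for y in ys for x in xs).most_common(1)[0][0]
            for y in ys:
                for x in xs:
                    out[y][x] = winner
    return out
```

Running this per class with a class-appropriate `cell` (small for natural areas, large for buildings) mirrors the weighted-grid-size approach the abstract describes.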

다중 이벤트 센서 기반 스마트 홈에서 사람 행동 분류를 위한 효율적 의사결정평면 생성기법 (Efficient Hyperplane Generation Techniques for Human Activity Classification in Multiple-Event Sensors Based Smart Home)

  • 장준서;김보국;문창일;이도현;곽준호;박대진;정유수
    • 대한임베디드공학회논문지
    • /
    • Vol. 14, No. 5
    • /
    • pp.277-286
    • /
    • 2019
  • In this paper, we propose an efficient hyperplane generation technique to classify human activity from the combination of events and sequence information obtained from multiple-event sensors. By generating hyperplanes efficiently, our machine learning algorithm classifies with less memory and runtime than an LSVM (linear support vector machine) on embedded systems. Because lightweight, high-speed algorithms are among the most critical requirements in the IoT, this study can be applied to smart homes to predict human activity and provide related services. Our approach is based on reducing the number of hyperplanes and utilizing a robust string comparison algorithm. The proposed method reduces memory consumption compared to conventional ML (machine learning) algorithms, by a factor of 252 relative to LSVM and 34,033 relative to LSTM (Long Short-Term Memory), although accuracy decreases slightly; accordingly, our method shows outstanding accuracy per hyperplane, 240 times that of LSVM and 30,520 times that of LSTM. The binarized image is divided into groups, and each group is converted to a binary number to reduce the number of comparisons performed at runtime; the binary numbers are then converted to a string. Test data are evaluated by converting them to strings and measuring their similarity to the hyperplanes with the Levenshtein algorithm, a robust dynamic string comparison algorithm. This technique reduces runtime, making the proposed algorithm 27% faster than LSVM and 90% faster than LSTM.
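The Levenshtein comparison at the core of that runtime step is standard and can be sketched directly; the toy "hyperplane strings" and class names below are made up, standing in for the paper's encoded binary groups.

```python
def levenshtein(a, b):
    # Dynamic-programming edit distance between two strings, row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def nearest_hyperplane(sample, hyperplanes):
    # Classify a string-encoded sample by its closest hyperplane string.
    return min(hyperplanes, key=lambda cls: levenshtein(sample, hyperplanes[cls]))
```

A test sample is encoded the same way as the hyperplanes and assigned to the activity whose string it is closest to in edit distance.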

Estimation of Automatic Video Captioning in Real Applications using Machine Learning Techniques and Convolutional Neural Network

  • Vaishnavi, J;Narmatha, V
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 22, No. 9
    • /
    • pp.316-326
    • /
    • 2022
  • The rapid development in the field of video stems from online services, which overtook television media in popularity within a short period. Online videos are used all the more because the captions displayed along with the scenes improve understandability. Not only entertainment media but also marketing companies and other organizations use captioned videos for product promotion, and captions serve hearing-impaired and non-native viewers in many ways. Research continues on automatically displaying appropriate captions for videos uploaded as shows, movies, educational videos, online classes, websites, etc. This paper focuses on two approaches. The first is a machine learning method that preprocesses videos into frames and resizes them; the resized frames are classified into multiple actions after feature extraction, for which the statistical measures GLCM and Hu moments are used. The second is a deep learning method in which a CNN architecture is used to obtain the results. Finally, the two results are compared to find the best accuracy, with the CNN giving the top accuracy of 96.10% in classification.
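The GLCM features used in the first approach are simple to compute. Below is a minimal sketch: a gray-level co-occurrence matrix for one pixel offset, plus one classic texture feature derived from it. The tiny image, offset, and number of gray levels are arbitrary illustrations, not the paper's settings.

```python
def glcm(img, dx=1, dy=0, levels=4):
    # Gray-level co-occurrence matrix: m[i][j] counts pixel pairs where a
    # pixel of level i has a neighbor of level j at offset (dx, dy).
    m = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                m[img[y][x]][img[ny][nx]] += 1
    return m

def contrast(m):
    # One classic GLCM texture feature: (i - j)^2 weighted by co-occurrence.
    total = sum(sum(row) for row in m) or 1
    return sum((i - j) ** 2 * v
               for i, row in enumerate(m) for j, v in enumerate(row)) / total
```

Features like contrast (and the Hu moments, not shown) would then be concatenated into the vector that the frame classifier consumes.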

Enhancing prediction accuracy of concrete compressive strength using stacking ensemble machine learning

  • Yunpeng Zhao;Dimitrios Goulias;Setare Saremi
    • Computers and Concrete
    • /
    • Vol. 32, No. 3
    • /
    • pp.233-246
    • /
    • 2023
  • Accurate prediction of concrete compressive strength can minimize the need for extensive, time-consuming, and costly mixture optimization testing and analysis. This study attempts to enhance the prediction accuracy of compressive strength using stacking ensemble machine learning (ML) with feature engineering techniques. Seven alternative ML models of increasing complexity were implemented and compared: linear regression, SVM, decision tree, multilayer perceptron, random forest, XGBoost, and AdaBoost. To further improve prediction accuracy, an ML pipeline incorporating feature engineering was proposed, and a two-layer stacked model was developed. The k-fold cross-validation approach was employed to optimize model parameters and train the stacked model. The stacked model showed superior performance in predicting concrete compressive strength, with a coefficient of determination (R²) of 0.985. Feature (i.e., variable) importance was determined to demonstrate how useful the synthetic features are in prediction and to provide better interpretability of the data and the model. The methodology in this study promotes a more thorough assessment of alternative ML algorithms, rather than focusing on any single ML model type, for concrete compressive strength prediction.
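The first layer of a two-layer stacked model works as described: each base learner's out-of-fold predictions become the meta-features on which the second-layer model is fit. The sketch below shows only that mechanism; the base learner interface, fold count, and constant toy data are assumptions, and the paper's seven models are not reproduced.

```python
def oof_meta_features(X, y, base_fits, k=3):
    # Layer 1 of stacking: for each base learner, predict every sample from a
    # model trained on the folds that do NOT contain it (out-of-fold).
    n = len(X)
    folds = [list(range(i, n, k)) for i in range(k)]
    meta = [[0.0] * len(base_fits) for _ in range(n)]
    for fold in folds:
        hold = set(fold)
        Xtr = [X[i] for i in range(n) if i not in hold]
        ytr = [y[i] for i in range(n) if i not in hold]
        for m, fit in enumerate(base_fits):
            predict = fit(Xtr, ytr)  # fit is (X_train, y_train) -> predictor
            for i in fold:
                meta[i][m] = predict(X[i])
    return meta

# Trivial base learner for illustration: always predicts the training mean.
mean_learner = lambda Xtr, ytr: (lambda x: sum(ytr) / len(ytr))
```

The second-layer (meta) model is then trained on `meta` against `y`; using out-of-fold rather than in-fold predictions is what keeps the meta-model from simply memorizing the base learners' training fit.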

PartitionTuner: An operator scheduler for deep-learning compilers supporting multiple heterogeneous processing units

  • Misun Yu;Yongin Kwon;Jemin Lee;Jeman Park;Junmo Park;Taeho Kim
    • ETRI Journal
    • /
    • Vol. 45, No. 2
    • /
    • pp.318-328
    • /
    • 2023
  • Recently, embedded systems such as mobile platforms have multiple processing units that can operate in parallel, such as central processing units (CPUs) and neural processing units (NPUs). Deep-learning compilers can generate machine code optimized for these embedded systems from a deep neural network (DNN). However, the deep-learning compilers proposed so far generate code that executes DNN operators sequentially on a single processing unit, or parallel code only for graphics processing units (GPUs). In this study, we propose PartitionTuner, an operator scheduler for deep-learning compilers that supports multiple heterogeneous PUs, including CPUs and NPUs. PartitionTuner can generate an operator-scheduling plan that uses all available PUs simultaneously to minimize overall DNN inference time. Operator scheduling is based on an analysis of the DNN architecture and on performance profiles of individual and grouped operators measured on the heterogeneous processing units. In experiments on seven DNNs, PartitionTuner generated scheduling plans that perform 5.03% better than a static type-based operator-scheduling technique for SqueezeNet. In addition, PartitionTuner outperforms recent profiling-based operator-scheduling techniques for ResNet50, ResNet18, and SqueezeNet by 7.18%, 5.36%, and 2.73%, respectively.
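As a rough illustration of profile-driven scheduling (not PartitionTuner's actual algorithm), a greedy list scheduler can assign each operator to the processing unit that would finish it earliest, given measured per-PU runtimes. The operator names and timings below are made up, and dependencies between operators are ignored.

```python
def greedy_schedule(ops, profiles):
    # profiles[op][pu] = measured runtime of op on that processing unit.
    # Assign each operator to the PU with the earliest resulting finish time.
    finish = {pu: 0.0 for pu in next(iter(profiles.values()))}
    plan = {}
    for op in ops:
        best = min(finish, key=lambda pu: finish[pu] + profiles[op][pu])
        finish[best] += profiles[op][best]
        plan[op] = best
    return plan, max(finish.values())

plan, makespan = greedy_schedule(
    ["conv1", "fc1"],
    {"conv1": {"cpu": 2.0, "npu": 1.0},    # NPU faster for convolution
     "fc1":   {"cpu": 1.0, "npu": 3.0}})   # CPU faster for the dense layer
```

Even this naive version places the convolution on the NPU and the dense layer on the CPU so both units work at once; the paper's contribution is doing this well under real DNN dependencies and grouped-operator profiles.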