• Title/Summary/Keyword: MachineLearning


The Present and Perspective of Quantum Machine Learning (양자 기계학습 기술의 현황 및 전망)

  • Chung, Wonzoo;Lee, Seong-Whan
    • Journal of KIISE
    • /
    • v.43 no.7
    • /
    • pp.751-762
    • /
    • 2016
  • This paper presents an overview of the emerging field of quantum machine learning, which promises innovative speed-ups of current classical machine learning algorithms by applying quantum theory. The approaches and technical details of recently developed quantum machine learning algorithms that substantially accelerate their classical counterparts are presented. In addition, the quantum annealing algorithm behind the first commercial quantum computer is discussed.
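
Quantum annealers, such as the machine behind the first commercial quantum computer, minimize QUBO (quadratic unconstrained binary optimization) objectives of the form x^T Q x over binary vectors x. As a rough illustration of that problem format only, and not of the paper's algorithms, the sketch below minimizes a made-up toy Q matrix with classical simulated annealing.

```python
import math
import random

Q = [[-1.0, 2.0], [2.0, -1.0]]   # made-up toy QUBO coefficients

def energy(x):
    # QUBO objective: x^T Q x over binary x.
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def anneal(n_bits=2, steps=1000, t0=2.0):
    x = [random.randint(0, 1) for _ in range(n_bits)]
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9            # linear cooling schedule
        flip = random.randrange(n_bits)
        x_new = x[:]
        x_new[flip] ^= 1                              # propose a one-bit flip
        delta = energy(x_new) - energy(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = x_new                                 # accept downhill and lucky uphill moves
    return x, energy(x)

print(anneal())   # expect a minimizer such as ([1, 0], -1.0) or ([0, 1], -1.0)
```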

Trend of Utilization of Machine Learning Technology for Digital Healthcare Data Analysis (디지털 헬스케어 데이터 분석을 위한 머신 러닝 기술 활용 동향)

  • Woo, Y.C.;Lee, S.Y.;Choi, W.;Ahn, C.W.;Baek, O.K.
    • Electronics and Telecommunications Trends
    • /
    • v.34 no.1
    • /
    • pp.98-110
    • /
    • 2019
  • Machine learning has been applied to medical imaging and has shown excellent recognition rates. Recently, there has been much interest in preventive medicine. If data are accessible, machine learning packages can be used easily in digital healthcare fields. However, it is necessary to prepare the data in advance, and model evaluation and tuning are required to construct a reliable model; on average, these processes take more than 80% of the total effort required. In this study, we describe the basic concepts of machine learning, pre-processing and visualization of datasets, feature engineering for reliable models, model evaluation and tuning, and the latest trends in popular machine learning frameworks. Finally, we survey an explainable machine learning analysis tool and discuss the future direction of machine learning.
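
To make the prepare-evaluate-tune loop described above concrete, here is a minimal scikit-learn sketch; the dataset, estimator, and grid values are illustrative stand-ins, not taken from the survey.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)           # stand-in "healthcare" dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pre-processing and the estimator share one pipeline so tuning cannot leak
# test-set statistics into the scaler.
pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression(max_iter=1000))])

# Model tuning: a small grid over the regularization strength.
grid = GridSearchCV(pipe, {"clf__C": [0.01, 0.1, 1.0, 10.0]}, cv=5)
grid.fit(X_train, y_train)
print("best params:", grid.best_params_, "test accuracy:", grid.score(X_test, y_test))
```

Keeping the scaler and the estimator in one pipeline is what prevents the tuning step from leaking test statistics into pre-processing, which is a common cause of unreliable models.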

Learning of Adaptive Behavior of Artificial Ant Using Classifier System (분류자 시스템을 이용한 인공개미의 적응행동의 학습)

  • 정치선;심귀보
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1998.10a
    • /
    • pp.361-367
    • /
    • 1998
  • The two main applications of Genetic Algorithms (GA) are optimization and machine learning. Machine learning has two objectives: to make a complex system learn its environment and to produce the proper output of a system. Machine learning that uses Genetic Algorithms is called GA machine learning or genetic-based machine learning (GBML). Machine learning differs from optimization in that it seeks a rule set rather than a single solution: in optimization problems, the GA population should converge to the best individual, because the objective is to produce an individual near the optimal solution; machine learning systems, on the contrary, need to find a set of cooperating rules. There are two methods in GBML, the Michigan method and the Pittsburgh method. In the former, each rule is expressed as a string; in the latter, the whole set of rules is coded into a string. Holland's classifier system is the representative model of the Michigan method. A classifier system adjusts the strengths of the classifiers in the classifier list using the message list; because the rule set is adjusted on-line, real-time processing and on-line learning are possible. A classifier system has three major components: the performance system, the credit-apportionment system, and the rule-discovery system. In this paper, we solve a food-search problem through the learning and evolution of an artificial ant using a learning classifier system.
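
To make the Michigan-style machinery concrete, the following is a heavily simplified sketch of a classifier system: rules with strengths compete to respond to a 2-bit message, credit is apportioned to the firing rule, and crossover occasionally discovers new rules. The toy environment (the right action equals the first sensor bit) is an invented stand-in for the ant's food-search task, not the paper's setup.

```python
import random

CONDS = ["00", "01", "10", "11", "0#", "1#", "#0", "#1", "##"]

def matches(cond, msg):
    # '#' is the usual wildcard in Holland-style condition strings.
    return all(c in ("#", m) for c, m in zip(cond, msg))

# Classifier list: [condition, action, strength].
rules = [[random.choice(CONDS), random.randint(0, 1), 10.0] for _ in range(12)]

def step(msg, reward_if_correct=10.0):
    matched = [r for r in rules if matches(r[0], msg)]
    if not matched:
        return
    winner = max(matched, key=lambda r: r[2])        # strength decides which rule fires
    # Toy environment: the "right" action equals the first sensor bit.
    reward = reward_if_correct if winner[1] == int(msg[0]) else 0.0
    winner[2] += 0.1 * (reward - winner[2])          # credit apportionment

def discover():
    # Rule discovery: cross the two strongest rules, replace the weakest.
    a, b = sorted(rules, key=lambda r: -r[2])[:2]
    child = [a[0][0] + b[0][1], random.choice([a[1], b[1]]), (a[2] + b[2]) / 2]
    rules[min(range(len(rules)), key=lambda i: rules[i][2])] = child

for t in range(500):
    step(random.choice(["00", "01", "10", "11"]))
    if t % 50 == 49:
        discover()

print(sorted(rules, key=lambda r: -r[2])[:3])        # strongest surviving rules
```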


Machine Learning and Deep Learning Models to Predict Income and Employment with Busan's Strategic Industry and Export (머신러닝과 딥러닝 기법을 이용한 부산 전략산업과 수출에 의한 고용과 소득 예측)

  • Chae-Deug Yi
    • Korea Trade Review
    • /
    • v.46 no.1
    • /
    • pp.169-187
    • /
    • 2021
  • This paper analyzes the feasibility of using machine learning and deep learning methods to forecast income and employment from Busan's strategic industries as well as investment, exports, and exchange rates. Decision tree, artificial neural network, support vector machine, and deep learning models were used to forecast income and employment in Busan. The comparison of their predictive abilities yielded the following main findings. First, the decision tree models predict income and employment well, although the forecast values vary somewhat with the depth of the decision trees and with the conditions on the strategic industries, investment, exports, and exchange rates. Second, the artificial neural network models show somewhat low coefficients of determination and somewhat high RMSEs, so they do not forecast income and employment well. Third, the support vector machine models show high predictive power, with high coefficients of determination and low RMSEs. Fourth, the deep neural network models show even higher predictive power with appropriate epochs and batch sizes. Since machine learning and deep learning models can predict employment well, they should be adopted to forecast income and employment.
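
The sketch below mirrors the paper's comparison on synthetic data (the Busan industry, investment, export, and exchange-rate series are not reproduced here): decision tree, neural network, and SVM regressors scored by R^2 and RMSE, the same metrics the abstract cites.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))    # stand-ins: investment, exports, exchange rate, industry output
y = X @ np.array([1.5, -2.0, 0.5, 3.0]) + rng.normal(scale=0.5, size=300)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "decision tree (depth 4)": DecisionTreeRegressor(max_depth=4, random_state=0),
    "artificial neural network": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                                              random_state=0),
    "support vector machine": SVR(C=10.0),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5     # RMSE computed portably across versions
    print(f"{name}: R^2={r2_score(y_te, pred):.3f}, RMSE={rmse:.3f}")
```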

Distributed In-Memory Caching Method for ML Workload in Kubernetes (쿠버네티스에서 ML 워크로드를 위한 분산 인-메모리 캐싱 방법)

  • Dong-Hyeon Youn;Seokil Song
    • Journal of Platform Technology
    • /
    • v.11 no.4
    • /
    • pp.71-79
    • /
    • 2023
  • In this paper, we analyze the characteristics of machine learning workloads and, based on them, propose a distributed in-memory caching technique to improve their performance. The core of a machine learning workload is model training, which is a computationally intensive task. Performing machine learning workloads in a Kubernetes-based cloud environment in which the computing framework and storage are separated allows resources to be allocated effectively, but delays can occur because IO must be performed over the network. In this paper, we propose a distributed in-memory caching technique to improve the performance of machine learning workloads performed in such an environment. In particular, we propose a new method of precaching the data required for machine learning workloads into the distributed in-memory cache, taking into account Kubeflow Pipelines, a Kubernetes-based machine learning pipeline management tool.
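
The paper's cache is its own system; as a rough stand-in for the precache-then-train pattern it describes, the sketch below pushes training files into Redis (via redis-py) before a training step reads them. The host name, key scheme, and file paths are all invented for illustration, and a Redis service is assumed to be reachable in the cluster.

```python
import pathlib
import redis  # third-party client; a running Redis service is assumed

# Hypothetical in-cluster service name; replace with the real cache endpoint.
cache = redis.Redis(host="redis-cache.default.svc.cluster.local", port=6379)

def precache(files):
    # Pipeline step 1: pull training data into the in-memory cache before the
    # compute-bound training step starts, hiding storage/network IO latency.
    for path in files:
        cache.set(f"ds:{path.name}", path.read_bytes())

def load(name, fallback_dir=pathlib.Path("/data/train")):
    # Training step: hit the cache first; fall back to slow storage on a miss.
    blob = cache.get(f"ds:{name}")
    return blob if blob is not None else (fallback_dir / name).read_bytes()

precache(pathlib.Path("/data/train").glob("*.tfrecord"))
```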


Development of Medical Cost Prediction Model Based on the Machine Learning Algorithm (머신러닝 알고리즘 기반의 의료비 예측 모델 개발)

  • Han Bi KIM;Dong Hoon HAN
    • Journal of Korea Artificial Intelligence Association
    • /
    • v.1 no.1
    • /
    • pp.11-16
    • /
    • 2023
  • Accurate hospital case modeling and prediction are crucial for efficient healthcare. In this study, we demonstrate the implementation of regression analysis methods in machine learning systems using mathematical statistics and machine learning techniques. The developed machine learning models include Bayesian linear, artificial neural network, decision tree, decision forest, and linear regression models. Through the application of these algorithms, the corresponding regression models were constructed and analyzed. The results suggest the potential of leveraging machine learning systems for medical research. The experiment aimed to create an Azure Machine Learning Studio tool for the speedy evaluation of multiple regression models. The tool facilitates the comparison of the five types of regression models in a unified experiment and presents assessment results with performance metrics. Evaluation of the regression models highlighted the advantages of boosted decision tree regression and decision forest regression in hospital case prediction. These findings could lay the groundwork for the deliberate development of new directions in medical data processing and decision making. Potential avenues for future research include methods such as clustering, classification, and anomaly detection in healthcare systems.
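
Azure Machine Learning Studio is a GUI tool, so as a sketch of the study's two best performers the snippet below uses the closest scikit-learn analogues: gradient-boosted trees for "boosted decision tree regression" and a random forest for "decision forest regression", compared by cross-validated R^2 on synthetic cost-like data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))                        # stand-in patient features
y = np.exp(X[:, 0]) + 3 * X[:, 1] ** 2 + rng.normal(scale=0.3, size=500)

for name, model in [("boosted decision tree", GradientBoostingRegressor(random_state=0)),
                    ("decision forest", RandomForestRegressor(random_state=0))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")  # unified 5-fold comparison
    print(f"{name}: mean R^2 = {scores.mean():.3f}")
```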

Feasibility Study of Google's Teachable Machine in Diagnosis of Tooth-Marked Tongue

  • Jeong, Hyunja
    • Journal of dental hygiene science
    • /
    • v.20 no.4
    • /
    • pp.206-212
    • /
    • 2020
  • Background: A Teachable Machine is a kind of web-based machine learning tool for non-specialists. In this paper, the feasibility of Google's Teachable Machine (ver. 2.0) for diagnosing the tooth-marked tongue was studied. Methods: For machine learning of tooth-marked tongue diagnosis, a total of 1,250 tongue images from Kaggle's website were used. Ninety percent of the images were used for the training data set, and the remaining 10% for the test data set. Machine learning was performed on the separated images using Google's Teachable Machine (ver. 2.0). To optimize the machine learning parameters, I measured the diagnostic accuracy according to the values of the epoch, batch size, and learning rate. After hyper-parameter tuning, ROC (receiver operating characteristic) analysis was used to determine the sensitivity (true positive rate, TPR) and the false positive rate (FPR, one minus the specificity) of the machine learning model in diagnosing the tooth-marked tongue. Results: To evaluate the usefulness of the Teachable Machine in clinical application, I used 634 tooth-marked tongue images and 491 unmarked tongue images for machine learning. The diagnostic accuracy was best when the epoch, batch size, and learning rate were 75, 128, and 0.0001, respectively. The accuracies for the tooth-marked tongue and the unmarked tongue were 92.1% and 72.6%, respectively, and the TPR and FPR were 0.92 and 0.28. Conclusion: These results are more accurate than Li's experimental results obtained with a convolutional neural network. Google's Teachable Machine shows good performance after hyper-parameter tuning in the diagnosis of the tooth-marked tongue. We confirmed that the tool is useful for several clinical applications.
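
The evaluation step reduces to a confusion matrix: given the model's labels on held-out images, sensitivity (TPR) and the false positive rate follow directly. The label vectors below are invented toy values, not the study's data.

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 1, 0]   # 1 = tooth-marked tongue (toy labels)
y_pred = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]   # classifier output (toy labels)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
tpr = tp / (tp + fn)   # sensitivity: marked tongues correctly caught
fpr = fp / (fp + tn)   # false alarms on healthy tongues; equals 1 - specificity
print(f"TPR={tpr:.2f} FPR={fpr:.2f}")
```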

A Prediction of Work-life Balance Using Machine Learning

  • Youngkeun Choi
    • Asia pacific journal of information systems
    • /
    • v.34 no.1
    • /
    • pp.209-225
    • /
    • 2024
  • This research aims to use machine learning technology in human resource management to predict employees' work-life balance. The study utilized a dataset from IBM Watson Analytics in the IBM Community for the machine learning analysis. Workers' work-life balance was examined as a multinomial dependent variable, with continuous and categorical variables handled using the Generalized Linear Model. The complexity of assessing the roles of the variables, and how their impact varies with the type of model used, was highlighted. The study's outcomes are academically and practically relevant, showcasing how machine learning can offer further understanding of psychological variables like work-life balance by analyzing employee profiles.
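
A minimal sketch of the multinomial setup: predicting a categorical work-life-balance rating from employee attributes with multinomial logistic regression, the categorical GLM with a logit link. The features and labels below are invented; the paper uses the public IBM Watson Analytics HR dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 3))    # invented stand-ins, e.g. overtime, commute, tenure
y = rng.integers(1, 5, size=400) # work-life balance rated 1 (worst) .. 4 (best)

# Multinomial logistic regression: the categorical GLM with a logit link.
model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict(X[:5]))
print(model.predict_proba(X[:1]).round(2))   # class probabilities per rating
```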

Research Trends in Wi-Fi Performance Improvement in Coexistence Networks with Machine Learning (기계학습을 활용한 이종망에서의 Wi-Fi 성능 개선 연구 동향 분석)

  • Kang, Young-myoung
    • Journal of Platform Technology
    • /
    • v.10 no.3
    • /
    • pp.51-59
    • /
    • 2022
  • Machine learning, which has recently developed at a remarkable pace, has become an important technology for solving various optimization problems. In this paper, we introduce the latest research that solves the channel-sharing problem in heterogeneous networks using machine learning, analyze the characteristics of the mainstream approaches, and present a guide to future research directions. Existing studies have generally adopted Q-learning, since it supports fast learning in both online and offline environments. However, these studies have either not considered various coexistence scenarios or have neglected the placement of the machine learning controllers, which can have a significant impact on network performance. One powerful way to overcome these disadvantages is to selectively use a machine learning algorithm according to changes in the network environment, based on the logical network architecture for machine learning proposed by the ITU.
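
As a concrete reference point for the Q-learning approach these studies adopt, here is a minimal tabular sketch for a toy channel-access problem: the agent learns to avoid the channel the coexisting network currently occupies. States, rewards, and transition rates are invented for illustration.

```python
import random

n_channels = 2
Q = [[0.0] * n_channels for _ in range(n_channels)]   # Q[busy_channel][chosen_channel]
alpha, gamma, eps = 0.1, 0.9, 0.1

busy = 0                                  # channel the coexisting network occupies
for _ in range(2000):
    state = busy
    if random.random() < eps:                           # epsilon-greedy exploration
        action = random.randrange(n_channels)
    else:
        action = max(range(n_channels), key=lambda a: Q[state][a])
    reward = 1.0 if action != busy else -1.0            # collision penalty
    busy = busy if random.random() < 0.9 else 1 - busy  # occupancy drifts slowly
    Q[state][action] += alpha * (reward + gamma * max(Q[busy]) - Q[state][action])

print([[round(q, 2) for q in row] for row in Q])        # learns to pick the free channel
```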

A study on the standardization strategy for building of learning data set for machine learning applications (기계학습 활용을 위한 학습 데이터세트 구축 표준화 방안에 관한 연구)

  • Choi, JungYul
    • Journal of Digital Convergence
    • /
    • v.16 no.10
    • /
    • pp.205-212
    • /
    • 2018
  • With the development of high-performance CPUs/GPUs, artificial intelligence algorithms such as deep neural networks, and large amounts of data, machine learning has been extended to various applications. In particular, the large amounts of data collected from the Internet of Things, social network services, web pages, and public data are accelerating the use of machine learning. Learning data sets for machine learning exist in various formats according to application field and data type, which makes it difficult to process the data effectively and apply them to machine learning. Therefore, this paper studies a method for building learning data sets for machine learning in accordance with standardized procedures. It first analyzes the requirements of learning data sets according to problem type and data type. Based on this analysis, it presents a reference model for building learning data sets for machine learning applications, along with the target standardization organizations and a standards development strategy.
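
As one possible reading of such a reference model, the sketch below encodes a learning-data-set manifest that records problem type and data type so tools can process sets uniformly. The field names follow the abstract's requirement analysis, not any published standard.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetManifest:
    name: str
    problem_type: str      # e.g. "classification", "regression", "clustering"
    data_type: str         # e.g. "image", "text", "tabular", "time-series"
    label_schema: dict     # class names or target ranges
    splits: dict = field(default_factory=lambda: {"train": 0.8, "test": 0.2})

    def validate(self):
        assert self.problem_type in {"classification", "regression", "clustering"}
        assert abs(sum(self.splits.values()) - 1.0) < 1e-9, "splits must sum to 1"

manifest = DatasetManifest("iot-sensor-faults", "classification", "time-series",
                           label_schema={"classes": ["normal", "fault"]})
manifest.validate()
print(manifest)
```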