• Title/Summary/Keyword: machine learning applications

Gradient Descent Approach for Value-Based Weighting (점진적 하강 방법을 이용한 속성값 기반의 가중치 계산방법)

  • Lee, Chang-Hwan;Bae, Joo-Hyun
    • The KIPS Transactions: Part B
    • /
    • v.17B no.5
    • /
    • pp.381-388
    • /
    • 2010
  • Naive Bayesian learning has been widely used in many data mining applications and performs surprisingly well in practice. However, because naive Bayesian learning assumes that all attributes are equally important, the posterior probabilities it estimates are sometimes poor. In this paper, we propose a more fine-grained weighting method, called value weighting, in the context of naive Bayesian learning. While current weighting methods assign a weight to each attribute, we assign a weight to each attribute value. We investigate how the proposed value weighting affects the performance of naive Bayesian learning, and we develop new gradient descent-based methods for both value weighting and feature weighting in the context of naive Bayesian learning. The performance of the proposed methods has been compared with attribute weighting and standard naive Bayesian learning, and the value weighting method performed better in most cases.
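
A rough illustration of the value-weighting idea (an assumed formulation; the paper's exact loss function and gradient updates are not reproduced here), where the weight table `w` is hypothetical and would be fit by gradient descent:

```python
# Sketch of value-weighted naive Bayesian scoring:
#   score(c | x) = log P(c) + sum_i w[i][x_i] * log P(x_i | c)
# Plain naive Bayes is the special case where every w[i][v] == 1;
# attribute weighting would use one weight per attribute i instead of
# one weight per attribute value (i, v).
def value_weighted_score(x, c, log_prior, log_cond, w):
    """x: tuple of attribute values; log_cond[i][v][c] = log P(v | c)."""
    score = log_prior[c]
    for i, v in enumerate(x):
        score += w[i][v] * log_cond[i][v][c]
    return score
```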

Intelligent Massive Traffic Handling Scheme in 5G Bottleneck Backhaul Networks

  • Tam, Prohim;Math, Sa;Kim, Seokhoon
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.3
    • /
    • pp.874-890
    • /
    • 2021
  • With the widespread deployment of fifth-generation (5G) communication networks, various real-time applications are rapidly increasing and generating massive traffic in backhaul network environments. In this scenario, network congestion occurs when the communication and computation demands exceed the maximum available capacity, which severely degrades network performance. To alleviate this problem, this paper proposes an intelligent resource allocation (IRA) scheme that integrates with the extant resource adjustment (ERA) approach, based on the convergence of the support vector machine (SVM) algorithm, software-defined networking (SDN), and mobile edge computing (MEC) paradigms. The proposed scheme acquires predictable schedules and shifts downlink (DL) transmission toward off-peak intervals as the predominant priority. Accordingly, the peak-hour bandwidth reserved for real-time uplink (UL) transmission gains capacity for a variety of mission-critical applications. Furthermore, to boost gateway computation resources, MEC servers are implemented and integrated with the proposed scheme. The simulation results evaluate and compare the proposed scheme with the conventional approach over a variety of QoS metrics, including network delay, jitter, packet drop ratio, packet delivery ratio, and throughput.
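
A minimal sketch of the prediction step, using scikit-learn's SVC as a stand-in for the SVM component; the features, labels, and the `schedule_downlink` helper below are hypothetical illustrations, not the paper's implementation:

```python
import numpy as np
from sklearn.svm import SVC

# Toy stand-in: classify intervals as peak (1) or off-peak (0) from
# simple traffic features, then defer delay-tolerant DL traffic that
# is predicted to fall in a peak interval.
rng = np.random.default_rng(0)
X = rng.random((200, 2))            # e.g., [normalized load, time-of-day]
y = (X[:, 0] > 0.6).astype(int)     # synthetic peak/off-peak label
clf = SVC(kernel="rbf").fit(X, y)

def schedule_downlink(interval_features):
    # Defer bulk DL transmission when the interval is predicted as peak,
    # keeping peak-hour bandwidth free for real-time UL traffic.
    return "defer" if clf.predict([interval_features])[0] == 1 else "send"
```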

Prediction of Wave Breaking Using Machine Learning Open Source Platform (머신러닝 오픈소스 플랫폼을 활용한 쇄파 예측)

  • Lee, Kwang-Ho;Kim, Tag-Gyeom;Kim, Do-Sam
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.32 no.4
    • /
    • pp.262-272
    • /
    • 2020
  • A large number of studies on wave breaking have been carried out, and many experimental data have been documented. Moreover, on the basis of various experimental data sets, many empirical or semi-empirical formulas, based primarily on regression analysis, have been proposed to quantitatively estimate wave breaking for engineering applications. However, wave breaking has an inherent variability, which implies that a linear statistical approach such as linear regression analysis may be inadequate. This study presents an alternative nonlinear method using a neural network, one of the machine learning methods, to estimate breaking wave height and breaking depth. The neural network is modeled using TensorFlow, a machine learning open-source platform distributed by Google. The neural network is trained on randomly selected portions of the collected experimental data and evaluated on data not used during training. The breaking wave heights and depths predicted by the fully trained neural network are more accurate than those obtained from existing empirical formulas. These results show that a neural network is a useful tool for the prediction of wave breaking.
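
Since the paper builds the network with TensorFlow, a minimal Keras sketch of such a regressor might look as follows; the input features, layer sizes, and training settings are assumptions, not the authors' exact configuration:

```python
import tensorflow as tf

# Two-output regressor: predicts [breaking wave height, breaking depth]
# from a few deep-water wave parameters (assumed here to be three inputs,
# e.g., offshore wave height, wave steepness, and beach slope).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(3,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2),
])
model.compile(optimizer="adam", loss="mse")
# model.fit(X_train, y_train, ...) on randomly selected experimental data,
# then evaluate on the held-out portion not used for training.
```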

Combining Multiple Classifiers for Automatic Classification of Email Documents (전자우편 문서의 자동분류를 위한 다중 분류기 결합)

  • Lee, Jae-Haeng;Cho, Sung-Bae
    • Journal of KIISE: Software and Applications
    • /
    • v.29 no.3
    • /
    • pp.192-201
    • /
    • 2002
  • Automated text classification is considered an important method for managing and processing the huge number of documents in digital form that are widespread and continuously increasing. Recently, text classification has been addressed with machine learning technologies such as k-nearest neighbor, decision tree, support vector machine, and neural networks. However, few investigations of text classification have studied real problems rather than well-organized text corpora, and so they do not demonstrate practical usefulness. This paper proposes and analyzes text classification methods for a real application, the email document classification task. First, we propose a method for combining multiple neural networks that improves performance through combination with the maximum rule and with neural networks. Second, we present another strategy for combining multiple machine learning classifiers: voting, Borda count, and neural networks improve the overall classification performance. Experimental results show the usefulness of the proposed methods for a real application domain, yielding precision rates of more than 90%.
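
Of the combination strategies named above, Borda count is simple to illustrate; a minimal sketch follows (the class indices and rankings are made up for the example):

```python
import numpy as np

def borda_combine(rankings, n_classes):
    """Combine per-classifier class rankings (best to worst) by Borda count."""
    points = np.zeros(n_classes)
    for ranking in rankings:
        for rank, cls in enumerate(ranking):
            points[cls] += n_classes - 1 - rank
    return int(points.argmax())

# Three classifiers each rank four classes; class 2 wins the count.
print(borda_combine([[2, 0, 1, 3], [2, 1, 0, 3], [0, 2, 1, 3]], 4))  # -> 2
```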

Object-based Compression of Thermal Infrared Images for Machine Vision (머신 비전을 위한 열 적외선 영상의 객체 기반 압축 기법)

  • Lee, Yegi;Kim, Shin;Lim, Hanshin;Choo, Hyon-Gon;Cheong, Won-Sik;Seo, Jeongil;Yoon, Kyoungro
    • Journal of Broadcast Engineering
    • /
    • v.26 no.6
    • /
    • pp.738-747
    • /
    • 2021
  • Today, with the improvement of deep learning technology, computer vision tasks such as image classification, object detection, object segmentation, and object tracking have shown remarkable improvements. Various applications combining deep learning, such as intelligent surveillance, robots, the Internet of Things, and autonomous vehicles, are being deployed in actual industries. Accordingly, an efficient compression method for video data is needed for machine consumption as well as for human consumption. In this paper, we propose an object-based compression method for thermal infrared images for machine vision. The input image is divided into object and background parts based on the object detection results, to achieve efficient image compression and high neural network performance. The separated parts are encoded at different compression ratios. The experimental results show that the proposed method achieves superior compression efficiency, with a maximum BD-rate gain of -19.83%, compared with whole-image compression using VVC.
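
A rough sketch of the object/background split step only (illustrative: the paper encodes the separated parts with VVC at different compression ratios, which is not reproduced here, and the box format and zero-fill are assumptions):

```python
import numpy as np

def split_by_boxes(img, boxes):
    """Split an image into object and background parts using detector boxes.

    img: (H, W, C) array; boxes: iterable of (x0, y0, x1, y1).
    The object part would then be encoded at a lower compression ratio
    (higher quality) than the background part.
    """
    mask = np.zeros(img.shape[:2], dtype=bool)
    for x0, y0, x1, y1 in boxes:
        mask[y0:y1, x0:x1] = True
    obj = np.where(mask[..., None], img, 0)
    bg = np.where(mask[..., None], 0, img)
    return obj, bg
```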

Dynamic Positioning of Robot Soccer Simulation Game Agents using Reinforcement learning

  • Kwon, Ki-Duk;Cho, Soo-Sin;Kim, In-Cheol
    • Proceedings of the Korea Intelligent Information System Society Conference
    • /
    • 2001.01a
    • /
    • pp.59-64
    • /
    • 2001
  • The robot soccer simulation game is a dynamic multi-agent environment. In this paper, we suggest a new reinforcement learning approach to each agent's dynamic positioning in such a dynamic environment. Reinforcement learning is the machine learning paradigm in which an agent learns, from indirect and delayed reward, an optimal policy for choosing sequences of actions that produce the greatest cumulative reward. Reinforcement learning therefore differs from supervised learning in that no input-output pairs are presented as training examples. Furthermore, model-free reinforcement learning algorithms such as Q-learning do not require defining or learning any model of the surrounding environment, yet they can learn the optimal policy if the agent can visit every state-action pair infinitely often. However, the biggest problem with monolithic reinforcement learning is that its straightforward application does not scale up to more complex environments due to the intractably large space of states. To address this problem, we suggest Adaptive Mediation-based Modular Q-Learning (AMMQL) as an improvement of the existing Modular Q-Learning (MQL). While simple modular Q-learning combines the results from each learning module in a fixed way, AMMQL combines them more flexibly by assigning a different weight to each module according to its contribution to the reward. Therefore, in addition to handling the large state space effectively, AMMQL shows higher adaptability to environmental changes than pure MQL. This paper introduces the concept of AMMQL and presents the details of its application to the dynamic positioning of robot soccer agents.
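
A sketch of the mediation idea; the weighting and update rules below are assumptions for illustration, not the paper's exact AMMQL formulation. Each module keeps its own Q-table, and the mediator weights module opinions by their recent contribution to the reward:

```python
import numpy as np

def select_action(q_modules, weights, state, n_actions):
    """Merge per-module Q-values with mediator weights, then act greedily."""
    merged = np.zeros(n_actions)
    for q, w in zip(q_modules, weights):
        merged += w * q[state]
    return int(merged.argmax())

def update_weights(weights, module_rewards, lr=0.1):
    """Shift weight toward modules whose advice coincided with more reward.

    Holding the weights fixed and equal would recover plain Modular
    Q-Learning, where modules are combined in a fixed way.
    """
    w = weights + lr * (module_rewards - module_rewards.mean())
    w = np.clip(w, 1e-3, None)
    return w / w.sum()
```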

FPGA Design of SVM Classifier for Real Time Image Processing (실시간 영상처리를 위한 SVM 분류기의 FPGA 구현)

  • Na, Won-Seob;Han, Sung-Woo;Jeong, Yong-Jin
    • Journal of IKEEE
    • /
    • v.20 no.3
    • /
    • pp.209-219
    • /
    • 2016
  • SVM is a machine learning method used in image processing and is well known for its high classification performance. Multiple MAC operations must be performed to use SVM for image classification; however, as the resolution of the target image or the number of classification cases increases, the execution time of SVM also increases, which makes it difficult to use in real-time applications. In this paper, we propose a hardware architecture that enables real-time applications using SVM classification. We used a parallel architecture to calculate MAC operations simultaneously, and we designed the system to accommodate several feature extractors for compatibility. The RBF kernel was used for the hardware implementation, and the exponent calculation formula included in the kernel was modified to enable fixed-point modeling. Experimental results show that the system, implemented on a Xilinx ZC-706 evaluation board, can process 60.46 fps at 1360×800 resolution with a 100 MHz clock frequency.
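
One common way to make the exponential in the RBF kernel exp(-gamma * ||x - sv||^2) fixed-point friendly is a small lookup table with linear interpolation; the sketch below illustrates that general technique as a software model, and is not necessarily the modification the authors used:

```python
import numpy as np

# 257-entry table of exp(-t) for t in [0, 8], as might be stored in ROM.
EXP_LUT = np.exp(-np.linspace(0.0, 8.0, 257))

def rbf_lut(x, sv, gamma):
    """RBF kernel value via table lookup plus linear interpolation."""
    t = min(gamma * np.sum((x - sv) ** 2), 8.0)   # clamp: exp(-8) is ~0
    pos = t / 8.0 * 256
    i = int(pos)
    if i >= 256:
        return EXP_LUT[256]
    frac = pos - i
    return EXP_LUT[i] * (1.0 - frac) + EXP_LUT[i + 1] * frac
```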

OLE File Analysis and Malware Detection using Machine Learning

  • Choi, Hyeong Kyu;Kang, Ah Reum
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.5
    • /
    • pp.149-156
    • /
    • 2022
  • Recently, there have been many reports of document-type malware that injects malicious code into Microsoft Office files. Document-type malware is often hidden by encoding the malicious code within the document, so it can easily bypass anti-virus programs. We found that malicious code was inserted into Visual Basic for Applications (VBA) macros, a feature supported by Microsoft Office. Malicious code such as shellcode that runs external programs and URL-related code that downloads files from external URLs was identified. We selected 354 keywords that appear repeatedly in malicious Microsoft Office files and defined the number of times each keyword appears in the body of a document as a feature. We performed machine learning with the SVM, naïve Bayes, logistic regression, and random forest algorithms. As a result, the algorithms achieved accuracies of 0.994, 0.659, 0.995, and 0.998, respectively.
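
A minimal sketch of that pipeline using scikit-learn; the keyword list below is an illustrative stand-in for the 354 keywords the authors selected, and the model settings are library defaults rather than the paper's:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Stand-in subset of the selected keywords.
KEYWORDS = ["Shell", "CreateObject", "AutoOpen", "http"]

def keyword_counts(document_text):
    """Feature vector: occurrence count of each keyword in the body."""
    return [document_text.count(k) for k in KEYWORDS]

def train_all(X, y):
    """Train the four classifier families compared in the paper."""
    models = [SVC(), GaussianNB(),
              LogisticRegression(max_iter=1000), RandomForestClassifier()]
    return [m.fit(X, y) for m in models]
```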

Trends in Artificial Intelligence Applications in Clinical Trials: An analysis of ClinicalTrials.gov (임상시험에서 인공지능의 활용에 대한 분석 및 고찰: ClinicalTrials.gov 분석)

  • Jeong Min Go;Ji Yeon Lee;Yun-Kyoung Song;Jae Hyun Kim
    • Korean Journal of Clinical Pharmacy
    • /
    • v.34 no.2
    • /
    • pp.134-139
    • /
    • 2024
  • Background: The increasing number of studies on artificial intelligence (AI) and machine learning (ML) has led to their application in clinical trials. The purpose of this study is to analyze the computer-based technologies (AI/ML) applied in clinical trials registered on ClinicalTrials.gov and to elucidate their current usage. Methods: As of March 1st, 2023, protocols listed on ClinicalTrials.gov that claimed to use AI/ML and included at least one of the following interventions were selected: Drug, Biological, Dietary Supplement, or Combination Product. The selected protocols were classified according to their context of use: 1) drug discovery; 2) toxicity prediction; 3) enrichment; 4) risk stratification/management; 5) dose selection/optimization; 6) adherence; 7) synthetic control; 8) endpoint assessment; 9) postmarketing surveillance; and 10) drug selection. Results: The applications of AI/ML were explored in 131 clinical trial protocols. The area where AI/ML was most frequently utilized was endpoint assessment (n=80), followed by dose selection/optimization (n=15), risk stratification/management (n=13), drug discovery (n=4), adherence (n=4), drug selection (n=1), and enrichment (n=1). Conclusion: The most frequent application of AI/ML in clinical trials is endpoint assessment, where utilization primarily focuses on the diagnosis of disease by image or video analysis. The number of clinical trials using artificial intelligence will increase as the technology continues to develop rapidly, making it necessary for regulatory bodies to establish appropriate regulations for these trials.

A Comparative Study of Phishing Websites Classification Based on Classifier Ensemble

  • Tama, Bayu Adhi;Rhee, Kyung-Hyune
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.5
    • /
    • pp.617-625
    • /
    • 2018
  • Phishing websites have become a crucial concern in cyber security applications. Phishing is performed by fraudulently deceiving users with the aim of obtaining their sensitive information, such as bank account information, credit card numbers, usernames, and passwords. The threat has led to huge losses for online retailers, e-business platforms, and financial institutions, to name but a few. One way to build an anti-phishing detection mechanism is to construct a classification algorithm based on machine learning techniques. The objective of this paper is to compare different classifier ensemble approaches, i.e., random forest, rotation forest, gradient boosted machine, and extreme gradient boosting, against single classifiers, i.e., decision tree, classification and regression tree, and credal decision tree, in the case of website phishing. Area under the ROC curve (AUC) is employed as the performance metric, whilst statistical tests are used as the baseline indicator of significance among classifiers. The paper contributes to the existing literature by providing a benchmark of classifier ensembles for web phishing detection.
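
A minimal sketch of such a comparison protocol with scikit-learn; rotation forest and credal decision trees have no scikit-learn implementation, so only a subset of the paper's classifiers appears, and the cross-validation settings are assumptions:

```python
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def compare_auc(X, y):
    """Cross-validated AUC for ensemble methods vs. a single decision tree."""
    models = {
        "random_forest": RandomForestClassifier(),
        "gradient_boosting": GradientBoostingClassifier(),
        "decision_tree": DecisionTreeClassifier(),
    }
    return {name: cross_val_score(m, X, y, scoring="roc_auc", cv=10).mean()
            for name, m in models.items()}
```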