Title/Summary/Keyword: machine learning applications

A Method of Selecting Layered File System Based on Learning Block I/O History for Service-Customized Container (서비스 맞춤형 컨테이너를 위한 블록 입출력 히스토리 학습 기반 컨테이너 레이어 파일 시스템 선정 기법)

  • Yong, Chanho; Na, Sang-Ho; Lee, Pill-Woo; Huh, Eui-Nam
    • KIPS Transactions on Computer and Communication Systems / v.6 no.10 / pp.415-420 / 2017
  • OS-level virtualization is a new paradigm for deploying applications and is attracting attention as a technology to replace the traditional virtualization technique, the virtual machine (VM). In particular, Docker containers can distribute application images faster and more efficiently than before by applying a layered image structure and union mounts to existing Linux containers. These characteristics can only be exploited by layered file systems that support snapshot functionality, so an appropriate layered file system must be selected according to the characteristics of the containerized application. We examine the characteristics of representative layered file systems and evaluate the write performance of each according to the two operating principles of layered file systems, allocate-on-demand and copy-up. We also suggest a method of determining the appropriate principle for an unknown containerized application by training an artificial neural network on the block I/O usage history produced under each principle. Finally, we validate the effectiveness of the artificial neural network trained on this block I/O history.
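
A minimal sketch of the selection idea above, assuming each application's block I/O history is reduced to a fixed-length feature vector; the features, synthetic data, and network size are illustrative assumptions, not the paper's setup:

```python
# Classify which layered-FS principle suits an application from its I/O history.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Hypothetical features per application: write ratio, mean request size, etc.
X = rng.random((200, 6))
# Placeholder labels: 0 = allocate-on-demand suits best, 1 = copy-up suits best.
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```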

An Incremental Rule Extraction Algorithm Based on Recursive Partition Averaging (재귀적 분할 평균에 기반한 점진적 규칙 추출 알고리즘)

  • Han, Jin-Chul; Kim, Sang-Kwi; Yoon, Chung-Hwa
    • Journal of KIISE: Software and Applications / v.34 no.1 / pp.11-17 / 2007
  • One of the popular methods used for pattern classification is the MBR (Memory-Based Reasoning) algorithm. Since it simply computes the distances between a test pattern and the training patterns or hyperplanes stored in memory and assigns the class of the nearest one, it cannot explain how a classification result is obtained. To overcome this problem, we propose an incremental learning algorithm based on RPA (Recursive Partition Averaging) that extracts IF-THEN rules describing the regularities inherent in the training patterns. However, the rules generated by RPA eventually overfit, because they depend too strongly on the details of the given training patterns, and RPA produces more rules than necessary due to over-partitioning of the pattern space. Consequently, we present IREA (Incremental Rule Extraction Algorithm), which overcomes the overfitting problem by removing useless conditions from rules while reducing the number of rules. We verify the performance of the proposed algorithm using benchmark data sets from the UCI Machine Learning Repository.
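
To make the condition-pruning step concrete, here is a minimal sketch of representing axis-aligned partitions as IF-THEN rules and dropping conditions that never change the outcome on the training data; the Rule structure and pruning criterion are simplified stand-ins, not the paper's RPA/IREA definitions:

```python
# Rules are interval conditions over features; prune() widens (drops) any
# condition whose removal keeps the rule's matches correctly classified.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Rule:
    bounds: List[Tuple[float, float]]  # one (lo, hi) condition per feature
    label: int                         # the rule's class (THEN part)

def matches(rule: Rule, x: List[float]) -> bool:
    return all(lo <= v <= hi for v, (lo, hi) in zip(x, rule.bounds))

def prune(rule: Rule, X: List[List[float]], y: List[int]) -> Rule:
    """Drop each condition whose removal keeps the rule's matches correct."""
    for i in range(len(rule.bounds)):
        widened = Rule(list(rule.bounds), rule.label)
        widened.bounds[i] = (float("-inf"), float("inf"))
        covered = [j for j, x in enumerate(X) if matches(widened, x)]
        if covered and all(y[j] == rule.label for j in covered):
            rule = widened  # condition i was useless: remove it
    return rule

X = [[0.2, 0.9], [0.3, 0.1], [0.8, 0.8]]
y = [1, 1, 0]
print(prune(Rule([(0.0, 0.5), (0.0, 1.0)], 1), X, y).bounds)
# -> [(0.0, 0.5), (-inf, inf)]: the second condition never mattered
```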

Machine learning based radar imaging algorithm for drone detection and classification (드론 탐지 및 분류를 위한 레이다 영상 기계학습 활용)

  • Moon, Min-Jung; Lee, Woo-Kyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.5 / pp.619-627 / 2021
  • Recent advances in low-cost, light-weight drones have extended their application areas in both the military and private sectors. Accordingly, surveillance against unfriendly drones has become an important issue. Drone detection and classification techniques have long been emphasized in order to prevent attacks or accidents by commercial drones in urban areas. Most commercial drones have small sizes and low reflectivity, and hence typical sensors that use acoustic, infrared, or radar signals exhibit limited performance. Recently, artificial intelligence algorithms have been actively exploited to enhance radar image identification performance. In this paper, we adopt machine learning algorithms for high-resolution radar imaging in drone detection and classification applications. For this purpose, simulations are carried out against commercial drone models and compared with experimental data obtained through high-resolution radar field tests.
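
A minimal sketch of the classification stage, assuming small single-channel radar image chips as input; the architecture, 64x64 input size, and four-way class split are illustrative guesses rather than the paper's model:

```python
# A small CNN that maps radar image chips to drone-type classes.
import torch
import torch.nn as nn

class RadarDroneCNN(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):  # x: (batch, 1, 64, 64) radar image chips
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = RadarDroneCNN()
dummy = torch.randn(8, 1, 64, 64)  # a batch of simulated radar chips
print(model(dummy).shape)          # torch.Size([8, 4]) class scores
```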

Optimization of Transitive Verb-Objective Collocation Dictionary based on k-nearest Neighbor Learning (k-최근점 학습에 기반한 타동사-목적어 연어 사전의 최적화)

  • Kim, Yu-Seop; Zhang, Byoung-Tak; Kim, Yung-Taek
    • Journal of KIISE: Software and Applications / v.27 no.3 / pp.302-313 / 2000
  • In English-Korean machine translation, transitive verb-objective collocations are utilized for the accurate translation of an English verb phrase into Korean. This paper presents an algorithm for correct verb translation based on k-nearest neighbor learning, where the semantic distance used by the k-nearest neighbor learner is defined on WordNet. We also present algorithms for automatic collocation dictionary optimization. These algorithms extract transitive verb-objective pairs as training examples from large corpora and minimize the number of examples, considering the trade-off between translation accuracy and example size. Experiments show that these algorithms optimize the collocation dictionary while keeping about 90% translation accuracy for the verb 'build'.
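
A minimal sketch of the verb-selection step, assuming the collocation dictionary stores (object noun, Korean verb) pairs and a WordNet path-based distance; the example entries, first-sense heuristic, and use of NLTK are illustrative assumptions, not the paper's resources:

```python
# Requires: pip install nltk, then nltk.download('wordnet')
from collections import Counter
from nltk.corpus import wordnet as wn

# Hypothetical dictionary entries: (object noun, Korean translation of 'build').
examples = [("house", "짓다"), ("bridge", "건설하다"), ("trust", "쌓다"),
            ("tower", "건설하다"), ("cabin", "짓다"), ("reputation", "쌓다")]

def distance(noun_a: str, noun_b: str) -> float:
    sa = wn.synsets(noun_a, pos=wn.NOUN)
    sb = wn.synsets(noun_b, pos=wn.NOUN)
    if not sa or not sb:
        return 1.0
    sim = sa[0].path_similarity(sb[0]) or 0.0  # first-sense heuristic
    return 1.0 - sim

def translate_build(obj: str, k: int = 3) -> str:
    # Vote among the k nearest dictionary objects in WordNet distance.
    nearest = sorted(examples, key=lambda e: distance(obj, e[0]))[:k]
    return Counter(t for _, t in nearest).most_common(1)[0][0]

print(translate_build("castle"))  # expect a construction sense, e.g. '짓다'
```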

Design of Gas Classifier Based On Artificial Neural Network (인공신경망 기반 가스 분류기의 설계)

  • Jeong, Woojae; Kim, Minwoo; Cho, Jaechan; Jung, Yunho
    • Journal of IKEEE / v.22 no.3 / pp.700-705 / 2018
  • In this paper, we propose a gas classifier based on a restricted Coulomb energy neural network (RCE-NN) and present its hardware implementation results for real-time learning and classification. Since the RCE-NN has a flexible network architecture with a real-time learning process, it is suitable for gas classification applications. The proposed gas classifier showed 99.2% classification accuracy on the UCI gas dataset and was implemented with 26,702 logic elements on an Intel-Altera Cyclone IV FPGA. In addition, it was verified with an FPGA test system at an operating frequency of 63 MHz.
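
RCE learning is prototype-based, which is what makes incremental, real-time training possible: each prototype stores a center, a class, and a radius that shrinks whenever it wrongly fires. A minimal software sketch of that behavior (the initial radius, Euclidean metric, and update rules are generic RCE conventions, not the paper's hardware design):

```python
import numpy as np

class RCENetwork:
    def __init__(self, r_init: float = 1.0, r_min: float = 1e-3):
        self.centers, self.labels, self.radii = [], [], []
        self.r_init, self.r_min = r_init, r_min

    def fit_one(self, x: np.ndarray, y: int) -> None:
        covered = False
        for i, (c, lbl) in enumerate(zip(self.centers, self.labels)):
            d = float(np.linalg.norm(x - c))
            if d < self.radii[i]:
                if lbl != y:              # wrong-class prototype fires: shrink it
                    self.radii[i] = max(d, self.r_min)
                else:
                    covered = True
        if not covered:                   # no correct prototype fires: commit one
            self.centers.append(x.copy())
            self.labels.append(y)
            self.radii.append(self.r_init)

    def predict(self, x: np.ndarray) -> int:
        d = [np.linalg.norm(x - c) for c in self.centers]
        return self.labels[int(np.argmin(d))]  # nearest prototype's class

net = RCENetwork()
for x, y in [(np.array([0.1, 0.2]), 0), (np.array([0.9, 0.8]), 1)]:
    net.fit_one(x, y)
print(net.predict(np.array([0.15, 0.25])))  # -> 0
```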

Scene Text Recognition Performance Improvement through an Add-on of an OCR based Classifier (OCR 엔진 기반 분류기 애드온 결합을 통한 이미지 내부 텍스트 인식 성능 향상)

  • Chae, Ho-Yeol; Seok, Ho-Sik
    • Journal of IKEEE / v.24 no.4 / pp.1086-1092 / 2020
  • An autonomous agent for the real world should be able to recognize text in scenes. With the advancement of deep learning, various DNN models have been utilized for transformation, feature extraction, and prediction. However, existing state-of-the-art STR (Scene Text Recognition) engines do not achieve the performance required for real-world applications. In this paper, we introduce a performance-improvement method through an add-on composed of an OCR (Optical Character Recognition) engine and a classifier for STR engines. On instances from the IC13 and IC15 datasets that an STR engine failed to recognize, our method recognizes 10.92% of the previously unrecognized characters.
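
A minimal sketch of the add-on control flow: run the STR engine first and hand low-confidence instances to the OCR engine, with a final arbitration step. The run_str/run_ocr placeholders and the confidence threshold are hypothetical, and picking the more confident candidate here stands in for the paper's learned classifier:

```python
from typing import Tuple

def run_str(image) -> Tuple[str, float]:
    """Primary STR engine (placeholder returning a dummy low-confidence read)."""
    return "hel1o", 0.55

def run_ocr(image) -> Tuple[str, float]:
    """OCR add-on engine (placeholder returning a dummy candidate)."""
    return "hello", 0.71

def recognize(image, threshold: float = 0.8) -> str:
    text, conf = run_str(image)
    if conf >= threshold:
        return text                       # STR engine is confident: keep it
    ocr_text, ocr_conf = run_ocr(image)
    # A trained classifier would arbitrate here; this sketch simply picks
    # the more confident candidate as a stand-in for that step.
    return ocr_text if ocr_conf > conf else text

print(recognize(None))  # falls back to the OCR candidate: 'hello'
```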

IoB Based Scenario Application of Health and Medical AI Platform (보건의료 AI 플랫폼의 IoB 기반 시나리오 적용)

  • Eun-Suab, Lim
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.17 no.6 / pp.1283-1292 / 2022
  • At present, several artificial intelligence projects in the healthcare and medical field are competing with each other, and the interfaces between their systems lack unified specifications. This study therefore presents an artificial intelligence platform for the healthcare and medical field that adopts deep learning technology to provide algorithms, models, and service support for health and medical enterprise applications. The suggested platform can provide heterogeneous data processing at scale, intelligent services, model management, typical application scenarios, and other services for different types of business. As an application of the suggested platform, we present a medical service corresponding to a trusted and comprehensible system for tracking and analyzing patient behavior in health and medical treatment, using the Internet of Behavior (IoB) concept.

Hybrid Tensor Flow DNN and Modified Residual Network Approach for Cyber Security Threats Detection in Internet of Things

  • Alshehri, Abdulrahman Mohammed; Fenais, Mohammed Saeed
    • International Journal of Computer Science & Network Security / v.22 no.10 / pp.237-245 / 2022
  • The prominence of the IoT (Internet of Things) and the exponential advancement of computer networks have resulted in massive essential applications. Recognizing various cyber-attacks or anomalies in networks and establishing effective intrusion recognition systems are becoming increasingly vital to current security. MLTs (Machine Learning Techniques) can be developed for such data-driven intelligent recognition systems. Researchers have employed TFDNNs (TensorFlow Deep Neural Networks) and DCNNs (Deep Convolutional Neural Networks) to recognize pirated software and malware efficiently. However, tuning the number of neurons in multiple layers with activation functions leads to learning error rates, degrading the classifier's reliability. HTFDNNs (hybrid TensorFlow DNNs) and MRNs (Modified Residual Networks), or ResNet CNNs, are presented to recognize software piracy and malware. This study proposes HTFDNNs to identify stolen software, starting from plagiarized source code. The work uses tokens and weights for filtering noise while focusing on tokens to identify source-code theft; DLTs (deep learning techniques) are then used to detect plagiarized sources. Data from Google Code Jam is used for finding software piracy. MRNs visualize colour images to identify harms in IoT networks. Malware samples from the Malimg dataset are used for the tests in this work.
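
A minimal sketch of the token-and-weight idea on the source-code side: tokenize submissions, weight tokens with TF-IDF, and flag suspiciously similar pairs by cosine similarity. The tokenizer, example snippets, and any decision threshold are simplifications, not the paper's method:

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def code_tokens(src: str):
    # Strip comments and string literals (noise), keep identifiers/operators.
    src = re.sub(r"/\*.*?\*/", " ", src, flags=re.S)   # block comments
    src = re.sub(r"//[^\n]*", " ", src)                # line comments
    src = re.sub(r'"[^"]*"', " ", src)                 # string literals
    return re.findall(r"[A-Za-z_]\w*|[+\-*/=<>!&|]+", src)

submissions = [
    "int sum(int a,int b){return a+b;} // add",
    "int add(int x,int y){return x+y;}",
    'void log(){printf("hello");}',
]
vec = TfidfVectorizer(analyzer=code_tokens)
X = vec.fit_transform(submissions)
sim = cosine_similarity(X)
print(sim[0, 1], sim[0, 2])  # pair 0-1 scores far higher than pair 0-2
```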

Prognostication of Hepatocellular Carcinoma Using Artificial Intelligence

  • Subin Heo; Hyo Jung Park; Seung Soo Lee
    • Korean Journal of Radiology / v.25 no.6 / pp.550-558 / 2024
  • Hepatocellular carcinoma (HCC) is a biologically heterogeneous tumor characterized by varying degrees of aggressiveness. The current treatment strategy for HCC is predominantly determined by the overall tumor burden and does not address the diverse prognoses of patients with HCC owing to its heterogeneity. Therefore, the prognostication of HCC using imaging data is crucial for optimizing patient management. Although some radiologic features have been demonstrated to be indicative of the biologic behavior of HCC, traditional radiologic methods for HCC prognostication are based on visually assessed prognostic findings and are limited by subjectivity and inter-observer variability. Consequently, artificial intelligence has emerged as a promising method for image-based prognostication of HCC. Unlike traditional radiologic image analysis, artificial intelligence based on radiomics or deep learning utilizes numerous image-derived quantitative features, potentially offering an objective, detailed, and comprehensive analysis of tumor phenotypes. Artificial intelligence, particularly radiomics, has displayed potential in a variety of applications, including the prediction of microvascular invasion, recurrence risk after locoregional treatment, and response to systemic therapy. This review highlights the potential value of artificial intelligence in the prognostication of HCC as well as its limitations and future prospects.
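
To ground the radiomics workflow the review describes, here is a minimal sketch: quantitative features extracted from tumor images feed a model that predicts an outcome such as microvascular invasion. The features and data below are synthetic illustrations, not clinical results:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
# Hypothetical radiomic features per tumor: intensity stats, shape, texture.
n = 120
X = rng.normal(size=(n, 10))
# Synthetic binary outcome (e.g., microvascular invasion present/absent).
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", scores.mean().round(3))
```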

An ANN-based gesture recognition algorithm for smart-home applications

  • Huu, Phat Nguyen; Minh, Quang Tran; The, Hoang Lai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.5 / pp.1967-1983 / 2020
  • The goal of this paper is to analyze and build an algorithm to recognize hand gestures for smart home applications. The proposed algorithm uses image processing techniques combined with artificial neural network (ANN) approaches to help users interact with computers through common gestures. We use five types of gestures, namely Stop, Forward, Backward, Turn Left, and Turn Right. Users control devices through a camera connected to a computer. The algorithm analyzes gestures and performs the appropriate action according to the user's request. The results show that the average accuracy of the proposed algorithm is 92.6 percent for images and more than 91 percent for video, both of which satisfy performance requirements for real-world applications, specifically smart home services. The processing time is approximately 0.098 seconds on datasets at 10 frames/sec. However, the accuracy rate still depends on the number of training images (videos) and their resolution.
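
A minimal sketch of the frame-to-gesture pipeline: preprocess a camera frame, then classify it into one of the five gestures with a small ANN. The 32x32 input size, network shape, and synthetic training data are illustrative assumptions, not the paper's settings (requires opencv-python and scikit-learn):

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

GESTURES = ["Stop", "Forward", "Backward", "Turn Left", "Turn Right"]

def preprocess(frame_bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (32, 32))               # normalize resolution
    return (small.astype(np.float32) / 255.0).ravel()

# Hypothetical training set: flattened 32x32 hand images with gesture labels.
rng = np.random.default_rng(1)
X_train = rng.random((250, 32 * 32))
y_train = rng.integers(0, len(GESTURES), size=250)

ann = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=1)
ann.fit(X_train, y_train)

frame = (rng.random((240, 320, 3)) * 255).astype(np.uint8)  # stand-in frame
print(GESTURES[int(ann.predict([preprocess(frame)])[0])])
```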