• Title/Summary/Keyword: Multiple Machine Learning

Search Result 356

Multi-Sensor Signal based Situation Recognition with Bayesian Networks

  • Kim, Jin-Pyung;Jang, Gyu-Jin;Jung, Jae-Young;Kim, Moon-Hyun
    • Journal of Electrical Engineering and Technology / v.9 no.3 / pp.1051-1059 / 2014
  • In this paper, we propose an intelligent situation recognition model that collects and analyzes multiple sensor signals. The signals are collected over a fixed time window. A training set of collected sensor data for each situation is provided to the K2 learning algorithm to generate Bayesian networks representing the causal relationships between sensors for that situation. Statistical characteristics of the sensor values and topological characteristics of the generated graphs are learned for each situation. A neural network is designed to classify the current situation based on the features extracted from the collected sensor values. The proposed method is implemented and tested with data from the UCI Machine Learning Repository.
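
The windowed-feature idea described in the abstract can be sketched roughly as follows (the K2 structure-learning step is omitted); the synthetic sensor data, class offsets, and network size below are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def window_features(window):
    # Statistical features per sensor channel: mean, std, min, max
    return np.concatenate([window.mean(0), window.std(0),
                           window.min(0), window.max(0)])

# Synthetic data: 3 situations, 4 sensors, 50-sample windows
X, y = [], []
for label, offset in enumerate([0.0, 1.0, 2.0]):
    for _ in range(60):
        w = rng.normal(offset, 0.3, size=(50, 4))
        X.append(window_features(w))
        y.append(label)
X, y = np.array(X), np.array(y)

# Neural network classifies the situation from the extracted features
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                    random_state=0).fit(X, y)
print(clf.score(X, y))
```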

Application and Performance Analysis of Machine Learning for GPS Jamming Detection (GPS 재밍탐지를 위한 기계학습 적용 및 성능 분석)

  • Jeong, Inhwan
    • The Journal of Korean Institute of Information Technology / v.17 no.5 / pp.47-55 / 2019
  • As the damage caused by GPS jamming has increased, methods for detecting and preventing GPS jamming are being actively studied. This paper deals with a GPS jamming detection method using multiple GPS receiving channels and three machine learning techniques. The proposed multiple GPS channels consist of a commercial GPS receiver with no anti-jamming function, a receiver with only an anti-noise jamming function, and a receiver with both anti-noise and anti-spoofing jamming functions. This system enables the user to identify the characteristics of jamming signals by comparing the coordinates received at each receiver. Five types of jamming signals with different characteristics were fed into the system, and three machine learning methods (AB: Adaptive Boosting, SVM: Support Vector Machine, DT: Decision Tree) were applied in jamming detection tests. The results showed that the DT technique had the best performance, with a detection rate of 96.9%, when a single machine learning technique was applied. It was also confirmed that the DT technique is more effective for GPS jamming detection than the binary classifier techniques because it has low ambiguity and simple hardware, and that SVM could be used only if additional solutions to the ambiguity problem are applied.
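
A minimal stand-in for the three-classifier comparison (AB, SVM, DT) can be sketched with scikit-learn; the coordinate-deviation data model below is an assumption for illustration, not the paper's jamming dataset:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Hypothetical features: coordinate deviations on three receiver channels,
# with jammed cases showing larger inter-channel disagreement
normal = rng.normal(0.0, 1.0, size=(200, 3))
jammed = rng.normal(4.0, 1.5, size=(200, 3))
X = np.vstack([normal, jammed])
y = np.array([0] * 200 + [1] * 200)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=1)

models = {"AB": AdaBoostClassifier(random_state=1),
          "SVM": SVC(),
          "DT": DecisionTreeClassifier(random_state=1)}
scores = {name: m.fit(Xtr, ytr).score(Xte, yte)
          for name, m in models.items()}
print(scores)
```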

Hand Gesture Classification Using Multiple Doppler Radar and Machine Learning (다중 도플러 레이다와 머신러닝을 이용한 손동작 인식)

  • Baik, Kyung-Jin;Jang, Byung-Jun
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.28 no.1 / pp.33-41 / 2017
  • This paper suggests a hand gesture recognition technology for controlling smart devices using multiple Doppler radars and a support vector machine (SVM), one of the machine learning algorithms. Whereas a single Doppler radar can recognize only simple hand gestures, multiple Doppler radars can recognize various complex hand gestures by exploiting the Doppler patterns that vary over time and across devices. In addition, machine learning can enhance recognition accuracy. To determine the feasibility of the suggested technology, we implemented a test-bed using two Doppler radars, an NI DAQ USB-6008, and MATLAB. Using this test-bed, we successfully classified four hand gestures: Push, Pull, Right Slide, and Left Slide. Applying the SVM machine learning algorithm, high hand gesture recognition accuracy was confirmed.
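
The multi-radar classification idea can be sketched as concatenating features from two Doppler channels and training an SVM; the per-gesture channel means, noise levels, and features below are invented for illustration and do not come from the paper:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
gestures = ["Push", "Pull", "Right Slide", "Left Slide"]
# Assumed: each gesture yields a distinct mean Doppler shift on each of
# the two radar channels (signs chosen purely for illustration)
channel_means = {"Push": (2, 2), "Pull": (-2, -2),
                 "Right Slide": (2, -2), "Left Slide": (-2, 2)}

X, y = [], []
for label, g in enumerate(gestures):
    m1, m2 = channel_means[g]
    for _ in range(50):
        # Feature vector: per-channel mean and std of the Doppler trace
        t1 = rng.normal(m1, 0.5, 100)
        t2 = rng.normal(m2, 0.5, 100)
        X.append([t1.mean(), t1.std(), t2.mean(), t2.std()])
        y.append(label)

clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```

With a single channel the Push/Right-Slide and Pull/Left-Slide pairs above would be indistinguishable, which mirrors why the second radar helps.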

Artificial intelligence, machine learning, and deep learning in women's health nursing

  • Jeong, Geum Hee
    • Women's Health Nursing / v.26 no.1 / pp.5-9 / 2020
  • Artificial intelligence (AI), which includes machine learning and deep learning, has been introduced into nursing care in recent years. The present study reviews the following topics: the concepts of AI, machine learning, and deep learning; examples of AI-based nursing research; the necessity of education on AI in nursing schools; and the areas of nursing care where AI is useful. AI refers to an intelligent system consisting not of a human but of a machine. Machine learning refers to computers' ability to learn without being explicitly programmed. Deep learning is a subset of machine learning that uses artificial neural networks consisting of multiple hidden layers. It is suggested that the educational curriculum should include big data, the concept of AI, algorithms and models of machine learning, models of deep learning, and coding practice. The standard curriculum should be organized by the nursing society. Examples of areas of nursing care where AI is useful include prenatal nursing interventions based on pregnant women's nursing records and AI-based prediction of the risk of delivery according to a pregnant woman's age. Nurses should be able to cope with the rapidly developing environment of nursing care influenced by AI and should understand how to apply AI in their field. It is time for Korean nurses to take steps to become familiar with AI in their research, education, and practice.

Machine-Learning-Based Link Adaptation for Energy-Efficient MIMO-OFDM Systems (MIMO-OFDM 시스템에서 에너지 효율성을 위한 기계 학습 기반 적응형 전송 기술 및 Feature Space 연구)

  • Oh, Myeung Suk;Kim, Gibum;Park, Hyuncheol
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.27 no.5 / pp.407-415 / 2016
  • Recent wireless communication trends have emphasized the importance of energy-efficient transmission. In this paper, link adaptation with a machine learning mechanism for maximum energy efficiency in multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) wireless systems is considered. To reflect frequency-selective MIMO-OFDM channels, a two-dimensional capacity (2D-CAP) feature space is proposed. In addition, a machine-learning-based bit and power adaptation (ML-BPA) algorithm that performs classification-based link adaptation is presented. Simulation results show that the 2D-CAP feature space represents channel conditions accurately and brings a noticeable improvement in link adaptation performance. Compared with other feature spaces, including the ordered post-processing signal-to-noise ratio (ordSNR) feature space, 2D-CAP has distinct advantages in either efficiency performance or computational complexity.
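
The classification-based link adaptation idea (mapping a channel feature vector to a discrete transmission mode) can be sketched as below; the paper's 2D-CAP feature space and ML-BPA algorithm are not reproduced, and the capacity-summary features, SNR-based mode labels, and kNN classifier here are simplifying assumptions:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)

def features(snr_db):
    # Stand-in capacity features over a frequency-selective channel:
    # mean and spread of per-subcarrier capacity (64 subcarriers)
    gains = rng.rayleigh(1.0, 64)
    cap = np.log2(1 + gains * 10 ** (snr_db / 10))
    return [cap.mean(), cap.std()]

# Label = transmission mode, chosen here by SNR region for illustration
X, y = [], []
for _ in range(300):
    snr = rng.uniform(0, 30)
    X.append(features(snr))
    y.append(int(snr // 10))        # mode 0, 1, or 2

clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(clf.score(X, y))
```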

Multiple Classifier System for Activity Recognition

  • Han, Yong-Koo;Lee, Sung-Young;Lee, Young-Koo;Lee, Jae-Won
    • Proceedings of the Korea Intelligent Information Systems Society Conference / 2007.11a / pp.439-443 / 2007
  • Nowadays, activity recognition has become a hot topic in context-aware computing. In activity recognition, machine learning techniques have been widely applied to learn activity models from labeled activity samples. Most existing work uses only one learning method and focuses on how to effectively utilize the labeled samples by refining that method. However, not much attention has been paid to the use of multiple classifiers for boosting learning performance. In this paper, we use two methods to generate multiple classifiers. In the first method, the basic learning algorithm for each classifier is the same, while the training data differ (ASTD). In the second method, the basic learning algorithms for each classifier are different, while the training data are the same (ADTS). Experimental results indicate that ADTS can effectively improve activity recognition performance, while ASTD cannot achieve any improvement. We believe that the classifiers in ADTS are more diverse than those in ASTD.
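
The two ensemble-generation strategies can be sketched with scikit-learn: bagged trees approximate ASTD (same algorithm, resampled training data), while a majority vote over different algorithms approximates ADTS. The dataset and base learners below are illustrative assumptions, not the paper's activity data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# ASTD: same algorithm, different training data (bootstrap resampling)
astd = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10,
                         random_state=0)
# ADTS: different algorithms, same training data (majority vote)
adts = VotingClassifier([("dt", DecisionTreeClassifier(random_state=0)),
                         ("nb", GaussianNB()),
                         ("lr", LogisticRegression(max_iter=1000))])

results = {}
for name, model in [("ASTD", astd), ("ADTS", adts)]:
    results[name] = cross_val_score(model, X, y, cv=5).mean()
    print(name, results[name])
```

Which strategy wins depends on the data; the paper's finding was that the more diverse ADTS-style ensemble helped on their activity samples.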


The Role of Data Technologies with Machine Learning Approaches in Makkah Religious Seasons

  • Waleed Al Shehri
    • International Journal of Computer Science & Network Security / v.23 no.8 / pp.26-32 / 2023
  • Hajj is a fundamental pillar of Islam that all Muslims must perform at least once in their lives, whereas Umrah can be performed several times a year, depending on people's abilities. Every year, Muslims from all over the world travel to Saudi Arabia to perform Hajj. Hajj and Umrah pilgrims face multiple issues due to the large number of people gathered at the same time and place during the event. Therefore, a system is needed to help people carry out the Hajj and Umrah procedures smoothly. Multiple devices are already installed in Makkah, but it would be better to combine suitable data architectures with machine learning approaches. The proposed system analyzes the services provided to the pilgrims with respect to gender, location, and foreign pilgrims, addressing the research problem of how to analyze the Hajj pilgrim dataset most effectively. Visualizations of the proposed method show the system's performance using the data architectures. Machine learning algorithms classify whether male pilgrims outnumber female pilgrims. Several algorithms were applied to classify the data, including logistic regression, naive Bayes, K-nearest neighbors, decision trees, random forests, and XGBoost. The decision tree accuracy was 62.83%, whereas K-nearest neighbors reached 62.86%; the other classifiers had lower accuracy. The open-source dataset was analyzed using different data architectures to store the data, and machine learning approaches were then used to classify the dataset.

Development of Medical Cost Prediction Model Based on the Machine Learning Algorithm (머신러닝 알고리즘 기반의 의료비 예측 모델 개발)

  • Han Bi KIM;Dong Hoon HAN
    • Journal of Korea Artificial Intelligence Association / v.1 no.1 / pp.11-16 / 2023
  • Accurate hospital case modeling and prediction are crucial for efficient healthcare. In this study, we demonstrate the implementation of regression analysis methods in machine learning systems utilizing mathematical statistics and machine learning techniques. The developed machine learning models include Bayesian linear, artificial neural network, decision tree, decision forest, and linear regression models. Through the application of these algorithms, the corresponding regression models were constructed and analyzed. The results suggest the potential of leveraging machine learning systems for medical research. The experiment aimed to create an Azure Machine Learning Studio tool for the speedy evaluation of multiple regression models. The tool facilitates the comparison of five types of regression models in a unified experiment and presents assessment results with performance metrics. Evaluation of the regression models highlighted the advantages of boosted decision tree regression and decision forest regression in hospital case prediction. These findings could lay the groundwork for the deliberate development of new directions in medical data processing and decision making. Furthermore, potential avenues for future research include exploring methods such as clustering, classification, and anomaly detection in healthcare systems.
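
A comparison of several regression families like the one described can be sketched as follows; the paper used Azure Machine Learning Studio, so scikit-learn and synthetic data are substituted here purely for illustration:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, BayesianRidge
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Synthetic stand-in for a medical-cost dataset
X, y = make_regression(n_samples=300, n_features=8, noise=10,
                       random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Analogues of the compared model families (random forest standing in
# for the "decision forest" model)
models = {"linear": LinearRegression(),
          "bayesian": BayesianRidge(),
          "tree": DecisionTreeRegressor(random_state=0),
          "forest": RandomForestRegressor(random_state=0)}

scores = {}
for name, m in models.items():
    m.fit(Xtr, ytr)
    scores[name] = r2_score(yte, m.predict(Xte))
    print(name, round(scores[name], 3))
```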

IRSML: An intelligent routing algorithm based on machine learning in software defined wireless networking

  • Duong, Thuy-Van T.;Binh, Le Huu
    • ETRI Journal / v.44 no.5 / pp.733-745 / 2022
  • In software-defined wireless networking (SDWN), optimal routing is one of the effective solutions for improving performance. Routing can be done by many different methods, most commonly by formulating an integer linear programming (ILP) problem or by building optimal routing metrics. These methods often focus on only one routing objective, such as minimizing the packet blocking probability, minimizing end-to-end delay (EED), or maximizing network throughput; it is difficult to consider multiple objectives concurrently in a routing algorithm. In this paper, we investigate the application of machine learning to control routing in SDWN and propose an intelligent routing algorithm based on machine learning to improve network performance. The proposed algorithm can optimize multiple routing objectives. Our idea is to combine supervised learning (SL) and reinforcement learning (RL) methods to discover new routes. SL is used to predict the performance metrics of the links, including EED, quality of transmission (QoT), and packet blocking probability (PBP). The routing itself is done by the RL method. We use the Q-value in the fundamental equation of RL to store the PBP, which is used for route selection. Concurrently, the learning rate coefficient is flexibly changed to determine the constraints of routing during learning; these constraints include QoT and EED. Our performance evaluations based on OMNeT++ show that the proposed algorithm significantly improves network performance in terms of QoT, EED, packet delivery ratio, and network throughput compared with other well-known routing algorithms.
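
The route-selection side of this idea (Q-values guiding next-hop choices) can be illustrated with a toy Q-learning sketch on a four-node topology; the graph, link costs, and hyperparameters below are invented, and the SL-predicted metrics of the paper are replaced by fixed costs:

```python
import random

# Toy topology: node -> {neighbor: link cost (stand-in for a predicted
# per-link metric such as EED)}
graph = {"A": {"B": 1, "C": 4},
         "B": {"C": 1, "D": 5},
         "C": {"D": 1},
         "D": {}}
dest, alpha, gamma = "D", 0.5, 0.9

Q = {u: {v: 0.0 for v in nbrs} for u, nbrs in graph.items()}
random.seed(0)
for _ in range(2000):
    node = random.choice(["A", "B", "C"])
    while node != dest:
        nxt = random.choice(list(graph[node]))
        reward = -graph[node][nxt]          # lower cost -> higher reward
        future = max(Q[nxt].values()) if Q[nxt] else 0.0
        Q[node][nxt] += alpha * (reward + gamma * future - Q[node][nxt])
        node = nxt

# Greedy route from A following the learned Q-values
route, node = ["A"], "A"
while node != dest:
    node = max(Q[node], key=Q[node].get)
    route.append(node)
print(route)
```

Here the learned route avoids both the expensive direct links (A-C and B-D), mirroring how Q-values can encode a per-link metric for route selection.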

Machine Learning Applied to Uncovering Gene Regulation

  • Craven, Mark
    • Proceedings of the Korean Society for Bioinformatics Conference / 2000.11a / pp.61-68 / 2000
  • Now that the complete genomes of numerous organisms have been ascertained, key problems in molecular biology include determining the functions of the genes in each organism, the relationships that exist among these genes, and the regulatory mechanisms that control their operation. These problems can be partially addressed by using machine learning methods to induce predictive models from available data. My group is applying and developing machine learning methods for several tasks that involve characterizing gene regulation. In one project, for example, we are using machine learning methods to identify transcriptional control elements such as promoters, terminators and operons. In another project, we are using learning methods to identify and characterize sets of genes that are affected by tumor promoters in mammals. Our approach to these tasks involves learning multiple models for inter-related tasks, and applying learning algorithms to rich and diverse data sources including sequence data, microarray data, and text from the scientific literature.
