• Title/Summary/Keyword: machine learning applications


Implementation of Educational Brain Motion Controller for Machine Learning Applications

  • Park, Myeong-Chul;Choi, Duk-Kyu;Kim, Tae-Sun
    • Journal of the Korea Society of Computer and Information / v.25 no.8 / pp.111-117 / 2020
  • Recently, with the growing interest in machine learning, the need for educational controllers that interface with physical devices has increased. However, existing controllers are limited by high cost and a narrow range of educational uses. In this paper, a motion controller driven by brain waves is proposed for students' machine learning applications. The brain-wave signals that occur when a specific action is imagined are measured and sampled, the sampled values are learned with TensorFlow, and the resulting motions are recognized in content such as games. The recognized movements consist of directional motions and a jump motion. The identified motion is sent to a game built with Unreal Engine to operate the in-game character. Beyond brain waves, the implemented controller can accept various input signals and can therefore be used in other fields as well as for educational purposes such as machine learning applications.
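
As a rough illustration of the training step described in this entry, the sketch below fits a small TensorFlow/Keras classifier on pre-sampled brain-wave feature windows. The array shapes, the five motion classes (four directions plus jump), and the layer sizes are assumptions made for the example, not details taken from the paper.

```python
import numpy as np
import tensorflow as tf

# Assumed data layout: each sample is a flattened window of EEG readings,
# labeled with one of five motions (left, right, forward, backward, jump).
NUM_FEATURES = 64   # assumed readings per window
NUM_CLASSES = 5     # assumed motion classes

# Placeholder training data; in the paper these come from the EEG headset.
X_train = np.random.rand(1000, NUM_FEATURES).astype("float32")
y_train = np.random.randint(0, NUM_CLASSES, size=1000)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10, batch_size=32, verbose=0)

# A recognized motion index would then be forwarded to the Unreal Engine game
# (e.g. over a socket) to move the character.
motion = int(np.argmax(model.predict(X_train[:1], verbose=0)))
print("recognized motion class:", motion)
```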

Beta and Alpha Regularizers of Mish Activation Functions for Machine Learning Applications in Deep Neural Networks

  • Mathayo, Peter Beatus;Kang, Dae-Ki
    • International Journal of Internet, Broadcasting and Communication / v.14 no.1 / pp.136-141 / 2022
  • Very complex tasks in deep learning, such as image classification, are solved with the help of neural networks and activation functions. As the backpropagation algorithm advances backward from the output layer towards the input layer, the gradients often become smaller and smaller and approach zero, which eventually leaves the weights of the initial or lower layers nearly unchanged; as a result, gradient descent never converges to the optimum. We propose a two-factor non-saturating activation function, known as Bea-Mish, for machine learning applications in deep neural networks. Our method uses two factors, beta (𝛽) and alpha (𝛼), to normalize the area below the boundary in the Mish activation function, and we refer to these elements as Bea. A clear understanding of the behaviors and conditions governing this regularization term can lead to a more principled approach for constructing better-performing activation functions. We evaluate Bea-Mish against the Mish and Swish activation functions in various models and data sets. Empirical results show that our approach (Bea-Mish) outperforms native Mish, with an average precision (AP50val) gain of 2.51% using a SqueezeNet backbone on CIFAR-10 and a top-1 accuracy improvement of 1.20% with ResNet-50 on ImageNet-1k.
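
The abstract does not spell out the exact Bea-Mish formula, so the sketch below only shows the standard Mish activation, x · tanh(softplus(x)), alongside a hypothetical two-parameter variant in which beta and alpha rescale the softplus term. The placement of the two factors is an assumption for illustration, not the authors' definition.

```python
import numpy as np

def softplus(x):
    # Numerically stable softplus: log(1 + exp(x)).
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def mish(x):
    # Standard Mish activation: x * tanh(softplus(x)).
    return x * np.tanh(softplus(x))

def bea_mish(x, beta=1.0, alpha=1.0):
    # Hypothetical two-factor variant: beta and alpha rescale the softplus
    # term.  This placement is assumed for illustration only; the paper's
    # exact Bea-Mish definition is not given in the abstract.
    return x * np.tanh(alpha * softplus(beta * x))

if __name__ == "__main__":
    x = np.linspace(-4, 4, 9)
    print("mish    :", np.round(mish(x), 3))
    print("bea-mish:", np.round(bea_mish(x, beta=1.5, alpha=0.8), 3))
```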

A Case Study on Machine Learning Applications and Performance Improvement in Learning Algorithm (기계학습 응용 및 학습 알고리즘 성능 개선방안 사례연구)

  • Lee, Hohyun;Chung, Seung-Hyun;Choi, Eun-Jung
    • Journal of Digital Convergence / v.14 no.2 / pp.245-258 / 2016
  • This paper aims to show how to obtain significant results by improving the performance of learning algorithms in research that applies machine learning. Research papers reporting results from machine learning methods were collected as data for this case study, and suitable machine learning methods were selected and suggested for each field. As a result, SVM for engineering, decision tree algorithms for medical science, and SVM for other fields proved effective in terms of their frequent use for classification and prediction. By analyzing these cases of machine learning application, a general characterization of application plans is drawn. Machine learning application has three steps: (1) data collection; (2) learning from the data through an algorithm; and (3) significance testing of the algorithm. Performance is improved at each step by combining algorithms. Approaches to performance improvement are classified as multiple machine learning structure modeling, +α machine learning structure modeling, and so forth.
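
As a loose illustration of step (2) and of combining algorithms to improve performance, the sketch below stacks an SVM and a decision tree with scikit-learn. The synthetic dataset and the choice of base learners are assumptions made for the example, not the configurations studied in the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Step (1): data collection -- here replaced by a synthetic dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Step (2): learning through algorithms, combined into one structure.
stacked = StackingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("tree", DecisionTreeClassifier(max_depth=5))],
    final_estimator=LogisticRegression(),
)

# Step (3): a simple significance-style check via cross-validated accuracy.
scores = cross_val_score(stacked, X, y, cv=5)
print("mean CV accuracy:", scores.mean().round(3))
```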

Machine Learning-based Detection of HTTP DoS Attacks for Cloud Web Applications (머신러닝 기반 클라우드 웹 애플리케이션 HTTP DoS 공격 탐지)

  • Jae Han Cho;Jae Min Park;Tae Hyeop Kim;Seung Wook Lee;Jiyeon Kim
    • Smart Media Journal / v.12 no.2 / pp.66-75 / 2023
  • Recently, the number of cloud web applications has been increasing owing to the accelerated migration of enterprise and public-sector information systems to the cloud. Traditional network attacks on cloud web applications are characterized by Denial of Service (DoS) attacks, which consume network resources with a large number of packets. However, HTTP DoS attacks, which consume application resources, have also been increasing recently, so security technologies to prevent them are needed. In particular, since low-bandwidth HTTP DoS attacks do not consume network resources, they are difficult to identify using traditional security solutions that monitor network metrics. In this paper, we propose a new detection model for HTTP DoS attacks on cloud web applications that collects the application metrics of web servers and learns them using machine learning. We collected 18 types of application metrics from an Apache web server and trained five machine learning and two deep learning models on the collected data. Further, we confirmed the superiority of the application metrics-based machine learning model by collecting six additional network metrics, training on them, and comparing the resulting performance with that of the proposed models. Among HTTP DoS attacks, we injected the RUDY and HULK attacks, which are low- and high-bandwidth attacks, respectively. When detecting these two attacks with the proposed model, we found that the F1 scores of the application metrics-based machine learning model were about 0.3 and 0.1 higher, respectively, than those of the network metrics-based model.
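
As a rough sketch of the detection approach, the snippet below trains a classifier on tabular application-level metrics (requests per second, busy workers, and so on). The metric names, the random-forest choice, and the synthetic labels are assumptions for illustration; the paper's 18 Apache metrics and its five ML / two DL models are not enumerated in the abstract.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Assumed application metrics sampled from a web server; the column names are
# illustrative, not the 18 metrics actually collected in the paper.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "req_per_sec":   rng.gamma(2.0, 50.0, n),
    "busy_workers":  rng.integers(1, 256, n),
    "bytes_per_req": rng.gamma(2.0, 1000.0, n),
    "conn_duration": rng.exponential(2.0, n),
})
# Synthetic labels: 1 = HTTP DoS (e.g. RUDY/HULK), 0 = benign traffic.
y = (df["busy_workers"] + 0.5 * df["req_per_sec"] > 200).astype(int)

X_train, X_test, y_train, y_test = train_test_split(df, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("F1 score:", round(f1_score(y_test, clf.predict(X_test)), 3))
```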

Android Malware Detection using Machine Learning Techniques KNN-SVM, DBN and GRU

  • Sk Heena Kauser;V.Maria Anu
    • International Journal of Computer Science & Network Security / v.23 no.7 / pp.202-209 / 2023
  • Android malware is on the rise because of the growing popularity of the Android operating system. Machine learning models may be used to classify unknown Android malware using characteristics gathered from the dynamic and static analysis of Android applications. Conventional anti-virus software simply scans a specific program for the signatures of known virus instances, keeping these signatures in large databases and examining each file against all existing virus and malware signatures. The proposed model aims to provide a machine learning-based malware detection method for Android that addresses the inability of such approaches to detect new malware apps and improves phone users' security and privacy. The system tracks numerous permission-based characteristics and events collected from Android apps and analyses them with a classifier model to determine whether a program is goodware or malware. The method uses the machine learning techniques KNN-SVM, DBN, and GRU, and the accuracies obtained are as follows: KNN achieves 87.20%, SVM 91.40%, Naive Bayes 85.10%, and DBN-GRU 97.90%. In this paper we employ only standard machine learning techniques; in future work we will attempt to improve these algorithms in order to develop a better detection method.
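
As a minimal sketch of permission-based classification, the snippet below encodes a few Android permissions as binary features and fits KNN and SVM classifiers. The permission list, the dataset, and the resulting scores are illustrative assumptions and do not reproduce the paper's experiments.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Assumed permission-based features: 1 if the app requests the permission.
PERMISSIONS = ["INTERNET", "READ_SMS", "SEND_SMS",
               "READ_CONTACTS", "RECORD_AUDIO", "ACCESS_FINE_LOCATION"]

rng = np.random.default_rng(42)
X = rng.integers(0, 2, size=(1000, len(PERMISSIONS)))
# Synthetic labels: apps requesting many sensitive permissions lean "malware".
y = (X[:, 1:].sum(axis=1) >= 3).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(clf.score(X_te, y_te), 3))
```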

Application Consideration of Machine Learning Techniques in Satellite Systems

  • Jin-keun Hong
    • International journal of advanced smart convergence / v.13 no.2 / pp.48-60 / 2024
  • With the exponential growth of satellite data utilization, machine learning has become pivotal in enhancing innovation and cybersecurity in satellite systems. This paper investigates the role of machine learning techniques in identifying and mitigating vulnerabilities and code smells within satellite software. We explore satellite system architecture and survey applications like vulnerability analysis, source code refactoring, and security flaw detection, emphasizing feature extraction methodologies such as Abstract Syntax Trees (AST) and Control Flow Graphs (CFG). We present practical examples of feature extraction and training models using machine learning techniques like Random Forests, Support Vector Machines, and Gradient Boosting. Additionally, we review open-access satellite datasets and address prevalent code smells through systematic refactoring solutions. By integrating continuous code review and refactoring into satellite software development, this research aims to improve maintainability, scalability, and cybersecurity, providing novel insights for the advancement of satellite software development and security. The value of this paper lies in its focus on addressing the identification of vulnerabilities and resolution of code smells in satellite software. In terms of the authors' contributions, we detail methods for applying machine learning to identify potential vulnerabilities and code smells in satellite software. Furthermore, the study presents techniques for feature extraction and model training, utilizing Abstract Syntax Trees (AST) and Control Flow Graphs (CFG) to extract relevant features for machine learning training. Regarding the results, we discuss the analysis of vulnerabilities, the identification of code smells, maintenance, and security enhancement through practical examples. This underscores the significant improvement in the maintainability and scalability of satellite software through continuous code review and refactoring.
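
As a small illustration of AST-based feature extraction for such models, the sketch below counts node types in Python source snippets and feeds the resulting vectors to a Random Forest. Python's `ast` module, the synthetic labels, and the node-type feature set are assumptions standing in for the satellite codebases and features used in the paper, which are not specified in the abstract.

```python
import ast
from collections import Counter

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Node types used as features; an assumed, simplified feature set.
NODE_TYPES = ["FunctionDef", "If", "For", "While", "Call", "Try", "Assign"]

def ast_features(source: str) -> list:
    """Count occurrences of selected AST node types in a source string."""
    counts = Counter(type(node).__name__ for node in ast.walk(ast.parse(source)))
    return [counts.get(t, 0) for t in NODE_TYPES]

# Two toy snippets standing in for "clean" and "smelly" satellite code.
clean = "def telemetry(x):\n    return x * 2\n"
smelly = ("def handler(x):\n"
          "    if x:\n"
          "        for i in range(10):\n"
          "            if i:\n"
          "                x = x + i\n"
          "    return x\n")

X = np.array([ast_features(clean), ast_features(smelly)] * 20)
y = np.array([0, 1] * 20)  # 0 = clean, 1 = code smell (synthetic labels)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print("predicted label for smelly snippet:", clf.predict([ast_features(smelly)])[0])
```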

Feasibility Study of Google's Teachable Machine in Diagnosis of Tooth-Marked Tongue

  • Jeong, Hyunja
    • Journal of dental hygiene science / v.20 no.4 / pp.206-212 / 2020
  • Background: A Teachable Machine is a web-based machine learning tool for non-experts. In this paper, the feasibility of Google's Teachable Machine (ver. 2.0) was studied for the diagnosis of the tooth-marked tongue. Methods: For machine learning of tooth-marked tongue diagnosis, a total of 1,250 tongue images from Kaggle's web site were used. Ninety percent of the images were used for the training data set, and the remaining 10% were used for the test data set. Machine learning was performed on the separated images using Google's Teachable Machine (ver. 2.0). To optimize the machine learning parameters, I measured the diagnostic accuracy according to the number of epochs, the batch size, and the learning rate. After hyper-parameter tuning, ROC (receiver operating characteristic) analysis was used to determine the sensitivity (true positive rate, TPR) and the false positive rate (FPR) of the machine learning model for diagnosing the tooth-marked tongue. Results: To evaluate the usefulness of the Teachable Machine for clinical application, I used 634 tooth-marked tongue images and 491 no-marked tongue images for machine learning. The diagnostic accuracy was best when the number of epochs, the learning rate, and the batch size were 75, 0.0001, and 128, respectively. The accuracies for the tooth-marked tongue and the no-marked tongue were 92.1% and 72.6%, respectively, and the sensitivity (TPR) and false positive rate (FPR) were 0.92 and 0.28, respectively. Conclusion: These results are more accurate than Li's experimental results obtained with a convolutional neural network. Google's Teachable Machine shows good performance with hyper-parameter tuning in the diagnosis of the tooth-marked tongue, and we confirmed that the tool is useful for several clinical applications.
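
For reference, the snippet below computes sensitivity (TPR) and false positive rate (FPR) from a confusion matrix. The counts are made-up illustrative numbers chosen only so the ratios roughly mirror the reported 0.92 and 0.28 values; they are not the study's actual predictions.

```python
# Confusion-matrix counts for a binary tooth-marked-tongue classifier.
# The numbers are illustrative assumptions, not the study's data.
tp, fn = 58, 5    # tooth-marked images: correctly vs. incorrectly classified
fp, tn = 14, 36   # no-marked images: incorrectly vs. correctly classified

sensitivity = tp / (tp + fn)          # true positive rate (TPR)
false_positive_rate = fp / (fp + tn)  # FPR = 1 - specificity
specificity = tn / (tn + fp)

print(f"TPR (sensitivity): {sensitivity:.2f}")
print(f"FPR:               {false_positive_rate:.2f}")
print(f"Specificity:       {specificity:.2f}")
```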

Research Trends of Ultra-reliable and Low-latency Machine Learning-based Wireless Communication Technology (기계학습기반 초신뢰·저지연 무선통신기술 연구동향)

  • Lee, H.;Kwon, D.S.
    • Electronics and Telecommunications Trends / v.34 no.3 / pp.93-105 / 2019
  • This study emphasizes the newly added Ultra-Reliable and Low-Latency Communications (URLLC) service as an important evolutionary step for 5G mobile communication and proposes a remedial application. We analyze the requirements for applying 5G mobile communication technology in high-precision vertical industries and applications, introduce the 5G URLLC design principles and standards of 3GPP, and summarize the current state of applied artificial intelligence technology in wireless communication. We also summarize the current state of research on machine learning-based ultra-reliable and low-latency wireless communication technology for ultra-high-precision vertical industries and applications, and discuss the technological direction of artificial intelligence for URLLC wireless communication.

Scoping Review of Machine Learning and Deep Learning Algorithm Applications in Veterinary Clinics: Situation Analysis and Suggestions for Further Studies

  • Kyung-Duk Min
    • Journal of Veterinary Clinics / v.40 no.4 / pp.243-259 / 2023
  • Machine learning and deep learning (ML/DL) algorithms have been successfully applied in medical practice. However, their application in veterinary medicine is relatively limited, possibly due to a lack in the quantity and quality of relevant research. Because the potential demand for ML/DL applications in veterinary clinics is significant, it is important to note the current gaps in the literature and explore possible directions for advancement in this field. Thus, a scoping review was conducted as a situation analysis. We developed a search strategy following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, and the PubMed and Embase databases were used in the initial search. The identified items were screened based on predefined inclusion and exclusion criteria, and information regarding model development, quality of validation, and model performance was extracted from the included studies. The review found 55 studies that met the criteria. In terms of target animals, the number of studies on industrial animals was similar to that on companion animals. Prediction studies (n = 11, including duplications) were quantitatively scarce compared with diagnostic studies (n = 45, including duplications) in both industrial and non-industrial animal studies. Qualitative limitations were also identified, especially regarding validation methodologies. Considering these gaps in the literature, future studies examining the prediction and validation processes with a prospective, multi-center approach are highly recommended. Veterinary practitioners should acknowledge the current limitations in this field and adopt a receptive but critical attitude towards these new technologies to avoid their misuse.

Machine learning-based prediction of wind forces on CAARC standard tall buildings

  • Yi Li;Jie-Ting Yin;Fu-Bin Chen;Qiu-Sheng Li
    • Wind and Structures / v.36 no.6 / pp.355-366 / 2023
  • Although machine learning (ML) techniques have been widely used in various fields of engineering practice, their applications in wind engineering are still at an initial stage. In order to evaluate the feasibility of machine learning algorithms for predicting wind loads on high-rise buildings, this study took the exposure category, the wind direction, and the height of the local wind force as input features and adopted four machine learning algorithms, k-nearest neighbor (KNN), support vector machine (SVM), gradient boosting regression tree (GBRT), and extreme gradient (XG) boosting, to predict the wind force coefficients of the CAARC standard tall building model. All hyper-parameters of the four ML algorithms are optimized by the tree-structured Parzen estimator (TPE). The results show that the mean drag force coefficients and the RMS lift force coefficients are well predicted by the GBRT model, while the RMS drag force coefficients are better forecasted by the XG boosting model. The proposed machine learning-based algorithms for wind load prediction can serve as an alternative to traditional wind tunnel tests and computational fluid dynamics simulations.
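
As a compact sketch of TPE-based hyper-parameter optimization for a gradient boosting regressor, the snippet below uses the `hyperopt` library on synthetic data. The search-space bounds, the dataset, and the evaluation budget are assumptions made for illustration and are not the settings used in the study.

```python
import numpy as np
from hyperopt import fmin, hp, tpe
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for (exposure category, wind direction, height) features
# and a wind force coefficient target.
rng = np.random.default_rng(0)
X = rng.random((500, 3))
y = (0.8 * X[:, 0] - 0.5 * np.cos(4 * X[:, 1]) + 0.3 * X[:, 2]
     + 0.05 * rng.standard_normal(500))

def objective(params):
    model = GradientBoostingRegressor(
        n_estimators=int(params["n_estimators"]),
        learning_rate=params["learning_rate"],
        max_depth=int(params["max_depth"]),
        random_state=0,
    )
    # Minimize negative R^2 averaged over 5 folds.
    return -cross_val_score(model, X, y, cv=5, scoring="r2").mean()

space = {
    "n_estimators":  hp.quniform("n_estimators", 50, 400, 50),
    "learning_rate": hp.loguniform("learning_rate", np.log(0.01), np.log(0.3)),
    "max_depth":     hp.quniform("max_depth", 2, 6, 1),
}

best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=30)
print("best hyper-parameters found by TPE:", best)
```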