• Title/Summary/Keyword: machine learning applications

Search results: 540

Computer Architecture Execution Time Optimization Using Swarm in Machine Learning

  • Sarah AlBarakati;Sally AlQarni;Rehab K. Qarout;Kaouther Laabidi
    • International Journal of Computer Science & Network Security / v.23 no.10 / pp.49-56 / 2023
  • Computer architecture serves as a link between application requirements and the capabilities of the underlying technology, and the computational and storage demands of technical, mathematical, medical, and business applications are constantly increasing. Machine learning has grown into many fields in recent years and outperforms traditional computing in applications that must be implemented with mathematical algorithms. Such algorithms require more extensive and faster calculations, higher computer architecture specifications, and longer execution times, so there is a need to use computer hardware such as the CPU and memory more efficiently; optimization plays a central role in reducing execution time and improving the utilization of computer resources. Given the importance of execution time in implementing a supervised machine learning model (linear regression), this paper focuses on optimizing machine learning algorithms: we write a diabetes prediction program and apply Particle Swarm Optimization (PSO) to it to reduce the execution time and improve the utilization of computer resources. A substantial improvement in execution time was observed.
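
As a rough illustration of the optimization step this abstract names, here is a minimal particle swarm optimization loop in Python/NumPy; the objective function, swarm size, and coefficients are illustrative assumptions, not the paper's diabetes-prediction setup.

```python
# Minimal PSO sketch: swarm of candidate solutions pulled toward personal
# and global bests. Objective and hyperparameters are invented placeholders.
import numpy as np

def pso(objective, dim=4, n_particles=20, iters=100,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, (n_particles, dim))   # particle positions
    vel = np.zeros_like(pos)                           # particle velocities
    pbest = pos.copy()                                 # per-particle bests
    pbest_val = np.apply_along_axis(objective, 1, pos)
    gbest = pbest[np.argmin(pbest_val)]                # swarm-wide best
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.apply_along_axis(objective, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

# Example: minimize a quadratic, a stand-in for an execution-time cost.
best, best_val = pso(lambda x: float(np.sum(x ** 2)))
print(best, best_val)
```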

A Study on Machine Learning Compiler and Modulo Scheduler (머신러닝 컴파일러와 모듈로 스케쥴러에 관한 연구)

  • Doosan Cho
    • Journal of the Korean Society of Industry Convergence / v.27 no.1 / pp.87-95 / 2024
  • This study examines modulo scheduling algorithms for multicore processors in machine learning applications. Machine learning algorithms perform large numbers of vector and matrix operations in order to process large data streams quickly. To support such heavy computation, processor architectures for applications such as artificial intelligence, neural networks, and machine learning are designed for parallel processing, for example as multicore processors. Various compiler techniques are used and studied to utilize these multicore hardware resources effectively. Among them, this study analyzes the modulo scheduler, which is especially important for a single core's computation pipeline, examining and comparing the iterative modulo scheduler and the swing modulo scheduler, the most widely used and studied variants. Both schedulers delivered similar performance; when register pressure was measured as an indicator, the swing modulo scheduler performed slightly better. The study also proposes a technique that splits recurrence edges to improve the schedulers' minimum initiation interval.
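
For readers unfamiliar with the minimum initiation interval (MII) mentioned here, the sketch below shows the standard lower bound max(ResMII, RecMII); the dependence-graph numbers are invented for illustration and do not come from the paper.

```python
# Sketch of how a modulo scheduler lower-bounds the initiation interval (II).
import math

def res_mii(op_count, num_functional_units):
    # Resource-constrained MII: ops sharing a unit must spread across II slots.
    return math.ceil(op_count / num_functional_units)

def rec_mii(cycles):
    # Recurrence-constrained MII: for each dependence cycle,
    # II >= total latency / total loop-carried distance.
    return max(math.ceil(lat / dist) for lat, dist in cycles)

# Example loop: 10 operations on 4 identical functional units, and one
# recurrence cycle with 6 cycles of latency carried over 2 iterations.
mii = max(res_mii(10, 4), rec_mii([(6, 2)]))
print("MII =", mii)  # max(3, 3) = 3
```

Splitting a recurrence edge, as the paper proposes, shrinks the (lat, dist) ratio of the worst cycle and thereby lowers RecMII.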

Learning of Adaptive Behavior of artificial Ant Using Classifier System (분류자 시스템을 이용한 인공개미의 적응행동의 학습)

  • 정치선;심귀보
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 1998.10a / pp.361-367 / 1998
  • The two main applications of genetic algorithms (GA) are optimization and machine learning. Machine learning has two objectives: to make a complex system learn its environment and to produce the proper output of a system. Machine learning using genetic algorithms is called GA machine learning or genetics-based machine learning (GBML). Machine learning differs from optimization in that it searches for a rule set: in optimization problems, the GA population should converge to the best individual, because the objective is to produce an individual near the optimal solution, whereas machine learning systems need to find a set of cooperative rules. There are two approaches in GBML, the Michigan method and the Pittsburgh method; in the former each rule is expressed as a string, while in the latter the whole rule set is coded into a string. Holland's classifier system is the representative model of the Michigan method. Classifier systems adjust the strengths of the classifiers in the classifier list using the message list; real-time processing and on-line learning are possible because the rule set is adjusted on-line. A classifier system has three major components: the performance system, the apportionment-of-credit system, and the rule discovery system. In this paper, we solve a food-search problem through the learning and evolution of an artificial ant using a learning classifier system.
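
A toy Michigan-style classifier list, sketched under the usual conventions ('#' as wildcard, strength-proportional selection, credit apportionment); the rules and payoffs are invented and do not reproduce the paper's ant environment.

```python
# Each rule is a (condition, action, strength) triple; '#' matches anything.
import random

def matches(condition, message):
    return all(c in ('#', m) for c, m in zip(condition, message))

rules = [("1#0", "move", 10.0), ("10#", "eat", 10.0), ("###", "wait", 10.0)]

def step(message, payoff_action):
    matched = [i for i, (c, _, _) in enumerate(rules) if matches(c, message)]
    # Strength-proportional selection among matching classifiers.
    weights = [rules[i][2] for i in matched]
    winner = random.choices(matched, weights=weights)[0]
    c, a, s = rules[winner]
    reward = 5.0 if a == payoff_action else -2.0   # apportionment of credit
    rules[winner] = (c, a, s + reward)
    return a

print(step("100", payoff_action="eat"))
```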


Android Botnet Detection Using Hybrid Analysis

  • Mamoona Arshad;Ahmad Karim
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.3 / pp.704-719 / 2024
  • Botnet pandemics are becoming more prevalent with the growing use of mobile phone technologies, which provide a wide range of applications including entertainment, commerce, education, and finance. A botnet is a collection of compromised devices managed by a botmaster and communicating with each other through a command server to initiate attacks including phishing email, ad-click fraud, blockchain, and much more. As the number of botnet attacks rises, detecting harmful activity on handheld devices becomes more challenging, so it is crucial to evaluate mobile botnet assaults to find the security vulnerabilities that arise through coordinated command servers and cause major financial and ethical harm. For this purpose, we propose a hybrid analysis approach that integrates permissions and API calls, and we evaluate machine-learning classifiers for detecting mobile botnet applications. The experiments employ benign, botnet, and malware applications to validate the performance and accuracy of the classifiers. The results show that a model based on a simple decision tree obtained 99% accuracy with a low false-positive rate of 0.003, outperforming the other machine learning classifiers for botnet application detection. Overall, the hybrid approach improves the accuracy of mobile botnet detection compared with static and dynamic features taken separately.
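
A hedged sketch of the hybrid feature idea: permission flags and API-call flags concatenated into one vector and classified with a simple decision tree (scikit-learn); the toy data and labels are invented, not the paper's dataset.

```python
# Hybrid features: columns 0-4 are permission flags, 5-9 are API-call flags.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 10))     # toy permission + API indicators
y = (X[:, 1] & X[:, 7]).astype(int)        # pretend "botnet" co-occurrence

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```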

The Development of a Rainfall Correction Technique based on Machine Learning for Hydrological Applications (수문학적 활용을 위한 머신러닝 기반의 강우보정기술 개발)

  • Lee, Young-Mi;Ko, Chul-Min;Shin, Seong-Cheol;Kim, Byung-Sik
    • Journal of Environmental Science International / v.28 no.1 / pp.125-135 / 2019
  • To enhance the usability of Numerical Weather Prediction (NWP), a machine learning-based quantitative precipitation prediction scheme is proposed. In this study, heavy rainfall was corrected using rainfall predictors from LENS and radar data from 2017 to 2018, together with the machine learning tools LightGBM and XGBoost. The corrected rainfall was analyzed using the Mean Absolute Error (MAE), Normalized Peak Error (NPE), and Peak Timing Error (PTE). The machine learning results (using LightGBM and XGBoost) improved both the overall rainfall correction and the maximum rainfall relative to LENS. For example, the MAE of case 5 was 24.252 for LENS, 11.564 for LightGBM, and 11.693 for XGBoost, an excellent error reduction for the machine learning results. This rainfall correction technique can provide hydrologically meaningful rainfall information, such as flood prediction. Future research on interpreting various hydrologic processes using machine learning is needed.
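
A minimal sketch of the correction setup as described: NWP-style predictors in, observed rainfall out, scored with MAE using LightGBM and XGBoost (both libraries must be installed); the synthetic arrays stand in for the LENS/radar data, which are not available here.

```python
# Train two gradient-boosting regressors to map predictors to rainfall,
# then score with mean absolute error, one of the abstract's three metrics.
import numpy as np
from lightgbm import LGBMRegressor
from xgboost import XGBRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.random((500, 6))                    # NWP predictor fields (toy)
y = 10 * X[:, 0] + rng.normal(0, 1, 500)    # "observed" rainfall (toy)

for model in (LGBMRegressor(n_estimators=100), XGBRegressor(n_estimators=100)):
    model.fit(X[:400], y[:400])
    pred = model.predict(X[400:])
    print(type(model).__name__, "MAE:", mean_absolute_error(y[400:], pred))
```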

Realtime Analysis of Sasang Constitution Types from Facial Features Using Computer Vision and Machine Learning

  • Abdullah;Shah Mahsoom Ali;Hee-Cheol Kim
    • Journal of information and communication convergence engineering / v.22 no.3 / pp.256-266 / 2024
  • Sasang constitutional medicine (SCM) is one of the major traditional therapeutic approaches used in Korea. SCM prioritizes personalized treatment that considers an individual's unique constitution, encompassing physical characteristics, personality traits, and susceptibility to specific diseases. Facial features are essential for diagnosing Sasang constitutional types (SCTs). This study aimed to develop a real-time, artificial intelligence-based model for diagnosing SCTs from facial images. Facial features were extracted from all images using feature engineering, and the fusion of these features was used to train the model. Four machine learning algorithms were investigated: random forest (RF), multilayer perceptron (MLP), gradient boosting machine (GBM), and extreme gradient boosting (XGB). The GBM outperformed all the other models, and the highest accuracy achieved in the experiments was 81%, indicating the robustness of the proposed model and its suitability for real-time applications.
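
A minimal sketch of the winning configuration: fused facial-feature vectors classified with a gradient boosting machine (scikit-learn's GBM here); the feature extraction step is omitted, the arrays are placeholders, and the assumption of four constitution classes follows the traditional SCM typology rather than the paper's stated setup.

```python
# Classify fused facial-feature vectors into constitution types with a GBM.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((300, 24))        # fused facial features (placeholder)
y = rng.integers(0, 4, 300)      # four constitution types (assumed)

gbm = GradientBoostingClassifier(random_state=0)
print("CV accuracy:", cross_val_score(gbm, X, y, cv=5).mean())
```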

Underwater Acoustic Research Trends with Machine Learning: General Background

  • Yang, Haesang;Lee, Keunhwa;Choo, Youngmin;Kim, Kookhyun
    • Journal of Ocean Engineering and Technology / v.34 no.2 / pp.147-154 / 2020
  • Underwater acoustics, the study of underwater wave propagation and its interaction with boundaries, has mainly been applied to underwater communication, target detection, marine resources, the marine environment, and underwater sound sources. Building on the scientific and engineering understanding of acoustic signals and data, recent studies combining traditional methods with data-driven machine learning have shown continuous progress. Machine learning, represented by deep learning, has achieved unprecedented success in a variety of fields, owing to big data, graphics processing unit (GPU) computing, and advances in algorithms. Although machine learning has not yet been applied to every area of underwater acoustics, it will be used more actively in the future, in line with its ongoing development and overwhelming achievements. To convey the research trends of machine learning applications in underwater acoustics, this paper introduces the general theoretical background of several related machine learning techniques.

Analyzing Machine Learning Techniques for Fault Prediction Using Web Applications

  • Malhotra, Ruchika;Sharma, Anjali
    • Journal of Information Processing Systems / v.14 no.3 / pp.751-770 / 2018
  • Web applications are indispensable in the software industry and continuously evolve, either to meet new criteria or to include new functionality. However, despite quality assurance via testing, the presence of defects hinders straightforward development. Many factors contribute to defects, and minimizing them is often expensive in terms of man-hours, so detecting fault proneness in the early phases of software development is important, and a fault prediction model for identifying fault-prone classes in a web application is highly desirable. In this work, we compare 14 machine learning techniques to analyze the relationship between object-oriented metrics and fault prediction in web applications, using various releases of the Apache Click and Apache Rave datasets. En route to the predictive analysis, the input basis set for each release is first optimized using the filter-based correlation feature selection (CFS) method. The LCOM3, WMC, NPM, and DAM metrics prove to be the most significant predictors; the statistical analysis of these metrics conforms well with the CFS evaluation and affirms their role in the defect prediction of web applications. The overall predictive ability of the fault prediction models is first ranked using the Friedman technique and then statistically compared using Nemenyi post-hoc analysis. The results not only uphold the predictive capability of machine learning models for faulty classes in web applications, but also show that ensemble algorithms are most appropriate for defect prediction on the Apache datasets. We also derive a consensus between the metrics selected by the CFS technique and the statistical analysis of the datasets.
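
A small sketch of the ranking step described here: a Friedman test over per-release scores of several models (SciPy); the scores are invented, and the Nemenyi post-hoc comparison is omitted since it lives outside SciPy.

```python
# Friedman test: do the models differ significantly across datasets/releases?
import numpy as np
from scipy.stats import friedmanchisquare

# Rows: releases; columns: three fault-prediction models (toy AUC scores).
scores = np.array([[0.71, 0.74, 0.78],
                   [0.69, 0.75, 0.77],
                   [0.72, 0.73, 0.80],
                   [0.68, 0.76, 0.79]])

stat, p = friedmanchisquare(*scores.T)               # one sample per model
ranks = scores.argsort(axis=1).argsort(axis=1) + 1   # rank 1 = lowest score
print("Friedman statistic:", stat, "p-value:", p)
print("mean rank per model:", ranks.mean(axis=0))
```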

Design of Block-based Modularity Architecture for Machine Learning (머신러닝을 위한 블록형 모듈화 아키텍처 설계)

  • Oh, Yoosoo
    • Journal of Korea Multimedia Society / v.23 no.3 / pp.476-482 / 2020
  • In this paper, we propose a block-based modularity architecture design method for distributed machine learning. The proposed architecture is a block-type module structure housing various machine learning algorithms; it allows free composition of block-type modules, so multiple machine learning algorithms can be interlocked organically as the situation requires. The architecture enables open data communication using a metadata query protocol, and by designing a communication method suited to surrounding applications, it makes it easy to implement application services that combine various edge computing devices. To confirm the interlocking of the proposed block-type modules, we implemented a hardware-based modular application system.
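
A hedged sketch of the block idea: each ML block exposes a uniform run interface plus queryable metadata so blocks compose freely. The interface names below are invented for illustration and are not the paper's API.

```python
# Block-type modules with a minimal stand-in for a metadata query protocol.
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Block:
    name: str
    run: Callable[[Any], Any]                     # the block's ML computation
    metadata: dict = field(default_factory=dict)  # answers metadata queries

    def query(self, key: str):
        return self.metadata.get(key)

def chain(blocks, data):
    # Free composition of block-type modules: each output feeds the next.
    for b in blocks:
        data = b.run(data)
    return data

scale = Block("scale", lambda x: [v / 255 for v in x], {"input": "raw"})
thresh = Block("threshold", lambda x: [v > 0.5 for v in x], {"input": "scaled"})
print(chain([scale, thresh], [10, 200, 255]))
print(scale.query("input"))
```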

Improving Performance of Machine Learning-based Haze Removal Algorithms with Enhanced Training Database

  • Ngo, Dat;Kang, Bongsoon
    • Journal of IKEEE / v.22 no.4 / pp.948-952 / 2018
  • Haze removal has attracted considerable research interest owing to its many practical applications. Existing algorithms are founded upon histogram equalization, contrast maximization, or the growing trend of applying machine learning to image processing. Since machine learning-based algorithms solve problems from data, they usually outperform those based on traditional image processing and computer vision techniques. To achieve such high performance, however, one requisite is a large and reliable training database, which is hard to obtain given the complexity of acquiring matched real hazy and haze-free images. As a result, researchers currently use synthetic databases, obtained by introducing synthetic haze drawn from the standard uniform distribution into clear images. In this paper, we propose the enhanced equidistribution, improving upon our previous study of equidistribution, and use it to build a new database for training machine learning-based haze removal algorithms. A large number of experiments verify the effectiveness of the proposed methodology.
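
A sketch of the synthetic-haze recipe referred to here, using the standard atmospheric scattering model I = J·t + A·(1−t) with the transmission t drawn from a uniform distribution; the transmission range below is an assumption, since the enhanced equidistribution itself is the paper's contribution and is not reproduced.

```python
# Synthesize a hazy image from a clear one via the atmospheric scattering
# model: hazy = clear * t + A * (1 - t), with t sampled uniformly.
import numpy as np

def synthesize_haze(clear, atmospheric_light=1.0, rng=None):
    rng = rng or np.random.default_rng()
    t = rng.uniform(0.2, 0.9)    # assumed transmission range (illustrative)
    return clear * t + atmospheric_light * (1 - t)

clear = np.random.default_rng(0).random((4, 4))   # toy "haze-free" image
print(synthesize_haze(clear))
```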