• Title/Summary/Keyword: deep machine learning


Design of the Management System for Students at Risk of Dropout using Machine Learning (머신러닝을 이용한 학업중단 위기학생 관리시스템의 설계)

  • Ban, Chae-Hoon; Kim, Dong-Hyun; Ha, Jong-Soo
    • The Journal of the Korea institute of electronic communication sciences / v.16 no.6 / pp.1255-1262 / 2021
  • The proportion of students dropping out of universities is increasing year by year, and universities try to identify and eliminate risk factors in advance to prevent dropout. However, because at-risk students are managed through univariate analysis of individual risk factors, their management is difficult and the resulting forecasts are inaccurate. In this paper, we identify risk factors for university dropout and perform a multivariate analysis with machine learning methods to predict dropout. In addition, we derive an optimal method by evaluating the performance of various prediction models, and we assess the correlation and contribution of the risk factors that lead to dropout.
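
A minimal sketch of the multivariable idea described in this abstract: several risk factors are fed jointly to competing classifiers, and a feature-importance ranking approximates each factor's contribution. The column names, the synthetic records, and the choice of scikit-learn models are illustrative assumptions, not the paper's actual variables or method.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Toy stand-in for real student records (hypothetical risk factors).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "gpa": rng.uniform(0, 4.5, 1000),
    "attendance_rate": rng.uniform(0, 1, 1000),
    "credits_earned": rng.integers(0, 140, 1000),
    "counseling_visits": rng.integers(0, 10, 1000),
})
df["dropped_out"] = ((df["gpa"] < 1.5) & (df["attendance_rate"] < 0.6)).astype(int)

features = ["gpa", "attendance_rate", "credits_earned", "counseling_visits"]
X_tr, X_te, y_tr, y_te = train_test_split(
    df[features], df["dropped_out"], test_size=0.2,
    stratify=df["dropped_out"], random_state=0)

# Compare several predictors, mirroring the evaluation of multiple methods.
models = {"random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
          "logistic_regression": LogisticRegression(max_iter=1000)}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "F1:", round(f1_score(y_te, model.predict(X_te)), 3))

# Approximate contribution of each risk factor via feature importances.
for feat, imp in sorted(zip(features, models["random_forest"].feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{feat}: {imp:.3f}")
```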

Estimation of Automatic Video Captioning in Real Applications using Machine Learning Techniques and Convolutional Neural Network

  • Vaishnavi, J; Narmatha, V
    • International Journal of Computer Science & Network Security / v.22 no.9 / pp.316-326 / 2022
  • The rapid development of online video services has allowed them to overtake television media in popularity within a short period. Online videos are used all the more because the captions displayed alongside the scenes improve understandability. Beyond entertainment, marketing companies and other organizations use captioned videos for product promotion, and captions also serve hearing-impaired and non-native viewers. Research continues on automatically displaying appropriate captions for videos uploaded as shows, movies, educational videos, online classes, websites, and so on. This paper addresses two parts. The first uses a machine learning approach in which videos are preprocessed into frames and resized, features are extracted with statistical GLCM measures and Hu moments, and the frames are classified into multiple actions. The second uses a deep learning approach in which a CNN architecture produces the results. Finally, both results are compared, and the CNN achieves the best accuracy of 96.10% in classification.
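
A hedged sketch of the hand-crafted feature branch named above: frames are extracted, resized, converted to grayscale, and described with GLCM texture statistics and Hu moments. The file name, frame size, and chosen GLCM properties are assumptions for illustration; only the general GLCM + Hu moments pipeline follows the abstract.

```python
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def frame_features(gray):
    """GLCM texture statistics plus the 7 Hu moments for one grayscale frame."""
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    texture = [graycoprops(glcm, p)[0, 0] for p in
               ("contrast", "homogeneity", "energy", "correlation")]
    hu = cv2.HuMoments(cv2.moments(gray)).flatten()
    return np.concatenate([texture, hu])

cap = cv2.VideoCapture("sample_video.mp4")        # hypothetical input file
features = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2GRAY)
    features.append(frame_features(gray))
cap.release()
X = np.array(features)                            # one feature row per frame
```

The resulting feature matrix would then feed a conventional classifier, while the deep-learning branch described in the abstract would instead pass the resized frames directly to a CNN.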

XAI Research Trends Using Social Network Analysis and Topic Modeling (소셜 네트워크 분석과 토픽 모델링을 활용한 설명 가능 인공지능 연구 동향 분석)

  • Gun-doo Moon; Kyoung-jae Kim
    • Journal of Information Technology Applications and Management / v.30 no.1 / pp.53-70 / 2023
  • Artificial intelligence has become part of everyday life in modern society rather than a distant future. As artificial intelligence and machine learning have grown more advanced and complex, it has become difficult for people to grasp their structure and the basis of their decisions, because machine learning shows only results, not the whole process. As artificial intelligence spread, people began to demand explanations that could justify trust in it. This study recognizes the necessity and importance of explainable artificial intelligence (XAI) and examines XAI research trends by applying social network analysis and topic modeling to IEEE publications from 2004, when the concept of XAI was introduced, to 2022. Social network analysis reveals the overall pattern of nodes across a large number of documents, and the connections between keywords expose the structure of their relationships; topic modeling identifies more objective topics by extracting keywords from unstructured data and grouping them into topics. Both methods are well suited to trend analysis. The analysis shows that the application of XAI is gradually expanding into various fields beyond machine learning and deep learning.
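
As an illustration of the topic-modeling step, the sketch below fits an LDA model over a few toy abstracts and prints the top terms per topic. The toy texts, the number of topics, and the use of scikit-learn's LatentDirichletAllocation are assumptions; they stand in for the IEEE corpus and whatever modeling tool the study actually used.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Placeholder documents; the study used IEEE publications from 2004 to 2022.
abstracts = [
    "explainable artificial intelligence for medical image diagnosis",
    "feature attribution methods for deep learning models",
    "interpretable machine learning in credit risk scoring",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}:", ", ".join(top))
```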

5G Network Resource Allocation and Traffic Prediction based on DDPG and Federated Learning (DDPG 및 연합학습 기반 5G 네트워크 자원 할당과 트래픽 예측)

  • Seok-Woo Park; Oh-Sung Lee; In-Ho Ra
    • Smart Media Journal / v.13 no.4 / pp.33-48 / 2024
  • With the advent of 5G, characterized by Enhanced Mobile Broadband (eMBB), Ultra-Reliable Low Latency Communications (URLLC), and Massive Machine Type Communications (mMTC), efficient network management and service provision are becoming increasingly critical. This paper proposes a novel approach to the key challenges of 5G networks, namely ultra-high speed, ultra-low latency, and ultra-reliability, by dynamically optimizing network slicing and resource allocation with machine learning (ML) and deep learning (DL) techniques. The proposed methodology uses prediction models for network traffic and resource allocation and employs Federated Learning (FL) to optimize network bandwidth and latency while enhancing privacy and security. Specifically, this paper covers the implementation of algorithms and models such as Random Forest and LSTM, presenting methodologies for the automation and intelligence of 5G network operations. Finally, the performance gains achievable by applying ML and DL to 5G networks are validated through performance evaluation and analysis, and solutions for network slicing and resource management optimization are proposed for various industrial applications.
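
A minimal sketch of the federated-learning idea referenced above: each base-station client trains a local traffic-prediction model on its own data, and only model weights are averaged centrally (FedAvg-style), so raw traffic records never leave the client. The linear local model, shapes, and synthetic data are assumptions used purely to show the aggregation step.

```python
import numpy as np

def local_update(weights, client_data, lr=0.01):
    """One gradient step of local linear-regression training on a client's traffic data."""
    X, y = client_data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fed_avg(client_weights, client_sizes):
    """FedAvg aggregation: average client models, weighted by local dataset size."""
    return np.average(np.stack(client_weights), axis=0,
                      weights=np.array(client_sizes, dtype=float))

rng = np.random.default_rng(0)
# Three hypothetical clients, each with 100 samples of 4 traffic features.
clients = [(rng.normal(size=(100, 4)), rng.normal(size=100)) for _ in range(3)]

global_w = np.zeros(4)
for _ in range(20):                                   # communication rounds
    local = [local_update(global_w, c) for c in clients]
    global_w = fed_avg(local, [len(c[1]) for c in clients])
print("aggregated weights:", global_w)
```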

Malwares Attack Detection Using Ensemble Deep Restricted Boltzmann Machine

  • K. Janani; R. Gunasundari
    • International Journal of Computer Science & Network Security / v.24 no.5 / pp.64-72 / 2024
  • In recent times, cyber attackers can use artificial intelligence (AI) to boost the sophistication and scope of their attacks. On the defense side, AI is used to strengthen defense plans and to improve the robustness, flexibility, and efficiency of defense systems, adapting to environmental changes to reduce their impact. With continued development of information and communication technologies, new exploits threaten cyber security and change rapidly, and cyber criminals use sophisticated tactics to increase the speed and scale of their attacks. Consequently, there is a need for more flexible, adaptable, and robust cyber defense systems that can identify a wide range of threats in real time. In recent years, the adoption of AI approaches has grown and now plays a vital role in detecting and preventing cyber threats. In this paper, an Ensemble Deep Restricted Boltzmann Machine (EDRBM) is developed for the classification of cybersecurity threats in a large-scale network environment. The EDRBM acts as a classification model that separates malicious flowsets from large-scale network traffic. Simulations are conducted to test the efficacy of the proposed EDRBM under various malware attacks, and the results show that the proposed method achieves a higher classification rate for malicious flowsets than other methods.
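
A loose sketch of the ensemble-of-RBMs idea: several restricted Boltzmann machine feature extractors, each followed by a simple classifier, are combined by soft voting. This is a generic scikit-learn stand-in under assumed hyperparameters and synthetic flow features, not the authors' EDRBM implementation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for labeled flowset features (malicious vs. benign).
X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def rbm_member(seed):
    return Pipeline([
        ("scale", MinMaxScaler()),          # RBMs expect inputs in [0, 1]
        ("rbm", BernoulliRBM(n_components=64, learning_rate=0.05, random_state=seed)),
        ("clf", LogisticRegression(max_iter=1000)),
    ])

ensemble = VotingClassifier(
    estimators=[(f"rbm_{s}", rbm_member(s)) for s in range(3)], voting="soft")
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```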

Image Enhanced Machine Vision System for Smart Factory

  • Kim, ByungJoo
    • International Journal of Internet, Broadcasting and Communication / v.13 no.2 / pp.7-13 / 2021
  • Machine vision is a technology that lets a computer recognize and judge objects as a person would. In recent years, as advanced technologies such as optical systems, artificial intelligence, and big data have matured, conventional machine vision systems have achieved more accurate quality inspection and increased manufacturing efficiency. In machine vision systems using deep learning, the quality of the input image is very important. However, most images obtained in the industrial field for quality inspection contain noise, which is a major factor limiting the performance of the machine vision system. Therefore, to improve performance, the noise must be removed from the image, and a great deal of research addresses image denoising. In this paper, we propose an autoencoder-based machine vision system to eliminate noise in the image. In experiments, the proposed model showed better denoising and image reconstruction capability than the basic autoencoder model on the MNIST and Fashion-MNIST data sets.
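
A minimal denoising-autoencoder sketch in Keras on MNIST, in the spirit of the system described above; the layer sizes, noise level, and training settings are assumptions and do not reproduce the paper's proposed architecture.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32") / 255.0
# Corrupt the inputs with Gaussian noise; targets stay clean.
x_train_noisy = np.clip(x_train + 0.3 * np.random.normal(size=x_train.shape), 0.0, 1.0)

autoencoder = models.Sequential([
    layers.Input(shape=(28, 28)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(32, activation="relu"),          # bottleneck
    layers.Dense(128, activation="relu"),
    layers.Dense(28 * 28, activation="sigmoid"),
    layers.Reshape((28, 28)),
])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
# Learn to map noisy images back to their clean versions.
autoencoder.fit(x_train_noisy, x_train, epochs=5, batch_size=256, validation_split=0.1)
```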

Traffic Data Generation Technique for Improving Network Attack Detection Using Deep Learning (네트워크 공격 탐지 성능향상을 위한 딥러닝을 이용한 트래픽 데이터 생성 연구)

  • Lee, Wooho; Hahm, Jaegyoon; Jung, Hyun Mi; Jeong, Kimoon
    • Journal of the Korea Convergence Society / v.10 no.11 / pp.1-7 / 2019
  • Recently, various approaches for detecting network attacks using machine learning have been studied and applied to detect new attacks and increase precision. However, machine learning methods depend on feature extraction, are time-consuming and complex, and suffer performance limits due to imbalance in the training data. In this study, we propose a method to overcome the degradation of classification performance caused by this imbalance, one of the main limitations of detection systems. To do so, we generate data using Generative Adversarial Networks (GANs) and propose a classification method using Convolutional Neural Networks (CNNs). With this approach, we confirm that accuracy improves when the method is applied to the NSL-KDD and UNSW-NB15 datasets.
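
A rough sketch of the data-generation idea: a small GAN learns to produce synthetic minority-class traffic records that can rebalance the training set before classification. The feature dimension, network sizes, and the placeholder "real" records are assumptions; only the generator-vs-discriminator training pattern is taken from the abstract.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

FEATURES, LATENT, BATCH = 41, 16, 64          # e.g. NSL-KDD records have 41 features

generator = models.Sequential([
    layers.Input(shape=(LATENT,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(FEATURES, activation="tanh"),  # features assumed scaled to [-1, 1]
])
discriminator = models.Sequential([
    layers.Input(shape=(FEATURES,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
bce = tf.keras.losses.BinaryCrossentropy()
g_opt, d_opt = tf.keras.optimizers.Adam(1e-3), tf.keras.optimizers.Adam(1e-3)

# Placeholder for real minority-class (attack) records.
real_minority = np.random.uniform(-1, 1, size=(512, FEATURES)).astype("float32")

for step in range(1000):
    real = real_minority[np.random.randint(0, len(real_minority), BATCH)]
    # 1) Train the discriminator on real vs. generated records.
    with tf.GradientTape() as d_tape:
        fake = generator(tf.random.normal((BATCH, LATENT)))
        d_loss = (bce(tf.ones((BATCH, 1)), discriminator(real)) +
                  bce(tf.zeros((BATCH, 1)), discriminator(fake)))
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    # 2) Train the generator to fool the discriminator.
    with tf.GradientTape() as g_tape:
        g_loss = bce(tf.ones((BATCH, 1)),
                     discriminator(generator(tf.random.normal((BATCH, LATENT)))))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
```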

Protein Disorder Prediction Using Multilayer Perceptrons

  • Oh, Sang-Hoon
    • International Journal of Contents / v.9 no.4 / pp.11-15 / 2013
  • "Protein Folding Problem" is considered to be one of the "Great Challenges of Computer Science" and prediction of disordered protein is an important part of the protein folding problem. Machine learning models can predict the disordered structure of protein based on its characteristic of "learning from examples". Among many machine learning models, we investigate the possibility of multilayer perceptron (MLP) as the predictor of protein disorder. The investigation includes a single hidden layer MLP, multi hidden layer MLP and the hierarchical structure of MLP. Also, the target node cost function which deals with imbalanced data is used as training criteria of MLPs. Based on the investigation results, we insist that MLP should have deep architectures for performance improvement of protein disorder prediction.

Data-Driven-Based Beam Selection for Hybrid Beamforming in Ultra-Dense Networks

  • Ju, Sang-Lim; Kim, Kyung-Seok
    • International journal of advanced smart convergence / v.9 no.2 / pp.58-67 / 2020
  • In this paper, we propose a data-driven beam selection scheme for massive multiple-input multiple-output (MIMO) systems in ultra-dense networks (UDN), which addresses the high computational cost of conventional coordinated beamforming approaches. We consider highly dense small-cell scenarios in the millimetre-wave band with more small cells than mobile stations. Analog beam selection for hybrid beamforming is a key issue in realizing millimetre-wave UDN MIMO systems. To reduce the computational complexity of analog beam selection, two deep neural network models are used. The channel samples, channel gains, and radio-frequency beamforming vectors between the access points and mobile stations are collected at the central/cloud unit connected to all small-cell access points and used to train the networks. The proposed machine-learning-based scheme provides an approach for the effective implementation of massive MIMO systems in UDN environments.
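
An illustrative sketch of the data-driven beam selection idea: a small neural network maps channel measurements to the index of the best analog beam in a codebook, replacing an exhaustive beam sweep. The feature dimension, codebook size, network depth, and synthetic data are assumptions, not the paper's two specific models.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N_CHANNEL_FEATS, N_BEAMS = 64, 32         # hypothetical feature and codebook sizes
X = np.random.randn(10000, N_CHANNEL_FEATS).astype("float32")   # channel features
y = np.random.randint(0, N_BEAMS, size=10000)                   # best-beam labels

model = models.Sequential([
    layers.Input(shape=(N_CHANNEL_FEATS,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(128, activation="relu"),
    layers.Dense(N_BEAMS, activation="softmax"),   # probability over candidate beams
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=256, validation_split=0.1)

# At run time, pick the beam with the highest predicted score instead of
# exhaustively sweeping the whole codebook.
best_beam = np.argmax(model.predict(X[:1], verbose=0), axis=1)
```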

Next-Generation Personal Authentication Scheme Based on EEG Signal and Deep Learning

  • Yang, Gi-Chul
    • Journal of Information Processing Systems / v.16 no.5 / pp.1034-1047 / 2020
  • Personal authentication is an essential tool in today's complex digital information society. Traditionally, the most common mechanism of personal authentication has been the alphanumeric password. However, passwords that are hard to guess or break are often hard to remember, so there is demand for a technology that can replace the text-based password system. Graphical passwords can be an alternative, but they are vulnerable to shoulder-surfing attacks. This paper reviews a number of recently developed graphical password systems and introduces a personal authentication system that applies a machine learning technique to electroencephalography (EEG) signals, a new type of personal authentication that is easier for a person to use and more difficult for others to steal than preexisting systems.
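
A loose sketch of what an EEG-based authentication pipeline could look like: band-power features per channel feed a classifier that accepts or rejects an identity claim. The synthetic signals, frequency bands, and the SVM classifier are illustrative assumptions and are not taken from the paper.

```python
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

FS, CHANNELS = 128, 8                                 # sampling rate (Hz), electrodes
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch):
    """Mean spectral power per (channel, band) for one EEG epoch [channels x samples]."""
    feats = []
    for ch in epoch:
        f, pxx = welch(ch, fs=FS, nperseg=FS)
        for lo, hi in BANDS.values():
            feats.append(pxx[(f >= lo) & (f < hi)].mean())
    return feats

# Placeholder data: 2-second epochs for the genuine user (label 1) and impostors (label 0).
epochs = np.random.randn(400, CHANNELS, 2 * FS)
labels = np.random.randint(0, 2, size=400)
X = np.array([band_powers(e) for e in epochs])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("verification accuracy on held-out epochs:", clf.score(X_te, y_te))
```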