• Title/Summary/Keyword: deep machine learning


Guideline on Security Measures and Implementation of Power System Utilizing AI Technology (인공지능을 적용한 전력 시스템을 위한 보안 가이드라인)

  • Choi, Inji;Jang, Minhae;Choi, Moonsuk
    • KEPCO Journal on Electric Power and Energy
    • /
    • v.6 no.4
    • /
    • pp.399-404
    • /
    • 2020
  • There are many attempts to apply AI technology to diagnose facilities or improve work efficiency in the power industry. The emergence of new machine learning technologies, such as deep learning, is accelerating the digital transformation of the power sector. The problem is that traditional power systems face security risks when adopting state-of-the-art AI systems: this adoption has convergence characteristics and exposes the power system to new cybersecurity threats and vulnerabilities. This paper deals with security measures and their implementation for power systems using machine learning. By building a commercial facility operations forecasting system based on machine learning over power big data, this paper identifies and addresses security vulnerabilities that must be compensated for to protect customer information and power system safety. Furthermore, it provides security guidelines by generalizing the security measures to be considered when applying AI.

Speech Emotion Recognition Based on Deep Networks: A Review (딥네트워크 기반 음성 감정인식 기술 동향)

  • Mustaqeem, Mustaqeem;Kwon, Soonil
    • Annual Conference of KIPS
    • /
    • 2021.05a
    • /
    • pp.331-334
    • /
    • 2021
  • In recent years, a significant amount of development and research has been devoted to the use of Deep Learning (DL) for speech emotion recognition (SER) based on Convolutional Neural Networks (CNNs). These techniques usually focus on applying CNNs to applications associated with emotion recognition. Numerous deep-learning-based mechanisms have also been considered, which are important for SER-based human-computer interaction (HCI) applications. Compared with other methods, DL-based approaches have produced quite promising results in many fields, including automatic speech recognition, and have therefore attracted many studies and investigations. This article reviews and evaluates the improvements that have occurred in the SER domain while also discussing the existing studies on DL- and CNN-based SER.

A Study on Intrusion Detection Using Deep Learning-based Weight Measurement with Multimode Fiber Speckle Patterns

  • Hyuek Jae Lee
    • Current Optics and Photonics
    • /
    • v.8 no.5
    • /
    • pp.508-514
    • /
    • 2024
  • This paper presents a deep learning-based weight sensor, using the optical speckle patterns of a multimode fiber, designed for real-time intrusion detection. The sensor has been trained to identify 11 distinct speckle patterns, corresponding to weights from 0.0 kg to 2.0 kg at 200 g intervals. Estimation of untrained weights relies on the generalization capability of deep learning and yields an average weight error of 243.8 g. Although this margin of error precludes accurate weight measurement, the system's ability to detect abrupt weight changes makes it well suited to intrusion detection applications. The weight sensor is integrated with the Google Teachable Machine, and real-time intrusion notifications are delivered through the ThingSpeak™ cloud platform, an open-source Internet of Things (IoT) application developed by MathWorks.
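The intrusion rule the abstract implies can be sketched in a few lines: flag an alert when successive weight estimates jump by more than a threshold set well above the 243.8 g average error. The threshold and readings below are hypothetical, not the paper's values.

```python
# Hypothetical sketch: flag an intrusion when successive weight estimates
# from the speckle-pattern model change abruptly. The 243.8 g average
# error motivates a threshold comfortably above that noise floor.

def detect_intrusions(weights_kg, threshold_kg=0.5):
    """Return indices where the weight jumps by more than threshold_kg."""
    alerts = []
    for i in range(1, len(weights_kg)):
        if abs(weights_kg[i] - weights_kg[i - 1]) > threshold_kg:
            alerts.append(i)
    return alerts

# A stable load with one abrupt step (e.g., someone stepping on the sensor):
readings = [0.4, 0.45, 0.42, 1.6, 1.58, 1.61]
print(detect_intrusions(readings))  # → [3]
```

In a deployment along the lines described, such an alert would be what triggers the ThingSpeak notification.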

Deep-learning based In-situ Monitoring and Prediction System for the Organic Light Emitting Diode

  • Park, Il-Hoo;Cho, Hyeran;Kim, Gyu-Tae
    • Journal of the Semiconductor & Display Technology
    • /
    • v.19 no.4
    • /
    • pp.126-129
    • /
    • 2020
  • We introduce a lifetime assessment technique that uses a deep learning algorithm with complex electrical parameters, such as resistivity, permittivity, and impedance parameters, as integrated indicators for predicting the degradation of the organic molecules. The evaluation system consists of a fully automated in-situ measurement system and a multilayer perceptron learning system with five hidden layers of 1011 perceptrons each. Prediction accuracies are calculated and compared with respect to the physical features and learning hyperparameters. 62.5% of the full time-series data are used for training, with a prediction accuracy estimated as an r-squared value of 0.99; the remaining 37.5% are used for testing, with a prediction accuracy of 0.95. With k-fold cross-validation, robustness to instantaneous changes in the measured data is also improved.
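The evaluation protocol above (a 62.5%/37.5% chronological split scored with r-squared) can be sketched as follows; the series and the perfect-predictor stand-in are placeholders, not the paper's data or model.

```python
# Minimal sketch of the split-and-score protocol from the abstract:
# 62.5% of a time series for training, the remaining 37.5% for testing,
# scored with the r-squared (coefficient of determination) metric.

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

series = [float(i) for i in range(16)]   # placeholder degradation curve
split = int(len(series) * 0.625)         # 62.5% train / 37.5% test
train, test = series[:split], series[split:]
print(len(train), len(test))             # → 10 6

# A perfect predictor gives r² = 1.0:
print(r_squared(test, list(test)))       # → 1.0
```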

Deep recurrent neural networks with word embeddings for Urdu named entity recognition

  • Khan, Wahab;Daud, Ali;Alotaibi, Fahd;Aljohani, Naif;Arafat, Sachi
    • ETRI Journal
    • /
    • v.42 no.1
    • /
    • pp.90-100
    • /
    • 2020
  • Named entity recognition (NER) continues to be an important task in natural language processing because it features as a subtask and/or subproblem in information extraction and machine translation. In Urdu language processing, it is a very difficult task. This paper proposes various deep recurrent neural network (DRNN) learning models with word embeddings. Experimental results demonstrate that they improve upon the current state-of-the-art NER approaches for Urdu. The DRNN models evaluated include forward and bidirectional extensions of the long short-term memory and backpropagation-through-time approaches. The proposed models consider both language-dependent features, such as part-of-speech tags, and language-independent features, such as the "context windows" of words. The effectiveness of the DRNN models with word embeddings for NER in Urdu is demonstrated using three datasets. The results reveal that the proposed approach significantly outperforms previous conditional random field and artificial neural network approaches. The best F-measure values achieved on the three benchmark datasets using the proposed deep learning approaches are 81.1%, 79.94%, and 63.21%, respectively.
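The language-independent "context window" feature mentioned above has a simple shape: for each token, take the k tokens on either side, padded at sentence boundaries. A hedged sketch (the padding token and example sentence are illustrative):

```python
# Sketch of context-window feature extraction for sequence labeling:
# each token is paired with its k left and k right neighbours, with
# padding at the boundaries, before embedding lookup.

def context_windows(tokens, k=1, pad="<PAD>"):
    """Return one (2k+1)-token window centred on each input token."""
    padded = [pad] * k + tokens + [pad] * k
    return [padded[i:i + 2 * k + 1] for i in range(len(tokens))]

sent = ["Khan", "visited", "Lahore"]
for window in context_windows(sent, k=1):
    print(window)
# → ['<PAD>', 'Khan', 'visited']
#   ['Khan', 'visited', 'Lahore']
#   ['visited', 'Lahore', '<PAD>']
```

Each window would then be mapped to embedding vectors and fed to the recurrent model.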

A study on the standardization strategy for building of learning data set for machine learning applications (기계학습 활용을 위한 학습 데이터세트 구축 표준화 방안에 관한 연구)

  • Choi, JungYul
    • Journal of Digital Convergence
    • /
    • v.16 no.10
    • /
    • pp.205-212
    • /
    • 2018
  • With the development of high-performance CPUs/GPUs, artificial intelligence algorithms such as deep neural networks, and the availability of large amounts of data, machine learning has been extended to various applications. In particular, the large volumes of data collected from the Internet of Things, social network services, web pages, and public data are accelerating the use of machine learning. Learning data sets for machine learning exist in various formats depending on the application field and data type, which makes it difficult to process the data effectively and apply it to machine learning. This paper therefore studies a method for building a learning data set for machine learning according to standardized procedures. It first analyzes the requirements of learning data sets by problem type and data type. Based on this analysis, it presents a reference model for building learning data sets for machine learning applications, along with the target standardization organizations and a standard development strategy.
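One practical consequence of such standardization is that a data set can be validated mechanically before training. A minimal sketch, assuming a hypothetical reference schema (the field names below are not from the paper):

```python
# Illustrative sketch: check each record of a learning data set against
# a reference schema (field names and types) before it enters a
# training pipeline. The schema here is a made-up example.

SCHEMA = {"id": str, "features": list, "label": str}

def validate_record(record, schema=SCHEMA):
    """True iff the record has exactly the schema's fields, correctly typed."""
    if set(record) != set(schema):
        return False
    return all(isinstance(record[k], t) for k, t in schema.items())

good = {"id": "r001", "features": [0.1, 0.2], "label": "normal"}
bad = {"id": "r002", "features": "0.1,0.2"}   # wrong type, missing label
print(validate_record(good), validate_record(bad))  # → True False
```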

A Comparative Performance Analysis of Spark-Based Distributed Deep-Learning Frameworks (스파크 기반 딥 러닝 분산 프레임워크 성능 비교 분석)

  • Jang, Jaehee;Park, Jaehong;Kim, Hanjoo;Yoon, Sungroh
    • KIISE Transactions on Computing Practices
    • /
    • v.23 no.5
    • /
    • pp.299-303
    • /
    • 2017
  • By piling up hidden layers in artificial neural networks, deep learning delivers outstanding performance on high-level abstraction problems such as object/speech recognition and natural language processing. However, deep-learning users often struggle with the tremendous amounts of time and resources required to train deep neural networks. To alleviate this computational challenge, many approaches have been proposed in a diversity of areas. In this work, two existing Apache Spark-based acceleration frameworks for deep learning, SparkNet and DeepSpark, are compared and analyzed in terms of training accuracy and time demands. In the authors' experiments with the CIFAR-10 and CIFAR-100 benchmark datasets, SparkNet showed more stable convergence behavior than DeepSpark, but DeepSpark delivered a classification accuracy approximately 15% higher. In some cases, DeepSpark also outperformed the sequential implementation running on a single machine in terms of both accuracy and running time.

Enhanced Network Intrusion Detection using Deep Convolutional Neural Networks

  • Naseer, Sheraz;Saleem, Yasir
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.10
    • /
    • pp.5159-5178
    • /
    • 2018
  • Network intrusion detection is a rapidly growing field of information security, owing to its importance for modern IT infrastructure. Many supervised and unsupervised learning techniques have been devised by researchers from the disciplines of machine learning and data mining to achieve reliable detection of anomalies. In this paper, a deep convolutional neural network (DCNN) based intrusion detection system (IDS) is proposed, implemented, and analyzed. The deep CNN core of the proposed IDS is fine-tuned using randomized search over its configuration space. The proposed system is trained and tested on the NSL-KDD training and testing datasets using a GPU. Performance comparisons of the proposed DCNN model with other classifiers are provided using well-known metrics, including the receiver operating characteristic (ROC) curve, the area under the ROC curve (AUC), accuracy, the precision-recall curve, and mean average precision (mAP). The experimental results of the proposed DCNN-based IDS show promise for real-world application in anomaly detection systems.
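The randomized search the abstract mentions amounts to sampling configurations from a hyperparameter space and keeping the best-scoring one. A sketch under stated assumptions (the search space and the toy score below are illustrative, not the authors' actual grid):

```python
# Sketch of randomized search over a DCNN configuration space: draw
# random configurations and keep the one with the best validation score.
import random

SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "num_filters": [16, 32, 64],
    "dropout": [0.2, 0.3, 0.5],
}

def sample_config(space, rng):
    """Pick one value per hyperparameter, uniformly at random."""
    return {name: rng.choice(values) for name, values in space.items()}

def randomized_search(score_fn, space, n_iter=10, seed=0):
    rng = random.Random(seed)
    best_score, best_cfg = None, None
    for _ in range(n_iter):
        cfg = sample_config(space, rng)
        s = score_fn(cfg)
        if best_score is None or s > best_score:
            best_score, best_cfg = s, cfg
    return best_score, best_cfg

# Toy score standing in for validation accuracy: prefers small learning
# rate and more filters.
score, cfg = randomized_search(
    lambda c: c["num_filters"] - 1000 * c["learning_rate"], SPACE)
print(sorted(cfg))  # → ['dropout', 'learning_rate', 'num_filters']
```

In the paper's setting the score function would be the IDS's validation accuracy on NSL-KDD rather than this toy expression.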

An Improved Intrusion Detection System for SDN using Multi-Stage Optimized Deep Forest Classifier

  • Saritha Reddy, A;Ramasubba Reddy, B;Suresh Babu, A
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.4
    • /
    • pp.374-386
    • /
    • 2022
  • Research applying deep learning to automated computing and networking paradigms has produced rapid contributions to Software Defined Networking (SDN) and its diverse security applications for handling cybercrime. SDN plays a vital role in collecting information on network usage in large-scale data centers, and it simultaneously supports improved algorithm design for the automated detection of network intrusions. Despite its security protocols, SDN is considered vulnerable to DDoS (Distributed Denial of Service) attacks. Several research studies have developed machine-learning-based network intrusion detection systems addressing the detection and mitigation of DDoS attacks in SDN-based networks, given the dynamic changes in various features and behavioral patterns. Addressing this problem, this study designs a multistage, hybrid, intelligent deep learning classifier based on modified deep forest classification to detect DDoS attacks in SDN networks. Experimental results show that the accuracy of the proposed classifier improves when evaluated with standard parameters.

FORECASTING GOLD FUTURES PRICES CONSIDERING THE BENCHMARK INTEREST RATES

  • Lee, Donghui;Kim, Donghyun;Yoon, Ji-Hun
    • Journal of the Chungcheong Mathematical Society
    • /
    • v.34 no.2
    • /
    • pp.157-168
    • /
    • 2021
  • This study uses the benchmark interest rate of the Federal Open Market Committee (FOMC) to predict gold futures prices. For the predictions, we used the support vector machine (SVM), a machine-learning model, and the long short-term memory (LSTM) deep-learning model. We found that the LSTM method is more accurate than the SVM method. Moreover, we applied the Boruta algorithm to demonstrate that the FOMC benchmark interest rate correlates with gold futures prices.
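The data preparation such a forecast implies can be sketched as a sliding window: each window of past (price, interest rate) steps is paired with the next price, in the supervised form an LSTM or SVM regressor would consume. The numbers below are placeholders, not the paper's data.

```python
# Sketch: turn a gold-price series and the benchmark interest rate into
# sliding-window (inputs, next-price) pairs for a sequence regressor.

def make_windows(prices, rates, lookback=3):
    """Pair each window of (price, rate) steps with the next price."""
    X, y = [], []
    for i in range(len(prices) - lookback):
        X.append(list(zip(prices[i:i + lookback], rates[i:i + lookback])))
        y.append(prices[i + lookback])
    return X, y

prices = [1800, 1810, 1795, 1820, 1830]   # placeholder futures prices
rates = [0.25, 0.25, 0.50, 0.50, 0.75]    # placeholder FOMC benchmark rates
X, y = make_windows(prices, rates, lookback=3)
print(len(X), y)  # → 2 [1820, 1830]
```

An LSTM consumes each window as a length-3 sequence of 2-dimensional steps; for an SVM the same window would typically be flattened into a single feature vector.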