• Title/Abstract/Keyword: deep machine learning

Search Results: 1,093

Source Tracking Models on Chemical Leaks for Emergency Response in Chemical Plants Based on Deep Learning of Big Data (화학공장 누출사고 대응을 위한 빅데이터-딥러닝 누출원 추적모델)

  • Kim, Hyunseung;Shin, Dongil
    • Proceedings of the Korean Society of Disaster Information Conference / 2017.11a / pp.339-340 / 2017
  • If a chemical plant leak is not handled properly in its early stages, there is a very high risk that it will escalate into secondary and tertiary compound disasters such as fires and explosions. For this reason, it is very important to build an integrated leak-response system that quickly identifies the leak location at the onset of an accident and notifies on-site safety personnel, enabling a more systematic and efficient initial response and mitigating accident damage. As a preliminary study toward such an integrated response system, this work proposes the development of a deep learning-based leak-source tracking model. Computational Fluid Dynamics (CFD) simulations of leak scenarios were performed for an actual chemical plant located in Yeosu; concentration, wind direction, and wind speed data were extracted at the positions of sensors placed along the plant boundary, sensor coordinates were added, and an artificial neural network was trained. The trained model predicted the leak location in real time with 75.43% accuracy over 40 leak candidates, even in situations not used for training. Moreover, even when the predicted location did not match exactly, it was physically very close to the actual leak point, suggesting that the benefit of applying the proposed model in the field would be even greater.

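As an illustration of the approach the abstract above describes (a neural network mapping per-sensor concentration, wind direction, wind speed, and sensor coordinates to one of 40 leak candidates), here is a minimal Keras sketch. The sensor count, feature layout, and layer sizes are assumptions for illustration, not the paper's configuration, and the data are synthetic.

```python
# Minimal sketch: classify the leak source among 40 candidate points from
# per-sensor readings (concentration, wind direction, wind speed, coordinates).
# Dimensions and architecture are assumptions, not the paper's exact setup.
import numpy as np
import tensorflow as tf

n_sensors, feats_per_sensor, n_candidates = 20, 5, 40
X = np.random.rand(1000, n_sensors * feats_per_sensor).astype("float32")  # synthetic readings
y = np.random.randint(0, n_candidates, size=1000)                         # synthetic leak labels

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_sensors * feats_per_sensor,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(n_candidates, activation="softmax"),  # one class per candidate point
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```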

CNN-based Android Malware Detection Using Reduced Feature Set

  • Kim, Dong-Min;Lee, Soo-jin
    • Journal of the Korea Society of Computer and Information / v.26 no.10 / pp.19-26 / 2021
  • The performance of deep learning-based malware detection and classification models depends largely on how the feature set used for training is constructed. In this paper, we propose an approach for selecting the optimal feature set to maximize detection performance for CNN-based Android malware detection. The features to be included in the feature set were selected with the Chi-Square test, which is widely used for feature selection in machine learning and deep learning. To validate the proposed approach, the CNN model was trained on 36 features selected from the CICAndMal2017 dataset and the malware detection performance was then measured. As a result, an accuracy of 99.99% was achieved in binary classification and 98.55% in multiclass classification.
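
A minimal sketch of the pipeline the abstract describes: chi-square feature selection followed by a small CNN for binary malware detection. The raw feature count, network layers, and synthetic data are assumptions; only the choice of 36 selected features follows the abstract.

```python
# Sketch: chi-square feature selection (SelectKBest) followed by a small 1D CNN
# for binary malware detection. Feature counts and layers are illustrative only.
import numpy as np
import tensorflow as tf
from sklearn.feature_selection import SelectKBest, chi2

X = np.random.randint(0, 100, size=(2000, 200)).astype("float32")  # synthetic non-negative features
y = np.random.randint(0, 2, size=2000)                             # 0 = benign, 1 = malware

X_sel = SelectKBest(chi2, k=36).fit_transform(X, y)   # keep the 36 highest-scoring features
X_sel = X_sel[..., np.newaxis]                         # shape (samples, 36, 1) for Conv1D

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(36, 1)),
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_sel, y, epochs=5, batch_size=64, validation_split=0.2)
```

Chi-square scoring requires non-negative feature values, which is why the synthetic features here are counts.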

Phishing Attack Detection Using Deep Learning

  • Alzahrani, Sabah M.
    • International Journal of Computer Science & Network Security / v.21 no.12 / pp.213-218 / 2021
  • This paper proposes a technique for detecting a significant threat that attempts to obtain sensitive and confidential information such as usernames, passwords, and credit card details from a targeted individual or organization. By definition, a phishing attack happens when malicious actors pose as trusted entities to fraudulently obtain user data. Phishing is classified as a type of social engineering attack. For a phishing attack to happen, a victim must be convinced to open an email or a direct message [1]. The email or direct message will contain a link that the victim is required to click. The aim of the attack is usually to install malicious software or to freeze a system; in other instances, the attackers threaten to reveal sensitive information obtained from the victim. Phishing attacks can have devastating effects: sensitive and confidential information can end up in the hands of malicious people, and identity theft is another common consequence [1]. Attackers may impersonate the victim to make unauthorized purchases, and victims also report loss of funds when attackers access their credit card information. The proposed method has two major subsystems: (1) data collection, in which many websites are gathered as a large dataset of normal and phishing examples, and (2) a distributed detection system that applies different artificial intelligence algorithms: a neural network and classical machine learning. The Amazon cloud was used to run the cluster on machines with different numbers of cores. Experimental results show that the proposed system achieves very good accuracy and detection rates.
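
The detection subsystem described above combines a neural network with classical machine learning. A hedged sketch of that comparison on synthetic website features follows; the feature count and classifier settings are assumptions, not the paper's.

```python
# Sketch: comparing a neural-network classifier with a classical ML classifier
# on numeric phishing features. Features are synthetic; sizes are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X = np.random.rand(5000, 30)               # synthetic website/URL features
y = np.random.randint(0, 2, size=5000)     # 0 = legitimate, 1 = phishing
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, clf in [("neural net", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300)),
                  ("random forest", RandomForestClassifier(n_estimators=200))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(name, "accuracy:", accuracy_score(y_te, pred),
          "detection rate:", recall_score(y_te, pred))
```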

PharmacoNER Tagger: a deep learning-based tool for automatically finding chemicals and drugs in Spanish medical texts

  • Armengol-Estape, Jordi;Soares, Felipe;Marimon, Montserrat;Krallinger, Martin
    • Genomics & Informatics / v.17 no.2 / pp.15.1-15.7 / 2019
  • Automatically detecting mentions of pharmaceutical drugs and chemical substances is key for the subsequent extraction of relations between chemicals and other biomedical entities such as genes, proteins, diseases, adverse reactions, or symptoms. The identification of drug mentions is also a prior step for complex event types such as drug dosage recognition, duration of medical treatments, or drug repurposing. Formally, this task is known as named entity recognition (NER): automatically identifying mentions of predefined entities of interest in running text. In the domain of medical texts, techniques based on hand-crafted rules and graph-based models can provide adequate performance for chemical entity recognition (CER). In recent years, however, the field of natural language processing has largely pivoted to deep learning, and state-of-the-art results for most natural language tasks are usually obtained with artificial neural networks. Competitive resources for drug name recognition in English medical texts are already available and heavily used, while for other languages such as Spanish these tools, although clearly needed, were missing. In this work, we adapt an existing neural NER system, NeuroNER, to the particular domain of Spanish clinical case texts and extend the neural network to take into account additional features beyond the plain text. NeuroNER can be considered a competitive baseline system for Spanish drug recognition and CER, promoted by the Spanish national plan for the advancement of language technologies (Plan TL).
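
A rough sketch of the kind of extension described above: a bidirectional-LSTM token tagger whose word embeddings are concatenated with an additional per-token feature channel. Vocabulary size, tag set, and the extra feature are assumptions; NeuroNER itself also uses character embeddings and a CRF output layer, which are omitted here for brevity.

```python
# Sketch of a BiLSTM token tagger extended with an extra per-token feature
# channel (e.g. a gazetteer flag). Sizes and tags are illustrative assumptions.
import numpy as np
import tensorflow as tf

vocab, n_tags, max_len = 5000, 5, 40                          # assumed sizes
words = np.random.randint(1, vocab, size=(500, max_len))      # synthetic token ids
extra = np.random.rand(500, max_len, 1).astype("float32")     # synthetic extra feature
tags  = np.random.randint(0, n_tags, size=(500, max_len))     # synthetic BIO-style labels

w_in = tf.keras.Input(shape=(max_len,), dtype="int32")
f_in = tf.keras.Input(shape=(max_len, 1))
emb = tf.keras.layers.Embedding(vocab, 64)(w_in)
x = tf.keras.layers.Concatenate()([emb, f_in])                # append extra features to embeddings
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True))(x)
out = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(n_tags, activation="softmax"))(x)

model = tf.keras.Model([w_in, f_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit([words, extra], tags, epochs=3, batch_size=32)
```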

Deep Learning Approaches to RUL Prediction of Lithium-ion Batteries (딥러닝을 이용한 리튬이온 배터리 잔여 유효수명 예측)

  • Jung, Sang-Jin;Hur, Jang-Wook
    • Journal of the Korean Society of Manufacturing Process Engineers / v.19 no.12 / pp.21-27 / 2020
  • Lithium-ion batteries are the heart of energy-storage devices and electric vehicles. Owing to their superior qualities, such as high capacity and energy efficiency, they have become quite popular, resulting in an increased demand for failure/damage prevention and usable-life maximization. To prevent failure in lithium-ion batteries, improve their reliability, and ensure productivity, prognostic measures such as condition monitoring through sensors, condition assessment for failure detection, and remaining useful life prediction through data-driven prognostics and health management approaches have become important research topics. In this study, the remaining useful life of lithium-ion batteries was predicted using two efficient recurrent neural networks: long short-term memory (LSTM) and the gated recurrent unit (GRU). The proposed approaches were compared for prognostic accuracy and cost-efficiency. It was determined that LSTM showed slightly higher accuracy, whereas the GRU has a computational advantage.
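
A minimal sketch of the comparison described above: an LSTM and a GRU regressor trained on windowed battery data to predict remaining useful life. Window length, layer sizes, and the synthetic data are assumptions, not the paper's setup.

```python
# Sketch: comparing LSTM and GRU regressors for remaining-useful-life (RUL)
# prediction from capacity sequences. Window length and layer sizes are assumed.
import numpy as np
import tensorflow as tf

window, n_features = 30, 1
X = np.random.rand(800, window, n_features).astype("float32")  # synthetic capacity windows
y = np.random.rand(800).astype("float32")                      # synthetic normalized RUL

def build(cell):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, n_features)),
        cell(64),                          # LSTM or GRU layer
        tf.keras.layers.Dense(1),          # regression head: predicted RUL
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

for name, cell in [("LSTM", tf.keras.layers.LSTM), ("GRU", tf.keras.layers.GRU)]:
    hist = build(cell).fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
    print(name, "final val MAE:", hist.history["val_mae"][-1])
```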

A Predictive Model to identify possible affected Bipolar disorder students using Naive Baye's, Random Forest and SVM machine learning techniques of data mining and Building a Sequential Deep Learning Model using Keras

  • Peerbasha, S.;Surputheen, M. Mohamed
    • International Journal of Computer Science & Network Security / v.21 no.5 / pp.267-274 / 2021
  • Medical care practice includes gathering a wide range of data on students with manic episodes and depression, which would assist specialists in diagnosing the students' health condition correctly. In this way, the instructors of those students can also identify them and take care of them well. The data collected from the students consist of straightforward symptoms observed in them. Naive Bayes classification, Random Forest classification, and the SVM algorithm were used to classify the gathered datasets and determine whether a student is affected by bipolar disorder or not. Performance of the algorithms on the disease data is calculated and compared. In addition, a sequential deep learning model is built using Keras. The results of the simulations show the efficacy of the classification techniques on the dataset, as well as the nature and complexity of the dataset used.
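
A small sketch of the workflow the abstract outlines: the three named classical classifiers plus a Keras sequential model, evaluated on a synthetic tabular dataset. The actual student features and model settings are not specified in the abstract and are assumed here.

```python
# Sketch: Naive Bayes, Random Forest, SVM, and a small Keras sequential model
# on a synthetic tabular dataset. Feature count and labels are assumptions.
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X = np.random.rand(600, 12)              # synthetic symptom/behaviour features
y = np.random.randint(0, 2, size=600)    # 0 = not affected, 1 = possibly affected
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Random Forest", RandomForestClassifier(n_estimators=100)),
                  ("SVM", SVC())]:
    print(name, "accuracy:", clf.fit(X_tr, y_tr).score(X_te, y_te))

seq = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(12,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
seq.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
seq.fit(X_tr, y_tr, epochs=5, batch_size=32, verbose=0)
print("Keras sequential accuracy:", seq.evaluate(X_te, y_te, verbose=0)[1])
```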

Scene Text Recognition Performance Improvement through an Add-on of an OCR based Classifier (OCR 엔진 기반 분류기 애드온 결합을 통한 이미지 내부 텍스트 인식 성능 향상)

  • Chae, Ho-Yeol;Seok, Ho-Sik
    • Journal of IKEEE / v.24 no.4 / pp.1086-1092 / 2020
  • An autonomous agent for the real world should be able to recognize text in scenes. With the advancement of deep learning, various DNN models have been utilized for transformation, feature extraction, and prediction. However, existing state-of-the-art STR (Scene Text Recognition) engines do not achieve the performance required for real-world applications. In this paper, we introduce a performance-improvement method based on an add-on, composed of an OCR (Optical Character Recognition) engine and a classifier, attached to STR engines. On instances from the IC13 and IC15 datasets that an STR engine failed to recognize, our method recognizes 10.92% of the unrecognized characters.
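
One way to read the add-on idea is as a routing step: keep the STR engine's output when it is confident, and otherwise defer to the OCR-engine-plus-classifier add-on. The sketch below only illustrates that routing; both engines are hypothetical stubs, and the confidence threshold is an assumption.

```python
# Routing sketch: trust the STR engine when confident, otherwise fall back to an
# OCR + classifier add-on. The engines here are hypothetical stand-ins only.
from dataclasses import dataclass
import random

@dataclass
class Prediction:
    text: str
    confidence: float

def str_engine(image) -> Prediction:      # stand-in for a trained STR model
    return Prediction("str-result", random.random())

def ocr_addon(image) -> Prediction:       # stand-in for the OCR engine + classifier add-on
    return Prediction("addon-result", random.random())

def recognize(image, threshold: float = 0.8) -> str:
    primary = str_engine(image)
    if primary.confidence >= threshold:   # keep the STR prediction when it is confident
        return primary.text
    return ocr_addon(image).text          # otherwise defer to the add-on

print(recognize(object()))
```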

Development of Surface Weather Forecast Model by using LSTM Machine Learning Method (기계학습의 LSTM을 적용한 지상 기상변수 예측모델 개발)

  • Hong, Sungjae;Kim, Jae Hwan;Choi, Dae Sung;Baek, Kanghyun
    • Atmosphere / v.31 no.1 / pp.73-83 / 2021
  • Numerical weather prediction (NWP) models play an essential role in predicting weather factors, but using them is challenging due to various factors. To overcome the difficulties of NWP models, deep learning models have been deployed in weather forecasting in several recent studies. This study adopts long short-term memory (LSTM), which demonstrates remarkable performance in time-series prediction. Because the combination of meteorological input features and activation functions has a significant impact on LSTM performance, results from 5 combinations of input features and 4 activation functions are analyzed at 9 Automated Surface Observing System (ASOS) stations corresponding to cities, islands, and mountains. The optimized LSTM model produces better performance within the first eight forecast hours than the Local Data Assimilation and Prediction System (LDAPS) operated by the Korea Meteorological Administration. This study therefore illustrates that the LSTM model can be usefully applied to very short-term weather forecasting; further studies on a CNN-LSTM model, coupling a 2-D spatial convolutional neural network (CNN) with the LSTM, are required for improvement.
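
A minimal sketch of the kind of sweep the abstract describes, varying the LSTM activation function over short sequences of surface variables. The sequence length, feature count, target variable, and candidate activations are assumptions; the paper's five input-feature combinations are not reproduced here.

```python
# Sketch: sweeping LSTM activation functions for short-term surface forecasting.
# Sequence length, features, and targets are synthetic assumptions.
import numpy as np
import tensorflow as tf

seq_len, n_features = 24, 5                  # e.g. 24 hourly steps of 5 surface variables (assumed)
X = np.random.rand(1000, seq_len, n_features).astype("float32")
y = np.random.rand(1000).astype("float32")   # next-hour target (e.g. temperature, assumed)

def build(activation):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(seq_len, n_features)),
        tf.keras.layers.LSTM(64, activation=activation),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

for act in ["tanh", "relu", "sigmoid", "elu"]:
    hist = build(act).fit(X, y, epochs=3, batch_size=64, validation_split=0.2, verbose=0)
    print(act, "val MAE:", hist.history["val_mae"][-1])
```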

A Robust Energy Consumption Forecasting Model using ResNet-LSTM with Huber Loss

  • Albelwi, Saleh
    • International Journal of Computer Science & Network Security / v.22 no.7 / pp.301-307 / 2022
  • Energy consumption has grown alongside dramatic population increases. Statistics show that buildings in particular utilize a significant amount of energy worldwide. Because of this, building energy prediction is crucial for optimizing utilities' energy plans and for creating predictive models for consumers. To improve energy prediction performance, this paper proposes a ResNet-LSTM model that combines residual networks (ResNets) and long short-term memory (LSTM) for energy consumption prediction. ResNets are utilized to extract complex and rich features, while LSTM has the ability to learn temporal correlations; a dense layer is used as the regression head to forecast energy consumption. To make our model more robust, we employed the Huber loss during the optimization process. The Huber loss achieves high efficiency by penalizing minor errors quadratically while using the absolute error for large errors, which increases robustness and makes our model less sensitive to outliers. Our proposed system was trained on historical data to forecast energy consumption for different time series. To evaluate the proposed model, we compared its performance with several popular machine learning and deep learning methods such as linear regression, neural networks, decision trees, and convolutional neural networks. The results show that our proposed model predicted energy consumption most accurately.
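
A small-scale sketch of the ResNet-LSTM idea: residual 1D-convolution blocks feed an LSTM, a dense layer regresses the forecast, and the model is compiled with the Huber loss. Block counts, filter sizes, the input window, and the synthetic data are assumptions rather than the paper's configuration.

```python
# Sketch: residual Conv1D blocks + LSTM + dense regression head, trained with
# the Huber loss. Sizes and data are illustrative assumptions.
import numpy as np
import tensorflow as tf

window, n_features = 48, 1
X = np.random.rand(1000, window, n_features).astype("float32")   # synthetic consumption windows
y = np.random.rand(1000).astype("float32")                       # next-step consumption

def residual_block(x, filters=32):
    shortcut = tf.keras.layers.Conv1D(filters, 1, padding="same")(x)   # match channel count
    h = tf.keras.layers.Conv1D(filters, 3, padding="same", activation="relu")(x)
    h = tf.keras.layers.Conv1D(filters, 3, padding="same")(h)
    return tf.keras.layers.Activation("relu")(tf.keras.layers.Add()([shortcut, h]))

inp = tf.keras.Input(shape=(window, n_features))
x = residual_block(inp)
x = residual_block(x)
x = tf.keras.layers.LSTM(64)(x)          # temporal modelling on the residual features
out = tf.keras.layers.Dense(1)(x)        # regression: forecast consumption

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss=tf.keras.losses.Huber(delta=1.0), metrics=["mae"])
model.fit(X, y, epochs=5, batch_size=64, validation_split=0.2)
```

The delta parameter of the Huber loss sets the error magnitude at which the penalty switches from quadratic to linear, which is what gives the loss its robustness to outliers.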

Reproduction strategy of radiation data with compensation of data loss using a deep learning technique

  • Cho, Woosung;Kim, Hyeonmin;Kim, Duckhyun;Kim, SongHyun;Kwon, Inyong
    • Nuclear Engineering and Technology / v.53 no.7 / pp.2229-2236 / 2021
  • In nuclear-related facilities, such as nuclear power plants, research reactors, accelerators, and nuclear waste storage sites, radiation detection and mapping are required to prevent radiation overexposure. Sensor network systems consisting of radiation sensor interfaces and wireless communication units have become promising tools for collecting radiation detection data that can in turn be used to draw a radiation map. During data collection, malfunctions in some of the sensors can occasionally occur due to radiation effects, physical damage, network defects, sensor loss, or other reasons. This paper proposes a reproduction strategy for radiation maps using a U-Net model to compensate for the loss of radiation detection data. To perform machine learning and verification, 1,561 simulated and 417 measured sensor-network datasets were used. The reproduction results show an accuracy of over 90%. The proposed strategy offers an effective method for resolving the data-loss problem of conventional sensor network systems and will specifically contribute to making initial responses with preserved data, avoiding the high cost of radiation leak accidents at nuclear facilities.
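
A toy sketch of the reconstruction setup described above: a very small U-Net trained to map a radiation grid with artificially removed readings back to the complete grid. Grid size, network depth, and the synthetic data are assumptions; the paper's model and data are substantially larger.

```python
# Sketch: tiny U-Net that reconstructs a complete radiation map from a grid
# with missing readings. All sizes and data here are illustrative assumptions.
import numpy as np
import tensorflow as tf

H = W = 32
full = np.random.rand(500, H, W, 1).astype("float32")        # synthetic "true" radiation maps
mask = (np.random.rand(500, H, W, 1) > 0.2).astype("float32")
partial = full * mask                                         # simulate lost sensor data

inp = tf.keras.Input(shape=(H, W, 1))
c1 = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
p1 = tf.keras.layers.MaxPooling2D()(c1)
c2 = tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
u1 = tf.keras.layers.UpSampling2D()(c2)
m1 = tf.keras.layers.Concatenate()([u1, c1])                  # skip connection (U-Net)
c3 = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(m1)
out = tf.keras.layers.Conv2D(1, 1, activation="sigmoid")(c3)  # reconstructed map

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(partial, full, epochs=5, batch_size=16, validation_split=0.2)
```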