• Title/Summary/Keyword: neural net


Relative humidity prediction of a leakage area for small RCS leakage quantification by applying the Bi-LSTM neural networks

  • Sang Hyun Lee; Hye Seon Jo; Man Gyun Na
    • Nuclear Engineering and Technology / v.56 no.5 / pp.1725-1732 / 2024
  • In nuclear power plants, reactor coolant leakage can occur for various reasons, and early detection of leaks is crucial for maintaining plant safety. A detection system is currently being developed in Korea to identify reactor coolant system (RCS) leakage of less than 0.5 gpm. Typically, RCS leaks are detected by monitoring the temperature, humidity, and radioactivity in the containment and the water level in the sump. However, detecting small leaks is challenging because the resulting changes in containment humidity and temperature and in the sump water level are minimal. To address these issues and improve leak detection speed, it is necessary to quantify the leaks and develop an artificial intelligence-based leak detection system. In this study, we employed bidirectional long short-term memory (Bi-LSTM), a type of neural network used in artificial intelligence, to predict the relative humidity in the leakage area for leak quantification. Additionally, an optimization technique was implemented to reduce training time and enhance prediction performance. Based on the evaluated prediction accuracy of the developed artificial intelligence model, we expect it to be valuable for future leak detection systems by accurately predicting the relative humidity in a leakage area.
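
The abstract describes a Bi-LSTM regressor over containment sensor signals but gives no implementation details; the following is a minimal PyTorch sketch of that kind of model, with the sensor count, window length, and layer sizes chosen arbitrarily for illustration.

```python
import torch
import torch.nn as nn

class BiLSTMRegressor(nn.Module):
    """Bidirectional LSTM mapping a window of sensor readings to a single
    predicted relative-humidity value (illustrative only)."""
    def __init__(self, n_features=8, hidden=64, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=num_layers,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)   # 2x for forward + backward states

    def forward(self, x):                      # x: (batch, time_steps, n_features)
        out, _ = self.lstm(x)                  # (batch, time_steps, 2*hidden)
        return self.head(out[:, -1, :])        # predict from the last time step

# Example with random data standing in for plant sensor windows.
model = BiLSTMRegressor()
dummy = torch.randn(16, 60, 8)                 # 16 windows, 60 steps, 8 sensors
pred_humidity = model(dummy)                   # shape (16, 1)
```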

α-feature map scaling for raw waveform speaker verification (α-특징 지도 스케일링을 이용한 원시파형 화자 인증)

  • Jung, Jee-weon; Shim, Hye-jin; Kim, Ju-ho; Yu, Ha-Jin
    • The Journal of the Acoustical Society of Korea / v.39 no.5 / pp.441-446 / 2020
  • In this paper, we propose the α-Feature Map Scaling (α-FMS) method, which extends the FMS method designed to enhance the discriminative power of feature maps in deep neural networks for Speaker Verification (SV) systems. FMS derives a scale vector from a feature map and then adds it to the features, multiplies the features by it, or applies both operations sequentially. However, FMS not only uses an identical scale vector for both addition and multiplication, but is also limited to adding values between zero and one in the addition case. To overcome these limitations, we propose α-FMS, which adds a trainable parameter α to the feature map element-wise and then multiplies it by a scale vector. We compare the performance of two variants: one where α is a scalar and one where it is a vector. Both α-FMS methods are applied after each residual block of the deep neural network. The proposed systems using the α-FMS methods are trained with RawNet2 and tested on the VoxCeleb1 evaluation set. The results demonstrate equal error rates of 2.47 % and 2.31 % for the two α-FMS methods, respectively.
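
The scaling mechanism described above can be sketched as a small PyTorch module. This is an illustrative reading of the abstract, assuming the scale vector is obtained by global average pooling followed by a fully connected layer and a sigmoid as in the original FMS; it is not the authors' released code.

```python
import torch
import torch.nn as nn

class AlphaFMS(nn.Module):
    """Illustrative α-FMS block: add a trainable α to the feature map
    element-wise, then multiply by a scale vector derived from the map.
    The scale path (global average pool -> linear -> sigmoid) is an
    assumption based on the original FMS idea."""
    def __init__(self, n_channels, alpha_as_vector=True):
        super().__init__()
        shape = (1, n_channels, 1) if alpha_as_vector else (1, 1, 1)
        self.alpha = nn.Parameter(torch.ones(shape))   # trainable α (vector or scalar)
        self.fc = nn.Linear(n_channels, n_channels)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                              # x: (batch, channels, time)
        s = self.sigmoid(self.fc(x.mean(dim=-1)))      # scale vector in (0, 1)
        return (x + self.alpha) * s.unsqueeze(-1)      # add α, then scale

# Example: apply after a residual block's output feature map.
fmap = torch.randn(4, 128, 200)
out = AlphaFMS(128)(fmap)                              # same shape as the input
```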

A Study on Similar Trademark Search Model Using Convolutional Neural Networks (합성곱 신경망(Convolutional Neural Network)을 활용한 지능형 유사상표 검색 모형 개발)

  • Yoon, Jae-Woong; Lee, Suk-Jun; Song, Chil-Yong; Kim, Yeon-Sik; Jung, Mi-Young; Jeong, Sang-Il
    • Management & Information Systems Review / v.38 no.3 / pp.55-80 / 2019
  • Recently, many companies have improved their management performance by building strong brand value, which is protected through trademark rights. However, as the online commerce market grows, infringement of trademark rights is increasing. According to various studies and reports, cases of foreign and domestic companies having their trademark rights infringed have increased. Because the manpower and cost required for trademark protection are enormous, small and medium-sized enterprises (SMEs) often cannot conduct preliminary investigations to protect their trademark rights. In addition, because no trademark image search service exists, many domestic companies must manually examine huge numbers of trademarks when conducting preliminary investigations to protect their rights. Therefore, we develop an intelligent similar-trademark search model to reduce the manpower and cost of preliminary investigation. To measure the performance of the model developed in this study, test data selected by intellectual property experts were used, and ResNet V1 101 showed the highest performance. The significance of this study is as follows. The experimental results empirically demonstrate that an image classification algorithm can show high performance not only in object recognition but also in image retrieval. Since the model developed in this study was trained on actual trademark image data, it is expected to be applicable in real industrial environments.
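
One common way to realize such an image-based similar-trademark search is to use a pretrained CNN as an embedding extractor and rank gallery images by cosine similarity. The sketch below uses torchvision's ResNet-101 as a stand-in for the paper's ResNet V1 101; the preprocessing and retrieval logic are assumptions, not the study's actual pipeline.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Pretrained ResNet-101 as a feature extractor (classifier head removed).
backbone = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()                    # keep the 2048-d features
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(path):
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    return F.normalize(backbone(img), dim=1)         # unit-length embedding

def most_similar(query_path, gallery_paths):
    q = embed(query_path)
    gallery = torch.cat([embed(p) for p in gallery_paths])
    scores = (gallery @ q.T).squeeze(1)              # cosine similarity
    return sorted(zip(gallery_paths, scores.tolist()),
                  key=lambda t: t[1], reverse=True)  # most similar first
```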

Construction of a Bark Dataset for Automatic Tree Identification and Developing a Convolutional Neural Network-based Tree Species Identification Model (수목 동정을 위한 수피 분류 데이터셋 구축과 합성곱 신경망 기반 53개 수종의 동정 모델 개발)

  • Kim, Tae Kyung; Baek, Gyu Heon; Kim, Hyun Seok
    • Journal of Korean Society of Forest Science / v.110 no.2 / pp.155-164 / 2021
  • Many studies have developed automatic plant identification algorithms by applying machine learning to various plant features, such as leaves and flowers. Unlike other plant characteristics, bark changes little with the seasons and persists for a long period. Nevertheless, bark has a complex appearance that varies greatly with the environment, and there is insufficient material available for training algorithms. Here, bark images were collected in addition to the previously published bark image dataset, BarkNet v.1.0, and a dataset covering 53 tree species easily observed in Korea was presented. Convolutional neural networks (CNNs) were trained and tested on the dataset, and the factors that interfere with model performance were identified. VGG-16 and VGG-19 were used as the CNN architectures, achieving 90.41% and 92.62% accuracy, respectively. When tested on new tree images that do not exist in the original dataset but belong to the same genus or family, more than 80% of cases were successfully identified as the same genus or family. Meanwhile, the model tended to misclassify when there were distracting features in the image, including leaves, mosses, and knots. For these cases, we propose that random cropping and majority-vote classification are effective for reducing errors in training and inference.
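
The random-cropping and majority-voting remedy mentioned at the end of the abstract can be illustrated with a short inference routine. The crop size, number of crops, and use of torchvision's pretrained VGG-16 are placeholder assumptions, not the study's settings.

```python
import torch
from torchvision import models, transforms
from PIL import Image

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
model.eval()

# Resize, then take a random 224x224 crop; repeated crops give independent votes.
random_crop = transforms.Compose([
    transforms.Resize(256),
    transforms.RandomCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def predict_by_majority(image_path, n_crops=10):
    img = Image.open(image_path).convert("RGB")
    votes = [model(random_crop(img).unsqueeze(0)).argmax(dim=1).item()
             for _ in range(n_crops)]
    return max(set(votes), key=votes.count)          # majority-voted class index
```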

Comparative analysis of Machine-Learning Based Models for Metal Surface Defect Detection (머신러닝 기반 금속외관 결함 검출 비교 분석)

  • Lee, Se-Hun; Kang, Seong-Hwan; Shin, Yo-Seob; Choi, Oh-Kyu; Kim, Sijong; Kang, Jae-Mo
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.6 / pp.834-841 / 2022
  • Recently, applying artificial intelligence to various fields of production has drawn an upsurge of research interest, driven by the rise of smart factories and advances in artificial intelligence technologies. A great deal of effort is being made to introduce artificial intelligence algorithms into the defect detection task. In particular, detecting defects on metal surfaces attracts a higher level of research interest than other materials (wood, plastics, fibers, etc.). In this paper, we compare and analyze the speed and performance of defect classification by combining machine learning techniques (Support Vector Machine, Softmax Regression, Decision Tree) with dimensionality reduction algorithms (Principal Component Analysis, AutoEncoders), and by using two convolutional neural networks (the proposed method and ResNet). To validate and compare the performance and speed of the algorithms, two datasets are adopted ((i) a public dataset and (ii) an actual dataset), and the most efficient algorithm is determined on the basis of the results.
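
One of the compared combinations, dimensionality reduction followed by a classical classifier, can be sketched with scikit-learn as below. The feature dimensionality, number of principal components, and SVM kernel are placeholders rather than the paper's settings, and the data are random stand-ins.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X = np.random.rand(500, 4096)        # flattened surface-image features (placeholder)
y = np.random.randint(0, 2, 500)     # 0 = normal, 1 = defect (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Standardize, reduce to 50 principal components, then classify with an RBF SVM.
clf = make_pipeline(StandardScaler(), PCA(n_components=50), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```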

Face Tracking Method based on Neural Oscillatory Network Using Color Information (컬러 정보를 이용한 신경 진동망 기반 얼굴추적 방법)

  • Hwang, Yong-Won; Oh, Sang-Rok; You, Bum-Jae; Lee, Ji-Yong; Park, Mig-Non; Jeong, Mun-Ho
    • Journal of the Institute of Electronics Engineers of Korea SC / v.48 no.2 / pp.40-46 / 2011
  • This paper proposes a new algorithm and a real-time face detection and tracking system based on neural oscillators, which can be applied to access control systems and user authentication systems. We study a way to track faces using a neural oscillatory network that imitates the information-processing ability of the neural nets of humans and animals and the biological behavior of a single neuron. The proposed system can broadly be broken into two stages. The first stage is face extraction, which involves acquiring real-time RGB 24-bit color video from an inexpensive webcam; the LEGION (Locally Excitatory, Globally Inhibitory Oscillator Network) algorithm is suggested as the face extraction method that precedes face tracking. The second stage is face tracking, performed by finding the leader neuron that has the greatest connection strength among neighboring neurons in the extracted face area. With the suggested method, necessary elements of face tracking such as stability, as well as the scale problem, can be addressed.
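
The leader-neuron idea in the second stage can be illustrated, very loosely, as picking the pixel whose color is most similar to its neighbors inside the extracted face region. The Gaussian similarity weighting below is an assumption for illustration only; the paper's LEGION oscillator dynamics are not reproduced.

```python
import numpy as np

def find_leader(rgb, face_mask, sigma=25.0):
    """Score each pixel (neuron) by summed color similarity to its 4-connected
    neighbors and return the strongest one inside the face mask."""
    h, w, _ = rgb.shape
    strength = np.zeros((h, w))
    rgb = rgb.astype(np.float64)
    for dy, dx in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        shifted = np.roll(rgb, (dy, dx), axis=(0, 1))
        diff = np.linalg.norm(rgb - shifted, axis=2)
        strength += np.exp(-(diff ** 2) / (2 * sigma ** 2))   # color-similarity weight
    strength[~face_mask] = -np.inf                            # only consider face pixels
    return np.unravel_index(np.argmax(strength), strength.shape)

# Example with random data standing in for a webcam frame and a face mask.
frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
mask = np.zeros((120, 160), dtype=bool)
mask[40:90, 60:110] = True
leader_y, leader_x = find_leader(frame, mask)
```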

Implementation of Finger Vein Authentication System based on High-performance CNN (고성능 CNN 기반 지정맥 인증 시스템 구현)

  • Kim, Kyeong-Rae; Choi, Hong-Rak; Kim, Kyung-Seok
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.5 / pp.197-202 / 2021
  • Biometric technology using finger veins is receiving a lot of attention due to its high security, convenience, and accuracy, and recent developments in deep learning have improved the processing speed and accuracy of authentication. However, the training data are only a subset of real data, collected without a fixed order or method, and the results are not constant, so the amount of data and the complexity of the artificial neural network must be considered. In this paper, the Inception-ResNet-v2 deep learning model was used to improve the accuracy of the finger vein recognizer and the performance of the authentication system, and its performance was compared and analyzed against the DenseNet-201 deep learning model. The simulations used the MMCBNU_6000 dataset from Jeonbuk National University and finger vein images captured directly. No image preprocessing is performed in the finger vein authentication system, and the results are evaluated using the equal error rate (EER).
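
The EER metric used for evaluation can be computed from genuine/impostor similarity scores as sketched below; the scores here are random placeholders, not results from the paper.

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(labels, scores):
    """EER: the operating point where the false acceptance rate (FPR)
    equals the false rejection rate (FNR)."""
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))     # point where the two rates cross
    return (fpr[idx] + fnr[idx]) / 2

labels = np.random.randint(0, 2, 1000)            # 1 = genuine, 0 = impostor
scores = np.random.rand(1000) + 0.3 * labels      # similarity scores (placeholder)
print(f"EER: {equal_error_rate(labels, scores):.3f}")
```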

Application of CNN for Fish Species Classification (어종 분류를 위한 CNN의 적용)

  • Park, Jin-Hyun; Hwang, Kwang-Bok; Park, Hee-Mun; Choi, Young-Kiu
    • Journal of the Korea Institute of Information and Communication Engineering / v.23 no.1 / pp.39-46 / 2019
  • In this study, prior to developing a system for the elimination of foreign fish species, we propose an algorithm that classifies fish species by training a CNN on fish images. The raw data for CNN training were images captured directly for each species: Dataset 1 increases the number of images to improve species classification, and Dataset 2 consists of images close to the natural environment; both are used as training and test data. The classification performance of the four CNNs is over 99.97% for Dataset 1 and 99.5% for Dataset 2; in particular, we confirm that the CNN trained on Dataset 2 performs satisfactorily on fish images resembling the natural environment. Among the four CNNs, AlexNet achieves satisfactory performance with the shortest execution and training times, and we confirm that it is the most suitable architecture for developing the system for the elimination of foreign fish species.
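
A typical way to set up AlexNet for a small number of fish classes is to replace its final classifier layer, as sketched below with torchvision; the class count and training hyperparameters are placeholders, and this is not the study's exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

num_species = 6                                   # placeholder class count
model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_species)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

# One illustrative training step on a random batch standing in for fish images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_species, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```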

Deep learning based Person Re-identification with RGB-D sensors

  • Kim, Min; Park, Dong-Hyun
    • Journal of the Korea Society of Computer and Information / v.26 no.3 / pp.35-42 / 2021
  • In this paper, we propose a deep learning-based person re-identification method using a three-dimensional RGB-Depth Xtion2 camera that considers joint coordinates and dynamic features (velocity, acceleration). The main idea of the proposed identification methodology is to easily extract gait data, such as joint coordinates and dynamic features, with an RGB-D camera and automatically identify gait patterns through a self-designed one-dimensional convolutional neural network classifier (1D-ConvNet). Accuracy was measured with the F1 score, and the influence of the dynamic features was measured by comparing accuracy against a classifier model (JC) that did not consider them. As a result, our proposed classifier that considers the dynamic characteristics (JCSpeed) showed an F1 score about 8% higher than JC.
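
A 1D convolutional classifier over gait sequences of the kind described above might look like the following sketch; the channel count, kernel sizes, and sequence length are assumptions rather than the paper's self-designed architecture.

```python
import torch
import torch.nn as nn

class GaitConv1D(nn.Module):
    """1D CNN over per-frame gait features (joint coordinates plus velocity
    and acceleration channels); purely illustrative."""
    def __init__(self, in_channels=75, n_persons=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),           # pool over time
        )
        self.classifier = nn.Linear(128, n_persons)

    def forward(self, x):                      # x: (batch, channels, frames)
        return self.classifier(self.features(x).squeeze(-1))

# Example: 75 feature channels over 90 frames, classified into 10 identities.
seq = torch.randn(4, 75, 90)
logits = GaitConv1D()(seq)                     # shape (4, 10)
```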

A Study on the Performance of Enhanced Deep Fully Convolutional Neural Network Algorithm for Image Object Segmentation in Autonomous Driving Environment (자율주행 환경에서 이미지 객체 분할을 위한 강화된 DFCN 알고리즘 성능연구)

  • Kim, Yeonggwang; Kim, Jinsul
    • Smart Media Journal / v.9 no.4 / pp.9-16 / 2020
  • Recently, various studies have been conducted to integrate image segmentation into smart factory industries and autonomous driving. In particular, image segmentation systems using deep learning algorithms have been researched and developed to the point of learning from large volumes of data with high accuracy. To use image segmentation in the autonomous driving sector, sufficient training on large amounts of data is needed, and a streaming environment that processes drivers' data in real time is important for accurate, safe operation on highways and in child protection zones. Therefore, we proposed a novel DFCN algorithm that enhances existing FCN algorithms and can be applied to various road environments, and we demonstrated that the DFCN algorithm improves the loss value by 1.3% compared to the previous FCN algorithms. Moreover, the proposed DFCN approach was applied to the existing U-Net algorithm to preserve frequency information in the image and produce better results, yielding better performance than the classical FCN algorithm in the autonomous driving environment.
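
For orientation, a generic fully convolutional segmentation network of the kind the paper builds on can be sketched as below; this is not the proposed DFCN, and the channel widths and class count are placeholders.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Minimal fully convolutional network: downsample with convolutions and
    pooling, then upsample back to per-pixel class logits."""
    def __init__(self, n_classes=19):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, n_classes, 2, stride=2),
        )

    def forward(self, x):                     # x: (batch, 3, H, W)
        return self.decoder(self.encoder(x))  # (batch, n_classes, H, W)

frame = torch.randn(1, 3, 128, 256)           # placeholder road-scene frame
logits = TinyFCN()(frame)                     # per-pixel class scores
```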