• Title/Summary/Keyword: Fully connected network


A comparative study of machine learning methods for automated identification of radioisotopes using NaI gamma-ray spectra

  • Galib, S.M.;Bhowmik, P.K.;Avachat, A.V.;Lee, H.K.
    • Nuclear Engineering and Technology
    • /
    • v.53 no.12
    • /
    • pp.4072-4079
    • /
    • 2021
  • This article presents a study of state-of-the-art methods for automated radioactive material detection and identification using gamma-ray spectra and modern machine learning methods. The work is inspired by recent developments in deep learning algorithms, and the proposed method outperforms the current state-of-the-art models. Machine learning models such as fully connected, recurrent, and convolutional neural networks and gradient-boosted decision trees are applied under a wide variety of testing conditions, and their advantages and disadvantages are discussed. Furthermore, a hybrid model is developed by combining the fully connected and convolutional neural networks, which shows the best performance among the different machine learning models. These improvements are reflected in the model's test performance metric (F1 score) of 93.33%, an improvement of 2%-12% over the state-of-the-art model under various conditions. The experimental results show that fusing classical neural networks with modern deep learning architectures is a suitable choice for interpreting gamma spectra data where real-time and remote detection is necessary.
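A minimal sketch (assuming PyTorch; the layer sizes, 1024-channel spectrum length, and 10-class output are illustrative assumptions, not the authors' configuration) of how a fully connected branch and a 1-D convolutional branch can be fused before a common classifier:

```python
import torch
import torch.nn as nn

class HybridSpectrumNet(nn.Module):
    """Toy hybrid model: a 1-D convolutional branch and a fully connected
    branch process the same gamma spectrum; their features are concatenated
    before classification. All sizes are illustrative only."""
    def __init__(self, n_channels=1024, n_isotopes=10):
        super().__init__()
        self.conv_branch = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
            nn.Flatten(),                        # -> 32 * 8 = 256 features
        )
        self.fc_branch = nn.Sequential(nn.Linear(n_channels, 256), nn.ReLU())
        self.classifier = nn.Linear(256 + 256, n_isotopes)

    def forward(self, spectrum):                 # spectrum: (batch, n_channels)
        conv_feat = self.conv_branch(spectrum.unsqueeze(1))
        fc_feat = self.fc_branch(spectrum)
        return self.classifier(torch.cat([conv_feat, fc_feat], dim=1))

model = HybridSpectrumNet()
print(model(torch.randn(4, 1024)).shape)         # torch.Size([4, 10])
```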

Analysis of Weights and Feature Patterns in Popular 2D Deep Neural Networks Models for MRI Image Classification

  • Khagi, Bijen;Kwon, Goo-Rak
    • Journal of Multimedia Information System
    • /
    • v.9 no.3
    • /
    • pp.177-182
    • /
    • 2022
  • A deep neural network (DNN) includes variables whose values keep changing during training until the network reaches convergence. These variables are the coefficients of a polynomial expression that relates to the feature extraction process. In general, DNNs operate in multiple 'dimensions' depending on the number of channels and the batch size used for training. However, after feature extraction and before the SoftMax or other classifier, the features are converted from N dimensions to a single vector, where 'N' is the number of activation channels. This usually happens in a fully connected layer (FCL), or dense layer. This reduced 2D feature is the subject of our analysis. For this, we use the FCL, so the trained weights of this FCL are used for the weight-class correlation analysis. The popular DNN models selected for our study are ResNet-101, VGG-19, and GoogleNet. These models are fine-tuned (with all pretrained weights initially transferred) and trained from scratch (with no weights transferred). The comparison is then made by plotting the feature distribution and the final FCL weights.
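For the weight-extraction step described above, a minimal sketch assuming the torchvision implementations of the three models (recent torchvision weights API); pulling out the final fully connected layer's weight matrix is all that is needed before plotting its per-class distribution:

```python
import torch
from torchvision import models

# Pretrained backbones named in the abstract (torchvision implementations
# assumed; ImageNet weights are downloaded on first use).
nets = {
    "ResNet-101": models.resnet101(weights="IMAGENET1K_V1"),
    "VGG-19": models.vgg19(weights="IMAGENET1K_V1"),
    "GoogleNet": models.googlenet(weights="IMAGENET1K_V1"),
}

# The final fully connected (dense) layer lives under a different attribute
# in each architecture.
final_fc = {
    "ResNet-101": nets["ResNet-101"].fc,
    "VGG-19": nets["VGG-19"].classifier[6],
    "GoogleNet": nets["GoogleNet"].fc,
}

for name, fc in final_fc.items():
    w = fc.weight.detach()                # shape: (num_classes, num_features)
    print(f"{name}: FC weights {tuple(w.shape)}, "
          f"mean={w.mean():.4f}, std={w.std():.4f}")
```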

A Study on the Optimization of Convolution Operation Speed through FFT Algorithm (FFT 적용을 통한 Convolution 연산속도 향상에 관한 연구)

  • Lim, Su-Chang;Kim, Jong-Chan
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.11
    • /
    • pp.1552-1559
    • /
    • 2021
  • Convolutional neural networks (CNNs) show notable performance in image processing and are used as representative core models. CNNs extract and learn features from large training datasets. In general, a CNN has a structure in which convolutional layers and fully connected layers are stacked. The core of a CNN is the convolutional layer. The size of the kernels used for feature extraction and their number, which determines the depth of the feature maps, determine the number of learnable weight parameters of the CNN. These parameters are the main cause of the increased computational complexity and memory usage of the entire neural network. The most computationally expensive components in CNNs are the fully connected and spatial convolution computations. In this paper, we propose a Fourier Convolution Neural Network that performs the convolution-layer operation in the Fourier domain, reducing the amount of computation by applying the fast Fourier transform (FFT). Using the MNIST dataset, accuracy was similar to that of a conventional CNN. In terms of operation speed, a 7.2% speedup was achieved, and an average speedup of 19% was achieved in experiments using 1024x1024 images and kernels of various sizes.
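The speedup rests on the convolution theorem: convolution in the spatial domain equals element-wise multiplication in the Fourier domain. A small NumPy/SciPy sketch of that identity (image and kernel sizes are illustrative; this is not the paper's implementation):

```python
import numpy as np
from scipy.signal import convolve2d, fftconvolve

image = np.random.rand(1024, 1024)            # illustrative sizes only
kernel = np.random.rand(15, 15)

direct = convolve2d(image, kernel, mode="same")    # O(N^2 K^2) spatial conv
via_fft = fftconvolve(image, kernel, mode="same")  # multiply in Fourier domain
print(np.allclose(direct, via_fft))                # True: identical results

# The same identity written out with NumPy's FFT (circular convolution):
padded = np.zeros_like(image)
padded[:kernel.shape[0], :kernel.shape[1]] = kernel
circular = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(padded)))
```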

Facial Expression Classification Using Deep Convolutional Neural Network (깊은 Convolutional Neural Network를 이용한 얼굴표정 분류 기법)

  • Choi, In-kyu;Song, Hyok;Lee, Sangyong;Yoo, Jisang
    • Journal of Broadcast Engineering
    • /
    • v.22 no.2
    • /
    • pp.162-172
    • /
    • 2017
  • In this paper, we propose facial expression recognition using a CNN (Convolutional Neural Network), one of the deep learning technologies. To overcome the disadvantages of existing facial expression databases, several databases are used together. In the proposed technique, we construct datasets for six facial expressions: 'expressionless', 'happiness', 'sadness', 'angry', 'surprise', and 'disgust'. Pre-processing and data augmentation techniques are also applied to improve learning efficiency and classification performance. Starting from an existing CNN structure, the optimal structure that best expresses the features of the six facial expressions is found by adjusting the number of feature maps in the convolutional layers and the number of nodes in the fully connected layers. Experimental results show that the proposed scheme achieves the highest classification accuracy of 96.88% while requiring the least time to pass through the CNN structure compared to other models.
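A minimal sketch (assuming PyTorch) of the kind of structure search described: a small CNN whose feature-map counts and fully connected width can be swept; the sizes below are illustrative, not the paper's tuned values.

```python
import torch
import torch.nn as nn

def build_expression_cnn(feature_maps=(32, 64, 128), fc_nodes=256,
                         n_classes=6, input_size=64):
    """Toy CNN whose feature-map counts and fully connected width can be
    varied to search for a good configuration (illustrative sizes only)."""
    layers, in_ch = [], 1                      # grayscale face crops assumed
    for out_ch in feature_maps:
        layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(),
                   nn.MaxPool2d(2)]
        in_ch = out_ch
    spatial = input_size // (2 ** len(feature_maps))
    layers += [nn.Flatten(),
               nn.Linear(in_ch * spatial * spatial, fc_nodes), nn.ReLU(),
               nn.Linear(fc_nodes, n_classes)]
    return nn.Sequential(*layers)

model = build_expression_cnn(feature_maps=(16, 32), fc_nodes=128)
print(model(torch.randn(2, 1, 64, 64)).shape)  # torch.Size([2, 6])
```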

Performance Improvement Method of Convolutional Neural Network Using Agile Activation Function (민첩한 활성함수를 이용한 합성곱 신경망의 성능 향상)

  • Kong, Na Young;Ko, Young Min;Ko, Sun Woo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.7
    • /
    • pp.213-220
    • /
    • 2020
  • A convolutional neural network is composed of convolutional layers and fully connected layers, and a nonlinear activation function is used in each of them. The activation function simulates the way a neuron transmits information: a signal is passed on only when the input exceeds a certain threshold. Conventional activation functions have no direct relationship with the loss function, so the process of finding the optimal solution is slow. To improve this, an agile activation function that generalizes the activation function is proposed. The agile activation function improves the performance of the deep neural network by selecting the optimal agile parameter during training, using the first derivative of the loss function with respect to the agile parameter in the backpropagation process. On the MNIST classification problem, we confirmed that agile activation functions outperform conventional activation functions.
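The abstract does not give the exact functional form, so the sketch below (PyTorch assumed) only illustrates the mechanism: an activation with a learnable parameter whose gradient with respect to the loss is computed during backpropagation and updated alongside the weights.

```python
import torch
import torch.nn as nn

class ParametricActivation(nn.Module):
    """Generic parameterized activation f(x) = relu(a * x) with a learnable
    scale 'a'. The paper's agile form may differ; this only shows how an
    activation parameter is trained by backpropagation with the weights."""
    def __init__(self, init=1.0):
        super().__init__()
        self.a = nn.Parameter(torch.tensor(init))   # updated by the optimizer

    def forward(self, x):
        return torch.relu(self.a * x)

net = nn.Sequential(nn.Linear(784, 128), ParametricActivation(),
                    nn.Linear(128, 10))             # e.g., MNIST-sized input
opt = torch.optim.SGD(net.parameters(), lr=0.1)     # includes 'a'

x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
loss = nn.CrossEntropyLoss()(net(x), y)
loss.backward()                                     # dL/da computed here
opt.step()                                          # 'a' moves with the loss
```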

Deep Learning based Estimation of Depth to Bearing Layer from In-situ Data (딥러닝 기반 국내 지반의 지지층 깊이 예측)

  • Jang, Young-Eun;Jung, Jaeho;Han, Jin-Tae;Yu, Yonggyun
    • Journal of the Korean Geotechnical Society
    • /
    • v.38 no.3
    • /
    • pp.35-42
    • /
    • 2022
  • The N-value from the Standard Penetration Test (SPT), one of the representative in-situ tests, is an important index that provides basic geological information and the depth of the bearing layer for the design of geotechnical structures. For time and cost effectiveness, tests are carried out only at representative sampling locations. However, soil layers exhibit considerable variability and uncertainty, so it is difficult to grasp the characteristics of an entire site from limited test results. Thus, spatial interpolation techniques such as Kriging and IDW (inverse distance weighting) have been used to predict values at unknown points from existing data. Recently, to increase the accuracy of interpolation results, studies combining geotechnics and deep learning methods have been conducted. In this study, based on SPT results from about 22,000 boreholes, a comparative study was conducted to predict the depth of the bearing layer using deep learning methods and IDW. The average error in the predicted bearing-layer depth was 3.01 m for IDW, and 3.22 m and 2.46 m for the fully connected network and PointNet, respectively; the standard deviations were 3.99 m, 3.95 m, and 3.54 m. As a result, the PointNet deep learning algorithm showed improved results compared to IDW and the other deep learning method.
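A minimal NumPy sketch of the IDW baseline used for comparison (the power exponent, coordinates, and depths are illustrative; the deep learning models are not reproduced here):

```python
import numpy as np

def idw_predict(known_xy, known_depth, query_xy, power=2.0):
    """Inverse-distance-weighted estimate of bearing-layer depth at query
    points from surveyed boreholes (baseline sketch; units assumed in m)."""
    dists = np.linalg.norm(query_xy[:, None, :] - known_xy[None, :, :], axis=2)
    dists = np.maximum(dists, 1e-9)               # avoid division by zero
    weights = 1.0 / dists ** power
    return (weights * known_depth).sum(axis=1) / weights.sum(axis=1)

# Dummy boreholes: (x, y) coordinates and bearing-layer depths in metres.
boreholes = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
depths = np.array([10.0, 14.0, 12.0, 18.0])
print(idw_predict(boreholes, depths, np.array([[50.0, 50.0]])))  # ~[13.5]
```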

Study on Detection Technique for Sea Fog by using CCTV Images and Convolutional Neural Network (CCTV 영상과 합성곱 신경망을 활용한 해무 탐지 기법 연구)

  • Kim, Na-Kyeong;Bak, Su-Ho;Jeong, Min-Ji;Hwang, Do-Hyun;Enkhjargal, Unuzaya;Park, Mi-So;Kim, Bo-Ram;Yoon, Hong-Joo
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.15 no.6
    • /
    • pp.1081-1088
    • /
    • 2020
  • In this paper, a method for detecting sea fog from CCTV images based on convolutional neural networks is proposed. The study data consist of 1,0004 randomly extracted images, with and without sea fog, from a total of 11 ports and beaches (Busan Port, Busan New Port, Pyeongtaek Port, Incheon Port, Gunsan Port, Daesan Port, Mokpo Port, Yeosu Gwangyang Port, Ulsan Port, Pohang Port, and Haeundae Beach), labeled using a visibility threshold of 1 km. 80% of the dataset was used for training the convolutional neural network model. The model has 16 convolutional layers and 3 fully connected layers, with Softmax classification performed in the last fully connected layer. Model accuracy was evaluated on the remaining 20%, and the evaluation showed a classification accuracy of about 96%.
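The described topology (16 convolutional layers, 3 fully connected layers, softmax output) matches a VGG-19-style network; whether the authors used VGG-19 is an assumption. A minimal PyTorch sketch of adapting such a backbone to the two-class fog / no-fog problem:

```python
import torch
import torch.nn as nn
from torchvision import models

# VGG-19 has 16 convolutional and 3 fully connected layers; only the last
# dense layer is replaced for two classes (sea fog / no sea fog).
net = models.vgg19(weights=None)                 # recent torchvision API assumed
net.classifier[6] = nn.Linear(4096, 2)

frame = torch.randn(1, 3, 224, 224)              # one dummy CCTV frame
probs = torch.softmax(net(frame), dim=1)
print(probs)                                     # fog / no-fog probabilities
```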

Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of information and communication convergence engineering
    • /
    • v.19 no.3
    • /
    • pp.148-154
    • /
    • 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among these, Speech Emotion Recognition (SER) is a method of recognizing a speaker's emotions from speech information. SER succeeds when distinctive features are selected and classified in an appropriate way. In this paper, the performance of SER using neural network models (e.g., a fully connected network (FCN) and a convolutional neural network (CNN)) with Mel-Frequency Cepstral Coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) dataset, after tuning model parameters, a two-dimensional Convolutional Neural Network (2D-CNN) model with MFCC showed the best performance, with an average accuracy of 88.54% for five emotions (anger, happiness, calm, fear, and sadness) spoken by men and women. In addition, examining the distribution of emotion recognition accuracies across the neural network models shows that the 2D-CNN with MFCC can be expected to achieve an overall accuracy of 75% or more.
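A minimal sketch of the feature pipeline (librosa for MFCC extraction and PyTorch for the 2-D CNN are assumptions; the MFCC settings, layer sizes, and five-class output are illustrative, not the tuned configuration):

```python
import numpy as np
import librosa
import torch
import torch.nn as nn

sr = 22050
signal = np.random.randn(sr * 3).astype(np.float32)     # stand-in for a RAVDESS clip
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=40)  # (40, n_frames)
mfcc = librosa.util.fix_length(mfcc, size=128, axis=1)   # pad/trim to a fixed width

cnn = nn.Sequential(                     # tiny 2-D CNN over the MFCC "image"
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 10 * 32, 5),          # 5 emotion classes
)
x = torch.tensor(mfcc, dtype=torch.float32).unsqueeze(0).unsqueeze(0)
print(cnn(x).shape)                      # torch.Size([1, 5])
```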

Design of neural network based ALE for QRS enhancement (QRS 파의 증대를 위한 신경망 ALE 설계)

  • 원상철;박종철;최한고
    • Proceedings of the Korea Institute of Convergence Signal Processing
    • /
    • 2000.08a
    • /
    • pp.217-220
    • /
    • 2000
  • This paper describes the application of a neural network based adaptive line enhancer (ALE) for enhancement of the weak QRS complex corrupted by background noise. A modified fully connected recurrent neural network is used as a nonlinear adaptive filter in the ALE. The connecting weights between network nodes, as well as the parameters of the node activation function, are updated at each iteration using the gradient descent algorithm. Real ECG signals buried in moderate and severe background noise are applied to the ALE. Simulation results show that the neural network based ALE performs well in enhancing the QRS complex from noisy ECG signals.
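For context, the classic ALE arrangement that the paper replaces with a recurrent neural network: a filter predicts the current sample from delayed past samples, so the correlated QRS component is enhanced while uncorrelated noise is rejected. A linear LMS sketch in NumPy (not the paper's network):

```python
import numpy as np

def lms_ale(noisy, n_taps=16, delay=1, mu=0.01):
    """Baseline linear ALE with an LMS update; the prediction output is the
    enhanced signal. The paper substitutes a modified fully connected
    recurrent neural network for this linear filter."""
    w = np.zeros(n_taps)
    out = np.zeros_like(noisy)
    for n in range(n_taps + delay, len(noisy)):
        x = noisy[n - delay - n_taps:n - delay][::-1]  # delayed tap vector
        out[n] = w @ x                                  # prediction = enhanced output
        err = noisy[n] - out[n]
        w += mu * err * x                               # gradient-descent update
    return out

t = np.linspace(0, 10, 3600)
clean = np.sin(2 * np.pi * 1.2 * t)                     # toy periodic stand-in for ECG
enhanced = lms_ale(clean + 0.5 * np.random.randn(t.size))
```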


Residual Learning Based CNN for Gesture Recognition in Robot Interaction

  • Han, Hua
    • Journal of Information Processing Systems
    • /
    • v.17 no.2
    • /
    • pp.385-398
    • /
    • 2021
  • The complexity of deep learning models affects the real-time performance of gesture recognition, thereby limiting the application of gesture recognition algorithms in actual scenarios. Hence, a residual learning neural network based on a deep convolutional neural network is proposed. First, small convolution kernels are used to extract the local details of gesture images. Subsequently, a shallow residual structure is built to share weights, thereby avoiding vanishing or exploding gradients as the network deepens; consequently, the difficulty of model optimisation is reduced. Additional convolutional neural networks are used to accelerate the refinement of deep abstract features based on the spatial importance of the gesture feature distribution. Finally, a fully connected cascade softmax classifier is used to complete the gesture recognition. Compared with the dense connection multiplexing feature information network, the proposed algorithm is optimised in feature multiplexing to avoid performance fluctuations caused by feature redundancy. Experimental results on the ISOGD gesture dataset and the Gesture dataset show that the proposed algorithm affords fast convergence and high accuracy.
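A minimal PyTorch sketch of the shallow residual structure with small kernels described above; the channel count is illustrative, and the full network, classifier cascade, and training setup are not reproduced.

```python
import torch
import torch.nn as nn

class SmallResidualBlock(nn.Module):
    """Shallow residual block with 3x3 kernels: the identity shortcut adds the
    input back to the convolution output, keeping gradients well-behaved as
    layers are stacked (illustrative channel count)."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU()

    def forward(self, x):
        return self.relu(x + self.body(x))       # identity shortcut

x = torch.randn(1, 32, 64, 64)                   # dummy gesture feature map
print(SmallResidualBlock()(x).shape)             # torch.Size([1, 32, 64, 64])
```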