• Title/Abstract/Keyword: DNN-FCL

2 results found (processing time: 0.015 s)

Deep neural networks trained by the adaptive momentum-based technique for stability simulation of organic solar cells

  • Xu, Peng; Qin, Xiao; Zhu, Honglei
    • Structural Engineering and Mechanics / Vol. 83, No. 2 / pp.259-272 / 2022
  • The branch of electronics that uses organic solar cells or conductive organic polymers to produce electricity from sunlight is called organic photovoltaics. In this context, an artificial-intelligence-based predictor is presented to investigate the vibrational behavior of organic solar cells. The generalized differential quadrature method (GDQM) is used to extract the reference results, and a validation study confirms their credibility. A deep neural network with fully connected layers (DNN-FCL) is then trained by means of Adam optimization on a dataset whose members are the vibration responses of the design points. Once the optimal weights and biases of the DNN-FCL are determined, the vibrational characteristics of any organic solar cell can be predicted from the properties supplied as inputs to the network. To assess the predictive ability of the proposed model, the authors monitored the mean squared error at different stages of training the DNN-FCL and observed excellent convergence of the results.
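The training loop described in this abstract can be sketched generically: a small fully connected network fitted to a regression dataset with the Adam update rule, monitoring the mean squared error as training proceeds. This is a minimal illustration, not the paper's code; the layer sizes, the synthetic "design point → response" data, and all hyperparameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for "design points -> vibration response" (illustrative)
X = rng.uniform(-1.0, 1.0, size=(200, 4))      # 4 input properties per sample
y = np.sin(X.sum(axis=1, keepdims=True))       # 1 response value per sample

# One hidden layer with tanh activation (sizes are assumptions)
params = [rng.normal(0, 0.5, (4, 16)), np.zeros((1, 16)),   # W1, b1
          rng.normal(0, 0.5, (16, 1)), np.zeros((1, 1))]    # W2, b2

# Adam state: first/second moment estimates, one per parameter tensor
m = [np.zeros_like(p) for p in params]
v = [np.zeros_like(p) for p in params]
lr, beta1, beta2, eps = 1e-2, 0.9, 0.999, 1e-8

def forward(X):
    h = np.tanh(X @ params[0] + params[1])
    return h, h @ params[2] + params[3]

losses = []
for t in range(1, 501):
    h, pred = forward(X)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))    # mean squared error

    # Backpropagation through the two fully connected layers
    dpred = 2 * err / len(X)
    grads = [None] * 4
    grads[2] = h.T @ dpred
    grads[3] = dpred.sum(axis=0, keepdims=True)
    dh = (dpred @ params[2].T) * (1 - h ** 2)
    grads[0] = X.T @ dh
    grads[1] = dh.sum(axis=0, keepdims=True)

    # Adam update with bias-corrected moment estimates
    for i, g in enumerate(grads):
        m[i] = beta1 * m[i] + (1 - beta1) * g
        v[i] = beta2 * v[i] + (1 - beta2) * g ** 2
        m_hat = m[i] / (1 - beta1 ** t)
        v_hat = v[i] / (1 - beta2 ** t)
        params[i] -= lr * m_hat / (np.sqrt(v_hat) + eps)

print(f"MSE start={losses[0]:.4f} end={losses[-1]:.4f}")
```

Tracking `losses` over the steps mirrors the convergence check described in the abstract: the MSE should fall steadily as the weights and biases approach their optimal values.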

Analysis of Weights and Feature Patterns in Popular 2D Deep Neural Networks Models for MRI Image Classification

  • Khagi, Bijen; Kwon, Goo-Rak
    • Journal of Multimedia Information System / Vol. 9, No. 3 / pp.177-182 / 2022
  • A deep neural network (DNN) contains variables whose values keep changing during training until convergence is reached. These variables are the coefficients of a polynomial-like expression that relates to the feature-extraction process. In general, DNNs operate across multiple 'dimensions' depending on the number of channels and batches used in training. However, after feature extraction and before the softmax or another classifier, the features are converted from N dimensions to a single vector, where 'N' is the number of activation channels. This conversion usually happens in a fully connected layer (FCL), also called a dense layer. This reduced feature is the subject of our analysis, so the trained weights of the FCL are used for the weight-class correlation analysis. The popular DNN models selected for this study are ResNet-101, VGG-19, and GoogleNet. Their weights are used both for fine-tuning (all trained weights initially transferred) and for training from scratch (no weights transferred). The comparison is then made by plotting the feature distribution and the final FCL weights.
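The conversion this abstract describes, collapsing an N-channel feature map into a single vector before the FCL, and the idea of reading the FCL weight matrix per class, can be sketched as follows. The shapes, the pooling choice, and the random weights are purely illustrative assumptions; the actual models (ResNet-101, VGG-19, GoogleNet) carry their own trained values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Feature map after the last convolutional stage:
# N = 8 activation channels, each a 4x4 spatial map (shapes are assumptions)
features = rng.random((8, 4, 4))

# Global average pooling + flatten: one value per activation channel,
# i.e. the multi-dimensional features reduced to a single vector
flat = features.mean(axis=(1, 2))        # shape (8,)

# FCL mapping the 8-dim feature vector to 3 classes (illustrative sizes)
W_fcl = rng.normal(0, 0.1, (3, 8))       # one weight row per class
b_fcl = np.zeros(3)
logits = W_fcl @ flat + b_fcl            # class scores, shape (3,)

# Weight-class correlation idea: each row of W_fcl shows how strongly
# each feature channel contributes to that class's score
for c, row in enumerate(W_fcl):
    top = int(np.argmax(np.abs(row)))
    print(f"class {c}: most influential channel = {top}")
```

Plotting the rows of `W_fcl` against the distribution of `flat` across a dataset is one simple way to compare how fine-tuned and scratch-trained models weight the same feature channels.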