• Title/Summary/Keyword: Fully connected layers


A study on estimating the interlayer boundary of the subsurface using an artificial neural network with electrical impedance tomography

  • Sharma, Sunam Kumar;Khambampati, Anil Kumar;Kim, Kyung Youn
    • Journal of IKEEE / v.25 no.4 / pp.650-663 / 2021
  • Subsurface topology estimation is an important factor in geophysical surveys. Electrical impedance tomography (EIT) is one of the popular methods used for subsurface imaging. The EIT inverse problem is highly nonlinear and ill-posed; therefore, the reconstructed conductivity distribution suffers from low spatial resolution. The subsurface region can be approximated as piece-wise separate regions with constant conductivity in each region; therefore, the conductivity estimation problem is transformed into estimating the shape and location of the layer boundary interface. Each layer interface boundary is treated as an open boundary that is described using front points. The subsurface domain contains multiple layers with very complex configurations, and, in such situations, conventional methods such as the modified Newton-Raphson method fail to provide the desired solution. Therefore, in this work, we have implemented a 7-layer artificial neural network (ANN) as an inverse problem algorithm to estimate the front points that describe the multi-layer interface boundaries. An ANN model consisting of input, output, and five fully connected hidden layers is trained for interlayer boundary reconstruction using training data that consists of pairs of voltage measurements of the subsurface domain with a three-layer configuration and the corresponding front points of the interface boundaries. The results from the proposed ANN model are compared with the gravitational search algorithm (GSA) for interlayer boundary estimation, and the results show that the ANN is successful in estimating the layer boundaries with good accuracy.
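A hypothetical PyTorch sketch of such a 7-layer network (input, five fully connected hidden layers, output) is given below; the layer widths, activation, and the numbers of voltage measurements and front points are assumptions, since the abstract does not specify them.

```python
# Hypothetical sketch of the 7-layer ANN described above (input, five fully
# connected hidden layers, output). Layer widths and data sizes are assumed.
import torch
import torch.nn as nn

N_MEASUREMENTS = 208   # assumed number of boundary voltage measurements
N_FRONT_POINTS = 20    # assumed number of front points describing the interfaces

model = nn.Sequential(
    nn.Linear(N_MEASUREMENTS, 256), nn.ReLU(),   # hidden layer 1
    nn.Linear(256, 256), nn.ReLU(),              # hidden layer 2
    nn.Linear(256, 256), nn.ReLU(),              # hidden layer 3
    nn.Linear(256, 128), nn.ReLU(),              # hidden layer 4
    nn.Linear(128, 64), nn.ReLU(),               # hidden layer 5
    nn.Linear(64, N_FRONT_POINTS),               # output: front-point coordinates
)

# Regression training step: voltage measurements -> front points
# (random stand-in data here; real training would use simulated EIT pairs).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
voltages = torch.randn(128, N_MEASUREMENTS)
fronts = torch.randn(128, N_FRONT_POINTS)
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(voltages), fronts)
    loss.backward()
    optimizer.step()
```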

Image Retrieval Based on the Weighted and Regional Integration of CNN Features

  • Liao, Kaiyang;Fan, Bing;Zheng, Yuanlin;Lin, Guangfeng;Cao, Congjun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.3 / pp.894-907 / 2022
  • The features extracted by convolutional neural networks are more descriptive of images than traditional features, and their convolutional layers are more suitable for image retrieval than fully connected layers. However, convolutional layer features consume considerable time and memory if used directly to match images. Therefore, this paper proposes a feature weighting and region integration method that turns convolutional layer features into global feature vectors, which are then used for image matching. First, the 3D feature map of the last convolutional layer is extracted, and the convolutional features are weighted to highlight the edge and position information of the image. Next, several regional feature vectors obtained with sliding windows are integrated into a global feature vector. Finally, the initial ranking of the retrieval is obtained by measuring the similarity of the query image and the test image with the cosine distance, and the final mean Average Precision (mAP) is obtained by applying query expansion for re-ranking. We conduct experiments using the Oxford5k and Paris6k datasets and their extended versions, Paris106k and Oxford105k. The experimental results indicate that the global feature extracted by the new method can better describe an image.
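A rough NumPy sketch of the matching stage described above, under assumed window sizes and with a CroW-style spatial weighting standing in for the paper's weighting scheme: weight the last convolutional feature map, sum-pool sliding-window regions, combine the regional vectors into one global descriptor, and rank database images by cosine similarity.

```python
# Sketch of weighted regional integration of conv features (assumptions noted above).
import numpy as np

def global_descriptor(fmap, win=4, stride=2):
    """fmap: (C, H, W) activations from the last convolutional layer."""
    C, H, W = fmap.shape
    # Spatial weighting: emphasize locations with strong total activation
    # (a simple heuristic standing in for the paper's weighting).
    spatial = fmap.sum(axis=0)
    weight = spatial / (spatial.sum() + 1e-8)
    weighted = fmap * weight[None, :, :]
    # Regional integration: sum-pool each sliding window, L2-normalize,
    # then sum the regional vectors into one global vector.
    regions = []
    for y in range(0, H - win + 1, stride):
        for x in range(0, W - win + 1, stride):
            v = weighted[:, y:y + win, x:x + win].sum(axis=(1, 2))
            regions.append(v / (np.linalg.norm(v) + 1e-8))
    g = np.sum(regions, axis=0)
    return g / (np.linalg.norm(g) + 1e-8)

def rank(query_fmap, db_fmaps):
    q = global_descriptor(query_fmap)
    sims = [float(q @ global_descriptor(f)) for f in db_fmaps]  # cosine similarity
    return np.argsort(sims)[::-1]                               # best match first

# Example with random stand-in feature maps (C=512, H=W=14).
db = [np.random.rand(512, 14, 14) for _ in range(5)]
print(rank(np.random.rand(512, 14, 14), db))
```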

Helmet and Mask Classification for Personnel Safety Using Deep Learning (딥러닝 기반 직원 안전용 헬멧과 마스크 분류)

  • Shokhrukh, Bibalaev;Kim, Kang-Chul
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.3 / pp.473-482 / 2022
  • Wearing a mask is necessary to limit the risk of infection in today's era of COVID-19, and wearing a helmet is essential for the safety of personnel who work in dangerous environments such as construction sites. This paper proposes an effective deep learning model, HelmetMask-Net, to classify both helmets and masks. The proposed HelmetMask-Net is a CNN consisting of data processing, convolution layers, max pooling layers, and fully connected layers, and it classifies four output classes: Helmet, Mask, Helmet & Mask, and no Helmet & no Mask. The final HelmetMask-Net was configured with 2 convolutional layers and the AdaGrad optimizer, chosen through various simulations over accuracy, optimizer, and the number of hyperparameters. Simulation results show an accuracy of 99% and the best performance compared to other models. The results of this paper would enhance the safety of personnel in this era of COVID-19.
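A minimal sketch in the spirit of the HelmetMask-Net summarized above: two convolutional layers with max pooling, fully connected layers, four output classes, and the AdaGrad optimizer. The input resolution, channel counts, and hidden sizes are assumptions; the abstract does not list them.

```python
# Minimal 4-class helmet/mask classifier sketch (sizes are assumptions).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 128), nn.ReLU(),   # assumes 64x64 input images
    nn.Linear(128, 4),                         # helmet / mask / both / neither
)
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 3, 64, 64)             # stand-in batch
labels = torch.randint(0, 4, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```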

Impact of Fungus on Egg Shell of Tropical Tasar Silk Worm, Antheraea mylitta: An Ultra-structural Approach

  • Barsagade, Deepak Dewaji;Pankule, Sushama Dilip;Tembhare, Dnyaneshwar Bapuji
    • International Journal of Industrial Entomology and Biomaterials / v.18 no.2 / pp.77-82 / 2009
  • The egg shell of the tropical tasar silkworm, Antheraea mylitta, is formed from substances secreted by the follicular epithelium during the late vitellogenic stage. TEM study reveals the inner trabecular and outer lamellar layers of the chorion. The trabecular layer is composed of the innermost wax layer and the inner and outer chorionic layers. The inner and outer chorionic layers are connected to each other by vertical pillars, forming cavities. The lamellar layer is perforated by the aeropyles. SEM study reveals the differentiation of the anterior surface of the egg shell into four zones: the micropylar, edge, aeropyle crown, and disc zones. In mycosis-infected eggs, the aeropyles and egg-shell surface are fully packed with the hyphae of the fungus Aspergillus sydowi, blocking plastron respiration and causing the death of the developing embryo, so that mycosis-infected eggs become sterile.

Machine Learning-Based EEG Classification for Assisting the Diagnosis of ADHD in Children (아동의 ADHD 진단 보조를 위한 기계 학습 기반의 뇌전도 분류)

  • Kim, Min-Ki
    • Journal of Korea Multimedia Society / v.24 no.10 / pp.1336-1345 / 2021
  • Attention Deficit Hyperactivity Disorder (ADHD) is one of the most common neurological disorders in children. The diagnosis of ADHD in children is based on interviews and observation reports from parents or teachers who have stayed with them. Since this approach cannot avoid long observation times and observer bias, another approach based on electroencephalography (EEG) is emerging. The goal of this study is to develop an assistive tool for diagnosing ADHD through EEG classification. This study explores the frequency bands of the EEG and extracts the features implicit in them using the proposed CNN. The CNN architecture has three Convolution-MaxPooling blocks and two fully connected layers. As a result of the experiment, the 30-60 Hz gamma band showed dominant characteristics in classifying the EEG, and when other frequency bands were added to the gamma band, the EEG classification performance improved. The results also show that the proposed CNN is effective in detecting ADHD in children.
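An illustrative PyTorch sketch of a CNN with three Convolution-MaxPooling blocks and two fully connected layers, as the abstract describes; treating the EEG as a 19-channel, 512-sample 1-D signal, as well as the filter sizes, are assumptions, not the paper's settings.

```python
# CNN with three Conv-MaxPool blocks and two fully connected layers
# (input representation and filter sizes are assumptions).
import torch
import torch.nn as nn

class EEGNetSketch(nn.Module):
    def __init__(self, n_classes=2, n_channels=19, n_samples=512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(64, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (n_samples // 8), 128), nn.ReLU(),
            nn.Linear(128, n_classes),          # ADHD vs. control
        )

    def forward(self, x):                       # x: (batch, channels, samples)
        return self.classifier(self.features(x))

logits = EEGNetSketch()(torch.randn(4, 19, 512))
print(logits.shape)                             # torch.Size([4, 2])
```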

GAN-based Color Palette Extraction System by Chroma Fine-tuning with Reinforcement Learning

  • Kim, Sanghyuk;Kang, Suk-Ju
    • Journal of Semiconductor Engineering / v.2 no.1 / pp.125-129 / 2021
  • With the growing interest in deep learning, techniques for controlling the color of images in the image processing field are evolving as well. However, there is no clear standard for color, and it is not easy to find a way to represent only the color itself, as a color palette does. In this paper, we propose a novel color palette extraction system based on chroma fine-tuning with reinforcement learning. It helps to recognize the color combination that represents an input image. First, we use RGBY images to create feature maps by transferring the backbone network with well-trained model weights verified on super-resolution convolutional neural networks. Second, the feature maps are fed to 3 fully connected layers for color-palette generation with a generative adversarial network (GAN). Third, we use a reinforcement learning method that changes only the chroma information of the GAN output by slightly moving each Y component of the YCbCr color gamut of the pixel values up and down. The proposed method outperforms existing color palette extraction methods, achieving an accuracy of 0.9140.
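A hedged sketch of the palette-generation head only: a pooled backbone feature vector passed through three fully connected layers to produce an N-color palette. The feature size, palette size, and activations are assumptions, and the adversarial training and the reinforcement-learning chroma fine-tuning step are omitted.

```python
# Three fully connected layers mapping backbone features to an N-color palette
# (sizes and activations are assumptions; GAN training and RL tuning omitted).
import torch
import torch.nn as nn

FEAT_DIM, N_COLORS = 512, 5                    # assumed sizes

palette_head = nn.Sequential(
    nn.Linear(FEAT_DIM, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, N_COLORS * 3), nn.Sigmoid(),  # RGB triplets in [0, 1]
)

features = torch.randn(1, FEAT_DIM)            # stand-in for pooled backbone features
palette = palette_head(features).view(N_COLORS, 3)
print(palette)                                 # 5 candidate palette colors
```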

Transfer Learning using Multiple ConvNet Layers Activation Features with Principal Component Analysis for Image Classification (전이학습 기반 다중 컨볼류션 신경망 레이어의 활성화 특징과 주성분 분석을 이용한 이미지 분류 방법)

  • Byambajav, Batkhuu;Alikhanov, Jumabek;Fang, Yang;Ko, Seunghyun;Jo, Geun Sik
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.205-225 / 2018
  • The Convolutional Neural Network (ConvNet) is one class of powerful deep neural network that can analyze and learn hierarchies of visual features. The first such network (the Neocognitron) was introduced in the 1980s. At that time, neural networks were not broadly used in industry or academia because of the shortage of large-scale datasets and low computational power. However, a few decades later, in 2012, Krizhevsky made a breakthrough in the ILSVRC-12 visual recognition competition using a Convolutional Neural Network. That breakthrough revived people's interest in neural networks. The success of the Convolutional Neural Network rests on two main factors. The first is the emergence of advanced hardware (GPUs) for sufficient parallel computation. The second is the availability of large-scale datasets such as the ImageNet (ILSVRC) dataset for training. Unfortunately, many new domains are bottlenecked by these factors. For most domains, it is difficult and requires a lot of effort to gather a large-scale dataset to train a ConvNet. Moreover, even with a large-scale dataset, training a ConvNet from scratch requires expensive resources and is time-consuming. These two obstacles can be solved by transfer learning, a method for transferring knowledge from a source domain to a new domain. There are two major transfer learning cases. The first uses the ConvNet as a fixed feature extractor, and the second fine-tunes the ConvNet on a new dataset. In the first case, a pre-trained ConvNet (e.g., trained on ImageNet) computes feed-forward activations of the image, and activation features are extracted from specific layers. In the second case, the ConvNet classifier is replaced and retrained on the new dataset, and then the weights of the pre-trained network are fine-tuned with backpropagation. In this paper, we focus on using multiple ConvNet layers as a fixed feature extractor only. However, applying features with high dimensional complexity extracted directly from multiple ConvNet layers is still a challenging problem. We observe that features extracted from multiple ConvNet layers capture different characteristics of the image, which means a better representation could be obtained by finding the optimal combination of multiple ConvNet layers. Based on that observation, we propose to employ multiple ConvNet layer representations for transfer learning instead of a single ConvNet layer representation. Overall, our primary pipeline has three steps. Firstly, images from the target task are fed forward through the pre-trained AlexNet, and the activation features of its three fully connected layers are extracted. Secondly, the activation features of the three layers are concatenated to obtain a multiple-ConvNet-layer representation, since it carries more information about the image; when the three fully connected layer features are concatenated, the resulting image representation has 9192 (4096+4096+1000) dimensions. However, features extracted from multiple ConvNet layers are redundant and noisy since they come from the same ConvNet. Thus, in the third step, we use Principal Component Analysis (PCA) to select salient features before the training phase. With salient features, the classifier can classify images more accurately, and the performance of transfer learning is improved.
To evaluate the proposed method, experiments were conducted on three standard datasets (Caltech-256, VOC07, and SUN397) to compare multiple-ConvNet-layer representations against single-ConvNet-layer representations, using PCA for feature selection and dimension reduction. Our experiments demonstrate the importance of feature selection for the multiple-ConvNet-layer representation. Moreover, our proposed approach achieved 75.6% accuracy compared to the 73.9% accuracy of the FC7 layer on the Caltech-256 dataset, 73.1% compared to the 69.2% of the FC8 layer on the VOC07 dataset, and 52.2% compared to the 48.7% of the FC7 layer on the SUN397 dataset. We also showed that our proposed approach achieved superior performance, with accuracy improvements of 2.8%, 2.1%, and 3.1% on the Caltech-256, VOC07, and SUN397 datasets respectively compared to existing work.
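As a concrete illustration of the pipeline above, the sketch below extracts the three fully connected layer activations from a pre-trained torchvision AlexNet, concatenates them into a 9192-dimensional vector, and reduces the result with PCA. The preprocessing, batch size, PCA dimensionality, and whether activations are taken before or after the ReLU are illustrative assumptions, not the paper's settings.

```python
# Multiple fully connected layer activations from AlexNet, concatenated and reduced with PCA.
import torch
import torchvision.models as models
from sklearn.decomposition import PCA

alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

def fc_activations(x):
    """Return concatenated fc6, fc7, fc8 activations for a batch x."""
    with torch.no_grad():
        h = alexnet.avgpool(alexnet.features(x)).flatten(1)
        fc6 = alexnet.classifier[1](alexnet.classifier[0](h))      # Linear 9216 -> 4096
        fc7 = alexnet.classifier[4](alexnet.classifier[3](alexnet.classifier[2](fc6)))
        fc8 = alexnet.classifier[6](alexnet.classifier[5](fc7))    # Linear 4096 -> 1000
    return torch.cat([fc6, fc7, fc8], dim=1)                       # (batch, 9192)

# Stand-in image batch; real use would apply the ImageNet preprocessing transforms.
feats = fc_activations(torch.randn(32, 3, 224, 224)).numpy()
reduced = PCA(n_components=16).fit_transform(feats)                # salient features
print(feats.shape, reduced.shape)                                  # (32, 9192) (32, 16)
```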

Improving Test Accuracy on the MNIST Dataset using a Simple CNN with Batch Normalization

  • Seungbin Lee;Jungsoo Rhee
    • Journal of the Korea Society of Computer and Information / v.29 no.9 / pp.1-7 / 2024
  • In this paper, we propose a Convolutional Neural Network (CNN) equipped with Batch Normalization (BN) for handwritten digit recognition, trained on the MNIST dataset. Aiming to surpass the performance of LeNet-5 by LeCun et al., a 6-layer neural network was designed. The proposed model processes 28×28 pixel images through convolution, max pooling, and fully connected layers, with batch normalization to improve learning stability and performance. The experiment utilized 60,000 training images and 10,000 test images, applying the momentum optimization algorithm. The model configuration used 30 filters of size 5×5, padding 0, stride 1, and ReLU as the activation function. The training process was set with a mini-batch size of 100, 20 epochs in total, and a learning rate of 0.1. As a result, the proposed model achieved a test accuracy of 99.22%, surpassing LeNet-5's 99.05%, and recorded an F1-score of 0.9919, demonstrating the model's performance. Moreover, the 6-layer model proposed in this paper emphasizes model efficiency with a simpler structure than LeCun et al.'s LeNet-5 (a 7-layer model) and the model proposed by Ji, Chun and Kim (a 10-layer model). The results of this study show potential for application in real industrial settings such as AI vision inspection systems. It is expected to be effectively applied in smart factories, particularly in determining the defect status of parts.
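A runnable sketch matching the hyperparameters quoted above (30 filters of size 5×5, padding 0, stride 1, ReLU, max pooling, fully connected layers with batch normalization, SGD with momentum, learning rate 0.1, mini-batch size 100); the exact placement of the batch-normalization layers, the momentum value, and the hidden size are assumptions.

```python
# Simple MNIST CNN with batch normalization (layer placement and sizes assumed).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 30, kernel_size=5, stride=1, padding=0),  # 28x28 -> 24x24
    nn.BatchNorm2d(30), nn.ReLU(),
    nn.MaxPool2d(2),                                        # 24x24 -> 12x12
    nn.Flatten(),
    nn.Linear(30 * 12 * 12, 100), nn.BatchNorm1d(100), nn.ReLU(),
    nn.Linear(100, 10),                                     # 10 digit classes
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)  # momentum assumed
criterion = nn.CrossEntropyLoss()

# One illustrative step with a stand-in mini-batch of 100 MNIST-sized images.
x, y = torch.randn(100, 1, 28, 28), torch.randint(0, 10, (100,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```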

Performance Improvement of Convolutional Neural Network for Pulmonary Nodule Detection (폐 결절 검출을 위한 합성곱 신경망의 성능 개선)

  • Kim, HanWoong;Kim, Byeongnam;Lee, JeeEun;Jang, Won Seuk;Yoo, Sun K.
    • Journal of Biomedical Engineering Research / v.38 no.5 / pp.237-241 / 2017
  • Early detection of pulmonary nodules is important for the diagnosis and treatment of lung cancer. Recently, CT has been used as a screening tool for lung nodule detection, and it has been reported that computer-aided detection (CAD) systems can improve the accuracy of radiologists in detecting nodules on CT scans. A previous study proposed a method using a Convolutional Neural Network (CNN) in a lung CAD system, but the proposed model has limited accuracy due to its sparse layer structure. Therefore, we propose a deep Convolutional Neural Network to overcome this limitation. The model proposed in this work consists of 14 layers, including 8 convolutional layers and 4 fully connected layers. The CNN model is trained and tested with 61,404 region-of-interest (ROI) patches of lung images, including 39,760 nodules and 21,644 non-nodules, extracted from the Lung Image Database Consortium (LIDC) dataset. We obtained a classification accuracy of 91.79% with the CNN model presented in this work. To prevent overfitting, we trained the model with an augmented dataset and a regularization term in the cost function. With L1 and L2 regularization during training, we obtained 92.39% and 92.52% accuracy, respectively, and 93.52% with data augmentation. In conclusion, we obtained an accuracy of 93.75% with L2 regularization and data augmentation.
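The reported gains come from regularization and data augmentation; the sketch below shows a generic way to wire up L2 regularization (as weight decay) and simple patch augmentation in PyTorch. The small network, patch size, and augmentation choices are placeholders, not the paper's 14-layer architecture.

```python
# L2 regularization (weight decay) plus simple augmentation of ROI patches.
import torch
import torch.nn as nn
from torchvision import transforms

augment = transforms.Compose([          # augmentation applied to ROI patches
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(10),
])

net = nn.Sequential(                    # stand-in for the 8-conv / 4-FC model
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
    nn.Linear(64, 2),                   # nodule vs. non-nodule
)
# weight_decay adds the L2 penalty term to the cost function.
optimizer = torch.optim.SGD(net.parameters(), lr=0.01, weight_decay=1e-4)
criterion = nn.CrossEntropyLoss()

patches = augment(torch.rand(16, 1, 32, 32))     # stand-in 32x32 ROI patches
labels = torch.randint(0, 2, (16,))
optimizer.zero_grad()
loss = criterion(net(patches), labels)
loss.backward()
optimizer.step()
```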

Evolutionary Learning of Sigma-Pi Neural Trees and Its Application to classification and Prediction (시그마파이 신경 트리의 진화적 학습 및 이의 분류 예측에의 응용)

  • Zhang, Byoung-Tak (장병탁)
    • Journal of the Korean Institute of Intelligent Systems / v.6 no.2 / pp.13-21 / 1996
  • The necessity and usefulness of higher-order neural networks have been well known since the early days of neurocomputing. However, the explosive number of terms has hampered the design and training of such networks. In this paper we present an evolutionary learning method for efficiently constructing problem-specific higher-order neural models. The crux of the method is the neural tree representation employing both sigma and pi units, in combination with the use of an MDL-based fitness function for learning minimal models. We provide experimental results on classification and prediction problems which demonstrate the effectiveness of the method. I. Introduction: One of the most popular neural network models used for supervised learning applications has been the multilayer feedforward network. A commonly adopted topology employs one hidden layer with full connectivity between neighboring layers. This structure has been very successful for many applications. However, it has some weaknesses. For instance, the fully connected structure is not necessarily a good topology unless the task contains a good predictor for the full input space.
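As a toy illustration of the representation described above (not the paper's implementation), the sketch below evaluates a small sigma-pi neural tree in which sigma nodes compute weighted sums and pi nodes compute products of their weighted inputs; the node structure, weights, and the tanh output nonlinearity are assumptions, and the MDL-driven evolutionary search is omitted.

```python
# Toy evaluation of a sigma-pi neural tree (structure and nonlinearity assumed).
import math

class Leaf:
    def __init__(self, index):
        self.index = index
    def evaluate(self, x):
        return x[self.index]

class Node:
    def __init__(self, kind, children, weights, bias=0.0):
        self.kind, self.children, self.weights, self.bias = kind, children, weights, bias
    def evaluate(self, x):
        vals = [w * c.evaluate(x) for w, c in zip(self.weights, self.children)]
        if self.kind == "sigma":                 # sigma unit: weighted sum
            s = sum(vals) + self.bias
        else:                                    # pi unit: product of weighted inputs
            s = math.prod(vals)
        return math.tanh(s)

# A small tree computing tanh(0.8*x0 + 1.2*tanh(x1*x2) + 0.1): the higher-order
# term x1*x2 appears through a pi unit without enumerating all product terms.
tree = Node("sigma",
            [Leaf(0), Node("pi", [Leaf(1), Leaf(2)], [1.0, 1.0])],
            [0.8, 1.2], bias=0.1)
print(tree.evaluate([0.5, -0.3, 0.7]))
```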
