• Title/Summary/Keyword: Neural Net


Neural Network Based Simulation of Poisson Boltzmann Equation (뉴럴네트워크를 통한 Poisson Boltzmann 방정식의 시뮬레이션)

  • Jo, Gwanghyun;Shin, Kwang-Seong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.138-139
    • /
    • 2021
  • This work introduces a neural network-based simulation of the Poisson-Boltzmann equation. First, training samples are generated with a finite element method; the resulting input/output pairs are used to train the neural network. We report the performance of the trained network.

  • PDF
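The sampling pipeline the abstract describes (solve the equation numerically, then train on the resulting pairs) can be sketched in miniature. The paper uses a finite element method; the fragment below substitutes a 1D finite-difference solver for the linearized problem -u'' + κ²u = 1 with zero boundary values, so the solver, the 1D setting, and all names are illustrative assumptions rather than the authors' code:

```python
def solve_linear_pb(kappa, n=100):
    """Finite-difference stand-in solver for -u'' + kappa^2 u = 1
    on [0, 1] with u(0) = u(1) = 0.  Returns the interior solution values."""
    h = 1.0 / n
    m = n - 1                       # number of interior grid points
    a = -1.0 / h**2                 # sub- and super-diagonal entry
    b = 2.0 / h**2 + kappa**2       # main diagonal entry
    rhs = [1.0] * m
    # Thomas algorithm: forward elimination, then back substitution
    cp = [0.0] * m
    dp = [0.0] * m
    cp[0] = a / b
    dp[0] = rhs[0] / b
    for i in range(1, m):
        denom = b - a * cp[i - 1]
        cp[i] = a / denom
        dp[i] = (rhs[i] - a * dp[i - 1]) / denom
    u = [0.0] * m
    u[-1] = dp[-1]
    for i in range(m - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

# Build (parameter, solution) training pairs as in the abstract's pipeline
pairs = [(k, solve_linear_pb(k)) for k in (0.0, 1.0, 5.0)]
```

For κ = 0 this reduces to the Poisson problem with exact solution u(x) = x(1-x)/2, giving a quick sanity check on the generated samples.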

Parameter-Efficient Neural Networks Using Template Reuse (템플릿 재사용을 통한 패러미터 효율적 신경망 네트워크)

  • Kim, Daeyeon;Kang, Woochul
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.9 no.5
    • /
    • pp.169-176
    • /
    • 2020
  • Recently, deep neural networks (DNNs) have brought revolutions to many mobile and embedded devices by providing human-level machine intelligence for various applications. However, the high inference accuracy of such DNNs comes at high computational cost, and, hence, there have been significant efforts to reduce the computational overhead of DNNs, either by compressing off-the-shelf models or by designing new small-footprint DNN architectures tailored to resource-constrained devices. One notable recent paradigm in designing small-footprint DNN models is sharing parameters across several layers. In previous approaches, however, parameter-sharing techniques have been applied to large deep networks, such as ResNet, that are known to have high redundancy. In this paper, we propose a parameter-sharing method for already parameter-efficient small networks such as ShuffleNetV2. In our approach, small templates are combined with small layer-specific parameters to generate weights. Our experimental results on the ImageNet and CIFAR100 datasets show that our approach can reduce the parameter size of ShuffleNetV2 by 15%-35% while incurring smaller accuracy drops than previous parameter-sharing and pruning approaches. We further show that the proposed approach is efficient in terms of latency and energy consumption on modern embedded devices.
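The parameter arithmetic behind template reuse can be illustrated with a toy version of the scheme: a few full-size templates are shared across all layers, and each layer keeps only small layer-specific parameters from which its weights are generated. The linear-combination rule and all sizes below are illustrative assumptions, not ShuffleNetV2's actual structure:

```python
def param_count_independent(num_layers, rows, cols):
    """Every layer stores its own full weight matrix."""
    return num_layers * rows * cols

def param_count_templated(num_layers, rows, cols, num_templates):
    """Layers share full-size templates; each layer stores only a small
    vector of mixing coefficients used to generate its weights."""
    shared = num_templates * rows * cols
    per_layer = num_layers * num_templates   # mixing coefficients
    return shared + per_layer

def generate_weights(templates, coeffs):
    """Weight generation: W = sum_k coeffs[k] * templates[k], elementwise."""
    rows, cols = len(templates[0]), len(templates[0][0])
    return [[sum(c * t[i][j] for c, t in zip(coeffs, templates))
             for j in range(cols)] for i in range(rows)]

full = param_count_independent(16, 64, 64)       # 65,536 parameters
shared = param_count_templated(16, 64, 64, 2)    # 8,224 parameters
```

In this toy setting, two shared templates cut 16 layers of 64×64 weights from 65,536 to 8,224 parameters; the savings here are far larger than the paper's reported 15%-35% because this sketch shares everything, whereas the paper applies sharing selectively within an already compact network.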

Fully Automatic Segmentation of Acute Ischemic Lesions on Diffusion-Weighted Imaging Using Convolutional Neural Networks: Comparison with Conventional Algorithms

  • Ilsang Woo;Areum Lee;Seung Chai Jung;Hyunna Lee;Namkug Kim;Se Jin Cho;Donghyun Kim;Jungbin Lee;Leonard Sunwoo;Dong-Wha Kang
    • Korean Journal of Radiology
    • /
    • v.20 no.8
    • /
    • pp.1275-1284
    • /
    • 2019
  • Objective: To develop algorithms using convolutional neural networks (CNNs) for automatic segmentation of acute ischemic lesions on diffusion-weighted imaging (DWI) and compare them with conventional algorithms, including a thresholding-based segmentation. Materials and Methods: Between September 2005 and August 2015, 429 patients presenting with acute cerebral ischemia (training:validation:test set = 246:89:94) were retrospectively enrolled in this study, which was performed under Institutional Review Board approval. Ground-truth segmentations of acute ischemic lesions on DWI were manually drawn under the consensus of two expert radiologists. CNN algorithms were developed using a two-dimensional U-Net with squeeze-and-excitation blocks (U-Net) and a DenseNet with squeeze-and-excitation blocks (DenseNet) for automatic segmentation of acute ischemic lesions on DWI. The CNN algorithms were compared with conventional algorithms based on DWI and apparent diffusion coefficient (ADC) signal intensity. The performances of the algorithms were assessed using the Dice index with 5-fold cross-validation. The Dice indices were analyzed according to infarct volume (< 10 mL, ≥ 10 mL), number of infarcts (≤ 5, 6-10, ≥ 11), b-value of 1000 (b1000) signal intensities (< 50, 50-100, > 100), time interval to DWI, and DWI protocol. Results: The CNN algorithms were significantly superior to the conventional algorithms (p < 0.001). Dice indices for the CNN algorithms were 0.85 for U-Net and DenseNet and 0.86 for an ensemble of U-Net and DenseNet, while the indices were 0.58 for ADC-b1000 and b1000-ADC and 0.52 for the commercial ADC algorithm. The Dice indices for small and large lesions, respectively, were 0.81 and 0.88 with U-Net, 0.80 and 0.88 with DenseNet, and 0.82 and 0.89 with the ensemble of U-Net and DenseNet. The CNN algorithms showed significant differences in Dice indices according to infarct volume (p < 0.001). 
Conclusion: The CNN algorithm for automatic segmentation of acute ischemic lesions on DWI achieved Dice indices greater than or equal to 0.85 and showed superior performance to conventional algorithms.
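The Dice index used to score these segmentations is a straightforward overlap measure; a minimal implementation over binary masks stored as sets of voxel coordinates (the set representation and the example masks are illustrative, not the study's code):

```python
def dice_index(pred, truth):
    """Dice index between two binary masks given as sets of voxel
    coordinates: Dice = 2|A ∩ B| / (|A| + |B|).
    Two empty masks count as perfect agreement."""
    if not pred and not truth:
        return 1.0
    return 2.0 * len(pred & truth) / (len(pred) + len(truth))

auto   = {(0, 1), (0, 2), (1, 1), (1, 2)}   # hypothetical CNN segmentation
manual = {(0, 1), (0, 2), (1, 1)}           # hypothetical ground truth
score = dice_index(auto, manual)            # 2*3 / (4+3) ≈ 0.857
```

A Dice index of 0.85, as reported for the CNNs, means the automatic and manual masks agree on roughly 85% of their combined extent.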

Neuro-Net Based Automatic Sorting And Grading of A Mushroom (Lentinus Edodes L)

  • Hwang, H.;Lee, C.H.;Han, J.H.
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 1993.10a
    • /
    • pp.1243-1253
    • /
    • 1993
  • Visual features of a mushroom (Lentinus Edodes L) are critical in sorting and grading, as they are for most agricultural products. Because of their complex and varied visual features, grading and sorting of mushrooms have been done manually by human experts. Though the actions involved in human grading look simple, the decision making underneath them comes from complex neural processing of the visual image, and the processing details of visual recognition in the human brain have not yet been fully investigated. Recently, however, artificial neural networks have drawn great attention because of their functional capability as a partial substitute for the human brain. Since most agricultural products are not uniquely defined in their physical properties and do not have a well-defined job structure, research on neuro-net-based, human-like information processing for agricultural products and processing is wide open and promising. In this paper, a neuro-net-based grading and sorting system was developed for mushrooms. A computer vision system was utilized to extract and quantify the qualitative visual features of sampled mushrooms. The extracted visual features and their corresponding grades were used as input/output pairs for training the neural network, and the training results of the network are presented. The computer vision system used is composed of an IBM PC-compatible 386DX, an ITEX PFG frame grabber, a B/W CCD camera, a VGA color graphic monitor, and an image-output RGB monitor.

  • PDF
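The training setup the abstract describes (visual features in, grades out) can be sketched with the simplest possible neuro-net, a single perceptron. The feature names, data, and grading rule below are hypothetical, and the paper's actual network and feature set are richer:

```python
def train_perceptron(samples, epochs=50, lr=0.1):
    """Classic perceptron on (features, label) pairs with labels in {0, 1}.
    A toy stand-in for the paper's neuro-net trained on visual features."""
    w = [0.0] * len(samples[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred                    # 0 when correct, ±1 when wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical (cap_diameter_cm, surface_score) -> grade (1 = premium)
data = [([6.0, 0.8], 1), ([5.5, 0.9], 1), ([3.0, 0.2], 0), ([2.5, 0.1], 0)]
w, b = train_perceptron(data)
```

On this linearly separable toy set the perceptron recovers a grading boundary within a few epochs.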

A STUDY ON THE IMPLEMENTATION OF ARTIFICIAL NEURAL NET MODELS WITH FEATURE SET INPUT FOR RECOGNITION OF KOREAN PLOSIVE CONSONANTS (한국어 파열음 인식을 위한 피쳐 셉 입력 인공 신경망 모델에 관한 연구)

  • Kim, Ki-Seok;Kim, In-Bum;Hwang, Hee-Yeung
    • Proceedings of the KIEE Conference
    • /
    • 1990.07a
    • /
    • pp.535-538
    • /
    • 1990
  • The main problem in speech recognition is the enormous variability in acoustic signals due to complex but predictable contextual effects. For plosive consonants in particular it is very difficult to find invariant cues because of these contextual effects, yet humans use the effects as helpful information in plosive consonant recognition. In this paper we experimented with three artificial neural net models for the recognition of plosive consonants. Neural Net Model I used a multi-layer perceptron, Model II used a variation of the self-organizing feature map model, and Model III used an interactive and competitive model to examine contextual effects. The recognition experiment was performed on 9 Korean plosive consonants, using VCV speech chains to study contextual effects. The speech chains consist of the Korean plosive consonants /g, d, b, K, T, P, k, t, p/ (/ㄱ, ㄷ, ㅂ, ㄲ, ㄸ, ㅃ, ㅋ, ㅌ, ㅍ/) and eight Korean monophthongs. The inputs to the neural net models were several temporal cues extracted from the acoustic signals - duration of the silence, transition, and VOT - together with the extent of the VC formant transitions, the presence of voicing energy during closure, burst intensity, presence of aspiration, amount of low-frequency energy present at voicing onset, and CV formant transition extent. Model I showed about 55-67%, Model II about 60%, and Model III about 67% recognition rates.

  • PDF

Conversion Tools of Spiking Deep Neural Network based on ONNX (ONNX기반 스파이킹 심층 신경망 변환 도구)

  • Park, Sangmin;Heo, Junyoung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.2
    • /
    • pp.165-170
    • /
    • 2020
  • The spiking neural network operates by a different mechanism than conventional neural networks. A conventional neural network passes a neuron's output to the next neuron through an activation function that does not take the neuron's biological mechanism into account, and deep structures of this kind, such as VGGNet, ResNet, SSD, and YOLO, have achieved good results. Spiking neural networks, on the other hand, operate more like the biological mechanism of real neurons, but studies of deep structures built from spiking neurons have not been pursued as actively as deep neural networks built from conventional neurons. This paper proposes a method of loading a deep neural network model built from conventional neurons into a conversion tool and converting it into a spiking deep neural network by replacing each conventional neuron with a spiking neuron.
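The core idea behind replacing a conventional neuron with a spiking one can be sketched with a rate-coded integrate-and-fire neuron, whose firing rate over many timesteps approximates a ReLU activation. This is a simplified sketch of the standard conversion principle, not the paper's ONNX-based tool; real conversion also rescales weights and thresholds:

```python
def if_neuron_rate(drive, steps=1000, threshold=1.0):
    """Rate-coded integrate-and-fire neuron: accumulate the constant input
    `drive` each timestep, emit a spike and subtract the threshold whenever
    the membrane potential crosses it.  Over many steps the firing rate
    approximates ReLU(drive) / threshold."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += drive                 # integrate the input current
        if v >= threshold:
            spikes += 1            # fire ...
            v -= threshold         # ... and reset by subtraction
    return spikes / steps

rate_pos = if_neuron_rate(0.3)    # ≈ ReLU(0.3) = 0.3
rate_neg = if_neuron_rate(-0.5)   # ≈ ReLU(-0.5) = 0
```

Because negative drive never lifts the membrane potential to threshold, the neuron stays silent, reproducing the zero branch of the ReLU.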

Instagram image classification with Deep Learning (딥러닝을 이용한 인스타그램 이미지 분류)

  • Jeong, Nokwon;Cho, Soosun
    • Journal of Internet Computing and Services
    • /
    • v.18 no.5
    • /
    • pp.61-67
    • /
    • 2017
  • In this paper we introduce two experimental results from the classification of Instagram images and some valuable lessons learned from them. We conducted experiments to evaluate the competitiveness of Convolutional Neural Networks (CNNs) in classifying real social network images such as Instagram images. We used AlexNet and ResNet, which showed the most outstanding capabilities in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 and 2015, respectively, together with 240 Instagram images and 12 pre-defined categories for classifying social network images. We also performed fine-tuning using the Inception V3 model and compared the results. Across the four cases of AlexNet, ResNet, Inception V3, and fine-tuned Inception V3, the Top-1 error rates were 49.58%, 40.42%, 30.42%, and 5.00%, and the Top-5 error rates were 35.42%, 25.00%, 20.83%, and 0.00%, respectively.
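The Top-1 and Top-5 error rates quoted above are computed by checking whether the true label appears among the k highest-scoring classes; a minimal version follows, where the scores and labels are made up for illustration:

```python
def top_k_error(scores, labels, k):
    """Fraction of examples whose true label is NOT among the k classes
    with the highest predicted scores."""
    wrong = 0
    for per_class, truth in zip(scores, labels):
        ranked = sorted(range(len(per_class)),
                        key=per_class.__getitem__, reverse=True)
        if truth not in ranked[:k]:
            wrong += 1
    return wrong / len(labels)

# Hypothetical scores over 4 classes for 3 images
scores = [[0.1, 0.6, 0.2, 0.1],
          [0.5, 0.2, 0.2, 0.1],
          [0.4, 0.3, 0.2, 0.1]]
labels = [1, 2, 0]
top1 = top_k_error(scores, labels, 1)   # 1/3: only the second image misses
top3 = top_k_error(scores, labels, 3)   # 0.0 (k=3 stands in for Top-5 here)
```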

S2-Net: Korean Machine Reading Comprehension with SRU-based Self-matching Network (S2-Net: SRU 기반 Self-matching Network를 이용한 한국어 기계 독해)

  • Park, Cheoneum;Lee, Changki;Hong, Sulyn;Hwang, Yigyu;Yoo, Taejoon;Kim, Hyunki
    • Annual Conference on Human and Language Technology
    • /
    • 2017.10a
    • /
    • pp.35-40
    • /
    • 2017
  • Machine reading comprehension is the task of understanding a given context and finding, within that context, an answer appropriate to a question. The Simple Recurrent Unit (SRU), like the Gated Recurrent Unit (GRU), uses neural gates to address the vanishing gradient problem that arises in Recurrent Neural Networks (RNNs); by removing the previous hidden state from the gate inputs, it is faster than the GRU. The Self-matching Network, used in the R-Net model, computes attention weights over its own RNN sequence, which lets it see semantically similar context and thereby achieves an effect similar to coreference resolution. In this paper, we construct a Korean machine reading comprehension dataset and propose the $S^2$-Net model, which adds a Self-matching layer to an encoder composed of multiple SRU layers. Experimental results show that the proposed $S^2$-Net model achieves an EM of 65.84% and an F1 of 78.98% on the Korean machine reading comprehension dataset.

  • PDF
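The SRU recurrence the abstract contrasts with the GRU can be written out in scalar form; note that both gates read only the current input, never the previous hidden state, which is what removes the GRU's sequential bottleneck. The scalar weights below are illustrative assumptions:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sru_step(x, c_prev, w, wf, bf, wr, br):
    """One scalar SRU step.  The gates depend only on the current input x,
    not on the previous hidden state, as the abstract notes."""
    f = sigmoid(wf * x + bf)               # forget gate: input only
    c = f * c_prev + (1.0 - f) * (w * x)   # internal cell state
    r = sigmoid(wr * x + br)               # reset/highway gate: input only
    h = r * math.tanh(c) + (1.0 - r) * x   # highway connection to the input
    return h, c

c = 0.0
for x in [0.5, -0.2, 0.8]:
    h, c = sru_step(x, c, w=1.0, wf=0.5, bf=0.0, wr=0.5, br=0.0)
```

Because the gate computations need no previous hidden state, they can be batched across all timesteps; only the cheap elementwise update of c remains sequential.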

Skin Lesion Image Segmentation Based on Adversarial Networks

  • Wang, Ning;Peng, Yanjun;Wang, Yuanhong;Wang, Meiling
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.6
    • /
    • pp.2826-2840
    • /
    • 2018
  • Traditional methods based on active contours or region merging are powerless when processing images with blurred borders or hair occlusion. In this paper, a structure based on convolutional neural networks is proposed to solve the segmentation of skin lesion images. The structure mainly consists of two networks: a segmentation net and a discrimination net. The segmentation net is designed based on U-Net and is used to generate the lesion mask, while the discrimination net is built from convolutional layers only and is used to determine whether an input image comes from the ground truth labels or from the generated images. Images were obtained from the "Skin Lesion Analysis Toward Melanoma Detection" challenge hosted at the ISBI 2016 conference. We achieved an average segmentation accuracy of 0.97, a Dice coefficient of 0.94, and a Jaccard index of 0.89, which outperform other existing state-of-the-art segmentation networks, including the winner of the ISBI 2016 challenge for skin melanoma segmentation.
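The two overlap scores reported above are standard metric definitions, not the paper's code, and they are deterministically related: Dice = 2J / (1 + J). A minimal implementation over masks stored as sets of pixel coordinates:

```python
def jaccard_index(pred, truth):
    """Jaccard index |A ∩ B| / |A ∪ B| between two binary masks
    represented as sets of pixel coordinates."""
    if not pred and not truth:
        return 1.0
    return len(pred & truth) / len(pred | truth)

def dice_from_jaccard(j):
    """Dice and Jaccard measure the same overlap: D = 2J / (1 + J)."""
    return 2.0 * j / (1.0 + j)

def jaccard_from_dice(d):
    """Inverse relation: J = D / (2 - D)."""
    return d / (2.0 - d)

# Consistency check against the abstract's reported scores:
# a Dice of 0.94 corresponds to a Jaccard of 0.94 / 1.06 ≈ 0.887,
# matching the reported Jaccard index of 0.89.
j_reported = jaccard_from_dice(0.94)
```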
