• Title/Abstract/Keyword: neural net

Search results: 746 items (processing time: 0.031 seconds)

Efficient Convolutional Neural Network with Low Complexity

  • 이찬호;이중경;호콩안
    • Journal of IKEEE / Vol. 24, No. 3 / pp. 685-690 / 2020
  • Based on MobileNet V2, a CNN for portable and edge devices, we propose an efficient neural network architecture that greatly reduces computation while increasing accuracy. The proposed architecture keeps the Bottleneck layer structure but increases the expansion factor and removes some layers, cutting the amount of computation to less than half. The designed network was validated by measuring its classification accuracy and its computation time on a CPU and a GPU using the ImageNet100 dataset. We also show that on a GPU, currently the most widely used deep learning accelerator, runtime performance varies with the network structure.
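The compute trade-off described above follows from how the cost of a Bottleneck (inverted residual) block scales with the expansion factor. A minimal accounting sketch, with illustrative shapes (the 14×14×96 feature map and the expansion factors are assumptions, not the paper's exact configuration):

```python
def inverted_residual_macs(h, w, c_in, c_out, expansion, kernel=3):
    """Approximate multiply-accumulate count of one inverted-residual
    (Bottleneck) block: 1x1 expand -> 3x3 depthwise -> 1x1 project.
    Assumes stride 1 and 'same' padding."""
    c_mid = c_in * expansion
    expand = h * w * c_in * c_mid                 # 1x1 pointwise expansion
    depthwise = h * w * c_mid * kernel * kernel   # depthwise convolution
    project = h * w * c_mid * c_out               # 1x1 pointwise projection
    return expand + depthwise + project

# Doubling the expansion factor roughly doubles the block's cost, so
# removing or shrinking other layers is needed to halve total compute.
base = inverted_residual_macs(14, 14, 96, 96, expansion=6)
wide = inverted_residual_macs(14, 14, 96, 96, expansion=12)
```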

VS3-NET: Neural variational inference model for machine-reading comprehension

  • Park, Cheoneum;Lee, Changki;Song, Heejun
    • ETRI Journal / Vol. 41, No. 6 / pp. 771-781 / 2019
  • We propose the VS3-NET model to solve the machine-reading comprehension task of answering questions by searching for an appropriate answer in a given context. VS3-NET trains a latent variable for each question using variational inference, on top of a simple recurrent unit-based sentence model and self-matching networks. Question types vary, and the answers depend on the type of question. To perform efficient inference and learning, we introduce neural question-type models to approximate the prior and posterior distributions of the latent variables, and we use these approximated distributions to optimize a reparameterized variational lower bound. The context given in machine-reading comprehension usually comprises several sentences, and performance degrades as the context grows longer; we therefore model a hierarchical structure using sentence encoding. Experimental results show that the proposed VS3-NET model achieves an exact-match score of 76.8% and an F1 score of 84.5% on the SQuAD test set.
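The reparameterized variational lower bound mentioned above rests on two standard ingredients, sketched here for the univariate diagonal-Gaussian case (function names are mine, not the paper's):

```python
import math
import random

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1): the
    reparameterization trick, which lets gradients flow through mu and
    log_var when optimizing the variational lower bound."""
    eps = random.gauss(0.0, 1.0)
    return mu + math.exp(0.5 * log_var) * eps

def kl_gaussians(mu_q, log_var_q, mu_p, log_var_p):
    """KL(q || p) between two univariate Gaussians: the regularization
    term of the lower bound, with p playing the role of the prior."""
    var_q, var_p = math.exp(log_var_q), math.exp(log_var_p)
    return 0.5 * (log_var_p - log_var_q
                  + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
```

In VS3-NET the prior and posterior parameters come from the neural question-type models; here they are free scalars.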

S2-Net: Machine reading comprehension with SRU-based self-matching networks

  • Park, Cheoneum;Lee, Changki;Hong, Lynn;Hwang, Yigyu;Yoo, Taejoon;Jang, Jaeyong;Hong, Yunki;Bae, Kyung-Hoon;Kim, Hyun-Ki
    • ETRI Journal / Vol. 41, No. 3 / pp. 371-382 / 2019
  • Machine reading comprehension is the task of understanding a given context and finding the correct response in that context. A simple recurrent unit (SRU) is a model that solves the vanishing-gradient problem of a recurrent neural network (RNN) using neural gates, as a gated recurrent unit (GRU) and long short-term memory (LSTM) do; moreover, it removes the previous hidden state from the input of the gates to improve speed over GRU and LSTM. A self-matching network, as used in R-Net, can have an effect similar to coreference resolution, because it can gather context information of similar meaning by calculating attention weights over its own RNN sequence. In this paper, we construct a dataset for Korean machine reading comprehension and propose an S2-Net model that adds a self-matching layer to an encoder RNN built from multilayer SRUs. Experimental results show that the proposed S2-Net model achieves 68.82% EM and 81.25% F1 as a single model and 70.81% EM and 82.48% F1 as an ensemble on the Korean machine reading comprehension test dataset, and 71.30% EM and 80.37% F1 (single) and 73.29% EM and 81.54% F1 (ensemble) on the SQuAD dev dataset.
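A minimal sketch of the SRU recurrence described above, for a single dimension and with the learned input transformation taken as the identity for clarity (parameter names are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sru_step(x, c_prev, wf, bf, wr, br):
    """One elementwise SRU step. Both gates depend only on the current
    input x, not on the previous hidden state, so the heavy matrix
    multiplications for all time steps can be computed in one batch."""
    f = sigmoid(wf * x + bf)        # forget gate
    r = sigmoid(wr * x + br)        # reset (highway) gate
    c = f * c_prev + (1.0 - f) * x  # internal cell state
    h = r * c + (1.0 - r) * x       # highway connection to the input
    return h, c
```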

Multivariate CUSUM Chart to Monitor Correlated Multivariate Time-series Observations

  • 이규영;이미림
    • Journal of Korean Society for Quality Management / Vol. 49, No. 4 / pp. 539-550 / 2021
  • Purpose: The purpose of this study is to propose a multivariate CUSUM control chart that can quickly detect the out-of-control state while monitoring cross- and auto-correlated multivariate time-series data. Methods: We first build models to estimate the observed data and calculate the corresponding residuals. Then, a multivariate CUSUM chart is applied to monitor the residuals instead of the original raw observations. Vector Autoregression and an Artificial Neural Net are selected for the modelling, and the Separated-MCUSUM chart is selected for the monitoring. The suggested methods are tested under a number of experimental settings, and their performance is compared with that of other existing methods. Results: We find that the Artificial Neural Net is more appropriate than Vector Autoregression for the modelling, and show that the combination of Separated-MCUSUM with the Artificial Neural Net outperforms the other alternatives considered in this paper. Conclusion: The suggested chart has many advantages: it can monitor complicated multivariate data with cross- and auto-correlation, and it detects the out-of-control state quickly. Unlike other CUSUM charts, whose control limits are found by trial-and-error simulation, the suggested chart saves a great deal of time and effort by approximating its control limit mathematically. We expect the suggested chart to perform both effectively and efficiently for monitoring processes with complicated correlations and frequently changing parameters.
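A Separated-MCUSUM runs one univariate chart per variable. A minimal sketch of a single such stream (the reference value k and control limit h are conventional illustrative values, not the limits derived in the paper):

```python
def cusum(residuals, k=0.5, h=5.0):
    """Two-sided tabular CUSUM on one residual stream, i.e. one of the
    univariate charts a Separated-MCUSUM runs in parallel. Monitoring
    residuals (observation minus model prediction) rather than the raw
    data is what removes the cross- and auto-correlation. Returns the
    first index that signals out-of-control, or None if in control."""
    s_hi = s_lo = 0.0
    for t, e in enumerate(residuals):
        s_hi = max(0.0, s_hi + e - k)  # accumulates upward drift
        s_lo = max(0.0, s_lo - e - k)  # accumulates downward drift
        if s_hi > h or s_lo > h:
            return t
    return None
```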

A Study on Combined Artificial Intelligence Models for Multi-classification of Abnormal Behaviors in CCTV Images

  • 이홍래;김영태;서병석
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022 Spring Conference / pp. 498-500 / 2022
  • CCTV protects lives and property by identifying dangerous situations and enabling a rapid response. However, it is difficult to continuously monitor the ever-growing number of CCTV feeds, so a system is needed that watches CCTV video continuously and raises an alert when abnormal behavior occurs. Recently, many studies have applied artificial intelligence models to video data analysis. To classify the various abnormal behaviors observable in CCTV video, this study learns the spatial and temporal feature information of the video data simultaneously. As the model for this learning, we propose a multi-class deep learning model that combines an end-to-end 3D Convolutional Neural Network (CNN) with ResNet.

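A 3D convolution slides its kernel over the time axis as well as the two spatial axes, which is how such a model can learn spatial and temporal features jointly. A sketch of the output-shape arithmetic, with an assumed 16-frame 112×112 clip:

```python
def conv3d_out_shape(frames, height, width, kernel=3, stride=1, pad=1):
    """Output size of one 3D convolution along the temporal axis and
    the two spatial axes, using the usual (n + 2p - k) // s + 1 rule
    with a cubic kernel."""
    def out(n):
        return (n + 2 * pad - kernel) // stride + 1
    return out(frames), out(height), out(width)
```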

Aerial Scene Labeling Based on Convolutional Neural Networks

  • 나종필;황승준;박승제;백중환
    • Journal of Advanced Navigation Technology / Vol. 19, No. 6 / pp. 484-491 / 2015
  • With advances in digital optical imaging and the development of unmanned aerial vehicles (UAVs), the supply of aerial imagery has grown rapidly, and such data are used for ground attribute extraction, classification, change detection, image fusion, and map production. In image analysis in particular, deep learning algorithms represent a new paradigm that overcomes the limits of conventional pattern recognition. This paper presents region segmentation and classification of aerial imagery based on a ConvNet, a deep learning algorithm, and shows its potential for application over wider areas and in diverse fields. The training data consist of 3,000 samples in four classes (road, building, flat land, and forest); since each class has consistent patterns, the resulting feature-vector maps differ clearly from one another. The algorithm consists of two main parts: for feature extraction, two ConvNet layers are stacked, and for classification and training, a multilayer perceptron and a logistic regression algorithm are used to classify and learn the features.
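The classification stage described above ends in a logistic-regression (softmax) layer over extracted features. A minimal sketch of that final step, with toy weights rather than learned ones:

```python
import math

def softmax(scores):
    """Turn raw class scores into probabilities (numerically stable)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict(features, weights, biases, classes):
    """Multinomial logistic regression over an extracted feature
    vector: one linear score per class, softmax, argmax as the label."""
    scores = [sum(w * f for w, f in zip(w_class, features)) + b
              for w_class, b in zip(weights, biases)]
    probs = softmax(scores)
    return classes[probs.index(max(probs))]
```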

Automatic Wood Species Identification of Korean Softwood Based on Convolutional Neural Networks

  • Kwon, Ohkyung;Lee, Hyung Gu;Lee, Mi-Rim;Jang, Sujin;Yang, Sang-Yun;Park, Se-Yeong;Choi, In-Gyu;Yeo, Hwanmyeong
    • Journal of the Korean Wood Science and Technology / Vol. 45, No. 6 / pp. 797-808 / 2017
  • Automatic wood species identification systems have enabled fast and accurate identification of wood species outside of specialized laboratories with well-trained experts. Conventional automatic wood species identification systems consist of two major parts: a feature extractor and a classifier. Feature extractors require hand-engineering to obtain features that optimally quantify the content of an image. A Convolutional Neural Network (CNN), one of the deep learning methods, trained on wood species can extract intrinsic feature representations and classify them correctly, and it usually outperforms classifiers built on top of hand-tuned extracted features. We developed an automatic wood species identification system utilizing CNN models such as LeNet, MiniVGGNet, and their variants. A smartphone camera was used to obtain macroscopic images of rough-sawn surfaces from cross sections of wood. Five Korean softwood species (cedar, cypress, Korean pine, Korean red pine, and larch) were classified by the CNN models. The best-performing and most stable model was LeNet3, which adds two layers to the original LeNet architecture. The species identification accuracy of the LeNet3 architecture for the five Korean softwood species was 99.3%. The results showed that the automatic wood species identification system is fast and accurate, as well as small enough to be deployed on a mobile device such as a smartphone.
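Part of why a LeNet-scale CNN fits on a smartphone is its small parameter count. A back-of-the-envelope sketch, using the classic LeNet layer sizes as assumptions (the paper's LeNet3 adds two further layers):

```python
def conv_params(c_in, c_out, k):
    """Weights plus biases of one k x k convolution layer with c_in
    input channels and c_out output filters."""
    return c_out * (c_in * k * k + 1)

def dense_params(n_in, n_out):
    """Weights plus biases of one fully connected layer."""
    return n_out * (n_in + 1)

# Classic LeNet-scale layers stay tiny compared with modern networks:
c1 = conv_params(1, 6, 5)   # grayscale input, six 5x5 filters
c2 = conv_params(6, 16, 5)  # second convolution layer
```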

Weather Recognition Based on 3C-CNN

  • Tan, Ling;Xuan, Dawei;Xia, Jingming;Wang, Chao
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 8 / pp. 3567-3582 / 2020
  • Human activities are often affected by weather conditions. Automatic weather recognition is useful for traffic alerting, driving assistance, and intelligent transportation. With the rise of deep learning and AI, deep convolutional neural networks (CNN) are used to identify weather conditions. In this paper, a three-channel convolutional neural network (3C-CNN) model is proposed on the basis of ResNet50. The model extracts global weather features from the whole image through the ResNet50 branch, and extracts sky and ground features from the top and bottom regions through two CNN5 branches. The global and local features are then merged by the Concat function. Finally, the weather image is classified by a Softmax classifier and the identification result is output. In addition, a medium-scale dataset named WeatherDataset-6, containing 6,185 outdoor weather images, is established. 3C-CNN is trained and tested on both the Two-class Weather Images dataset and WeatherDataset-6. The experimental results show that 3C-CNN achieves the best performance on both datasets, with average recognition accuracy of up to 94.35% and 95.81% respectively, superior to classic convolutional neural networks such as AlexNet, VGG16, and ResNet50. We expect that, with further improvement, our method can also work well for images taken at night.
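The three-branch idea can be sketched without any deep learning library: crop the top (sky) and bottom (ground) regions, compute a feature per region, and concatenate. Here mean brightness is a toy stand-in for the ResNet50 and CNN5 branch outputs, purely for illustration:

```python
def region_features(image):
    """Split an image (a list of pixel rows) into whole / top (sky) /
    bottom (ground) regions and concatenate one feature per region.
    Mean brightness stands in for the learned branch outputs."""
    h = len(image)
    def mean_brightness(rows):
        values = [p for row in rows for p in row]
        return sum(values) / len(values)
    return [mean_brightness(image),              # global branch
            mean_brightness(image[: h // 3]),    # sky branch
            mean_brightness(image[-(h // 3):])]  # ground branch
```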

The development of food image detection and recognition model of Korean food for mobile dietary management

  • Park, Seon-Joo;Palvanov, Akmaljon;Lee, Chang-Ho;Jeong, Nanoom;Cho, Young-Im;Lee, Hae-Jeung
    • Nutrition Research and Practice / Vol. 13, No. 6 / pp. 521-528 / 2019
  • BACKGROUND/OBJECTIVES: The aim of this study was to develop a Korean food image detection and recognition model for use on mobile devices for accurate estimation of dietary intake. MATERIALS/METHODS: We collected food images by taking pictures or by searching web images and built an image dataset for training a complex recognition model for Korean food. Augmentation techniques were performed to increase the dataset size. The training dataset contained more than 92,000 images categorized into 23 groups of Korean food. All images were down-sampled to a fixed resolution of 150 × 150 and then randomly divided into training and testing groups at a ratio of 3:1, resulting in 69,000 training images and 23,000 test images. We used a Deep Convolutional Neural Network (DCNN) for the complex recognition model and compared the results with those of other networks for large-scale image recognition: AlexNet, GoogLeNet, the Very Deep Convolutional Network (VGG), and ResNet. RESULTS: Our complex food recognition model, K-foodNet, had higher test accuracy (91.3%) and faster recognition time (0.4 ms) than the other networks. CONCLUSION: The results showed that K-foodNet achieved better performance in detecting and recognizing Korean food than other state-of-the-art models.
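The random 3:1 train/test partition described above can be sketched as follows (the seed is an arbitrary assumption, included only so the split is reproducible):

```python
import random

def split_dataset(items, train_ratio=0.75, seed=42):
    """Shuffle a list of labelled images and split it 3:1 into
    training and test sets, without modifying the input list."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```

Applied to 92,000 items, this yields the 69,000 / 23,000 partition the abstract reports.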

Accurate Position Control of a Hydraulic Motor Using NNGPC

  • 박동재;안경관;이수한
    • ICROS Conference Proceedings / Proceedings of the 15th Academic Conference (2000) / pp. 143-143 / 2000
  • A neural-net-based generalized predictive control (NNGPC) is presented for a hydraulic servo position control system. The proposed scheme employs generalized predictive control, with the future outputs generated by artificial neural networks. The proposed NNGPC does not require an accurate mathematical model of the nonlinear hydraulic system and, once the learning of the neural network is done, takes less calculation time than the GPC algorithm. Simulation studies on the position control of a hydraulic motor were conducted to validate and illustrate the proposed method.

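The core predictive-control idea above can be sketched with the neural network reduced to an arbitrary learned predictor and the horizon shortened to one step (a real GPC sums this cost over a multi-step prediction horizon; names and the candidate grid are illustrative):

```python
def predictive_control(predict, reference, candidates, lam=0.01):
    """One-step receding-horizon controller: choose the input u whose
    predicted output (from any learned model `predict`) minimizes a
    quadratic tracking-error-plus-control-effort cost."""
    def cost(u):
        return (reference - predict(u)) ** 2 + lam * u ** 2
    return min(candidates, key=cost)
```

For example, with a plant whose learned model is `lambda u: 2.0 * u` and a setpoint of 1.0, the controller picks an input near 0.5, the value that drives the predicted output to the reference.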