• Title/Summary/Keyword: Neural-Networks

Temporal Fusion Transformers and Deep Learning Methods for Multi-Horizon Time Series Forecasting (Temporal Fusion Transformers와 심층 학습 방법을 사용한 다층 수평 시계열 데이터 분석)

  • Kim, InKyung;Kim, DaeHee;Lee, Jaekoo
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.2
    • /
    • pp.81-86
    • /
    • 2022
  • Given that time series are used in various fields, such as finance, IoT, and manufacturing, data analysis methods for accurate time-series forecasting can increase operational efficiency. Among time-series analysis methods, multi-horizon forecasting provides a better understanding of the data because it can extract meaningful statistics and other characteristics of the entire time series. Furthermore, time-series data with exogenous information can be predicted accurately by using multi-horizon forecasting methods. However, traditional deep learning-based models for time series do not account for the heterogeneity of inputs. We propose an improved time-series forecasting method, the temporal fusion transformer, which combines multi-horizon forecasting with interpretable insights into temporal dynamics. Various real-world data such as stock prices, fine dust concentrations, and electricity consumption were considered in the experiments. Experimental results showed that the temporal fusion transformer method achieves better time-series forecasting performance than existing models.
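
As a rough illustration of the multi-horizon setup described above (not the authors' temporal fusion transformer itself), the sketch below forecasts several future steps at once from a window of past observations and exogenous features; the model, window length, and horizon are hypothetical.

```python
# Minimal multi-horizon forecasting sketch (not the paper's TFT implementation).
# An LSTM encoder summarizes past observations plus exogenous inputs, and a
# linear head emits all future horizons at once.
import torch
import torch.nn as nn

class MultiHorizonLSTM(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64, horizons: int = 24):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizons)   # one output per forecast step

    def forward(self, x):           # x: (batch, past_steps, n_features)
        _, (h, _) = self.encoder(x)
        return self.head(h[-1])     # (batch, horizons)

# usage: 96 past steps, 5 exogenous features, forecast 24 steps ahead
model = MultiHorizonLSTM(n_features=5, horizons=24)
y_hat = model(torch.randn(8, 96, 5))   # -> shape (8, 24)
```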

Style Synthesis of Speech Videos Through Generative Adversarial Neural Networks (적대적 생성 신경망을 통한 얼굴 비디오 스타일 합성 연구)

  • Choi, Hee Jo;Park, Goo Man
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.11
    • /
    • pp.465-472
    • /
    • 2022
  • In this paper, a style synthesis network is trained with StyleGAN and combined with a video synthesis network to generate style-synthesized videos. To address the problem that gaze and expression are not transferred stably, 3D face restoration technology is applied so that important features such as head pose, gaze, and expression can be controlled using 3D face information. In addition, by training the discriminators of the Head2head network for dynamics, mouth shape, image, and gaze, a stable style-synthesized video with greater plausibility and consistency can be created. Using the FaceForensic and MetFace datasets, it was confirmed that performance improved when converting one video into another while maintaining consistent movement of the target face, and that natural data could be generated through video synthesis using 3D face information from the source video's face.
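
The adversarial training idea mentioned above can be sketched generically as follows; the generator G, discriminator D, and conditioning input are hypothetical placeholders, and the paper's actual Head2head pipeline with multiple discriminators is not reproduced here.

```python
# Generic adversarial (GAN) training step sketch. Multiple discriminators
# (e.g. for mouth or gaze crops) would repeat the same pattern on different inputs.
import torch
import torch.nn.functional as F

def adversarial_step(G, D, opt_G, opt_D, real, cond):
    # --- discriminator update: real frames -> 1, synthesized frames -> 0 ---
    fake = G(cond).detach()
    real_logits, fake_logits = D(real), D(fake)
    d_loss = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits)) \
           + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # --- generator update: try to fool the discriminator ---
    fake_logits = D(G(cond))
    g_loss = F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()
```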

Deep Learning based Time Offset Estimation in GPS Time Transfer Measurement Data (GPS 시각전송 측정데이터에 대한 딥러닝 모델 기반 시각오프셋 예측)

  • Yu, Dong-Hui;Kim, Min-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.26 no.3
    • /
    • pp.456-462
    • /
    • 2022
  • In this paper, we introduce a method of predicting the time offset by applying LSTM, a deep learning model, to a precise time comparison technique based on measurement data extracted from code signals transmitted by GPS satellites to determine Coordinated Universal Time (UTC). First, we describe the process of extracting time information from the code signals received daily from a GPS satellite and assembling the daily time offsets into a single time series. To apply a deep learning model to the constructed time offset series, LSTM, one of the recurrent neural networks, was used to predict the time offset of a GPS satellite. Through this study, the feasibility of time offset prediction using deep learning in the field of GNSS precise time transfer was confirmed.
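
A minimal sketch of the kind of one-step-ahead LSTM prediction described here is shown below; the window length, layer sizes, and data are hypothetical and not the authors' configuration.

```python
# One-step-ahead prediction of a univariate time-offset series with an LSTM.
import torch
import torch.nn as nn

class OffsetLSTM(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):              # x: (batch, window, 1) past offsets
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # predicted next offset, (batch, 1)

model = OffsetLSTM()
window = torch.randn(4, 30, 1)          # e.g. 30 past daily offsets per sample
next_offset = model(window)             # -> shape (4, 1)
```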

A Comparison of Meta-learning and Transfer-learning for Few-shot Jamming Signal Classification

  • Jin, Mi-Hyun;Koo, Ddeo-Ol-Ra;Kim, Kang-Suk
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.11 no.3
    • /
    • pp.163-172
    • /
    • 2022
  • Typical anti-jamming technologies based on array antennas, Space Time Adaptive Process (STAP) and Space Frequency Adaptive Process (SFAP), are very effective algorithms for nulling and beamforming. However, they do not perform equally well for all types of jamming signals. If the anti-jamming algorithm is not optimized for each signal type, anti-jamming performance deteriorates and the operational stability of the system worsens due to unnecessary computation. Therefore, a jamming classification technique is required to obtain optimal anti-jamming performance. Machine learning, which has recently been in the spotlight, can be considered for classifying jamming signals. In general, supervised learning for classification requires a huge amount of data and retraining for unfamiliar signals. In the case of jamming signal classification, it is difficult to obtain a large amount of data because an outdoor jamming signal reception environment is difficult to configure and the attacker's signal type is unknown. Therefore, this paper proposes a few-shot jamming signal classification technique that uses meta-learning and transfer learning to train the model with a small amount of data. A training dataset is constructed from the anti-jamming algorithm's input data within the GNSS receiver when jamming signals are applied. For meta-learning, the Model-Agnostic Meta-Learning (MAML) algorithm with a general convolutional neural network (CNN) model is used, and the same CNN model is used for transfer learning. Both are trained through episodic training using training datasets produced by our Python-based simulator. The results show that both algorithms can be trained with less data and respond immediately to new signal types. The performances of the two algorithms are also compared to determine which is more suitable for classifying jamming signals.
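
The MAML idea named in the abstract can be sketched as follows using a first-order approximation (FOMAML); the model, task batches, and step sizes are hypothetical placeholders rather than the paper's implementation.

```python
# First-order MAML sketch for few-shot classification: adapt a copy of the
# model on each task's support set, then use gradients of the query loss
# to update the shared initialization.
import copy
import torch
import torch.nn.functional as F

def fomaml_outer_step(model, tasks, inner_lr=0.01, outer_lr=0.001, inner_steps=1):
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    for support_x, support_y, query_x, query_y in tasks:
        learner = copy.deepcopy(model)                 # task-specific copy
        inner_opt = torch.optim.SGD(learner.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                   # inner-loop adaptation
            loss = F.cross_entropy(learner(support_x), support_y)
            inner_opt.zero_grad(); loss.backward(); inner_opt.step()
        query_loss = F.cross_entropy(learner(query_x), query_y)
        grads = torch.autograd.grad(query_loss, list(learner.parameters()))
        meta_grads = [m + g for m, g in zip(meta_grads, grads)]
    # outer-loop update of the shared initialization (first-order approximation)
    with torch.no_grad():
        for p, g in zip(model.parameters(), meta_grads):
            p -= outer_lr * g / len(tasks)
```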

Face Recognition Network using gradCAM (gradCam을 사용한 얼굴인식 신경망)

  • Chan Hyung Baek;Kwon Jihun;Ho Yub Jung
    • Smart Media Journal
    • /
    • v.12 no.2
    • /
    • pp.9-14
    • /
    • 2023
  • In this paper, we propose a face recognition network that attempts to use more facial features while using a smaller number of training sets. When combining neural networks for face recognition, we want to use networks that rely on different parts of the facial features. However, training alone does not control where these facial features are obtained. On the other hand, the decision basis of a network model can be expressed as a saliency map through gradCAM. Therefore, in this paper, we use gradCAM to visualize where the trained face recognition model makes its observations and recognition judgments, so that the network combination can be constructed based on the different facial features used. Using this approach, we trained a network for a small face recognition problem. In a simple toy face recognition example, the recognition network used in this paper improves the accuracy by 1.79% and reduces the equal error rate (EER) by 0.01788 compared to the conventional approach.
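
A minimal Grad-CAM sketch consistent with the usage described here is given below; the model and target layer are hypothetical, and this is not the paper's code.

```python
# Grad-CAM sketch: hook a chosen convolutional layer, then weight its
# activations by the spatially averaged gradients of the target class score.
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_layer, class_idx):
    # x: (1, 3, H, W) input image; target_layer: a conv module inside model
    feats, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

    score = model(x)[0, class_idx]      # score of the class of interest
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()

    weights = grads["a"].mean(dim=(2, 3), keepdim=True)   # GAP over H', W'
    cam = F.relu((weights * feats["a"]).sum(dim=1))       # weighted sum of feature maps
    return cam / (cam.max() + 1e-8)                       # (1, H', W') saliency map
```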

Korean Text Image Super-Resolution for Improving Text Recognition Accuracy (텍스트 인식률 개선을 위한 한글 텍스트 이미지 초해상화)

  • Junhyeong Kwon;Nam Ik Cho
    • Journal of Broadcast Engineering
    • /
    • v.28 no.2
    • /
    • pp.178-184
    • /
    • 2023
  • Finding text in general scene images and recognizing its content is a very important task that can serve as a basis for robot vision, visual assistance, and so on. However, for low-resolution text images, degradations such as noise or blur are more noticeable, which leads to a severe drop in text recognition accuracy. In this paper, we propose a new Korean text image super-resolution method based on a Transformer model, which generally shows higher performance than convolutional neural networks. In the experiments, we show that text recognition accuracy for Korean text images can be improved when our proposed text image super-resolution method is used. We also propose a new Korean text image dataset for training our model, which contains a massive number of HR-LR Korean text image pairs.
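
As a hypothetical illustration of how HR-LR training pairs might be produced by degrading high-resolution text images (the paper's actual degradation pipeline is not described in the abstract):

```python
# Build a low-resolution counterpart of a high-resolution text image by
# bicubic downscaling plus mild additive noise.
import torch
import torch.nn.functional as F

def make_lr(hr: torch.Tensor, scale: int = 2, noise_std: float = 0.02) -> torch.Tensor:
    # hr: (batch, channels, H, W) image tensor with values in [0, 1]
    lr = F.interpolate(hr, scale_factor=1.0 / scale, mode="bicubic", align_corners=False)
    lr = lr + noise_std * torch.randn_like(lr)      # mild additive noise
    return lr.clamp(0.0, 1.0)
```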

Deep learning-based apical lesion segmentation from panoramic radiographs

  • Il-Seok, Song;Hak-Kyun, Shin;Ju-Hee, Kang;Jo-Eun, Kim;Kyung-Hoe, Huh;Won-Jin, Yi;Sam-Sun, Lee;Min-Suk, Heo
    • Imaging Science in Dentistry
    • /
    • v.52 no.4
    • /
    • pp.351-357
    • /
    • 2022
  • Purpose: Convolutional neural networks (CNNs) have rapidly emerged as one of the most promising artificial intelligence methods in the field of medical and dental research. CNNs can provide an effective diagnostic methodology allowing for the detection of early-staged diseases. Therefore, this study aimed to evaluate the performance of a deep CNN algorithm for apical lesion segmentation from panoramic radiographs. Materials and Methods: A total of 1000 panoramic images showing apical lesions were separated into training (n=800, 80%), validation (n=100, 10%), and test (n=100, 10%) datasets. The performance of identifying apical lesions was evaluated by calculating the precision, recall, and F1-score. Results: In the test group of 180 apical lesions, 147 lesions were segmented from panoramic radiographs with an intersection over union (IoU) threshold of 0.3. The F1-score values, as a measure of performance, were 0.828, 0.815, and 0.742, respectively, with IoU thresholds of 0.3, 0.4, and 0.5. Conclusion: This study showed the potential utility of a deep learning-guided approach for the segmentation of apical lesions. The deep CNN algorithm using U-Net demonstrated considerably high performance in detecting apical lesions.
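
A sketch of lesion-level precision/recall/F1 at an IoU threshold, in the spirit of the evaluation reported above, is shown below; the greedy matching strategy and helper functions are assumptions, not the study's evaluation code.

```python
# Lesion-level precision, recall, and F1 at a given IoU threshold, using one
# binary mask per lesion.
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def f1_at_threshold(pred_masks, gt_masks, thr=0.3):
    matched_gt, tp = set(), 0
    for p in pred_masks:                        # greedily match predictions to GT lesions
        best_j, best_iou = None, 0.0
        for j, g in enumerate(gt_masks):
            if j not in matched_gt and iou(p, g) > best_iou:
                best_j, best_iou = j, iou(p, g)
        if best_j is not None and best_iou >= thr:
            matched_gt.add(best_j); tp += 1
    precision = tp / max(len(pred_masks), 1)
    recall = tp / max(len(gt_masks), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-8)
    return precision, recall, f1
```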

Hierarchical IoT Edge Resource Allocation and Management Techniques based on Synthetic Neural Networks in Distributed AIoT Environments (분산 AIoT 환경에서 합성곱신경망 기반 계층적 IoT Edge 자원 할당 및 관리 기법)

  • Yoon-Su Jeong
    • Advanced Industrial SCIence
    • /
    • v.2 no.3
    • /
    • pp.8-14
    • /
    • 2023
  • The majority of IoT devices already employ AIoT, but there are still numerous issues that need to be resolved before AI applications can be deployed. To distribute IoT edge resources more effectively, this paper proposes a machine learning-based approach to managing them. The suggested method continually improves the allocation of IoT resources by identifying IoT edge resource trends using machine learning. The optimized IoT resources make use of convolutional machine learning to reliably sustain IoT edge resources that are constantly changing. By storing each machine learning-based IoT edge resource as a hash value together with the resource of the previous pattern, the suggested approach effectively verifies whether the resource is an attack pattern in a distributed AIoT context. Experimental results evaluate energy efficiency in three different test scenarios to verify the integrity of IoT edge resources and to check whether they work well in complex environments with heterogeneous computational hardware.
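
The hash-based verification idea in the abstract might look roughly like the following chain-hash sketch; the field names, genesis value, and helper functions are hypothetical.

```python
# Each edge-resource snapshot is stored with a hash that also covers the
# previous pattern's hash, so a snapshot that does not extend the chain can
# be flagged as suspicious.
import hashlib
import json

def chain_hash(resource: dict, prev_hash: str) -> str:
    payload = json.dumps(resource, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def verify(resource: dict, prev_hash: str, stored_hash: str) -> bool:
    # a mismatch suggests the snapshot was tampered with (treated as an attack pattern)
    return chain_hash(resource, prev_hash) == stored_hash

prev = "0" * 64                                    # genesis value for the chain
snapshot = {"node": "edge-01", "cpu": 0.42, "mem": 0.63}
h = chain_hash(snapshot, prev)
assert verify(snapshot, prev, h)
```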

Long term discharge simulation using an Long Short-Term Memory(LSTM) and Multi Layer Perceptron(MLP) artificial neural networks: Forecasting on Oshipcheon watershed in Samcheok (장단기 메모리(LSTM) 및 다층퍼셉트론(MLP) 인공신경망 앙상블을 이용한 장기 강우유출모의: 삼척 오십천 유역을 대상으로)

  • Sung Wook An;Byng Sik Kim
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.206-206
    • /
    • 2023
  • As climate change driven by global warming increases average precipitation and evaporation, localized rainfall and higher rainfall intensity are becoming more likely. Because Korea's small land area and high population density make it highly sensitive to climate variability, watershed-scale water resource prediction and response measures suited to the Korean peninsula are needed. Water resource management requires long-term records of precipitation, runoff, and evaporation for a watershed; empirical formulas and physical rainfall-runoff models have traditionally been used, and recently artificial intelligence techniques such as deep learning have been introduced to account for extensibility and nonlinearity. In this study, daily meteorological observations for 2011-2020 were collected from five stations, ASOS (Donghae, Taebaek) and AWS (Samcheok, Singi, Dogye), and daily discharge at the Oshipcheon estuary for the same period was obtained from WAMIS. The meteorological inputs were constructed by applying Thiessen area ratios over the five stations, and potential evapotranspiration was estimated with the Angstrom & Hargreaves formulas. Three models were trained on (1) the meteorological data (daily precipitation, maximum temperature, maximum instantaneous wind speed, minimum temperature, mean wind speed, and mean temperature), (2) daily precipitation and potential evapotranspiration, and (3) daily precipitation minus potential evapotranspiration, respectively, and compared with observed discharge. The model trained on the full meteorological data performed best and was selected as the optimal model, and it was compared against the daily, monthly, and annual observed discharge time series. In addition, a Multi Layer Perceptron (MLP) ensemble model was built with the same training data to assess the applicability of artificial intelligence in the water resources field.
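
The Thiessen-weighted areal averaging step mentioned above can be illustrated as follows; the weights and rainfall values are made-up examples, not the study's actual Thiessen ratios for the five stations.

```python
# Basin-average daily rainfall as a Thiessen-weighted sum of station rainfall.
import numpy as np

# daily rainfall per station (mm), shape: (days, stations)
station_rain = np.array([
    [12.0, 10.5, 14.2, 9.8, 11.1],
    [ 0.0,  0.2,  0.0, 0.1,  0.0],
])
thiessen_weights = np.array([0.28, 0.22, 0.20, 0.17, 0.13])   # must sum to 1

areal_rain = station_rain @ thiessen_weights    # basin-average rainfall per day
print(areal_rain)                               # -> [11.619, 0.061]
```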

Boundary-enhanced SAR Water Segmentation using Adversarial Learning of Deep Neural Networks (적대적 학습 개념을 도입한 경계 강화 SAR 수체탐지 딥러닝 모델)

  • Hwisong Kim;Duk-jin Kim;Junwoo Kim;Seungwoo Lee
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.2-2
    • /
    • 2023
  • As climate change accelerates, the frequency and intensity of water-related disasters are becoming harder to predict, and demand for real-time flood monitoring is increasing. Synthetic aperture radar (SAR) can acquire imagery regardless of illumination and weather, so images can be obtained even during a flood event. Water body detection algorithms using SAR have been actively studied, and with advances in deep learning, CNNs now enable water detection with high accuracy. However, even when CNN-based water detection models achieve high quantitative accuracy during training, qualitative evaluation after inference shows reduced accuracy along boundaries and in small streams. Since boundaries and narrow streams are especially important information in flood monitoring, this loss of accuracy makes real-world application difficult. We therefore develop a boundary-enhanced water detection model based on adversarial learning to detect water bodies more finely and accurately. In adversarial learning, the two models of a generative adversarial network (GAN), the generator and the discriminator, are trained against each other so that higher accuracy can be achieved. Introducing this adversarial learning concept to water body detection for the first time, the generator learns to detect water boundaries and small streams so that its output resembles the ground truth labels, while the discriminator distinguishes label data from detection results based on boundary distance transform maps and SAR imagery. A loss function combination that accounts for both area and boundary was constructed so that boundaries are reinforced. To judge whether the proposed model accurately detects boundaries and small streams, the F1-score was used as a quantitative metric and qualitative evaluation through visual interpretation was also performed. The proposed boundary-enhanced adversarial water detection model was shown to capture fine details of water bodies that the existing U-Net model failed to detect.
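
An area-plus-boundary loss combination in the spirit described above might be sketched as follows; the specific losses, weighting, and use of the distance-transform map are assumptions rather than the paper's exact formulation.

```python
# Dice loss for the water-body area plus a boundary term that up-weights
# errors near the label boundary using a distance-transform map.
import torch

def dice_loss(pred, target, eps=1e-6):
    # pred, target: tensors of water probabilities / binary labels in [0, 1]
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def boundary_weighted_bce(pred, target, boundary_dist, eps=1e-6):
    # smaller distance to the label boundary -> larger weight
    weight = 1.0 / (1.0 + boundary_dist)
    bce = -(target * torch.log(pred + eps) + (1 - target) * torch.log(1 - pred + eps))
    return (weight * bce).mean()

def combined_loss(pred, target, boundary_dist, alpha=0.5):
    return alpha * dice_loss(pred, target) \
         + (1 - alpha) * boundary_weighted_bce(pred, target, boundary_dist)
```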
