• Title/Summary/Keyword: transformer-based models

Impedance design of tap changing auto transformer based LVRT/HVRT test device (탭 변환 단권변압기 기반 LVRT/HVRT 시험장비의 임피던스 설계)

  • Baek, Seung-Hyuk; Kim, Dong-Uk; Yoon, Young-Doo; Kim, Sungmin
    • Journal of IKEEE / v.24 no.1 / pp.216-224 / 2020
  • This paper proposes an impedance design method for a test device that evaluates Low Voltage Ride Through (LVRT) and High Voltage Ride Through (HVRT) functions. The LVRT/HVRT test device must be able to generate the fault voltage specified in the grid code for a certain period and to limit the magnitude of the fault current to the design specification. The impedance design method for the autotransformer is derived from an equivalent model of a tap-changing autotransformer during LVRT/HVRT operation. In addition, the tap impedance of the autotransformer is designed so that the various fault voltages required by the LVRT/HVRT test can be generated. To verify the validity of the proposed method, the design process for a 10 MVA LVRT/HVRT test device was carried out, and the design results were verified through simulation models.
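
For a rough sense of how tap impedance shapes the test voltage, the sketch below models an LVRT test circuit as a simple voltage divider: a series impedance limits the fault current and the shunt (tap) impedance sets the residual voltage at the device under test. The 22.9 kV grid level and all element values are illustrative assumptions, not the paper's design.

```python
# Minimal voltage-divider sketch of an LVRT test circuit (illustrative values only).
# The series impedance Z_series limits the fault current; the tap (shunt) impedance
# Z_tap sets the residual voltage seen by the equipment under test.

V_grid = 22.9e3 / 3**0.5        # phase voltage of a hypothetical 22.9 kV grid [V]
Z_series = complex(0.0, 2.0)    # series reactor impedance [ohm] (assumed)
Z_tap = complex(0.0, 0.5)       # shunt/tap impedance selected by the tap changer [ohm] (assumed)

# Residual voltage during the simulated dip, as a fraction of nominal
v_residual = abs(Z_tap / (Z_series + Z_tap))
# Fault current drawn from the grid while the shunt branch is engaged
i_fault = abs(V_grid / (Z_series + Z_tap))

print(f"residual voltage: {v_residual:.1%} of nominal")
print(f"fault current: {i_fault / 1e3:.2f} kA")
```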

Analysis of Resonant Characteristics in High Voltage Windings of Main Transformer for Railway Vehicle using EMTP (EMTP를 이용한 철도차량용 주변압기 고압권선의 공진특성 분석)

  • Jeong, Ki-Seok; Jang, Dong-Uk; Chung, Jong-Duk
    • Journal of the Korean Society for Railway / v.19 no.4 / pp.436-444 / 2016
  • The primary windings of the main transformer for rolling stock have several natural frequencies at which internal resonance can be excited by transient voltages induced on the high-voltage feeding line. Factory testing is limited in its ability to determine whether transient voltages of various shapes and durations can excite these resonances. This study presents the design of a high-voltage winding model and the simulation and analysis of its internal resonant characteristics, in terms of the initial voltage distribution and the voltage-frequency relationship, using the electromagnetic transients program (EMTP). Turn-based lumped parameters are calculated from the geometry data of the transformer, and sub-models, grouped by layer, are composed as a ladder network and implemented with the library functions of EMTP. Case studies show the layer-by-layer voltage-frequency characteristics obtained from a frequency sweep, as well as the voltage escalation and distribution observed in time-domain simulation.
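
As a simplified illustration of the ladder-network idea, the sketch below builds a uniform lumped LC ladder and computes its undamped natural frequencies from the eigenproblem Γv = ω²Cv. The section count and the per-section L and C values are placeholders, not the transformer's actual turn-based parameters.

```python
import numpy as np

# Uniform LC ladder: n free nodes, series inductance L_s between neighbouring nodes
# (both ends treated as grounded for the natural-frequency calculation) and shunt
# capacitance C_g from every node to ground. All values are placeholders.
n, L_s, C_g = 20, 1.0e-3, 1.0e-9

Gamma = np.zeros((n, n))                 # nodal inverse-inductance matrix
for i in range(n + 1):                   # series branches between nodes i-1 and i
    a, b = i - 1, i                      # node -1 and node n are the grounded ends
    if 0 <= a < n:
        Gamma[a, a] += 1.0 / L_s
    if 0 <= b < n:
        Gamma[b, b] += 1.0 / L_s
    if 0 <= a < n and 0 <= b < n:
        Gamma[a, b] -= 1.0 / L_s
        Gamma[b, a] -= 1.0 / L_s

# Undamped natural frequencies solve Gamma v = w^2 C v; with C = C_g * I this is
# an ordinary symmetric eigenproblem.
w = np.sqrt(np.linalg.eigvalsh(Gamma) / C_g)
print("lowest natural frequencies [kHz]:", np.round(np.sort(w)[:5] / (2 * np.pi) / 1e3, 1))
```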

Harmonic Analysis Model based on PSCAD/EMTDC

  • Lee, Han-Min; Lee, Chang-Mu; Jang, Gil-Soo; Kwon, Sae-Hyuk
    • KIEE International Transactions on Power Engineering / v.4A no.3 / pp.115-121 / 2004
  • This paper presents a model of an AC electric railway system using the PSCAD/EMTDC program. The model is composed of a Scott transformer, an autotransformer, the catenary, electric trains, and other elements. After obtaining models of the fundamental elements describing the AC electric railway system and its behavior, an actual AC electric railway system was analyzed and tested with a focus on the amplification of harmonic currents, in order to verify the proposed model. Both the simulation results from the proposed approach and the measurement data from the test are presented.
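
To illustrate why harmonic currents can be amplified in such a system, the sketch below performs a frequency scan of a highly simplified feeding-circuit equivalent (source/transformer inductance in parallel with a lumped catenary capacitance); a peak in the driving-point impedance marks the harmonic order at which injected train current is amplified. All element values are assumptions, not the measured system.

```python
import numpy as np

# Frequency scan of a simplified feeding-circuit equivalent (illustrative values):
# a source/transformer inductance in parallel with the catenary-to-ground capacitance.
f0   = 60.0        # fundamental frequency [Hz]
L_eq = 5e-3        # equivalent source + autotransformer inductance [H] (assumed)
R_eq = 0.5         # equivalent series resistance [ohm] (assumed)
C_eq = 2e-6        # lumped catenary capacitance [F] (assumed)

orders = np.arange(1, 61)
w = 2 * np.pi * f0 * orders
Z_L = R_eq + 1j * w * L_eq
Z_C = 1.0 / (1j * w * C_eq)
Z_parallel = Z_L * Z_C / (Z_L + Z_C)   # impedance seen by the train's harmonic source

peak = orders[np.argmax(np.abs(Z_parallel))]
print(f"parallel resonance near harmonic order {peak} "
      f"(|Z| = {np.abs(Z_parallel).max():.0f} ohm)")
```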

Temporal Fusion Transformers and Deep Learning Methods for Multi-Horizon Time Series Forecasting (Temporal Fusion Transformers와 심층 학습 방법을 사용한 다층 수평 시계열 데이터 분석)

  • Kim, InKyung; Kim, DaeHee; Lee, Jaekoo
    • KIPS Transactions on Software and Data Engineering / v.11 no.2 / pp.81-86 / 2022
  • Given that time series are used in various fields such as finance, IoT, and manufacturing, data-analysis methods that forecast time series accurately can increase operational efficiency. Among time-series analysis methods, multi-horizon forecasting provides a better understanding of the data because it can extract meaningful statistics and other characteristics of the entire series. Furthermore, time-series data with exogenous information can be predicted accurately with multi-horizon forecasting methods. However, traditional deep-learning models for time series do not account for the heterogeneity of their inputs. We propose an improved time-series forecasting method, the temporal fusion transformer, which combines multi-horizon forecasting with interpretable insights into temporal dynamics. Various real-world data, such as stock prices, fine dust concentrations, and electricity consumption, were considered in the experiments. The experimental results show that the temporal fusion transformer achieves better time-series forecasting performance than existing models.
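
As a minimal illustration of the multi-horizon data layout such a model consumes (past observations plus covariates known in advance, predicted over several future steps at once), the sketch below slices a toy hourly series into encoder/decoder windows. The window lengths and the hour-of-day covariate are arbitrary choices for the example, not the paper's setup.

```python
import numpy as np

def make_multi_horizon_windows(y, known_covariates, encoder_len=168, horizon=24):
    """Slice a series into (past inputs, known future inputs, multi-step targets).

    y                : (T,) observed target, e.g. hourly electricity consumption
    known_covariates : (T, k) inputs known in advance (hour of day, holiday flag, ...)
    Returns arrays shaped (N, encoder_len), (N, horizon, k), (N, horizon).
    """
    past, future_known, targets = [], [], []
    for t in range(encoder_len, len(y) - horizon + 1):
        past.append(y[t - encoder_len:t])
        future_known.append(known_covariates[t:t + horizon])
        targets.append(y[t:t + horizon])
    return np.stack(past), np.stack(future_known), np.stack(targets)

# Toy example: two years of hourly data with an hour-of-day covariate.
T = 2 * 365 * 24
y = np.sin(np.arange(T) * 2 * np.pi / 24) + 0.1 * np.random.randn(T)
hour = (np.arange(T) % 24)[:, None] / 23.0
X_past, X_future, Y = make_multi_horizon_windows(y, hour)
print(X_past.shape, X_future.shape, Y.shape)
```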

Realtime Detection of Benthic Marine Invertebrates from Underwater Images: A Comparison between YOLO and Transformer Models (수중영상을 이용한 저서성 해양무척추동물의 실시간 객체 탐지: YOLO 모델과 Transformer 모델의 비교평가)

  • Ganghyun Park; Suho Bak; Seonwoong Jang; Shinwoo Gong; Jiwoo Kwak; Yangwon Lee
    • Korean Journal of Remote Sensing / v.39 no.5_3 / pp.909-919 / 2023
  • Benthic marine invertebrates, the invertebrates living on the ocean floor, are an essential component of the marine ecosystem, but excessive proliferation of invertebrate grazers or harmful "pirate" organisms can damage the coastal fishery ecosystem. In this study, we compared and evaluated You Only Look Once version 7 (YOLOv7), the most widely used deep learning model for real-time object detection, and the detection transformer (DETR), a transformer-based model, using underwater images of benthic marine invertebrates from the coasts of South Korea. YOLOv7 achieved a mean average precision at an IoU threshold of 0.5 (mAP@0.5) of 0.899, while DETR achieved an mAP@0.5 of 0.862, which suggests that YOLOv7 is more appropriate for detecting objects of various sizes because it generates bounding boxes at multiple scales, which helps detect small objects. Both models ran at more than 30 frames per second (FPS), so real-time object detection from images provided by divers and underwater drones is expected to be possible. The proposed method can be used to prevent and restore damage to coastal fishery ecosystems, for example by controlling invertebrate grazers and creating sea forests to counter ocean desertification.
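
For readers unfamiliar with the metric, the sketch below shows the core of an mAP@0.5 computation: intersection-over-union between boxes and greedy matching of confidence-sorted predictions to ground truth at an IoU threshold of 0.5. The toy boxes are made up; a full mAP implementation would also sweep confidence thresholds and average over classes.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def match_at_05(pred_boxes, gt_boxes):
    """Greedy matching of confidence-sorted predictions to ground truth at IoU >= 0.5."""
    matched_gt, tp = set(), 0
    for p in pred_boxes:
        ious = [iou(p, g) for g in gt_boxes]
        best = int(np.argmax(ious)) if ious else -1
        if best >= 0 and ious[best] >= 0.5 and best not in matched_gt:
            matched_gt.add(best)
            tp += 1
    fp = len(pred_boxes) - tp
    fn = len(gt_boxes) - tp
    return tp, fp, fn

preds = [[10, 10, 50, 50], [60, 60, 90, 90]]   # toy predicted boxes
gts   = [[12, 12, 48, 52]]                      # toy ground-truth box
print(match_at_05(preds, gts))                  # (1, 1, 0)
```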

A Comparison of Image Classification System for Building Waste Data based on Deep Learning (딥러닝기반 건축폐기물 이미지 분류 시스템 비교)

  • Jae-Kyung Sung; Mincheol Yang; Kyungnam Moon; Yong-Guk Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.3 / pp.199-206 / 2023
  • This study utilizes deep learning algorithms to automatically classify construction waste into three categories: wood waste, plastic waste, and concrete waste. Two models were compared: VGG-16, a convolutional neural network for image classification, and ViT (Vision Transformer), a model adapted from NLP that processes an image as a sequence of patches. Image data for construction waste was collected by crawling images from search engines worldwide, and 3,000 images (1,000 per category) were obtained after excluding images that were difficult to distinguish with the naked eye or were duplicates that would interfere with the experiment. In addition, to improve the accuracy of the models, data augmentation was applied during training, yielding a total of 30,000 images. Despite the unstructured nature of the collected image data, the experimental results showed that VGG-16 achieved an accuracy of 91.5% and ViT an accuracy of 92.7%, which suggests the possibility of practical application to actual construction waste management. If object detection or semantic segmentation techniques are applied on top of this study, more precise classification will be possible even within a single image, resulting in more accurate waste classification.
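
A minimal sketch of the kind of fine-tuning setup the study describes is given below: a torchvision augmentation pipeline to expand a small crawled dataset, plus 3-class heads on VGG-16 and ViT-B/16. The folder layout, transform choices, magnitudes, and pretrained-weight names are assumptions, not the authors' exact recipe.

```python
import torch
import torchvision
from torchvision import transforms

# Augmentation pipeline of the kind used to expand a small crawled dataset
# (specific transforms and magnitudes are assumptions, not the paper's recipe).
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Three waste classes (wood, plastic, concrete); the folder layout is hypothetical.
dataset = torchvision.datasets.ImageFolder("waste_images/train", transform=train_tf)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Either backbone can then be fine-tuned with a 3-way classification head.
vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1")
vgg.classifier[6] = torch.nn.Linear(4096, 3)

vit = torchvision.models.vit_b_16(weights="IMAGENET1K_V1")
vit.heads.head = torch.nn.Linear(vit.heads.head.in_features, 3)
```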

Technical Trends in Hyperscale Artificial Intelligence Processors (초거대 인공지능 프로세서 반도체 기술 개발 동향)

  • W. Jeon; C.G. Lyuh
    • Electronics and Telecommunications Trends / v.38 no.5 / pp.1-11 / 2023
  • The emergence of generative hyperscale artificial intelligence (AI) has enabled new services, such as image-generating AI and conversational AI based on large language models. Such services are likely to attract large numbers of users, whose demand cannot be handled by conventional AI models. Furthermore, the exponential increase in training data, computation, and user demand for AI models has led to intensive hardware resource consumption, highlighting the need to develop domain-specific semiconductors for hyperscale AI. In this technical report, we describe development trends in hyperscale AI processor technologies pursued by domestic and foreign semiconductor companies, such as NVIDIA, Graphcore, Tesla, Google, Meta, SAPEON, FuriosaAI, and Rebellions.

Style-Based Transformer for Time Series Forecasting (시계열 예측을 위한 스타일 기반 트랜스포머)

  • Kim, Dong-Keon; Kim, Kwangsu
    • KIPS Transactions on Software and Data Engineering / v.10 no.12 / pp.579-586 / 2021
  • Time series forecasting refers to predicting future time information based on past time information. Accurate prediction is crucial because it is used for establishing strategies or making policy decisions in various fields. Recently, transformer models have been the main focus of research on time-series prediction. However, existing transformer models have an autoregressive structure in which each output is fed back as an input while the prediction sequence is generated, which lowers accuracy when predicting distant time points. This paper proposes a sequential decoding model based on a style-transformation technique to address these problems and produce more precise forecasts. In the proposed model, the content of past data is extracted by the transformer encoder and reflected in a style-based decoder to generate the predictive sequence. Unlike the decoder of a conventional autoregressive transformer, this structure outputs the entire prediction sequence at once, which allows more accurate prediction of distant time points. Forecasting experiments on various time-series datasets with different characteristics show that the proposed model achieves better prediction accuracy than existing time-series models.
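
To make the autoregressive vs. one-shot distinction concrete, the sketch below shows a toy transformer encoder whose non-autoregressive head emits the whole forecast horizon in a single forward pass, conditioned on learned horizon queries plus a summary of the encoded past. This is a simplification for illustration, not the paper's style-based decoder.

```python
import torch
import torch.nn as nn

class OneShotForecaster(nn.Module):
    """Illustrative encoder + non-autoregressive head: the whole horizon is emitted
    in one forward pass instead of feeding predictions back in step by step."""

    def __init__(self, d_model=64, horizon=24):
        super().__init__()
        self.input_proj = nn.Linear(1, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.horizon_queries = nn.Parameter(torch.randn(horizon, d_model))
        self.head = nn.Linear(d_model, 1)

    def forward(self, past):                        # past: (B, T, 1)
        memory = self.encoder(self.input_proj(past))
        content = memory.mean(dim=1, keepdim=True)  # summary of past "content"
        queries = self.horizon_queries.unsqueeze(0) + content
        return self.head(queries).squeeze(-1)       # (B, horizon), all steps at once

model = OneShotForecaster()
y_hat = model(torch.randn(8, 96, 1))
print(y_hat.shape)                                  # torch.Size([8, 24])
```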

Zero-anaphora resolution in Korean based on deep language representation model: BERT

  • Kim, Youngtae; Ra, Dongyul; Lim, Soojong
    • ETRI Journal / v.43 no.2 / pp.299-312 / 2021
  • High performance in the task of zero anaphora resolution (ZAR) is necessary for completely understanding texts in Korean, Japanese, Chinese, and various other languages. Deep-learning-based models are being employed for building ZAR systems, owing to the success of deep learning in recent years. However, the objective of building a high-quality ZAR system is far from being achieved even with these models. To enhance current ZAR techniques, we fine-tuned pretrained bidirectional encoder representations from transformers (BERT). Notably, BERT is a general language representation model that enables systems to utilize deep bidirectional contextual information in natural language text. It builds extensively on the attention mechanism of the sequence-transduction model Transformer. In our model, classification is performed simultaneously for all words in the input word sequence to decide whether each word can be an antecedent. We pursue end-to-end learning by disallowing any use of hand-crafted or dependency-parsing features. Experimental results show that, compared with other models, our approach can significantly improve ZAR performance.
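
A minimal sketch of BERT-based token classification in the spirit of the paper is shown below: every token in the input is scored as antecedent / not antecedent. The klue/bert-base checkpoint, the binary label set, and the toy sentence are assumptions for illustration; the paper's actual model, features, and training setup differ.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Token-level classification sketch: each token is scored as antecedent (1) or not (0).
# The checkpoint name and label set are assumptions, not the paper's configuration.
name = "klue/bert-base"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForTokenClassification.from_pretrained(name, num_labels=2)

sentence = "철수는 밥을 먹었다. 그리고 학교에 갔다."   # zero subject in the second clause
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits                 # (1, seq_len, 2)

pred = logits.argmax(dim=-1)[0]
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(list(zip(tokens, pred.tolist())))
```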

Empirical Study for Automatic Evaluation of Abstractive Summarization by Error-Types (오류 유형에 따른 생성요약 모델의 본문-요약문 간 요약 성능평가 비교)

  • Seungsoo Lee; Sangwoo Kang
    • Korean Journal of Cognitive Science / v.34 no.3 / pp.197-226 / 2023
  • Abstractive text summarization is a natural language processing task that generates a short summary while preserving the content of a long text. ROUGE, a lexical-overlap metric, is widely used to evaluate summarization models on abstractive summarization benchmarks. However, even for models that score highly, studies report that about 30% of generated summaries are still inconsistent with the source text. This paper proposes a methodology for evaluating the performance of a summarization model without a reference summary. AggreFact, a human-annotated dataset, classifies the types of errors made by neural summarization models. Among all the test settings, the two cases of generated summaries and of errors occurring throughout the summary showed the highest correlations. We observed that the proposed evaluation score correlates strongly with models fine-tuned from BART and PEGASUS, which are pretrained large-scale Transformer models.
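
For reference, the sketch below shows what a lexical-overlap score like ROUGE-1 boils down to: unigram overlap between a generated summary and a reference summary, combined into an F1 score. Real ROUGE toolkits add stemming, stopword handling, and n-gram/longest-common-subsequence variants; this is only the core idea.

```python
from collections import Counter

def rouge1_f(summary, reference):
    """Minimal ROUGE-1 F1: unigram overlap between a generated summary and a
    reference summary (whitespace tokenization only)."""
    sum_counts = Counter(summary.split())
    ref_counts = Counter(reference.split())
    overlap = sum((sum_counts & ref_counts).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(sum_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the cat sat on the mat", "a cat sat on a mat"))  # ~0.667
```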