Title/Summary/Keyword: transformer-based models

Deep Learning Models for Fabric Image Defect Detection: Experiments with Transformer-based Image Segmentation Models (직물 이미지 결함 탐지를 위한 딥러닝 기술 연구: 트랜스포머 기반 이미지 세그멘테이션 모델 실험)

  • Lee, Hyun Sang; Ha, Sung Ho; Oh, Se Hwan
    • The Journal of Information Systems / v.32 no.4 / pp.149-162 / 2023
  • Purpose: In the textile industry, fabric defects significantly impact product quality and consumer satisfaction. This research seeks to enhance defect detection by developing a transformer-based deep learning image segmentation model that learns high-dimensional image features, overcoming the limitations of traditional image classification methods. Design/methodology/approach: This study utilizes the ZJU-Leaper dataset, which includes defects such as presses, stains, warps, and scratches across various fabric patterns, to develop a fabric defect detection model. The dataset was built from the ZJU-Leaper defect labels and image files, and experiments were conducted with deep learning image segmentation models including DeepLabv3, SegFormer-B0, SegFormer-B1, and DINOv2. Findings: The experimental results indicate that the SegFormer-B1 model achieved the highest performance, with an mIoU of 83.61% and a pixel F1 score of 81.84%. It was the most sensitive of the tested models in detecting fabric defect areas, and detailed analysis of its inferences showed accurate predictions of diverse defects, such as stains and fine scratches, within intricate fabric designs.
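
The SegFormer-B1 setup described above can be approximated with the Hugging Face transformers library. A minimal inference sketch, assuming an ImageNet-pretrained MiT-B1 backbone checkpoint and a two-class (background/defect) label map rather than the paper's exact configuration:

```python
# Hypothetical sketch of SegFormer-B1 for binary fabric-defect segmentation;
# the checkpoint name and label map are assumptions, not the paper's code.
import torch
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

processor = SegformerImageProcessor(do_reduce_labels=False)
model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b1",               # ImageNet-pretrained MiT-B1 backbone
    num_labels=2,                  # 0 = background, 1 = defect
)

def predict_defect_mask(image, size):
    """Return a per-pixel defect mask (H, W) for a PIL image."""
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits          # (1, 2, H/4, W/4)
    # SegFormer predicts at 1/4 resolution; upsample before the argmax.
    logits = torch.nn.functional.interpolate(
        logits, size=size, mode="bilinear", align_corners=False)
    return logits.argmax(dim=1)[0]
```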

Intrusion Detection System based on Packet Payload Analysis using Transformer

  • Woo-Seung Park; Gun-Nam Kim; Soo-Jin Lee
    • Journal of the Korea Society of Computer and Information / v.28 no.11 / pp.81-87 / 2023
  • Intrusion detection systems that learn the metadata of network packets have been proposed recently. However, these approaches require time to analyze packets and generate metadata for model training, plus additional time to pre-process that metadata before learning. Moreover, a model trained on specific metadata cannot detect intrusions from the original packets flowing into the network as they are. To address this problem, this paper proposes a natural language processing-based intrusion detection system that detects intrusions by learning the packet payload as a single sentence, without an additional conversion process. To verify the performance of our approach, we utilized the UNSW-NB15 dataset and Transformer models. First, the PCAP files of the dataset were labeled, and then two Transformer models (BERT and DistilBERT) were trained directly on the payload sentences to analyze the detection performance. The experimental results showed binary classification accuracies of 99.03% and 99.05%, respectively, similar or superior to the detection performance of techniques proposed in previous studies. For multi-class classification, the models achieved 86.63% and 86.36%, respectively, an even larger improvement over prior work.
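
The payload-as-a-sentence approach lends itself to a short sketch with the same model family. The hex rendering of payload bytes and the binary label set below are illustrative assumptions, not the paper's exact preprocessing:

```python
# Hedged sketch: classify a raw packet payload as one "sentence" with DistilBERT.
import torch
from transformers import DistilBertTokenizerFast, DistilBertForSequenceClassification

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)     # binary: normal vs. intrusion

def classify_payload(payload: bytes) -> int:
    # Render the bytes as a space-separated hex sentence, e.g. "47 45 54 20 ..."
    sentence = " ".join(f"{b:02x}" for b in payload)
    inputs = tokenizer(sentence, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1))            # 0 = normal, 1 = intrusion

# Untrained head, so the label is arbitrary until fine-tuned on labeled PCAPs:
print(classify_payload(b"GET /index.html HTTP/1.1"))
```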

Three-phase Transformer Model and Parameter Estimation for ATP

  • Cho Sung-Don
    • Journal of Electrical Engineering and Technology / v.1 no.3 / pp.302-307 / 2006
  • The purpose of this paper is to develop an improved three-phase transformer model for ATP, together with parameter estimation methods that can efficiently utilize the limited available information, such as factory test reports. Improved, topologically correct, duality-based models are developed for three-phase autotransformers with shell-form cores. The obstacle to implementing detailed models is the lack of complete and reliable data; therefore, parameter estimation methods are developed to determine the parameters of a given model when the available information is incomplete. The transformer nameplate data is required, and the relative physical dimensions of the core are estimated. The models include a separate representation of each segment of the core, covering core hysteresis, the λ-i saturation characteristic, and core loss.
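
As a simplified illustration of estimating parameters from limited factory test data, the textbook single-phase open-circuit and short-circuit relations are sketched below; the paper's topologically correct, duality-based three-phase model is considerably more detailed, and the numbers here are placeholders:

```python
# Per-phase equivalent-circuit estimation from standard test reports (sketch).
import math

def from_open_circuit(v_oc, i_oc, p_oc):
    """Core-loss resistance R_c and magnetizing reactance X_m, in ohms."""
    q_oc = math.sqrt((v_oc * i_oc) ** 2 - p_oc ** 2)   # reactive power
    return v_oc ** 2 / p_oc, v_oc ** 2 / q_oc

def from_short_circuit(v_sc, i_sc, p_sc):
    """Series (leakage) resistance R_eq and reactance X_eq, in ohms."""
    z_eq = v_sc / i_sc
    r_eq = p_sc / i_sc ** 2
    return r_eq, math.sqrt(z_eq ** 2 - r_eq ** 2)

r_c, x_m = from_open_circuit(v_oc=230.0, i_oc=0.45, p_oc=30.0)
r_eq, x_eq = from_short_circuit(v_sc=13.2, i_sc=8.7, p_sc=65.0)
print(f"R_c={r_c:.0f}, X_m={x_m:.0f}, R_eq={r_eq:.3f}, X_eq={x_eq:.3f}")
```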

Prediction of multipurpose dam inflow utilizing catchment attributes with LSTM and transformer models (유역정보 기반 Transformer및 LSTM을 활용한 다목적댐 일 단위 유입량 예측)

  • Kim, Hyung Ju; Song, Young Hoon; Chung, Eun Sung
    • Journal of Korea Water Resources Association / v.57 no.7 / pp.437-449 / 2024
  • Rainfall-runoff prediction studies that use deep learning while considering catchment attributes have been gaining attention. In this study, we selected two models: the Transformer, which is suited to large-scale data training through its self-attention mechanism, and the LSTM-based multi-state-vector sequence-to-sequence (LSTM-MSV-S2S) model with an encoder-decoder structure. Both models were constructed to incorporate catchment attributes and to predict the inflow of 10 multi-purpose dam watersheds in South Korea. The experimental design compared three training methods: Single-basin Training (ST), Pretraining (PT), and Pretraining-Finetuning (PT-FT). The input data comprised 10 selected watershed attributes along with meteorological data. The results showed that the Transformer outperformed the LSTM-MSV-S2S model under the PT and PT-FT methods, with PT-FT yielding the highest performance; the LSTM-MSV-S2S model performed better under the ST method but worse under PT and PT-FT. Additionally, the embedding-layer activation vectors and the raw catchment attributes were used to cluster watersheds and to analyze whether the models learned the similarities between them. The Transformer showed improved performance among watersheds with similar activation vectors, indicating that information from other watersheds seen during pretraining enhances prediction performance. This study identified suitable models and training methods for each multi-purpose dam and highlights the necessity of building deep learning models for domestic watersheds with the PT and PT-FT methods.
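
One way to condition a sequence model on static catchment attributes, as both models above do, is to embed the attribute vector and concatenate it to every timestep of the meteorological input. A minimal PyTorch sketch with placeholder dimensions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class AttributeConditionedTransformer(nn.Module):
    def __init__(self, n_met=5, n_attr=10, d_model=64, n_layers=2):
        super().__init__()
        self.attr_embed = nn.Linear(n_attr, d_model // 2)  # static attributes
        self.met_embed = nn.Linear(n_met, d_model // 2)    # daily forcings
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)                  # next-day inflow

    def forward(self, met_seq, attrs):
        # met_seq: (B, T, n_met); attrs: (B, n_attr)
        a = self.attr_embed(attrs).unsqueeze(1).expand(-1, met_seq.size(1), -1)
        x = torch.cat([self.met_embed(met_seq), a], dim=-1)  # (B, T, d_model)
        return self.head(self.encoder(x)[:, -1])             # last-step readout

model = AttributeConditionedTransformer()
inflow = model(torch.randn(8, 30, 5), torch.randn(8, 10))    # (8, 1)
```

In these terms, PT corresponds to training one such model on data pooled across all basins, and PT-FT to continuing that training on the target basin alone.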

Lightening of Human Pose Estimation Algorithm Using MobileViT and Transfer Learning

  • Kunwoo Kim; Jonghyun Hong; Jonghyuk Park
    • Journal of the Korea Society of Computer and Information / v.28 no.9 / pp.17-25 / 2023
  • In this paper, we propose a MobileViT-based model that performs human pose estimation with fewer parameters and faster inference. The base model achieves its light weight through a structure that combines features of convolutional neural networks with features of the Vision Transformer. The Transformer, the key mechanism in this study, has become increasingly influential as Transformer-based models outperform convolutional neural network-based models in computer vision. Likewise, in human pose estimation, the Vision Transformer-based ViTPose holds the best performance on all major human pose estimation benchmarks, such as COCO, OCHuman, and MPII. However, because the Vision Transformer has a heavy model structure with a large number of parameters and requires a relatively large amount of computation, it is expensive for users to train. The base model compensates for the Vision Transformer's weak inductive bias, which would otherwise have to be made up for with large amounts of computation, by extracting local representations through a convolutional structure. Finally, the proposed model achieves a mean average precision of 0.694 on the MS COCO benchmark with 3.28 GFLOPs and 9.72 million parameters, roughly 1/5 the computation and 1/9 the parameters of ViTPose, respectively.
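
For reference, the parameter budget of a MobileViT backbone can be checked directly with timm; the model name below is timm's MobileViT-S, which may not be the exact variant the paper builds on:

```python
import timm

model = timm.create_model("mobilevit_s", pretrained=False)
n_params = sum(p.numel() for p in model.parameters())
print(f"MobileViT-S backbone: {n_params / 1e6:.2f}M parameters")
```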

Multi-Secondary Transformer: A Modeling Technique for Simulation - II

  • Patel, A.; Singh, N.P.; Gupta, L.N.; Raval, B.; Oza, K.; Thakar, A.; Parmar, D.; Dhola, H.; Dave, R.; Gupta, V.; Gajjar, S.; Patel, P.J.; Baruah, U.K.
    • Journal of international Conference on Electrical Machines and Systems / v.3 no.1 / pp.78-82 / 2014
  • Power transformers with more than one secondary winding are not uncommon in industrial applications, but a new class of applications, in which a very large number of independent secondaries is used, is becoming popular in controlled converters for medium- and high-voltage applications. Cascaded H-bridge medium-voltage drives and Pulse Step Modulation (PSM)-based high-voltage power supplies are such applications. Regulated high-voltage power supplies (Fig. 1) in the 35-100 kV, 5-10 MW output range with very fast (μs-order) dynamics use such transformers and are widely used in fusion research. Here, the series connection of isolated voltage sources switched by conventional semiconductor devices is achieved either with a large number of separate transformers or with a single multi-secondary transformer unit. Naturally, a transformer with many secondary windings (~40) on a single core is the preferred solution for space and cost reasons. For the design and simulation analysis of such a power supply, the model of a multi-secondary transformer poses a special problem for circuit analysis software, as many simulation packages provide transformer models with only a limited number (3-6) of secondary windings. Multi-secondary transformer models with three different schemes are available; a comparison of test results from a practical multi-secondary transformer shows that a simulation model using a magnetic component describes the behavior closest to the observed test results. Earlier models assumed the magnetizing inductance in a linear, lossless core model, although the actual core is a saturable core made of CRGO steel laminations. This article discusses a more detailed representation of the flux-coupled magnetic model with saturable core properties, to simulate actual transformers very close to their observed parameters in tests and actual usage.
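
The effect of the saturable core behavior that the improved model represents explicitly can be illustrated with a toy numpy computation: under sinusoidal flux, an odd-polynomial λ-i curve (placeholder coefficients, not CRGO lamination data) produces the sharply peaked magnetizing current that a linear core model misses:

```python
import numpy as np

f = 50.0                                      # supply frequency, Hz
t = np.linspace(0.0, 2.0 / f, 2001)           # two cycles
lam = 1.1 * np.sin(2 * np.pi * f * t)         # per-unit flux linkage
                                              # (integral of a sinusoidal voltage)
def i_mag(lam):
    # Odd polynomial lambda-i curve: linear region plus a sharp saturation knee.
    return 0.02 * lam + 0.5 * lam ** 9

i = i_mag(lam)
print(f"peak magnetizing current {i.max():.2f} pu at peak flux {lam.max():.2f} pu")
```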

Performance Evaluation of Efficient Vision Transformers on Embedded Edge Platforms (임베디드 엣지 플랫폼에서의 경량 비전 트랜스포머 성능 평가)

  • Minha Lee; Seongjae Lee; Taehyoun Kim
    • IEMEK Journal of Embedded Systems and Applications / v.18 no.3 / pp.89-100 / 2023
  • Recently, on-device artificial intelligence (AI) solutions using mobile and embedded edge devices have emerged in various fields, such as computer vision, to address network traffic burdens, low-energy operation, and security problems. Although vision transformer deep learning models have outperformed conventional convolutional neural network (CNN) models in computer vision, they require more computation and more parameters than CNN models, so they are not directly applicable to embedded edge devices with limited hardware resources. Many researchers have proposed model compression methods and lightweight architectures for vision transformers; however, only a few studies have evaluated how such compression techniques affect performance. To address this gap, this paper presents a performance evaluation of vision transformers on embedded platforms. We investigated the behavior of three vision transformers: DeiT, LeViT, and MobileViT. Each model's performance was evaluated by accuracy and inference time on edge devices using the ImageNet dataset. We assessed the effects of quantization on latency improvement and accuracy degradation by profiling the proportion of response time occupied by major operations, and we additionally evaluated each model on GPU- and EdgeTPU-based edge devices. In our experiments, LeViT performed best on CPU-based edge devices, DeiT-small showed the largest performance improvement on GPU-based edge devices, and only the MobileViT models improved on the EdgeTPU. Summarizing the profiling results, the degree of improvement of each vision transformer model depended strongly on the proportion of its operations that could be optimized on the target edge device. In short, to apply vision transformers in on-device AI solutions, both a proper composition of operations and optimizations specific to the target edge device must be considered.
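
The kind of post-training quantization evaluated above can be reproduced in spirit with PyTorch's dynamic quantization, which converts the Linear layers that dominate a vision transformer's compute to int8; the exact method and toolchain used in the study may differ:

```python
import torch
import timm

model = timm.create_model("deit_small_patch16_224", pretrained=False).eval()
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)   # int8 weights for all Linears

x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    print(quantized(x).shape)                      # torch.Size([1, 1000])
```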

Dual-scale BERT using multi-trait representations for holistic and trait-specific essay grading

  • Minsoo Cho; Jin-Xia Huang; Oh-Woog Kwon
    • ETRI Journal / v.46 no.1 / pp.82-95 / 2024
  • As automated essay scoring (AES) has progressed from handcrafted techniques to deep learning, holistic scoring capabilities have emerged. However, trait-specific assessment remains a challenge because earlier methods lacked the depth to model the dual assessment of holistic and multi-trait tasks. To overcome this challenge, we explore providing comprehensive feedback while modeling the interconnections between holistic and trait representations. We introduce the DualBERT-Trans-CNN model, which combines transformer-based representations with a novel dual-scale bidirectional encoder representations from transformers (BERT) encoding approach at the document level. By explicitly leveraging multi-trait representations in a multi-task learning (MTL) framework, DualBERT-Trans-CNN emphasizes the interrelation between holistic and trait-based score predictions, aiming for improved accuracy. For validation, we conducted extensive tests on the ASAP++ and TOEFL11 datasets. Against models in the same MTL setting, ours showed a 2.0% increase in holistic scoring performance. Additionally, compared with single-task learning (STL) models, ours demonstrated a 3.6% improvement in average multi-trait performance on the ASAP++ dataset.
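
The holistic-plus-traits setup can be sketched as a shared encoder with one head per task, trained under a joint loss. This is a plain MTL baseline with an assumed trait count; the paper's DualBERT-Trans-CNN adds dual-scale document encoding and a Trans-CNN stack on top:

```python
import torch
import torch.nn as nn
from transformers import BertModel

class MultiTraitScorer(nn.Module):
    def __init__(self, n_traits=4):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        self.holistic = nn.Linear(hidden, 1)                  # holistic score head
        self.traits = nn.ModuleList(
            nn.Linear(hidden, 1) for _ in range(n_traits))    # one head per trait

    def forward(self, input_ids, attention_mask):
        h = self.bert(input_ids, attention_mask=attention_mask).pooler_output
        return self.holistic(h), torch.cat([t(h) for t in self.traits], dim=-1)

def mtl_loss(hol_pred, trait_pred, hol_true, trait_true, alpha=0.5):
    mse = nn.functional.mse_loss
    return mse(hol_pred.squeeze(-1), hol_true) + alpha * mse(trait_pred, trait_true)
```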

A Study on Lightweight Transformer Based Super Resolution Model Using Knowledge Distillation (지식 증류 기법을 사용한 트랜스포머 기반 초해상화 모델 경량화 연구)

  • Dong-hyun Kim; Dong-hun Lee; Aro Kim; Vani Priyanka Galia; Sang-hyo Park
    • Journal of Broadcast Engineering / v.28 no.3 / pp.333-336 / 2023
  • Recently, the transformer model used in natural language processing has also been applied to image super-resolution, showing good performance. However, these transformer-based models are complex, have many learnable parameters, and require substantial hardware resources, which makes them difficult to use on small mobile devices. Therefore, in this paper, we propose a knowledge distillation technique that can effectively reduce the size of a transformer-based super-resolution model. Experiments confirmed that, by applying the proposed technique to a student model with a reduced number of transformer blocks, performance similar to or higher than that of the teacher model can be obtained.
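
The distillation objective reduces to a few lines: the student, built with fewer transformer blocks, is trained against both the ground-truth high-resolution image and the frozen teacher's reconstruction. The loss form and weighting below are illustrative assumptions, not the paper's exact recipe:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_sr, teacher_sr, hr, alpha=0.5):
    """L1 to the ground truth plus L1 to the teacher's output."""
    return F.l1_loss(student_sr, hr) + alpha * F.l1_loss(student_sr, teacher_sr)

# Training step sketch (teacher frozen):
#   with torch.no_grad():
#       teacher_sr = teacher(lr_batch)
#   loss = distillation_loss(student(lr_batch), teacher_sr, hr_batch)
```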

Transformer-Based MUM-T Situation Awareness: Agent Status Prediction (트랜스포머 기반 MUM-T 상황인식 기술: 에이전트 상태 예측)

  • Jaeuk Baek; Sungwoo Jun; Kwang-Yong Kim; Chang-Eun Lee
    • The Journal of Korea Robotics Society / v.18 no.4 / pp.436-443 / 2023
  • With the advancement of robot intelligence, the concept of manned-unmanned teaming (MUM-T) has garnered considerable attention in military research. In this paper, we present a transformer-based architecture for predicting the health status of agents, using a multi-head attention mechanism to effectively capture the dynamic interaction between friendly and enemy forces. To this end, we first introduce a framework for generating a dataset of battlefield situations. These situations are simulated on a virtual simulator, allowing a wide range of scenarios without restrictions on the number of agents, their missions, or their actions. We then define the elements crucial for describing the battlefield, with specific emphasis on agent status. The battlefield data is fed into the transformer architecture, with classification heads on top of the transformer encoder layers to categorize each agent's health status. We conduct ablation tests to assess the significance of various factors in determining agent health status in battlefield scenarios. Under 3-fold cross-validation, the experimental results demonstrate that our model achieves a prediction accuracy of over 98%. In addition, the performance of our model is compared with that of other models, such as a convolutional neural network (CNN) and a multilayer perceptron (MLP), and the results establish the superiority of our model.
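
The architectural pattern described above reduces to a familiar sketch: per-agent state vectors form the token sequence, multi-head attention mixes information across friendly and enemy agents, and a classification head predicts each agent's status. All dimensions and the three-class status set below are assumptions:

```python
import torch
import torch.nn as nn

class AgentStatusPredictor(nn.Module):
    def __init__(self, feat_dim=16, d_model=64, n_status=3):
        super().__init__()
        self.embed = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_status)   # per-agent status head

    def forward(self, agents):                     # agents: (B, n_agents, feat_dim)
        return self.head(self.encoder(self.embed(agents)))  # (B, n_agents, n_status)

logits = AgentStatusPredictor()(torch.randn(2, 12, 16))     # 12 agents per scene
```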