• Title/Summary/Keyword: 미세 조정 (fine-tuning)

Search Results: 316

Fine tuning of wavelength for the internal wavelength locker module at 50 GHz composed of the photo-diode array block with the multi-channel tunable laser diodes in DWDM application (DWDM용 다채널 파장 가변 레이저 다이오드 모듈을 위한 다수개의 광 수신 소자를 갖는 50 GHz 내장형 파장 안정화 모듈의 파장 미세 조정)

  • 박흥우;윤호경;최병석;이종현;최광성;엄용성;문종태
    • Korean Journal of Optics and Photonics
    • /
    • v.13 no.5
    • /
    • pp.384-389
    • /
    • 2002
  • A new design of the wavelength locking module for DWDM applications was investigated in the present research. Generally, only one etalon photodiode is used in internal/external wavelength locking systems. For an internal wavelength locking module at 50 GHz, an angle-tuning method for the etalon is commonly applied. However, the alignment process of the etalon with the angle-tuning method is limited because the locking performance is extremely sensitive to changes in the tilting angle. From an optical viewpoint, the alignment tolerance of the locker module with the etalon PD array block was good, and precise tuning of the wavelength was possible. The free spectral range (FSR) and the peak wavelength shift as functions of the tilting angle of the locker module were investigated. For the present module, the optimized initial tilting angle was obtained experimentally.
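For reference, the textbook relations for a solid etalon of refractive index n, thickness L, and internal propagation angle θ (standard formulas, not notation taken from the paper) connect the transmission peaks and the free spectral range to the tilt:

    m · λ_peak = 2 n L cos θ           (resonance condition for order m)
    FSR = c / (2 n L cos θ)            (free spectral range in frequency)

Tilting the etalon reduces cos θ, which shifts the peak wavelengths and slightly changes the FSR; this is why the locking point is so sensitive to the tilt angle.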

FinBERT Fine-Tuning for Sentiment Analysis: Exploring the Effectiveness of Datasets and Hyperparameters (감성 분석을 위한 FinBERT 미세 조정: 데이터 세트와 하이퍼파라미터의 효과성 탐구)

  • Jae Heon Kim;Hui Do Jung;Beakcheol Jang
    • Journal of Internet Computing and Services
    • /
    • v.24 no.4
    • /
    • pp.127-135
    • /
    • 2023
  • This research paper explores the application of FinBERT, a BERT variant pre-trained on the financial domain, for sentiment analysis in the financial domain, focusing on the process of identifying suitable training data and hyperparameters. Our goal is to offer a comprehensive guide to effectively utilizing the FinBERT model for accurate sentiment analysis by employing various datasets and fine-tuning hyperparameters. We outline the architecture and workflow of the proposed approach for fine-tuning the FinBERT model, emphasizing the effect of the various datasets and hyperparameters on sentiment analysis performance. Additionally, we verify the reliability of GPT-3 as an annotator by using it for sentiment labeling tasks. Our results show that the fine-tuned FinBERT model excels across a range of datasets and that the optimal combination is a learning rate of 5e-5 and a batch size of 64, which performs consistently well across all datasets. Furthermore, based on the significant performance improvement of the FinBERT model on our general-domain Twitter data compared to our general-domain news data, we question whether the model should be further pre-trained only on financial news data. We simplify the complex process of determining the optimal approach to the FinBERT model and provide guidelines for selecting additional training datasets and hyperparameters within the fine-tuning process of financial sentiment analysis models.
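A hedged sketch of how such a fine-tuning run could be wired up with the Hugging Face transformers Trainer, using the best hyperparameters reported above (learning rate 5e-5, batch size 64); the checkpoint name ProsusAI/finbert, the epoch count, and the tiny in-memory dataset are illustrative assumptions, not the authors' actual setup:

    # Illustrative FinBERT fine-tuning sketch with the hyperparameters reported in the abstract.
    from datasets import Dataset
    from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                              Trainer, TrainingArguments)

    model_name = "ProsusAI/finbert"          # assumed FinBERT checkpoint (3 sentiment classes)
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

    # Toy stand-in for the labeled financial sentences (0 = negative, 1 = neutral, 2 = positive).
    data = Dataset.from_dict({
        "text": ["Quarterly profit rose 12 percent year over year.",
                 "The company issued a profit warning for Q3.",
                 "The board meets next Tuesday."],
        "labels": [2, 0, 1],
    })

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

    data = data.map(tokenize, batched=True)

    args = TrainingArguments(
        output_dir="finbert-sentiment",
        learning_rate=5e-5,                  # best value reported in the abstract
        per_device_train_batch_size=64,      # best value reported in the abstract
        num_train_epochs=3,                  # epoch count not stated in the abstract; assumed
    )

    Trainer(model=model, args=args, train_dataset=data).train()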

Privacy-Preserving Language Model Fine-Tuning Using Offsite Tuning (프라이버시 보호를 위한 오프사이트 튜닝 기반 언어모델 미세 조정 방법론)

  • Jinmyung Jeong;Namgyu Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.4
    • /
    • pp.165-184
    • /
    • 2023
  • Recently, deep learning analysis of unstructured text data using language models such as Google's BERT and OpenAI's GPT has shown remarkable results in various applications. Most language models learn generalized linguistic information from pre-training data and then update their weights for downstream tasks through a fine-tuning process. However, concerns have been raised that privacy may be violated when these language models are used: data privacy may be violated when the data owner provides large amounts of data to the model owner for fine-tuning of the language model, and conversely, when the model owner discloses the entire model to the data owner, the structure and weights of the model are revealed, which may violate the privacy of the model. The concept of offsite tuning was recently proposed to fine-tune language models while protecting privacy in such situations, but that study has the limitation of not providing a concrete way to apply the proposed methodology to text classification models. In this study, we propose a concrete method for applying offsite tuning with an additional classifier to protect the privacy of both the model and the data when performing multi-class classification fine-tuning on Korean documents. To evaluate the performance of the proposed methodology, we conducted experiments on about 200,000 Korean documents from five major fields (ICT, electrical, electronic, mechanical, and medical) provided by AIHub, and found that the proposed plug-in model outperforms the zero-shot model and the offsite model in terms of classification accuracy.
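A toy PyTorch sketch of the offsite-tuning split described above (an illustration of the general idea, not the authors' implementation): the model owner hands out only the outer blocks as trainable adapters plus a smaller frozen emulator of the middle blocks, and the data owner trains the adapters together with the added classification head on private documents:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy 12-block transformer stack standing in for the full language model (model-owner side).
    blocks = nn.ModuleList([nn.TransformerEncoderLayer(d_model=256, nhead=4, batch_first=True)
                            for _ in range(12)])

    # Offsite-tuning split (illustrative): the two bottom and two top blocks are handed out as
    # trainable adapters; the eight middle blocks are replaced by a smaller frozen "emulator"
    # (here simply every other middle block).
    adapters_bottom = nn.ModuleList([blocks[0], blocks[1]])
    adapters_top = nn.ModuleList([blocks[10], blocks[11]])
    emulator = nn.ModuleList([blocks[i] for i in range(2, 10, 2)])
    for p in emulator.parameters():
        p.requires_grad = False                     # the data owner never updates the emulator

    classifier = nn.Linear(256, 5)                  # added head for the 5 document fields

    def forward(x):
        for blk in adapters_bottom:
            x = blk(x)
        for blk in emulator:
            x = blk(x)
        for blk in adapters_top:
            x = blk(x)
        return classifier(x.mean(dim=1))            # mean-pool tokens, then classify

    trainable = (list(adapters_bottom.parameters()) + list(adapters_top.parameters())
                 + list(classifier.parameters()))
    optimizer = torch.optim.AdamW(trainable, lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    # One dummy training step on the data owner's private batch (pre-embedded documents).
    tokens = torch.randn(8, 32, 256)
    labels = torch.randint(0, 5, (8,))
    loss = loss_fn(forward(tokens), labels)
    loss.backward()
    optimizer.step()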

Microdisplacement Control Microactuator (미세거리 조정 마이크로 액투에이터)

  • Park, Se-Kwang
    • Proceedings of the KIEE Conference
    • /
    • 1989.07a
    • /
    • pp.345-348
    • /
    • 1989
  • A small linear incremental device, called a microworm, is introduced, and this paper explains the working principle, design considerations, and theoretical force analysis of the microworm. A fluid-control microvalve and a piezoelectric motor are explored as applications of the microworm to assess its feasibility.


A Study on Automatic Alignment System based on Object Detection and Homography Estimation (객체 탐지 및 호모그래피 추정을 이용한 안저영상 자동 조정체계 시스템 연구)

  • In, Sanggyu;Beom, Junghyun;Choo, Hyunseung
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2021.05a
    • /
    • pp.401-403
    • /
    • 2021
  • The proposed system holds a paired dataset of conventional fundus images and ultra-widefield fundus images taken from the same patients, and aims to automate the process of matching image size and resolution and fine-adjusting the positions of the macula, optic disc, and blood vessels. This process consists of a scaling stage, in which each image is cropped around the macula so that the image sizes match, and a warping stage, in which the pair of macula-centered crops is fine-adjusted so that the macula, optic disc, and vessels overlap at the same positions. In the scaling stage, because the fields of view of a conventional fundus image and an ultra-widefield image differ markedly, the images must first be cropped so that the macular degeneration region is well represented, and object detection of the optic disc will be used for this. In the warping stage, the same macular degeneration information must lie at the same positions, so size and position adjustment is essential; rotation and other transformation operations are then applied to match features within the fundus images, and this is carried out by obtaining an image transformation matrix through homography estimation. The automatically aligned fundus image data will later be used as training data for a GAN-based fundus image generation model; experiments are currently being conducted on 2,500 image pairs, with a final target of 30,000 pairs of fundus images.
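A minimal OpenCV sketch of the warping stage: keypoints are matched between the two fundus images and a homography is estimated to obtain the image transformation matrix (the file names, the ORB detector, and the RANSAC threshold are illustrative choices, not the authors' exact pipeline):

    import cv2
    import numpy as np

    # Load the ultra-widefield crop (to be warped) and the conventional fundus image (reference).
    src = cv2.imread("uwf_fundus_cropped.png", cv2.IMREAD_GRAYSCALE)
    dst = cv2.imread("conventional_fundus.png", cv2.IMREAD_GRAYSCALE)

    # Detect and describe keypoints (ORB used here; the paper does not specify the detector).
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(src, None)
    kp2, des2 = orb.detectAndCompute(dst, None)

    # Match descriptors and keep the best correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

    pts_src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    pts_dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robustly estimate the 3x3 homography and warp the source onto the reference frame.
    H, mask = cv2.findHomography(pts_src, pts_dst, cv2.RANSAC, ransacReprojThreshold=5.0)
    aligned = cv2.warpPerspective(src, H, (dst.shape[1], dst.shape[0]))
    cv2.imwrite("aligned_fundus.png", aligned)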

Genetic Algorithm with the Local Fine-Tuning Mechanism (유전자 알고리즘을 위한 지역적 미세 조정 메카니즘)

  • 임영희
    • Korean Journal of Cognitive Science
    • /
    • v.4 no.2
    • /
    • pp.181-200
    • /
    • 1994
  • In the learning phase of a multilayer feedforward neural network, the backpropagation algorithm suffers from problems such as local minima, learning paralysis, and slow learning speed. To overcome these problems, the genetic algorithm has been used as the learning method for multilayer feedforward neural networks instead of the backpropagation algorithm. However, because the genetic algorithm does not have any mechanism for the fine-tuned local search performed by the backpropagation method, it takes longer for the genetic algorithm to converge to a global optimal solution. In this paper, we suggest a new GA-BP method which provides fine-tuned local search to the genetic algorithm. The GA-BP method uses gradient descent as a genetic operator, in the same way as mutation or crossover. To show the efficiency of the developed method, we apply it to the 3-bit parity problem and analyze the results.
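A small NumPy sketch of the hybrid idea on the 3-bit parity task: a plain genetic algorithm over the flattened weights of a 3-4-1 sigmoid network in which, with some probability, the variation step uses a few gradient-descent updates instead of random mutation (population size, rates, and network details are illustrative, not the paper's settings):

    import numpy as np

    rng = np.random.default_rng(0)

    # 3-bit parity task (the benchmark used in the paper's experiments).
    X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)], dtype=float)
    y = X.sum(axis=1) % 2

    N_IN, N_HID = 3, 4
    N_W = N_IN * N_HID + N_HID + N_HID + 1          # flattened weights of a 3-4-1 network

    def unpack(w):
        W1 = w[:N_IN * N_HID].reshape(N_IN, N_HID)
        b1 = w[N_IN * N_HID:N_IN * N_HID + N_HID]
        W2 = w[-N_HID - 1:-1]
        b2 = w[-1]
        return W1, b1, W2, b2

    def forward(w):
        W1, b1, W2, b2 = unpack(w)
        h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
        out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
        return out, h

    def mse(w):
        out, _ = forward(w)
        return np.mean((out - y) ** 2)

    def local_fine_tune(w, steps=20, lr=1.0):
        # Gradient-descent "operator": a few backprop steps starting from this individual.
        w = w.copy()
        for _ in range(steps):
            W1, b1, W2, b2 = unpack(w)
            out, h = forward(w)
            d_out = (out - y) * out * (1 - out)      # scaled MSE gradient at the output
            g_W2 = h.T @ d_out
            g_b2 = d_out.sum()
            d_h = np.outer(d_out, W2) * h * (1 - h)
            g_W1 = X.T @ d_h
            g_b1 = d_h.sum(axis=0)
            w -= lr * np.concatenate([g_W1.ravel(), g_b1, g_W2, [g_b2]]) / len(X)
        return w

    # Plain GA loop: truncation selection, one-point crossover, and either random mutation
    # or the gradient-based local fine-tuning as the variation operator (GA-BP).
    pop = rng.normal(0.0, 1.0, (30, N_W))
    for generation in range(150):
        order = np.argsort([mse(ind) for ind in pop])    # lower error = fitter
        parents = pop[order[:10]]
        children = []
        while len(children) < len(pop):
            a, b = parents[rng.integers(10)], parents[rng.integers(10)]
            cut = rng.integers(1, N_W)
            child = np.concatenate([a[:cut], b[cut:]])   # one-point crossover
            if rng.random() < 0.3:
                child = local_fine_tune(child)           # GA-BP: local search as an operator
            else:
                child = child + rng.normal(0.0, 0.3, N_W)
            children.append(child)
        pop = np.array(children)

    best = min(pop, key=mse)
    print("best MSE on 3-bit parity:", mse(best))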

A Study on Fine-Tuning and Transfer Learning to Construct Binary Sentiment Classification Model in Korean Text (한글 텍스트 감정 이진 분류 모델 생성을 위한 미세 조정과 전이학습에 관한 연구)

  • JongSoo Kim
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.28 no.5
    • /
    • pp.15-30
    • /
    • 2023
  • Recently, generative models based on the Transformer architecture, such as ChatGPT, have been gaining significant attention. The Transformer architecture has also been applied to various other neural network models, including Google's BERT (Bidirectional Encoder Representations from Transformers). In this paper, a method is proposed to create a binary text classification model for determining whether a comment on a Korean movie review is positive or negative. To accomplish this, a pre-trained multilingual BERT model is fine-tuned and transfer-learned using a new Korean training dataset. The pre-trained multilingual BERT-Base model used covers 104 languages and has 12 layers, 768 hidden units, 12 attention heads, and 110M parameters. To turn the pre-trained BERT-Base model into a text classification model, the input and output layers were fine-tuned, resulting in a new model with 178 million parameters. Using the fine-tuned model, with a maximum word count of 128, a batch size of 16, and 5 epochs, transfer learning was conducted with 10,000 training and 5,000 test samples, producing a binary sentiment classification model for Korean movie reviews with an accuracy of 0.9582, a loss of 0.1177, and an F1 score of 0.81. Transfer learning with a dataset five times larger produced a model with an accuracy of 0.9562, a loss of 0.1202, and an F1 score of 0.86.
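A compact sketch of the head swap and transfer-learning loop described above, using bert-base-multilingual-cased with the stated maximum length of 128 tokens, batch size of 16, and 5 epochs; the toy review strings and the learning rate are placeholders rather than the paper's data and settings:

    import torch
    from torch.utils.data import DataLoader, TensorDataset
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # Toy stand-in for the labeled Korean movie-review comments (1 = positive, 0 = negative).
    texts = ["정말 재미있고 감동적인 영화였다", "시간이 아까운 최악의 영화"]
    labels = torch.tensor([1, 0])

    tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-multilingual-cased", num_labels=2)   # swaps in a new 2-way classification head

    enc = tokenizer(texts, max_length=128, padding="max_length",  # max length 128, as stated
                    truncation=True, return_tensors="pt")
    loader = DataLoader(TensorDataset(enc["input_ids"], enc["attention_mask"], labels),
                        batch_size=16, shuffle=True)              # batch size 16, as stated

    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)    # learning rate assumed, not stated
    model.train()
    for epoch in range(5):                                        # 5 epochs, as stated
        for input_ids, attention_mask, y in loader:
            out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
            out.loss.backward()
            optimizer.step()
            optimizer.zero_grad()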

KMTNet 18k Mosaic CCD Camera System Performance Improvement and Maintenance (외계행성 탐색시스템 18k 모자이크 CCD 카메라 시스템 성능개선 및 유지보수)

  • Cha, Sang-Mok;Lee, Chung-Uk;Kim, Seung-Lee;Lee, Yongseok;Atwood, Bruce;Lim, Beomdu;O'Brien, Thomas P.;Jin, Ho
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.43 no.1
    • /
    • pp.40.1-40.1
    • /
    • 2018
  • The KMTNet 18k mosaic CCD camera consists of four 9k CCDs and has a total of 32 channels of imaging areas and readout circuits. Each observed image includes an overscan region for each imaging area; to minimize disturbance of the overscan bias by the image signal, the common mode rejection ratio (CMRR) of the inverting amplifiers in the readout circuits was finely adjusted. As a result, the average CMRR at the three sites improved from 55 dB to 73 dB, and the slope of the linear relationship between image signal and overscan bias level, previously about 2/1,000, was reduced to about 2/10,000 after the adjustment. Fine adjustment of the CCD readout circuits and clock improvements removed ripple-pattern noise and reduced read noise, and further bias stabilization and crosstalk improvements are being considered. Along with the camera electronics adjustment process and results, we also present know-how on maintenance of the camera dewar and auxiliary equipment and on operating and improving the Polycold CryoTiger cooler.
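For orientation, using the textbook definition of CMRR rather than anything stated in the abstract:

    CMRR(dB) = 20 · log10( A_differential / A_common-mode )
    ΔCMRR = 73 dB − 55 dB = 18 dB  →  10^(18/20) ≈ 8× less common-mode coupling

which is consistent in order of magnitude with the reported reduction of the overscan-bias slope from about 2/1,000 to about 2/10,000 (a factor of 10, i.e. 20 dB).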
