Title/Summary/Keyword: Pretrained

Alzheimer's disease recognition from spontaneous speech using large language models

  • Jeong-Uk Bang;Seung-Hoon Han;Byung-Ok Kang
    • ETRI Journal / v.46 no.1 / pp.96-105 / 2024
  • We propose a method to automatically predict Alzheimer's disease from speech data using the ChatGPT large language model. Alzheimer's disease patients often exhibit distinctive characteristics when describing images, such as difficulty recalling words, grammatical errors, repetitive language, and incoherent narratives. For prediction, we first employ a speech recognition system to transcribe participants' speech into text. We then gather opinions by feeding the transcribed text into ChatGPT together with a prompt designed to solicit fluency evaluations. Subsequently, we extract embeddings from the speech, text, and opinions using pretrained models. Finally, we use a classifier consisting of transformer blocks and linear layers to identify participants with this type of dementia. Experiments are conducted on the widely used ADReSSo dataset. The results yield a maximum accuracy of 87.3% when speech, text, and opinions are used in conjunction. This finding suggests the potential of leveraging evaluation feedback from language models to address challenges in Alzheimer's disease recognition.
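
The fusion step lends itself to a compact sketch: one way to realize a transformer-block classifier over the three embedding streams, assuming 768-dimensional embeddings and a two-layer encoder (the paper's exact dimensions and pooling are not specified here):

```python
# A minimal sketch of the three-stream fusion classifier; the embedding
# extractors, dimensions, and pooling are assumptions, not the authors'
# exact configuration.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, dim=768, n_heads=8, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(dim, 2)  # AD vs. control

    def forward(self, speech_emb, text_emb, opinion_emb):
        # Stack the three modality embeddings as a length-3 sequence.
        x = torch.stack([speech_emb, text_emb, opinion_emb], dim=1)
        x = self.encoder(x)
        return self.head(x.mean(dim=1))  # pool over modalities

# Example: a batch of 4 participants with 768-d embeddings per modality.
model = FusionClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 768), torch.randn(4, 768))
print(logits.shape)  # torch.Size([4, 2])
```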

On the Significance of Domain-Specific Pretrained Language Models for Log Anomaly Detection (로그 이상 탐지를 위한 도메인별 사전 훈련 언어 모델 중요성 연구)

  • Lelisa Adeba Jilcha;Deuk-Hun Kim;Jin Kwak
    • Annual Conference of KIPS / 2024.05a / pp.337-340 / 2024
  • Pretrained language models (PLMs) are extensively utilized to enhance the performance of log anomaly detection systems. Their effectiveness lies in their capacity to extract valuable semantic information from logs, thereby strengthening detection performance. Nonetheless, challenges arise from discrepancies in the distribution of log messages across systems, hindering the development of robust and generalizable detection systems. This study investigates the structural and distributional variation across log message datasets, underscoring the crucial role of domain-specific PLMs in overcoming this challenge and devising robust and generalizable solutions.
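
As a rough illustration of the distributional variation the study examines, one can embed log messages from two domains with a general-purpose PLM and compare the domain centroids; the checkpoint, the sample messages, and the centroid-distance measure below are illustrative assumptions, not the paper's protocol:

```python
# A hedged sketch: quantify the shift between two log domains via the
# cosine distance of their mean PLM embeddings.
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

def embed(messages):
    batch = tok(messages, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = enc(**batch).last_hidden_state[:, 0]  # [CLS] embeddings
    return out

# Toy messages in the style of two public log datasets.
hdfs_logs = ["Received block blk_123 of size 67108864", "PacketResponder 1 terminating"]
bgl_logs = ["ciod: Error reading message prefix", "machine check interrupt"]

# Distance between domain centroids as a crude shift indicator.
shift = 1 - torch.cosine_similarity(
    embed(hdfs_logs).mean(0, keepdim=True), embed(bgl_logs).mean(0, keepdim=True)
)
print(f"centroid cosine distance: {shift.item():.3f}")
```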

E-commerce data based Sentiment Analysis Model Implementation using Natural Language Processing Model (자연어처리 모델을 이용한 이커머스 데이터 기반 감성 분석 모델 구축)

  • Choi, Jun-Young;Lim, Heui-Seok
    • Journal of the Korea Convergence Society / v.11 no.11 / pp.33-39 / 2020
  • In the field of natural language processing, research on tasks such as translation, POS tagging, question answering, and sentiment analysis is being carried out worldwide. For sentiment analysis, pretrained sentence embedding models show high classification performance on English single-domain datasets. In this paper, classification performance is compared on a Korean e-commerce dataset spanning various domains, and six neural network models are built: BOW (Bag of Words), LSTM[1], Attention, CNN[2], ELMo[3], and BERT (KoBERT)[4]. We confirm that pretrained sentence embedding models outperform word embedding models. In addition, a practical neural network configuration is proposed after comparing classification performance on a dataset with 17 categories. Finally, compressing the sentence embedding model is discussed as future work, weighing inference time against model capacity for real-time service.
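
A minimal sketch of the strongest configuration in this comparison, a pretrained BERT-family sentence encoder with a classification head; the multilingual checkpoint below stands in for KoBERT, whose exact loading path is not given here:

```python
# A hedged sketch of sentence-embedding-based sentiment classification;
# "bert-base-multilingual-cased" is a placeholder for a KoBERT checkpoint.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "bert-base-multilingual-cased"  # stand-in for KoBERT
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

reviews = ["배송이 빨라서 좋아요", "품질이 기대 이하입니다"]  # positive / negative
batch = tok(reviews, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    probs = model(**batch).logits.softmax(-1)
print(probs)  # the head is untrained: fine-tune on labeled e-commerce reviews first
```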

Reverting Gene Expression Pattern of Cancer into Normal-Like Using Cycle-Consistent Adversarial Network

  • Lee, Chan-hee;Ahn, TaeJin
    • International Journal of Advanced Culture Technology / v.6 no.4 / pp.275-283 / 2018
  • Cancer shows a distinct pattern of gene expression compared to normal tissue, and this difference underlies the malignant characteristics of cancer. Many cancer drugs target this difference so that they can selectively kill cancer cells. One recent demand in personalized cancer treatment is retrieving normal tissue from a patient so that the gene expression difference between cancer and normal tissue can be assessed. However, in most clinical situations it is hard to retrieve normal tissue, because a biopsy of normal tissue may damage organ function or expose the patient to a risk of infection or other side effects. The challenge is thus to estimate the gene expression of the normal cells from which a cancer originated without taking an additional biopsy. In this paper, we propose an in-silico prediction of normal cells' gene expression from the gene expression data of a tumor sample; we call this challenge reverting the cancer into normal. We divide it into two parts. The first is building a generator able to fool a pretrained discriminator. The discriminator is trained on public data (9,601 cancer and 7,240 normal samples) and reaches an accuracy of 0.997 in discriminating whether a given gene expression pattern is cancer or normal; deceiving it means our method is capable of generating very normal-like gene expression data. The second part is assessing whether the generated normal profile is the true reverse form of the input cancer data. We use cycle-consistent adversarial networks, since this architecture can translate one domain into the other while maintaining the original domain's features and at the same time adding the new domain's features. We show that, given cancer data, a cycle-consistent adversarial network retains most of the information from the input (cancer) while changing the data into a normal-like profile, and we evaluate whether this generated normal-tissue expression is the biological reverse of the gene expression of the input cancer.
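
A hedged sketch of the two-part objective: the generator must fool the frozen pretrained discriminator while cycle consistency preserves the input profile. Network sizes, the loss weight, and the label convention below are illustrative assumptions:

```python
# A minimal sketch of the cancer->normal generator objective with a frozen
# pretrained discriminator and a cycle-consistency term.
import torch
import torch.nn as nn

n_genes = 1000  # placeholder; real expression panels are larger

def mlp():
    return nn.Sequential(nn.Linear(n_genes, 512), nn.ReLU(), nn.Linear(512, n_genes))

G_c2n, G_n2c = mlp(), mlp()            # cancer->normal and normal->cancer generators
D = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU(), nn.Linear(256, 1))
for p in D.parameters():               # discriminator pretrained on 16,841 profiles
    p.requires_grad = False            # (9,601 cancer + 7,240 normal); kept frozen

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
cancer = torch.randn(8, n_genes)       # stand-in expression profiles

fake_normal = G_c2n(cancer)
adv_loss = bce(D(fake_normal), torch.zeros(8, 1))  # assume 0 = "normal" class
cyc_loss = l1(G_n2c(fake_normal), cancer)          # reconstruct the input profile
loss = adv_loss + 10.0 * cyc_loss                  # illustrative cycle weight
loss.backward()                                    # updates generators only
```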

Recent Automatic Post Editing Research (최신 기계번역 사후 교정 연구)

  • Moon, Hyeonseok;Park, Chanjun;Eo, Sugyeong;Seo, Jaehyung;Lim, Heuiseok
    • Journal of Digital Convergence / v.19 no.7 / pp.199-208 / 2021
  • Automatic Post Editing (APE) is the study of automatically correcting errors in machine-translated sentences. The goal of the APE task is to build error-correcting models that improve translation quality regardless of the translation system. To train these models, the source sentence, the machine translation, and the post-edit, which is manually produced by a human translator, are utilized. In particular, recent APE research adopts multilingual pretrained language models before training on APE data. This study surveys the multilingual pretrained language models adopted in the latest APE research and the specific way each study applies them. Based on current research trends, we also propose future research directions utilizing the translation model or the mBART model.
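
As a sketch of the common setup the survey describes, the source sentence and machine translation can be concatenated as encoder input for a multilingual seq2seq model trained to emit the post-edit; the mBART checkpoint, language codes, and separator below are assumptions, not a specific paper's recipe:

```python
# A hedged sketch of APE fine-tuning with an mBART-style model on
# (source, machine translation, post-edit) triplets.
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

tok = MBart50TokenizerFast.from_pretrained(
    "facebook/mbart-large-50", src_lang="en_XX", tgt_lang="de_DE"
)
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50")

src = "The cat sat on the mat."           # source sentence
mt = "Die Katze sass auf der Matte."      # raw machine translation
pe = "Die Katze saß auf der Matte."       # human post-edit (training target)

# Encoder sees source and MT joined by a separator; decoder learns the post-edit.
batch = tok(src + " </s> " + mt, return_tensors="pt")
labels = tok(text_target=pe, return_tensors="pt").input_ids
loss = model(**batch, labels=labels).loss  # fine-tune to minimize this
```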

Korean and Multilingual Language Models Study for Cross-Lingual Post-Training (XPT) (Cross-Lingual Post-Training (XPT)을 위한 한국어 및 다국어 언어모델 연구)

  • Son, Suhyune;Park, Chanjun;Lee, Jungseob;Shim, Midan;Lee, Chanhee;Park, Kinam;Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.13 no.3 / pp.77-89 / 2022
  • Many previous studies have shown that language models pretrained on a large corpus improve performance in various natural language processing tasks. However, there is a limit to building a large corpus for training in a language environment where resources are scarce. Using the Cross-lingual Post-Training (XPT) method, we analyze the method's efficiency for Korean, a low-resource language. XPT selectively reuses the parameters of an English pretrained language model, English being a high-resource language, and uses an adaptation layer to learn the relationship between the two languages. We confirm that, with only a small amount of target-language data, XPT outperforms a language model pretrained on the target language on a relation extraction task. In addition, we analyze the characteristics of the Korean monolingual and multilingual language models released by domestic and foreign researchers and companies.
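
A minimal sketch of the XPT idea under stated assumptions: freeze an English PLM body, replace the input embeddings for the target language, and train a small adaptation layer between them (the layer's size and placement here are illustrative):

```python
# A hedged sketch of cross-lingual parameter reuse with an adaptation layer.
import torch
import torch.nn as nn
from transformers import AutoModel

plm = AutoModel.from_pretrained("bert-base-uncased")   # high-resource source PLM
for p in plm.parameters():
    p.requires_grad = False                            # reuse, don't retrain

ko_vocab, dim = 32000, plm.config.hidden_size
ko_embed = nn.Embedding(ko_vocab, dim)                 # new Korean embeddings
adapt = nn.Sequential(nn.Linear(dim, dim), nn.GELU())  # learns the language mapping

ids = torch.randint(0, ko_vocab, (2, 16))              # toy Korean token ids
hidden = plm(inputs_embeds=adapt(ko_embed(ids))).last_hidden_state
print(hidden.shape)  # torch.Size([2, 16, 768])
```

Only `ko_embed` and `adapt` receive gradients, so the target-language model trains with a fraction of the parameters a from-scratch PLM would need.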

Layer-wise hint-based training for knowledge transfer in a teacher-student framework

  • Bae, Ji-Hoon;Yim, Junho;Kim, Nae-Soo;Pyo, Cheol-Sig;Kim, Junmo
    • ETRI Journal / v.41 no.2 / pp.242-253 / 2019
  • We devise a layer-wise hint training method to improve the existing hint-based knowledge distillation (KD) training approach, which is employed for knowledge transfer in a teacher-student framework using a residual network (ResNet). To achieve this objective, the proposed method first iteratively trains the student ResNet and incrementally employs hint-based information extracted from the pretrained teacher ResNet containing several hint and guided layers. Next, typical softening factor-based KD training is performed using the previously estimated hint-based information. We compare the recognition accuracy of the proposed approach with that of KD training without hints, hint-based KD training, and ResNet-based layer-wise pretraining using reliable datasets, including CIFAR-10, CIFAR-100, and MNIST. When using the selected multiple hint-based information items and their layer-wise transfer in the proposed method, the trained student ResNet more accurately reflects the pretrained teacher ResNet's rich information than the baseline training methods, for all the benchmark datasets we consider in this study.
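
The two training signals combine naturally in code: an L2 hint loss matching a student guided layer (via a small regressor) to a teacher hint layer, plus the temperature-softened KD loss on the logits. Shapes and the temperature below are illustrative, not the paper's settings:

```python
# A hedged sketch of hint-based KD: feature-matching loss plus softened
# logit distillation.
import torch
import torch.nn.functional as F

def hint_loss(student_feat, teacher_feat, regressor):
    # The regressor maps the guided layer to the hint layer's width.
    return F.mse_loss(regressor(student_feat), teacher_feat)

def kd_loss(student_logits, teacher_logits, T=4.0):
    # Softened teacher distribution supervises the student.
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * T * T

regressor = torch.nn.Linear(64, 128)                  # student 64-d -> teacher 128-d
s_feat, t_feat = torch.randn(8, 64), torch.randn(8, 128)
s_log, t_log = torch.randn(8, 10), torch.randn(8, 10)
loss = hint_loss(s_feat, t_feat, regressor) + kd_loss(s_log, t_log)
```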

Dialog-based multi-item recommendation using automatic evaluation

  • Euisok Chung;Hyun Woo Kim;Byunghyun Yoo;Ran Han;Jeongmin Yang;Hwa Jeon Song
    • ETRI Journal / v.46 no.2 / pp.277-289 / 2024
  • In this paper, we describe a neural network-based application that recommends multiple items from dialog context input and simultaneously outputs a response sentence. We instantiate multi-item recommendation as a set of clothing recommendations, which requires a multimodal fusion approach that can process both clothing-related text and images. We also examine meeting the requirements of the downstream models with a pretrained language model, and we propose gate-based multimodal fusion and multiprompt learning built on a pretrained language model. Furthermore, we propose an automatic evaluation technique to address the one-to-many mapping problem of multi-item recommendation. A Korean fashion-domain multimodal dataset is constructed and tested, and various experimental settings are verified using the automatic evaluation method. The results show that our proposed method can obtain confidence scores for multi-item recommendation results, which differs from traditional accuracy evaluation.
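
Gate-based fusion admits a compact sketch: a learned sigmoid gate mixes the text and image embeddings element-wise before the downstream recommender; the dimensions below are illustrative assumptions:

```python
# A minimal sketch of gate-based multimodal fusion.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, text_emb, image_emb):
        g = self.gate(torch.cat([text_emb, image_emb], dim=-1))
        return g * text_emb + (1 - g) * image_emb  # convex per-dimension mix

fusion = GatedFusion()
fused = fusion(torch.randn(4, 512), torch.randn(4, 512))
print(fused.shape)  # torch.Size([4, 512])
```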

Improving Chest X-ray Image Classification via Integration of Self-Supervised Learning and Machine Learning Algorithms

  • Tri-Thuc Vo;Thanh-Nghi Do
    • Journal of information and communication convergence engineering / v.22 no.2 / pp.165-171 / 2024
  • In this study, we present a novel approach for enhancing chest X-ray image classification (normal, Covid-19, edema, mass nodules, and pneumothorax) by combining contrastive learning and machine learning algorithms. A vast amount of unlabeled data is leveraged to learn representations, improving data efficiency and thereby addressing the limited availability of labeled X-ray images. Our approach trains classification algorithms on features extracted from a linearly fine-tuned Momentum Contrast (MoCo) model. The MoCo architecture, with a ResNet34, ResNet50, or ResNet101 backbone, is trained to learn features from unlabeled data. Instead of only fine-tuning the linear classifier layer on the MoCo-pretrained model, we propose training nonlinear classifiers as substitutes for the softmax layer in deep networks. Empirically, the linearly fine-tuned ImageNet-pretrained models achieved a best accuracy of only 82.9% and the linearly fine-tuned MoCo-pretrained models a best accuracy of 84.8%, whereas our proposed method offered a significant improvement, achieving the highest accuracy of 87.9%.
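
A hedged sketch of the proposed step: freeze the (in practice, MoCo-pretrained) backbone, extract pooled features, and fit a nonlinear classifier in place of the linear softmax head. The torchvision ResNet stand-in and the RBF-kernel SVM below are assumptions; the paper's choice of nonlinear classifiers may differ:

```python
# A minimal sketch: frozen backbone features + a nonlinear classifier.
import torch
import torch.nn as nn
from torchvision.models import resnet50
from sklearn.svm import SVC

backbone = resnet50(weights=None)      # load MoCo-pretrained weights in practice
backbone.fc = nn.Identity()            # expose the 2048-d pooled features
backbone.eval()

with torch.no_grad():
    x = torch.randn(16, 3, 224, 224)   # stand-in chest X-ray batch
    feats = backbone(x).numpy()

y = torch.randint(0, 5, (16,)).numpy() # 5 classes: normal, Covid-19, edema, ...
clf = SVC(kernel="rbf").fit(feats, y)  # nonlinear substitute for the softmax head
print(clf.predict(feats[:4]))
```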

Zero-shot Lexical Semantics based on Perplexity of Pretrained Language Models (사전학습 언어모델의 Perplexity에 기반한 Zero-shot 어휘 의미 모델)

  • Choi, Heyong-Jun;Na, Seung-Hoon
    • Annual Conference on Human and Language Technology / 2021.10a / pp.473-475 / 2021
  • Computing the similarity between words is essential for implementing synonym recommendation. However, existing methods for computing word similarity cannot handle words that do not appear in the dataset. In this paper, we address this by computing word similarity from the perplexity (PPL) of a pretrained language model; recommending synonyms with this measure achieves an MRR of 41.31%.
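
A hedged sketch of the perplexity idea: substitute each candidate into a context sentence and score it with a causal LM's loss (perplexity = exp(loss)); lower perplexity suggests a closer fit. The English GPT-2 model and the template below are illustrative stand-ins for the paper's Korean setup:

```python
# A minimal sketch of perplexity-based candidate scoring for synonym
# recommendation.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")

def ppl(sentence):
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss  # mean token cross-entropy
    return torch.exp(loss).item()

context = "The movie was absolutely {}."
for candidate in ["wonderful", "terrible", "table"]:
    print(candidate, round(ppl(context.format(candidate)), 1))
```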
