• Title/Summary/Keyword: Language Learning Models (언어 학습 모델)

Optimization of Fuzzy Learning Machine by Using Particle Swarm Optimization (PSO 알고리즘을 이용한 퍼지 Extreme Learning Machine 최적화)

  • Roh, Seok-Beom;Wang, Jihong;Kim, Yong-Soo;Ahn, Tae-Chon
    • Journal of the Korean Institute of Intelligent Systems / v.26 no.1 / pp.87-92 / 2016
  • In this paper, particle swarm optimization (PSO) is used to optimize the parameters of a fuzzy Extreme Learning Machine. While the learning speed of conventional neural networks is very slow, that of the Extreme Learning Machine is very fast. The fuzzy Extreme Learning Machine combines the Extreme Learning Machine, with its very fast learning speed, and fuzzy logic, which can represent the linguistic information of field experts. Whereas the standard Extreme Learning Machine uses a sigmoid activation function, the activation function of the fuzzy Extreme Learning Machine is a membership function defined during the fuzzy C-Means clustering procedure. We optimize the parameters of these membership functions with PSO. To validate the classification capability of the proposed classifier, we conduct experiments on various machine learning datasets.
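
For reference, below is a minimal Python sketch of the PSO loop this abstract describes. The `evaluate_loss` function is a smooth stand-in for the fuzzy ELM's training error on candidate membership-function parameters, and the inertia/acceleration constants are common textbook defaults, not the paper's settings.

```python
import numpy as np

# Toy stand-in for the fuzzy ELM training error: in the paper this would
# retrain/evaluate the fuzzy Extreme Learning Machine with the candidate
# membership-function parameters. Here we use a smooth test function.
def evaluate_loss(params):
    return np.sum((params - 0.5) ** 2)

def pso(dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 1.0, (n_particles, dim))  # candidate parameters
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([evaluate_loss(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + pull toward personal and global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([evaluate_loss(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best_params, best_loss = pso(dim=4)
print(best_params, best_loss)
```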

Korean Traditional Music Melody Generator using Artificial Intelligence (인공지능을 이용한 국악 멜로디 생성기에 관한 연구)

  • Bae, Jun
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.7 / pp.869-876 / 2021
  • In the field of music, various AI composition methods using machine learning have recently been attempted. However, most of this research has centered on Western music, and little research has been done on Korean traditional music. Therefore, in this paper, we create a dataset of Korean traditional music, generate melodies with three algorithms based on that dataset, and compare the results. Based on the similarity between language and music, three models were selected: LSTM, Music Transformer, and Self Attention. A melody generator was modeled and trained with each of the three models. In user evaluations, the Self Attention method was preferred over the other methods. Because datasets are very important in AI composition, we created a Korean traditional music dataset and attempted AI composition with various algorithms; this is expected to be helpful in future research on AI composition for Korean traditional music.
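
A rough sketch of the LSTM baseline among the three models: melodies are treated as token sequences (e.g., pitch ids) and modeled like a character-level language model. The vocabulary size, layer sizes, and sampling loop are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

# Next-note language model: embed note tokens, run an LSTM, predict the
# next token. Training data and vocabulary are placeholders.
class MelodyLSTM(nn.Module):
    def __init__(self, vocab_size=128, embed=64, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        out, state = self.lstm(self.embed(x), state)
        return self.head(out), state

@torch.no_grad()
def generate(model, seed_tokens, length=64, temperature=1.0):
    model.eval()
    x = torch.tensor([seed_tokens])
    state, melody = None, list(seed_tokens)
    for _ in range(length):
        logits, state = model(x, state)
        probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
        nxt = torch.multinomial(probs, 1).item()  # sample the next note
        melody.append(nxt)
        x = torch.tensor([[nxt]])
    return melody

model = MelodyLSTM()  # untrained; shown only to illustrate the loop
print(generate(model, seed_tokens=[60, 62, 64], length=16))
```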

MLOps workflow language and platform for time series data anomaly detection

  • Sohn, Jung-Mo;Kim, Su-Min
    • Journal of the Korea Society of Computer and Information / v.27 no.11 / pp.19-27 / 2022
  • In this study, we propose a language and platform for describing and managing MLOps (Machine Learning Operations) workflows for time series anomaly detection. Time series data are collected in many fields, such as IoT sensors, system performance indicators, and user access logs, and are used in many applications, such as system monitoring and anomaly detection. Performing prediction and anomaly detection on time series data requires an MLOps platform that can quickly and flexibly move an analyzed model into the production environment. We therefore developed the Python-based AI/ML Modeling Language (AMML) to make MLOps workflows easy to configure and execute; Python is widely used in data analysis. The proposed MLOps platform can extract and preprocess time series data from various data sources (R-DB, NoSQL DB, log files, etc.) using AMML and run predictions through a deep learning model. To verify the applicability of AMML, a workflow for training a transformer oil temperature prediction deep learning model was configured with AMML, and we confirmed that training ran normally.
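
AMML itself is not publicly documented, so the following is a hypothetical Python sketch of the staged extract/preprocess/predict workflow the abstract describes. The `Workflow`/`Stage` API and the threshold-based anomaly check are invented for illustration and are not the paper's actual AMML syntax.

```python
# Hypothetical staged workflow; NOT the paper's actual AMML API.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Stage:
    name: str
    run: Callable[[dict], dict]  # takes and returns a shared context

@dataclass
class Workflow:
    stages: List[Stage] = field(default_factory=list)

    def execute(self):
        ctx = {}
        for stage in self.stages:
            print(f"[workflow] running stage: {stage.name}")
            ctx = stage.run(ctx)
        return ctx

def extract(ctx):
    ctx["series"] = [20.1, 20.4, 21.0, 35.9, 21.2]  # e.g. oil temperatures
    return ctx

def preprocess(ctx):
    s = ctx["series"]
    mean = sum(s) / len(s)
    ctx["normalized"] = [x - mean for x in s]
    return ctx

def detect_anomalies(ctx):
    # Placeholder for the deep learning predictor: flag large residuals.
    ctx["anomalies"] = [i for i, x in enumerate(ctx["normalized"]) if abs(x) > 5]
    return ctx

wf = Workflow([Stage("extract", extract),
               Stage("preprocess", preprocess),
               Stage("detect", detect_anomalies)])
print(wf.execute()["anomalies"])
```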

ChatGPT-based Software Requirements Engineering (ChatGPT 기반 소프트웨어 요구공학)

  • Jongmyung Choi
    • Journal of Internet of Things and Convergence / v.9 no.6 / pp.45-50 / 2023
  • In software development, the elicitation and analysis of requirements is a crucial phase, and it involves considerable time and effort because of the many stakeholders involved. ChatGPT, a large language model trained on a diverse array of documents, can not only generate code and assist with debugging but can also be applied to software analysis and design. This paper proposes a requirements engineering method that leverages ChatGPT's capabilities to elicit software requirements, analyze them for alignment with system goals, and document them in the form of use cases. It suggests a collaborative model in which stakeholders, analysts, and ChatGPT work together: the outputs of ChatGPT serve as initial requirements, which analysts and stakeholders then review and augment. As ChatGPT's capability improves, the accuracy of requirements elicitation and analysis is anticipated to increase, leading to time and cost savings in software requirements engineering.
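
A minimal sketch of the human-in-the-loop elicitation step the paper proposes, using the OpenAI Python SDK (v1+): the model drafts initial requirements and a use case, which analysts and stakeholders then review and augment. The model name and prompt wording are illustrative assumptions, not from the paper.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_requirements(system_goal: str, stakeholder_notes: str) -> str:
    # ChatGPT produces a first draft; humans review and augment it.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a requirements analyst. Produce numbered "
                        "functional requirements and one use case draft."},
            {"role": "user",
             "content": f"System goal: {system_goal}\n"
                        f"Stakeholder notes: {stakeholder_notes}"},
        ],
    )
    return response.choices[0].message.content

draft = draft_requirements(
    "Online bookstore with same-day delivery",
    "Customers want order tracking; staff need inventory alerts.",
)
print(draft)  # analysts and stakeholders then review and augment this draft
```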

Method of Automatically Generating Metadata through Audio Analysis of Video Content (영상 콘텐츠의 오디오 분석을 통한 메타데이터 자동 생성 방법)

  • Sung-Jung Young;Hyo-Gyeong Park;Yeon-Hwi You;Il-Young Moon
    • Journal of Advanced Navigation Technology / v.25 no.6 / pp.557-561 / 2021
  • Metadata has become an essential element for recommending video content to users. However, it is currently generated manually by video content providers. This paper studies a method for generating metadata automatically in place of the existing manual input method. In addition to the emotion-tag extraction method of our previous study, we studied a method for automatically generating genre and country-of-production metadata from movie audio. The genre was extracted from the audio spectrogram using a ResNet34 neural network with transfer learning, and the language of the speakers in the movie was detected through speech recognition. These results confirm the feasibility of automatically generating metadata through artificial intelligence.
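
A sketch of the genre branch described above, assuming a mel spectrogram fed to an ImageNet-pretrained ResNet34 whose final layer is replaced for genre classes; the file path, class count, and preprocessing details are illustrative assumptions.

```python
import librosa
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

def audio_to_spectrogram(path: str) -> torch.Tensor:
    # Mel spectrogram in dB, replicated to 3 channels for an ImageNet CNN.
    y, sr = librosa.load(path, sr=22050, duration=30.0)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=224)
    db = librosa.power_to_db(mel, ref=np.max)
    x = torch.tensor(db, dtype=torch.float32).unsqueeze(0)  # (1, H, W)
    x = x.repeat(3, 1, 1)                                   # fake RGB channels
    return x.unsqueeze(0)                                   # (1, 3, H, W)

n_genres = 10  # placeholder class count
model = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, n_genres)  # transfer-learning head
model.eval()

with torch.no_grad():
    logits = model(audio_to_spectrogram("movie_audio.wav"))
print(logits.argmax(dim=1))  # predicted genre id
```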

Research on ITB Contract Terms Classification Model for Risk Management in EPC Projects: Deep Learning-Based PLM Ensemble Techniques (EPC 프로젝트의 위험 관리를 위한 ITB 문서 조항 분류 모델 연구: 딥러닝 기반 PLM 앙상블 기법 활용)

  • Hyunsang Lee;Wonseok Lee;Bogeun Jo;Heejun Lee;Sangjin Oh;Sangwoo You;Maru Nam;Hyunsik Lee
    • KIPS Transactions on Software and Data Engineering / v.12 no.11 / pp.471-480 / 2023
  • Construction order volume in South Korea grew significantly, from 91.3 trillion won in public orders in 2013 to a total of 212 trillion won in 2021, driven particularly by the private sector. As domestic and overseas markets grew, the scale and complexity of EPC (Engineering, Procurement, Construction) projects increased, and risk management of project execution and ITB (Invitation to Bid) documents became a critical issue. The time granted to construction companies in the bidding process following an EPC project announcement is limited, and reviewing all the risk clauses in an ITB document is extremely challenging given manpower and cost constraints. Previous research attempted to categorize the risk clauses in EPC contract documents and detect them with AI, but limitations in the data, such as scarce labeled data and class imbalance, restricted practical use. Therefore, this study develops an AI model that classifies contract clauses in detail according to the FIDIC Yellow Book 2017 (Fédération Internationale des Ingénieurs-Conseils contract conditions) standard, rather than defining and classifying risk clauses as in previous research. Because the clauses that require detailed review vary with the scale and type of project, a multi-label text classification capability is necessary. To enhance classification performance, we developed an ELECTRA PLM (Pre-trained Language Model) that efficiently learns the context of the text data from the pre-training stage onward, and conducted a four-step experiment to validate the model. As a result, an ensemble of the self-developed ITB-ELECTRA model and Legal-BERT achieved the best performance, with a weighted average F1-score of 76% on the classification of 57 contract clauses.
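
A sketch of a logit-averaging ensemble with the weighted-F1 metric the abstract reports. The ITB-ELECTRA checkpoint is the authors' own and not public, so its path below is a placeholder, and averaging logits is an assumption about how the ensemble combines the two models.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from sklearn.metrics import f1_score

NUM_CLAUSES = 57  # FIDIC Yellow Book 2017 clause classes from the paper

def load(path):
    tok = AutoTokenizer.from_pretrained(path)
    mdl = AutoModelForSequenceClassification.from_pretrained(
        path, num_labels=NUM_CLAUSES)
    mdl.eval()
    return tok, mdl

tok_a, model_a = load("path/to/itb-electra")          # placeholder checkpoint
tok_b, model_b = load("nlpaueb/legal-bert-base-uncased")

@torch.no_grad()
def ensemble_predict(texts):
    preds = []
    for text in texts:
        logits = []
        for tok, mdl in ((tok_a, model_a), (tok_b, model_b)):
            enc = tok(text, truncation=True, max_length=512,
                      return_tensors="pt")
            logits.append(mdl(**enc).logits)
        # Average the two models' logits, then take the best clause class.
        preds.append(torch.stack(logits).mean(dim=0).argmax(dim=-1).item())
    return preds

y_pred = ensemble_predict(["The Contractor shall indemnify the Employer ..."])
# f1_score(y_true, y_pred, average="weighted") gives the reported metric
```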

Automatic Classification and Vocabulary Analysis of Political Bias in News Articles by Using Subword Tokenization (부분 단어 토큰화 기법을 이용한 뉴스 기사 정치적 편향성 자동 분류 및 어휘 분석)

  • Cho, Dan Bi;Lee, Hyun Young;Jung, Won Sup;Kang, Seung Shik
    • KIPS Transactions on Software and Data Engineering / v.10 no.1 / pp.1-8 / 2021
  • News articles in the political domain exhibit polarized and biased characteristics, such as conservative and liberal leanings, which is called political bias. We constructed a keyword-based dataset to classify the bias of news articles. Most embedding research represents a sentence as a sequence of morphemes; in our work, we expect the number of unknown tokens to be reduced if sentences are instead composed of subwords segmented by a language model. We propose a document embedding model with subword tokenization and apply it to an SVM and a feedforward neural network to classify political bias. Compared with a document embedding model based on morphological analysis, the subword-based document embedding model showed the highest accuracy, 78.22%, and we confirmed that subword tokenization reduced the number of unknown tokens. Using the best-performing embedding model in our bias classification task, we extracted keywords for individual politicians and verified the bias of the keywords by their average similarity with the vectors of politicians from each political tendency.
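
A simplified stand-in for the subword pipeline: sentences are segmented into subwords by a pretrained Korean tokenizer, represented as bag-of-subwords TF-IDF vectors, and classified with a linear SVM. The paper's actual document embedding model may differ, and the tokenizer checkpoint is an assumption.

```python
from transformers import AutoTokenizer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")  # assumption

def subword_tokenize(text):
    # Subword segmentation keeps rare words as known pieces,
    # reducing unknown tokens compared with whole-morpheme vocabularies.
    return tokenizer.tokenize(text)

train_texts = ["보수 성향의 기사 예시", "진보 성향의 기사 예시"]  # placeholders
train_labels = ["conservative", "liberal"]

clf = make_pipeline(
    TfidfVectorizer(tokenizer=subword_tokenize, lowercase=False),
    LinearSVC(),
)
clf.fit(train_texts, train_labels)
print(clf.predict(["새로운 기사 본문"]))
```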

Building and Analyzing Panic Disorder Social Media Corpus for Automatic Deep Learning Classification Model (딥러닝 자동 분류 모델을 위한 공황장애 소셜미디어 코퍼스 구축 및 분석)

  • Lee, Soobin;Kim, Seongdeok;Lee, Juhee;Ko, Youngsoo;Song, Min
    • Journal of the Korean Society for Information Management / v.38 no.2 / pp.153-172 / 2021
  • This study builds a deep learning based classification model to examine the characteristics of panic disorder and to classify panic-disorder-prone documents using a panic disorder corpus constructed for the study. For this purpose, 5,884 documents collected from social media were manually annotated based on the mental disorder diagnostic manual and classified into panic-disorder-prone and non-panic-disorder documents. TF-IDF scores were then calculated and word co-occurrence analysis was performed to analyze the lexical characteristics of the corpus. In addition, co-occurrence between measured symptom frequencies and the annotated symptoms was calculated to analyze the characteristics of panic disorder symptoms and the relationships among them. We also evaluated the performance of deep learning based classification models: three pre-trained models, multilingual BERT, KoBERT, and KcBERT, were adopted, and KcBERT performed best. By examining the characteristics of panic disorder, this study can help early diagnosis and treatment of people suffering from related symptoms, and it expands the field of mental illness research to social media.
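
A minimal sketch of fine-tuning KcBERT as the binary panic-disorder classifier. The checkpoint `beomi/kcbert-base` is the public KcBERT release and is assumed here; the two example documents are placeholders for the 5,884 annotated documents.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("beomi/kcbert-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "beomi/kcbert-base", num_labels=2)  # panic-prone vs. not
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# Placeholder documents standing in for the annotated corpus.
texts = ["갑자기 심장이 빨리 뛰고 숨이 막혀요", "오늘 날씨가 좋네요"]
labels = torch.tensor([1, 0])

model.train()
batch = tokenizer(texts, padding=True, truncation=True,
                  max_length=128, return_tensors="pt")
loss = model(**batch, labels=labels).loss  # cross-entropy over 2 classes
loss.backward()
optimizer.step()
print(float(loss))
```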

A Sentence Reduction Method using Part-of-Speech Information and Templates (품사 정보와 템플릿을 이용한 문장 축소 방법)

  • Lee, Seung-Soo;Yeom, Ki-Won;Park, Ji-Hyung;Cho, Sung-Bae
    • Journal of KIISE: Software and Applications / v.35 no.5 / pp.313-324 / 2008
  • Sentence reduction is an information compression process that removes extraneous words and phrases while retaining the basic meaning of the original sentence. Most research on sentence reduction has required large lexical and syntactic resources and has focused on extracting or removing extraneous constituents, such as words, phrases, and clauses, through a complicated parsing process. However, this research has two problems. First, the lexical resources obtainable from learning data are very limited. Second, it is difficult to reduce sentences in languages for which no reliable syntactic parsing is available, because of ambiguity and exceptional expressions. To solve these problems, we propose a sentence reduction method that uses templates and POS (part-of-speech) information without a parsing process. The proposed method creates a new sentence using both Sentence Reduction Templates, which determine the form of the reduced sentence, and Grammatical POS-based Reduction Rules, which compose a grammatical sentence structure. In addition, we use the Viterbi algorithm over an HMM (Hidden Markov Model) to avoid the exponential computation that arises when applying the Sentence Reduction Templates. Our experiments show that the proposed method achieves acceptable results in comparison with previous sentence reduction methods.
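
The Viterbi recursion the paper uses to sidestep exponential template enumeration is standard dynamic programming; the sketch below runs it on toy keep/drop states and probability tables, which are not the paper's actual models.

```python
# Viterbi finds the best hidden state sequence in O(T * N^2) time
# instead of enumerating all N^T candidate sequences.
def viterbi(observations, states, start_p, trans_p, emit_p):
    V = [{s: (start_p[s] * emit_p[s][observations[0]], None) for s in states}]
    for t in range(1, len(observations)):
        V.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][ps][0] * trans_p[ps][s] * emit_p[s][observations[t]],
                 ps)
                for ps in states)
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    last = max(states, key=lambda s: V[-1][s][0])
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.append(V[t][path[-1]][1])
    return list(reversed(path))

states = ("KEEP", "DROP")  # toy: keep or drop each word of the sentence
start_p = {"KEEP": 0.6, "DROP": 0.4}
trans_p = {"KEEP": {"KEEP": 0.7, "DROP": 0.3},
           "DROP": {"KEEP": 0.4, "DROP": 0.6}}
emit_p = {"KEEP": {"NOUN": 0.6, "ADV": 0.1, "VERB": 0.3},
          "DROP": {"NOUN": 0.1, "ADV": 0.7, "VERB": 0.2}}
print(viterbi(["NOUN", "ADV", "VERB"], states, start_p, trans_p, emit_p))
```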

KOMUChat: Korean Online Community Dialogue Dataset for AI Learning (KOMUChat : 인공지능 학습을 위한 온라인 커뮤니티 대화 데이터셋 연구)

  • YongSang Yoo;MinHwa Jung;SeungMin Lee;Min Song
    • Journal of Intelligence and Information Systems / v.29 no.2 / pp.219-240 / 2023
  • Conversational AI that users can interact with satisfactorily is a long-standing research topic. Developing conversational AI requires training data that reflects real conversations between people, but existing Korean datasets are either not in question-answer format or use honorifics, which makes it difficult for users to feel closeness. In this paper, we propose a conversation dataset (KOMUChat) consisting of 30,767 question-answer sentence pairs collected from online communities. The question-answer pairs were collected from the post titles and first comments of love and relationship counseling boards used by men and women, and abusive records were removed through automatic and manual cleansing to build a high-quality dataset. To verify the validity of KOMUChat, we compared and analyzed the results of training a generative language model on KOMUChat and on a benchmark dataset. Our dataset outperformed the benchmark dataset in answer appropriateness, user satisfaction, and fulfillment of conversational AI goals. KOMUChat is the largest open-source single-turn Korean text dataset presented so far, and it has the significance of building a friendlier Korean dataset that reflects the text style of online communities.
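
A sketch of the validation step, fine-tuning a small Korean generative model on KOMUChat-style single-turn pairs. The checkpoint `skt/kogpt2-base-v2` and the question/answer formatting are assumptions; the paper's exact model and preprocessing may differ.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained(
    "skt/kogpt2-base-v2", eos_token="</s>", pad_token="<pad>")
model = AutoModelForCausalLM.from_pretrained("skt/kogpt2-base-v2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# One placeholder question-answer pair standing in for the 30,767 pairs.
pairs = [("요즘 연애가 너무 어려워요", "천천히 자신에게 맞는 사람을 찾아보세요")]
texts = [f"질문: {q}\n답변: {a}{tokenizer.eos_token}" for q, a in pairs]

model.train()
batch = tokenizer(texts, return_tensors="pt", padding=True)
loss = model(**batch, labels=batch["input_ids"]).loss  # causal LM objective
loss.backward()
optimizer.step()
print(float(loss))
```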