• Title/Abstract/Keyword: large language model

282 search results (processing time 0.025 s)

Assessment of Improving SWAT Weather Input Data using Basic Spatial Interpolation Method

  • Felix, Micah Lourdes;Choi, Mikyoung;Zhang, Ning;Jung, Kwansue
    • 한국수자원학회:학술대회논문집 / 한국수자원학회 2022년도 학술발표회 / pp.368-368 / 2022
  • The Soil and Water Assessment Tool (SWAT) has been widely used to simulate the long-term hydrological conditions of a catchment. Two output variables, outflow and sediment yield, have been widely investigated in the field of water resources management, especially in determining the conditions of ungauged subbasins. Missing values in the weather input data can cause poor representation of the climate conditions in a catchment, especially for large or mountainous catchments. Therefore, in this study, a custom module was developed and evaluated to determine the efficiency of basic spatial interpolation methods in estimating weather input data. The module is written in Python and can be used as a pre-processing step prior to running the SWAT model. The results of this study suggest that the proposed pre-processing module can improve the simulation results for both outflow and sediment yield in a catchment, even in the presence of missing data.

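The paper's pre-processing module is not reproduced in the abstract; as a rough illustration, inverse-distance weighting (one of the basic spatial interpolation methods the study refers to) can fill a missing station value from neighboring gauges. The function name, coordinates, and values below are hypothetical:

```python
import math

def idw_fill(target_xy, neighbors, power=2):
    """Estimate a missing station value by inverse-distance weighting.

    neighbors: list of ((x, y), value) pairs from stations with data.
    """
    num = den = 0.0
    for (x, y), value in neighbors:
        d = math.hypot(x - target_xy[0], y - target_xy[1])
        if d == 0:              # co-located station: use its value directly
            return value
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

# Fill a missing daily rainfall value at a station from three neighbors;
# the nearest gauge (10.0 mm at distance 1) dominates the estimate.
estimate = idw_fill((0, 0), [((1, 0), 10.0), ((0, 2), 20.0), ((3, 0), 5.0)])
```

A real pre-processor would additionally read SWAT's station files and loop over dates with gaps; this sketch only shows the interpolation step.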

기초 프로그래밍 과목에서의 ChatGPT의 코딩 역량 분석 (Analysis of ChatGPT's Coding Capabilities in Foundational Programming Courses)

  • 나재호
    • 공학교육연구 / Vol. 26, No. 6 / pp.71-78 / 2023
  • ChatGPT significantly broadens the application of artificial intelligence (AI) services across various domains, and one of its primary functions is assistance in programming and coding. Nevertheless, due to the short history of ChatGPT, there have been few studies analyzing its coding capabilities in Korean higher education. In this paper, we evaluate ChatGPT using exam questions from three foundational programming courses at S University. According to the experimental results, ChatGPT successfully generated Python, C, and Java programs, with code quality on par with that of high-achieving students. The powerful coding capabilities of ChatGPT imply the need for a strict prohibition of its usage in coding tests; however, they also suggest significant potential for enhancing practical exercises in education.

실내 공기질 평가를 위한 2구획 모델의 개발 (Development of the Two-Zone Model to Estimate the Air Quality in Indoor Environments)

  • 조석호;양성환;이봉헌;정성욱;이병호
    • 한국환경과학회지 / Vol. 7, No. 6 / pp.745-751 / 1998
  • The well-mixed room model has traditionally been used to predict contaminant concentrations in indoor environments. However, this is often inappropriate because the flow fields in many indoor environments distribute contaminants non-uniformly due to imperfect air mixing, so some means of describing an imperfectly mixed room is needed. The simplest model that accounts for imperfect air mixing is a two-zone model. Therefore, this study develops a computer program for the two-zone model to propose techniques for estimating contaminant concentrations in a room. The key step is to divide the room into two zones, a lower and an upper zone, assuming that air and contaminants are well mixed within each zone; air recirculation between the zones is characterized by an air-exchange parameter. Under these assumptions, a mass-conservation equation is derived for each zone, and the equations are solved numerically. The program was developed in Visual Basic. The air-exchange coefficient ($f_{12}$) is the most difficult quantity to forecast when contaminant concentrations in an imperfectly mixed room are estimated by the two-zone model, but as $f_{12}$ increases, the air exchange between the zones increases. When $f_{12}$ is approximately 15, the concentrations in the two zones approach each other, and the entire room may be treated approximately as a single well-mixed room. This study is therefore useful for designing ventilation to improve indoor air quality. The two-zone model also provides the theoretical basis for extension to a multi-zone model, which would help estimate air pollution in large enclosures such as shopping malls, atria, terminals, and covered sports stadiums.

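A minimal numerical sketch of the two-zone mass balances described above (not the paper's Visual Basic program; all symbols, units, and values are illustrative): zone 1 contains the contaminant source G, zone 2 is vented at flow rate Q with clean supply air, and beta plays the role of the air-exchange parameter f12.

```python
def two_zone(G=1.0, Q=10.0, beta=5.0, V1=10.0, V2=40.0, t_end=500.0, dt=0.01):
    """Forward-Euler integration of the two-zone mass-conservation equations.

    Returns the contaminant concentrations (C1, C2) of the lower and upper
    zones; each zone is assumed well mixed internally.
    """
    C1 = C2 = 0.0
    for _ in range(int(t_end / dt)):
        dC1 = (G + beta * (C2 - C1)) / V1       # source + interzonal flow
        dC2 = (beta * (C1 - C2) - Q * C2) / V2  # interzonal flow + exhaust
        C1 += dC1 * dt
        C2 += dC2 * dt
    return C1, C2

# As the air-exchange parameter grows, the two zones approach a single
# well-mixed room, mirroring the f12 behavior reported in the abstract.
low_mix = two_zone(beta=5.0)
high_mix = two_zone(beta=500.0)
```

At steady state this sketch gives C2 = G/Q and C1 = C2 + G/beta, so the interzonal concentration difference shrinks as the exchange flow increases.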

한국어 언어모델 파인튜닝을 통한 협찬 블로그 텍스트 생성 (Generating Sponsored Blog Texts through Fine-Tuning of Korean LLMs)

  • 김보경;변재연;차경애
    • 한국산업정보학회논문지 / Vol. 29, No. 3 / pp.1-12 / 2024
  • In this paper, we fine-tuned KoAlpaca, a large-scale Korean language model, and implemented a blog-text generation system based on it. Blogs on social media platforms are widely used as a corporate marketing tool. Training data of positive reviews was built through sentiment analysis and filtering of collected sponsored blog texts, and QLoRA was applied to lighten KoAlpaca training. QLoRA is a fine-tuning approach that greatly reduces the memory required for training; in our experimental environment with a 12.8B-parameter model, we observed up to about 58.8% lower memory usage than with LoRA. To evaluate the generation performance of the fine-tuned model, texts generated from 100 inputs not included in the training data contained on average more than twice as many words as those from the pre-trained model, and texts with positive sentiment also more than doubled. In a survey for qualitative evaluation of generation performance, an average of 77.5% of respondents answered that the fine-tuned model's output fit the given topic better. These results show that the proposed language model for generating positive reviews of sponsored items can improve the time efficiency of content creation and produce content that ensures consistent marketing effects. In future work, we plan to fine-tune with augmented training data to reduce outputs that stray from the category of positive reviews due to generation factors of the pre-trained model.
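For scale, the saving that 4-bit quantization buys on the frozen base weights alone can be estimated with back-of-envelope arithmetic. This is a rough sketch only: the 58.8% figure reported above compares total QLoRA versus LoRA training memory, which also includes adapters, optimizer states, and activations, so it differs from the weights-only number computed here.

```python
params = 12.8e9                   # parameter count of the model in the study
fp16_gib = params * 2 / 2**30     # 16-bit base weights, as in plain LoRA
nf4_gib = params * 0.5 / 2**30    # 4-bit (NF4) base weights, as in QLoRA
saving = 1 - nf4_gib / fp16_gib   # fraction saved on the base weights alone
print(f"{fp16_gib:.1f} GiB -> {nf4_gib:.1f} GiB ({saving:.0%} saved)")
```

The frozen weights drop from roughly 24 GiB to 6 GiB; the smaller end-to-end saving reflects the memory components QLoRA does not quantize.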

Structural analysis of a prestressed segmented girder using contact elements in ANSYS

  • Lazzari, Paula M.;Filho, Americo Campos;Lazzari, Bruna M.;Pacheco, Alexandre R.
    • Computers and Concrete / Vol. 20, No. 3 / pp.319-327 / 2017
  • Studying the structural behavior of prestressed segmented girders is quite important given the widespread use of this type of solution in viaducts and bridges. Thus, this work presents a nonlinear three-dimensional structural analysis of an externally prestressed segmented concrete girder through the Finite Element Method (FEM), using a customized ANSYS platform, version 14.5. To minimize the computational effort by using the lowest possible number of finite elements, a new viscoelastoplastic material model was implemented for the structural concrete with the UPF customization tool of ANSYS, adding new subroutines, written in the FORTRAN programming language, to the main program. This model takes the cracking of concrete into consideration and is based on the fib Model Code 2010, which uses the Ottosen rupture surface as the rupture criterion. By implementing this new material model, it was possible to use the three-dimensional 20-node quadratic element SOLID186 to model the concrete. Upon validation of the model, an externally prestressed segmented box concrete girder originally lab-tested by Aparicio et al. (2002) was computationally simulated. In the discretization of the structure, in addition to element SOLID186 for the concrete, the unidimensional element LINK180 was used to model the prestressing tendons, and contact elements CONTA174 and TARGE170 were used to simulate the dry joints along the segmented girder. Stresses in the concrete and in the prestressing tendons are assessed, as well as joint openings and load-versus-deflection diagrams. A comparison between numerical and experimental data is also presented, showing good agreement.

CSCW 환경에 기반한 요구공학 프로세스 모델 설계 (A Design of Requirement Engineering Process Model Based on CSCW Environment)

  • 황만수;이원우;류성렬
    • 한국정보처리학회논문지 / Vol. 7, No. 10 / pp.3075-3085 / 2000
  • As software development and operation become more distributed and larger in scale, eliciting and specifying accurate and complete requirements has become the most important element of a system. Moreover, continuous system change requests in an Internet-based collaborative environment call for more efficient requirements management. This paper defines a requirements-specification structure and techniques for improving the efficiency of natural-language-based requirements specification and management in such a collaborative environment, and proposes a requirements-engineering process and environment based on requirements-engineering activities and their life cycle. This enables accurate elicitation and efficient management of requirements in a CSCW (Computer Supported Cooperative Work) environment, as well as a natural transition to the analysis phase.


RDNN: Rumor Detection Neural Network for Veracity Analysis in Social Media Text

  • SuthanthiraDevi, P;Karthika, S
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 12 / pp.3868-3888 / 2022
  • A widely used social networking service like Twitter can disseminate information to large groups of people, even during a pandemic. At the same time, it is a convenient medium for sharing irrelevant and unverified information online, and it poses a potential threat to society. In this research, conventional machine learning algorithms are analyzed to classify data as either non-rumor or rumor. Machine learning techniques have limited tuning capability and make decisions based on their learning. To tackle this problem, the authors propose a deep learning-based Rumor Detection Neural Network (RDNN) model to predict rumor tweets in real-world events. This model comprises three layers: an AttCNN layer to extract local and position-invariant features from the data, an AttBi-LSTM layer to extract important semantic and contextual information, and an HPOOL layer to combine the down-sampled patches of the input feature maps from the average and maximum pooling layers. A dataset from Kaggle and the ground dataset #gaja are used to train the proposed network to determine the veracity of rumors. The experimental results of the RDNN classifier demonstrate accuracies of 93.24% and 95.41% in identifying rumor tweets in real-time events.
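The HPOOL idea, combining average- and max-pooled patches of the same feature map, can be illustrated with a toy one-dimensional version (the actual layer operates on multi-dimensional feature maps inside the network; the function name and windowing here are hypothetical):

```python
def hpool(features, window=2):
    """Hybrid pooling: concatenate average- and max-pooled patches.

    features: a single 1-D feature map as a list of floats; pooling uses
    non-overlapping windows, dropping any incomplete trailing patch.
    """
    avg, mx = [], []
    for i in range(0, len(features) - window + 1, window):
        patch = features[i:i + window]
        avg.append(sum(patch) / len(patch))   # average-pooled patch
        mx.append(max(patch))                 # max-pooled patch
    return avg + mx                           # combined representation

out = hpool([1.0, 3.0, 2.0, 6.0])
# average patches [2.0, 4.0] followed by max patches [3.0, 6.0]
```

Concatenating both poolings keeps the smoothed signal from averaging alongside the salient peaks from the max operation.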

인공지능 리터러시 신장을 위한 인공지능 사고 기반 교육 프로그램 개발 및 효과 (Development and Effectiveness of an AI Thinking-based Education Program for Enhancing AI Literacy)

  • 이주영;원용호;신윤희
    • 공학교육연구 / Vol. 26, No. 3 / pp.12-19 / 2023
  • The purpose of this study is to develop an Artificial Intelligence thinking-based education program for improving AI literacy and to verify its effectiveness for beginners. The program consists of 17 sessions, was designed according to the "ABCDE" model, and is project-based. It was conducted with 51 first-year middle school students, and the responses of 36 students, after excluding missing values, were analyzed in R. The program's effect on the ethics, understanding, social competency, execution plan, data literacy, and problem-solving components of AI literacy is statistically significant and of very large practical significance. According to the results, the program provided learners encountering AI education for the first time with AI concepts and principles, the collection and analysis of information, and problem-solving processes applied to real life, and served as an opportunity to enhance AI literacy. In addition, education programs to enhance AI literacy should be designed based on AI thinking.

Application of ChatGPT text extraction model in analyzing rhetorical principles of COVID-19 pandemic information on a question-and-answer community

  • Hyunwoo Moon;Beom Jun Bae;Sangwon Bae
    • International journal of advanced smart convergence / Vol. 13, No. 2 / pp.205-213 / 2024
  • This study uses a large language model (LLM) to identify Aristotle's rhetorical principles (ethos, pathos, and logos) in COVID-19 information on Naver Knowledge-iN, South Korea's leading question-and-answer community. The research analyzed differences in these rhetorical elements between the most upvoted answers and random answers. A total of 193 answer pairs were randomly selected, with 135 pairs for training and 58 for testing. These answers were coded according to the rhetorical principles to fine-tune GPT-3.5-based models. The models achieved F1 scores of .88 (ethos), .81 (pathos), and .69 (logos). Subsequent analysis of 128 new answer pairs revealed that logos, particularly factual information and logical reasoning, was used more frequently in the most upvoted answers than in the random answers, whereas there were no differences in ethos and pathos between the answer groups. The results suggest that health information consumers value information containing logos, while ethos and pathos were not associated with consumers' preference for health information. By using an LLM to analyze persuasive content, which has typically been coded manually with much labor and time, this study not only demonstrates the feasibility of using an LLM for latent content analysis but also helps expand the horizon of AI text extraction.
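The per-principle scores above use the standard F1 definition over the coded labels. For reference, a minimal binary-F1 sketch (a generic definition, not the study's evaluation code; the example labels are hypothetical):

```python
def f1(y_true, y_pred):
    """Binary F1: harmonic mean of precision and recall over 0/1 labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. per-principle labels: 1 = "answer uses logos", 0 = "does not"
score = f1([1, 1, 1, 0, 0], [1, 1, 0, 1, 0])
```

With two true positives, one false positive, and one false negative, precision and recall are both 2/3, so F1 is 2/3.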

딥러닝을 이용한 한국어 Head-Tail 토큰화 기법과 품사 태깅 (Korean Head-Tail Tokenization and Part-of-Speech Tagging by using Deep Learning)

  • 김정민;강승식;김혁만
    • 대한임베디드공학회논문지 / Vol. 17, No. 4 / pp.199-208 / 2022
  • Korean is an agglutinative language in which one or more morphemes combine to form a single word, and part-of-speech tagging separates each morpheme from a word and attaches a part-of-speech tag. In this study, we propose a new Korean part-of-speech tagging method based on the Head-Tail tokenization technique, which divides a word into a lexical-morpheme part and a grammatical-morpheme part without decomposing compound words. The Head and Tail are divided at a syllable boundary, without restoring irregular deformations or abbreviated syllables. A Korean part-of-speech tagger was implemented using Head-Tail tokenization and deep learning. To address the low tagging accuracy caused by the large number of complex tags generated from the segmented tags, we reduced the tag set to complex tags composed of coarse classification tags, which improved tagging accuracy. The performance of the Head-Tail part-of-speech tagger was evaluated using BERT, syllable-bigram, and subword-bigram embeddings; both the syllable-bigram and subword-bigram embeddings improved performance over plain BERT. Part-of-speech tagging performed by integrating the Head-Tail tokenization model and the simplified part-of-speech tagging model achieved 98.99% word-unit accuracy and 99.08% token-unit accuracy. The experiments also showed that tagging performance improved when the maximum token length was limited to twice the number of words.
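The Head-Tail split can be illustrated with a toy rule-based version: split a word at the syllable boundary before a known grammatical suffix. This tiny particle list is purely illustrative; the paper's tagger learns the boundary with a deep model instead of matching a fixed list.

```python
# Hypothetical particle list for illustration only (longest match first).
PARTICLES = ["에서는", "에서", "은", "는", "이", "가", "을", "를", "에", "의", "로"]

def head_tail(word):
    """Split a word at a syllable boundary into (head, tail).

    head: lexical-morpheme part; tail: grammatical-morpheme part, or ""
    when no known particle ends the word.
    """
    for p in sorted(PARTICLES, key=len, reverse=True):
        if len(word) > len(p) and word.endswith(p):
            return word[:-len(p)], p
    return word, ""

pairs = [head_tail(w) for w in "학교에서는 공부를 한다".split()]
```

Note the split never restores irregular or abbreviated forms; the head and tail are plain syllable substrings of the surface word, matching the scheme described above.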