• Title/Summary/Keyword: domain engineering (도메인 공학)

Search Results: 467

Llama2 Cross-lingual Korean with instruction and translation datasets (지시문 및 번역 데이터셋을 활용한 Llama2 Cross-lingual 한국어 확장)

  • Gyu-sik Jang;Seung-Hoon Na;Joon-Ho Lim;Tae-Hyeong Kim;Hwi-Jung Ryu;Du-Seong Chang
    • Annual Conference on Human and Language Technology / 2023.10a / pp.627-632 / 2023
  • Large language models, built on massive computational power and vast amounts of data, show outstanding performance and have drawn attention in natural language processing. While these models can now handle text in many languages and domains, Korean still accounts for only a tiny fraction of their training data. As a result, large language models understand and process Korean noticeably less well than major languages such as English. Focusing on this problem, this paper proposes a method to improve the Korean processing ability of large language models. In particular, we explore cross-lingual transfer learning, transferring the model's knowledge of other languages to Korean to improve performance. Through this, the model substantially improved its Korean processing ability while minimizing losses on the other languages it had learned. In experiments, the model trained with this technique more than doubled the performance of the baseline on the NSMC dataset, with especially large gains on complex Korean structures and contextual understanding. We expect this work to contribute to better Korean adaptation of large language models.

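
The abstract includes no code, but the core idea of cross-lingual transfer — reusing what a model learned in well-resourced languages for Korean — can be illustrated with a much simpler classical stand-in: aligning two languages' word-embedding spaces with an orthogonal map learned from a seed lexicon (orthogonal Procrustes). This is a hedged sketch on synthetic data, not the paper's Llama2 training procedure.

```python
import numpy as np

def procrustes_align(src_vecs, tgt_vecs):
    """Orthogonal Procrustes: W = argmin ||src @ W.T - tgt||_F over orthogonal W."""
    u, _, vt = np.linalg.svd(tgt_vecs.T @ src_vecs)
    return u @ vt

# Toy embeddings: 4 seed word pairs in a 3-d space (hypothetical data).
rng = np.random.default_rng(0)
tgt = rng.normal(size=(4, 3))                   # e.g. Korean-side vectors
rot = np.linalg.qr(rng.normal(size=(3, 3)))[0]  # hidden rotation between spaces
src = tgt @ rot.T                               # e.g. English-side vectors

W = procrustes_align(src, tgt)
mapped = src @ W.T
print(np.allclose(mapped, tgt, atol=1e-8))      # True: the map is recovered
```

Because the SVD-based solution is exact in the orthogonal case, the toy rotation is recovered to numerical precision; real embedding spaces are only approximately related, so the alignment is approximate there.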

A study on Ontology Modelling for Autonomous Context Decision Logic in Vertical Farming (수직 농업 자율 컨텍스 결정을 위한 온톨로지 모델링에 관한 연구)

  • Young Goun Jin;Won Goo Lee
    • Smart Media Journal / v.13 no.6 / pp.72-79 / 2024
  • Vertical farming is one of the important solutions for overcoming future food and population problems. However, less developed countries cannot afford it due to high initial investment costs and technical hurdles. To address this, the vertical farming domain needs to be formalized using an ontology. In this paper, we present an ontology that covers the various cultivation methods of vertical farming, connects sensors and actuators according to each method, and recognizes and controls the cultivation-environment context of the selected vertical farm. By using the logical reasoning capability of the ontology to analyze the environmental context that matters for the perceived cultivation setup, we expect the system to make control decisions about that context autonomously.
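
The kind of context decision logic the abstract describes can be sketched, in plain Python rather than OWL, as rule-based inference over subject-predicate-object facts; the facts, property names, and the single rule below are hypothetical illustrations, not the paper's actual ontology model.

```python
# Hypothetical facts as (subject, predicate, object) triples, loosely in the
# spirit of ontology individuals and properties.
facts = {
    ("farm1", "usesMethod", "hydroponics"),
    ("farm1", "hasAirTemp", 31),
    ("hydroponics", "maxAirTemp", 28),
}

def value(subject, predicate):
    """Look up the object of the first matching triple."""
    return next(o for s, p, o in facts if s == subject and p == predicate)

def infer_actions(farm):
    """Forward-chain a single illustrative rule: if the sensed temperature
    exceeds the cultivation method's maximum, switch ventilation on."""
    method = value(farm, "usesMethod")
    if value(farm, "hasAirTemp") > value(method, "maxAirTemp"):
        return [(farm, "activate", "ventilation")]
    return []

print(infer_actions("farm1"))  # [('farm1', 'activate', 'ventilation')]
```

A real implementation would express such rules in OWL/SWRL and let a reasoner derive the actions; the triple-matching loop above only mimics that behavior.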

The Development of Converting Program from Sealed Geological Model to Gmsh, COMSOL for Building Simulation Grid (시뮬레이션 격자구조 제작을 위한 Mesh 기반 지질솔리드모델의 Gmsh, COMSOL 변환 프로그램 개발)

  • Lee, Chang Won;Cho, Seong-Jun
    • Journal of the Korean Earth Science Society / v.38 no.1 / pp.80-90 / 2017
  • To build a tetrahedral mesh for FEM numerical analysis, a Boundary Representation (B-Rep) model is required, which provides an efficient volume description of an object. In engineering, parametric solid modeling is used to build B-Rep models. Geological modeling, however, generally adopts discrete modeling based on triangulated surfaces, called a Sealed Geological Model, which defines the geological domain using geological interfaces such as horizons, faults, intrusives, and modeling boundaries. A discrete B-Rep model is incompatible with engineering mesh-generation software because of discrepancies between the discrete and parametric techniques. In this research we developed a program that converts a Sealed Geological Model for the Gmsh and COMSOL software. The program can convert a complex geological model built with geomodeling software for use in user-friendly FEM software, and it can be applied to geoscience simulations such as geothermal and rock-mechanics simulation.
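
The converter itself is not reproduced here, but the Gmsh side of the pipeline is well specified: Gmsh reads a plain-text MSH file. Below is a minimal, hedged sketch that serializes a tetrahedral mesh in the legacy MSH 2.2 ASCII layout (element type 4 is the 4-node tetrahedron); the physical and geometric tags are illustrative placeholders, not the paper's converter logic.

```python
def write_msh22(nodes, tets):
    """Serialize nodes [(x, y, z), ...] and tetrahedra [(n1, n2, n3, n4), ...]
    (1-based node ids) into a legacy Gmsh MSH 2.2 ASCII string."""
    lines = ["$MeshFormat", "2.2 0 8", "$EndMeshFormat", "$Nodes", str(len(nodes))]
    for i, (x, y, z) in enumerate(nodes, start=1):
        lines.append(f"{i} {x} {y} {z}")
    lines += ["$EndNodes", "$Elements", str(len(tets))]
    for i, tet in enumerate(tets, start=1):
        # elm-type 4 = tetrahedron; two tags: physical group 1, geometric entity 1
        lines.append(f"{i} 4 2 1 1 " + " ".join(map(str, tet)))
    lines += ["$EndElements", ""]
    return "\n".join(lines)

# One unit tetrahedron as a smoke test.
msh = write_msh22([(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)], [(1, 2, 3, 4)])
print(msh.splitlines()[0])  # $MeshFormat
```

A full converter additionally has to re-index the triangulated geological interfaces and tag each volume with its geological unit, which is where the discrete-versus-parametric mismatch described in the abstract shows up.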

Quickly Map Renewal through IPM-based Image Matching with High-Definition Map (IPM 기반 정밀도로지도 매칭을 통한 지도 신속 갱신 방법)

  • Kim, Duk-Jung;Lee, Won-Jong;Kim, Gi-Chang;Choi, Yun-Soo
    • Korean Journal of Remote Sensing / v.37 no.5_1 / pp.1163-1175 / 2021
  • In autonomous driving, road markings are essential for object tracking and path planning, and they can provide important information for localization. This paper presents an approach to updating and measuring road surface markings against an HD map, with matching based on inverse perspective mapping (IPM). IPM removes perspective effects from the vehicle's front-camera image and remaps it to the 2D domain, creating a bird's-eye-view region that can be fitted to HD map regions. In addition, letters and arrows as well as stop lines, crosswalks, dotted lines, and straight lines are recognized and compared to objects on the HD map to determine whether an update is needed. The position of a newly installed object can be obtained by referring to the measured positions of surrounding objects on the HD map. As a result, high-accuracy update results are obtained at very low computational cost, using only low-cost cameras and GNSS/INS sensors.
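
The IPM step described above is a plane-to-plane mapping: a 3x3 homography H takes an image pixel (u, v) to ground-plane coordinates after perspective division. A minimal sketch, with a made-up H standing in for one derived from camera calibration:

```python
import numpy as np

def ipm_points(H, pixels):
    """Map (u, v) image pixels to bird's-eye coordinates via homography H."""
    pts = np.hstack([pixels, np.ones((len(pixels), 1))])  # homogeneous coords
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                 # perspective divide

# Hypothetical homography: scale by 0.5 and shift, standing in for a
# calibrated camera-to-ground mapping.
H = np.array([[0.5, 0.0, 10.0],
              [0.0, 0.5, 20.0],
              [0.0, 0.0, 1.0]])
print(ipm_points(H, np.array([[100.0, 200.0]])))  # pixel (100, 200) -> (60, 120)
```

Warping a whole image rather than points is the same operation applied per pixel (e.g. OpenCV's `warpPerspective`); the bird's-eye result is what gets matched against the HD map regions.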

A Study on the Document Topic Extraction System for LDA-based User Sentiment Analysis (LDA 기반 사용자 감정분석을 위한 문서 토픽 추출 시스템에 대한 연구)

  • An, Yoon-Bin;Kim, Hak-Young;Moon, Yong-Hyun;Hwang, Seung-Yeon;Kim, Jeong-Joon
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.2 / pp.195-203 / 2021
  • Recently, big data, a major technology in the IT field, has been expanding into various industrial sectors, and research on how to use it is actively underway. In most Internet industries, user reviews help users decide whether to purchase a product. However, screening the positive, negative, and helpful reviews out of a vast number of product reviews takes a great deal of time when deciding on a purchase. Therefore, this paper designs and implements a system that analyzes and aggregates keywords using LDA, a big data analysis technique, to provide meaningful information to users. To extract document topics, this study crawls data from the domestic book industry as its domain and performs big data analysis. This helps buyers by providing comprehensive information about products based on the topics and sentiment words of user reviews; furthermore, the product's outlook can be identified through analysis of review trends.
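
As a rough illustration of the LDA analysis the system relies on, the snippet below implements a tiny collapsed Gibbs sampler over toy word-id documents. It is a didactic sketch with synthetic data, not the paper's analysis pipeline, which would run a production LDA implementation over crawled review text.

```python
import numpy as np

def lda_gibbs(docs, n_topics, n_vocab, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Tiny collapsed Gibbs sampler for LDA over docs given as word-id lists."""
    rng = np.random.default_rng(seed)
    ndk = np.zeros((len(docs), n_topics))   # doc-topic counts
    nkw = np.zeros((n_topics, n_vocab))     # topic-word counts
    nk = np.zeros(n_topics)                 # topic totals
    z = [rng.integers(n_topics, size=len(d)) for d in docs]
    for d, doc in enumerate(docs):          # seed counts from the random init
        for i, w in enumerate(doc):
            ndk[d, z[d][i]] += 1; nkw[z[d][i], w] += 1; nk[z[d][i]] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                 # remove the current assignment
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + n_vocab * beta)
                k = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k                 # resample and restore the counts
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return nkw  # top words per row are the topic keywords

# Two tiny "review" corpora over a 4-word vocabulary (hypothetical data):
# words 0/1 co-occur and words 2/3 co-occur, so two topics should separate them.
docs = [[0, 1, 0, 1, 0], [2, 3, 2, 3, 2]] * 5
nkw = lda_gibbs(docs, n_topics=2, n_vocab=4)
print(nkw.shape)  # (2, 4)
```

The topic-word count matrix is what a review-analysis system summarizes into per-topic keyword lists for buyers.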

Characterization of Heat Shock Protein 70 in Freshwater Snail, Semisulcospira coreana in Response to Temperature and Salinity (담수산다슬기, Semisulcospira coreana의 열충격단백질 유전자 특성 및 발현분석)

  • Park, Seung Rae;Choi, Young Kwang;Lee, Hwa Jin;Lee, Sang Yoon;Kim, Yi Kyung
    • Journal of Marine Life Science / v.5 no.1 / pp.17-24 / 2020
  • We have identified a heat shock protein 70 (HSP70) gene from the freshwater snail Semisulcospira coreana. The gene encodes a polypeptide of 639 amino acids. Bioinformatic sequence characterization showed that the HSP70 gene possesses the three classical signature motifs and other conserved residues essential for its function. Phylogenetic analysis showed that S. coreana HSP70 is most closely related to that of the golden apple snail, Pomacea canaliculata. HSP70 mRNA levels were significantly up-regulated in response to thermal and salinity challenges. These results agree with findings in other species, indicating that S. coreana HSP70 may serve as a molecular marker of response to external stressors, and that the regulatory process underlying the HSP70 transcriptional response may be highly conserved among species.
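
Signature-motif checks like the one described are typically pattern scans over the protein sequence. The sketch below converts PROSITE-style patterns to regular expressions and scans a sequence; the motif and the sequence are made-up placeholders for illustration, not the actual HSP70 signatures, which are curated in the PROSITE database.

```python
import re

def prosite_to_regex(pattern):
    """Convert a PROSITE-style pattern (e.g. '[LIVM]-D-x(2)-G') to a regex."""
    out = []
    for token in pattern.split("-"):
        token = token.replace("x", ".")               # x = any residue
        token = re.sub(r"\((\d+)\)", r"{\1}", token)  # x(2) -> .{2}
        out.append(token)
    return re.compile("".join(out))

def scan(sequence, patterns):
    """Return (name, start) for every motif hit in the protein sequence."""
    hits = []
    for name, pat in patterns.items():
        for m in prosite_to_regex(pat).finditer(sequence):
            hits.append((name, m.start()))
    return hits

# Placeholder motif and toy sequence, for illustration only.
patterns = {"toy_motif": "[LIVM]-D-x(2)-G-T-T"}
print(scan("MKAIDLLGTTNS", patterns))  # [('toy_motif', 3)]
```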

The Method for Colorizing SAR Images of Kompsat-5 Using Cycle GAN with Multi-scale Discriminators (다양한 크기의 식별자를 적용한 Cycle GAN을 이용한 다목적실용위성 5호 SAR 영상 색상 구현 방법)

  • Ku, Wonhoe;Chun, Daewon
    • Korean Journal of Remote Sensing / v.34 no.6_3 / pp.1415-1425 / 2018
  • Kompsat-5 is Korea's first Earth observation satellite equipped with a SAR. SAR images are generated by receiving the microwave signals, emitted from the SAR antenna, that are reflected back from objects. Because microwave wavelengths are longer than the size of atmospheric particles, SAR can penetrate clouds and fog, and high-resolution images can be obtained day or night. However, SAR images contain no color information. To overcome this limitation, we colorized SAR images using Cycle GAN, a deep learning model developed for domain translation. Training Cycle GAN is unstable because of its unsupervised learning on unpaired datasets. In this paper, we therefore propose MS Cycle GAN, which applies multi-scale discriminators to stabilize Cycle GAN training and improve colorization performance. To compare the colorization performance of MS Cycle GAN and Cycle GAN, images generated by both models were compared qualitatively and quantitatively. Training Cycle GAN with multi-scale discriminators significantly reduces the generator and discriminator losses compared to conventional Cycle GAN, and we found that the images generated by MS Cycle GAN match the characteristics of regions such as leaves, rivers, and land well.
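
The multi-scale discriminator idea — scoring the image at several resolutions so that both fine and coarse structure constrain the generator — can be sketched without a deep learning framework. The "discriminator" below is a toy patch scorer standing in for a real convolutional network, so the numbers are only illustrative.

```python
import numpy as np

def downsample2x(img):
    """2x2 average pooling (assumes even height and width)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def toy_patch_score(img):
    """Stand-in for a discriminator: mean response over 2x2 patches."""
    return float(downsample2x(img).mean())

def multiscale_score(img, n_scales=3):
    """Average the discriminator score over n_scales pyramid levels, the way a
    multi-scale discriminator combines per-scale losses."""
    scores = []
    for _ in range(n_scales):
        scores.append(toy_patch_score(img))
        img = downsample2x(img)
    return sum(scores) / len(scores)

img = np.arange(64.0).reshape(8, 8)  # toy 8x8 "image"
print(round(multiscale_score(img), 2))  # 31.5
```

In MS Cycle GAN each scale has its own learned discriminator, and the per-scale adversarial losses are combined in the training objective; the pyramid-and-combine structure is the part this sketch captures.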

Component Grid: A Developer-centric Environment for Defense Software Reuse (컴포넌트 그리드: 개발자 친화적인 국방 소프트웨어 재사용 지원 환경)

  • Ko, In-Young;Koo, Hyung-Min
    • Journal of Software Engineering Society / v.23 no.4 / pp.151-163 / 2010
  • In the defense software domain, where large-scale software products must be built for various application areas, software reuse is regarded as one of the key practices for building products efficiently and economically. There have been many efforts to apply various methods to support software reuse in this domain. However, developers in the defense software domain still experience many difficulties and obstacles in reusing software assets. In this paper, we analyze the practical problems of software reuse in the defense software domain and define the core requirements for solving them. To meet these requirements, we are currently developing the Component Grid system, a reuse-support system that provides a developer-centric software reuse environment. We have designed an architecture for Component Grid and defined the essential elements of that architecture. We have also developed the core approaches for the system: a semantic-tagging-based requirement tracing method, a reuse-knowledge representation model, a social-network-based asset search method, a web-based asset management environment, and a wiki-based collaborative and participative method for constructing and refining knowledge. We expect the Component Grid system to increase the reusability of software assets in the defense software domain by providing an environment that supports transparent and efficient sharing and reuse of software assets.

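
Of the approaches listed, the tag-based asset search is the easiest to illustrate: rank assets by the overlap (Jaccard similarity) between their semantic tags and the query tags. The asset names and tags below are hypothetical, and the real system additionally weights results by social-network signals.

```python
def jaccard(a, b):
    """Jaccard similarity between two tag sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def search_assets(assets, query_tags):
    """Rank reusable assets by semantic-tag overlap with the query tags."""
    ranked = sorted(assets.items(),
                    key=lambda kv: jaccard(kv[1], query_tags), reverse=True)
    return [name for name, tags in ranked if jaccard(tags, query_tags) > 0]

# Hypothetical defense-software assets and their semantic tags.
assets = {
    "radar-track-filter": {"tracking", "kalman", "sensor"},
    "msg-codec": {"serialization", "network"},
    "terrain-viewer": {"visualization", "gis"},
}
print(search_assets(assets, {"tracking", "sensor"}))  # ['radar-track-filter']
```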

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.111-136 / 2018
  • In this paper, we propose a methodology for extracting answers to queries from the various types of unstructured documents collected from multiple sources on the web, in order to expand a knowledge base. The methodology proceeds in the following steps: 1) for a query separated into subject and predicate, collect relevant documents from Wikipedia, Naver Encyclopedia, and Naver News, and select the proper documents; 2) determine whether each sentence is suitable for information extraction and derive a confidence score; 3) based on features of the predicate, extract the information from suitable sentences and derive the overall confidence of the extraction result. To evaluate the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker. Compared with the baseline, the system shows a higher performance index. The contribution of this study is a sequence tagging model based on bidirectional LSTM-CRF that uses predicate features of the query; with it we built a robust model that maintains high recall even on the varied document types collected from multiple sources. Information extraction for knowledge base expansion must account for the heterogeneous characteristics of source-specific document types, and the proposed methodology proved to extract information effectively across document types compared to the baseline, whereas previous research performed poorly when extracting from document types different from its training data.
    In addition, by predicting the suitability of documents and sentences before the extraction step, this study prevents unnecessary extraction attempts on documents that do not contain the answer, providing a way to maintain precision even in a real web environment. Because knowledge base expansion targets unstructured documents on the real web, there is no guarantee that a document contains the correct answer; when question answering is performed on the real web, previous machine reading comprehension studies show low precision because they frequently attempt to extract an answer even from documents with no correct answer. The policy of predicting document- and sentence-level suitability is meaningful in that it helps maintain extraction performance in that setting.
    The limitations of this study and directions for future research are as follows. First, data preprocessing: the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, and extraction can fail when the morphological analysis itself is wrong; improving the extraction results requires a more advanced morphological analyzer. Second, entity ambiguity: the system cannot distinguish different referents that share the same name, so if several people with the same name appear in the news, it may fail to extract information about the intended one; future research needs measures to identify which bearer of a name is meant. Third, evaluation query data: we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker, and built an evaluation data set of 800 documents (400 questions * 7 articles per question: 1 Wikipedia, 3 Naver Encyclopedia, 3 Naver News), judging whether each contains a correct answer. To ensure external validity, it is desirable to evaluate the system on more queries, which is a costly, manual activity; future research should evaluate the system on a larger query set. It is also necessary to develop a Korean benchmark data set for information extraction over queries on multi-source web documents, so that results can be evaluated more objectively.
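
The suitability-prediction policy — declining to extract from documents and sentences unlikely to contain the answer — can be sketched without the learned models. Both functions below are hypothetical stand-ins: a keyword check in place of the learned suitability classifier, and a next-word rule in place of the BiLSTM-CRF tagger.

```python
def suitable(sentence, subject, cue):
    """Stand-in for the suitability classifier: attempt extraction only if the
    sentence mentions the query subject and a cue phrase for the predicate."""
    return subject in sentence and cue in sentence

def extract(sentence, subject, cue):
    """Stand-in for the sequence tagger: take the word right after the cue."""
    if not suitable(sentence, subject, cue):
        return None  # filtered out: no forced answer from unsuitable text
    tail = sentence.split(cue, 1)[1].replace(".", "").split()
    return tail[0] if tail else None

# Hypothetical "subject-predicate" query: (Alan Turing, place-of-birth).
print(extract("Alan Turing was born in London.", "Alan Turing", "born in"))
print(extract("Alan Turing studied at Cambridge.",
              "Alan Turing", "born in"))  # None: filtered before extraction
```

The second call is the case the paper's policy targets: rather than forcing an answer out of a document without one (the low-precision failure mode of earlier reading-comprehension systems), the pipeline declines to extract.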

Disaster Assessment, Monitoring, and Prediction Using Remote Sensing and GIS (원격탐사를 이용한 재난 감시 및 예측과 GIS 분석)

  • Jung, Minyoung;Kim, Duk-jin;Sohn, Hong-Gyoo;Choi, Jinmu;Im, Jungho
    • Korean Journal of Remote Sensing / v.37 no.5_3 / pp.1341-1347 / 2021
  • The need for an effective disaster management system to protect public safety has grown as the number of disasters causing massive damage increases. Since disaster-induced damage can develop in various ways, rapid and accurate countermeasures must be prepared soon after a disaster occurs. Numerous studies have continuously developed remote sensing and GIS (Geographic Information System)-based techniques for disaster monitoring and damage analysis. This special issue presents research results on disaster prediction and monitoring based on various remote sensors, on platforms ranging from the ground to space, and on disaster management using GIS techniques. The developed techniques help manage disasters such as storms, floods, and forest fires, and can be combined into an integrated and effective disaster management system.