• Title/Summary/Keyword: AI


A Comparative study on the Effectiveness of Segmentation Strategies for Korean Word and Sentence Classification tasks (한국어 단어 및 문장 분류 태스크를 위한 분절 전략의 효과성 연구)

  • Kim, Jin-Sung; Kim, Gyeong-min; Son, Jun-young; Park, Jeongbae; Lim, Heui-seok
    • Journal of the Korea Convergence Society / v.12 no.12 / pp.39-47 / 2021
  • The construction of high-quality input features through effective segmentation is essential for improving a language model's sentence comprehension, and improving the quality of these features directly affects downstream task performance. This paper comparatively studies segmentation strategies that effectively reflect the linguistic characteristics of Korean for word and sentence classification. Four segmentation types are defined: eojeol, morpheme, syllable, and subchar, and pre-training is carried out using the RoBERTa model structure. By dividing the tasks into a sentence group and a word group, we analyze both the tendency within each group and the differences between the groups. The experimental results confirm the effectiveness of these schemes: the model with subchar-level segmentation outperforms the other strategies in sentence classification by up to +0.62% on NSMC, +2.38% on KorNLI, and +2.41% on KorSTS, while the model with syllable-level segmentation performs best in word classification by up to +0.7% on NER and +0.61% on SRL. A minimal sketch of the segmentation granularities is shown below.
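
The sketch below illustrates the segmentation granularities compared in the study on one example sentence. It is a minimal illustration only: the morpheme-level split is omitted because it requires a morphological analyzer (e.g. MeCab-ko), and the jamo decomposition is written out by hand rather than taken from the paper's pipeline.

```python
# Illustrative sketch of three of the four segmentation granularities (eojeol,
# syllable, subchar). Morpheme-level segmentation is not shown because it needs
# an external analyzer. The example sentence is hypothetical.

def eojeol_split(sentence: str) -> list[str]:
    # Eojeol: whitespace-delimited units, as written in Korean text.
    return sentence.split()

def syllable_split(sentence: str) -> list[str]:
    # Syllable: each Hangul syllable block becomes a token.
    return [ch for ch in sentence if not ch.isspace()]

def subchar_split(sentence: str) -> list[str]:
    # Subchar: decompose each syllable into jamo (initial/medial/final).
    CHO = "ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ"
    JUNG = "ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ"
    JONG = [""] + list("ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ")
    out = []
    for ch in sentence:
        code = ord(ch) - 0xAC00
        if 0 <= code < 11172:                      # Hangul syllable block
            cho, rest = divmod(code, 21 * 28)
            jung, jong = divmod(rest, 28)
            out.extend([CHO[cho], JUNG[jung]] + ([JONG[jong]] if JONG[jong] else []))
        elif not ch.isspace():
            out.append(ch)
    return out

sentence = "한국어 분류"
print(eojeol_split(sentence))    # ['한국어', '분류']
print(syllable_split(sentence))  # ['한', '국', '어', '분', '류']
print(subchar_split(sentence))   # ['ㅎ', 'ㅏ', 'ㄴ', 'ㄱ', 'ㅜ', 'ㄱ', ...]
```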

The Improvement Plan for Indicator System of Personal Information Management Level Diagnosis in the Era of the 4th Industrial Revolution: Focusing on Application of Personal Information Protection Standards linked to specific IT technologies (제4차 산업시대의 개인정보 관리수준 진단지표체계 개선방안: 특정 IT기술연계 개인정보보호기준 적용을 중심으로)

  • Shin, Young-Jin
    • Journal of Convergence for Information Technology / v.11 no.12 / pp.1-13 / 2021
  • This study sought to suggest ways to improve the indicator system in order to strengthen personal information protection. For this purpose, the components of the indicator system were derived from domestic and foreign literature, and the main diagnostic indicators were selected through FGI/Delphi analysis with personal information protection experts and a survey of personal information protection officers at public institutions. In this way, the study aimed to derive an inspection standard that can be reflected as a separate index system for personal information protection by classifying the specific IT technologies of the 4th industrial revolution, such as big data, cloud, the Internet of Things, and artificial intelligence. As a result, check items for applying the PbD principle, pseudonymous information processing, and de-identification measures from the planning and design stage of these technologies were selected as 2 common indicators, and the checklists consisted of 2 items related to big data, 5 related to cloud services, 5 related to IoT, and 4 related to AI. Accordingly, this study is expected to serve as an institutional device for responding to new technological changes and for the continuous development of the personal information management level diagnosis system in the future.

Analysis of the relationship between service robot and non-face-to-face

  • Hwang, Eui-Chul
    • Journal of the Korea Society of Computer and Information / v.26 no.12 / pp.247-254 / 2021
  • As COVID-19 spread, non-face-to-face activities became necessary, and the use of service robots is gradually increasing. This paper analyzed the trend of service robots before and after COVID-19 through a keyword search containing 'service robot AND non-face-to-face' over the past three years (2018.10-2021.9) using BigKinds, a news big data analysis system. As a result, there were 0 articles in the first period (2018.10~2019.9), 52 in the second period (2019.10~2020.9), and 112 in the third period (2020.10~2021.9), an increase of about 115% over the second period; a sketch of this period comparison follows below. The keywords commonly mentioned in the related-word analysis of the second and third periods were COVID-19, AI, the Ministry of Trade, Industry and Energy, and LG Electronics, with COVID-19 carrying the largest weight, confirming its close connection to the analysis keyword. As the spread of COVID-19 requires non-face-to-face services and information and communication technology develops, the fields in which service robots are applied are rapidly expanding. Accordingly, for the commercialization of service robots that will lead the non-face-to-face economy, there is an urgent need for standardization and for nurturing human resources with expertise in the safety and performance fields.
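
A small sketch of the period comparison described above, assuming article publication dates retrieved for the search query; the dates below are placeholders, not the actual BigKinds records.

```python
# Count how many matching articles fall into each 12-month window, then compute
# the period-over-period growth reported in the abstract. Dates are hypothetical.
from datetime import date

periods = {
    "Period 1 (2018.10-2019.9)": (date(2018, 10, 1), date(2019, 9, 30)),
    "Period 2 (2019.10-2020.9)": (date(2019, 10, 1), date(2020, 9, 30)),
    "Period 3 (2020.10-2021.9)": (date(2020, 10, 1), date(2021, 9, 30)),
}

article_dates = [date(2020, 3, 14), date(2020, 11, 2), date(2021, 5, 20)]  # placeholder data

counts = {
    label: sum(start <= d <= end for d in article_dates)
    for label, (start, end) in periods.items()
}
for label, n in counts.items():
    print(label, n)

# Growth from the second to the third period, using the counts in the abstract (52 -> 112):
print(round((112 - 52) / 52 * 100))   # ≈ 115 (% increase)
```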

Design of an Integrated University Information Service Model Based on Block Chain (블록체인 기반의 대학 통합 정보서비스 실증 모델 설계)

  • Moon, Sang Guk; Kim, Min Sun; Kim, Hyun Joo
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.2 / pp.43-50 / 2019
  • Block-chain offers technical advantages such as robust security, owing to its structural characteristic that forgery is practically impossible, decentralization through sharing the ledger among participants, and hyper-connectivity linking the Internet of Things, robots, and artificial intelligence. As a result, public organizations have highly positive attitudes toward adopting block-chain technology, and the design of university information services is no exception. Universities are also considering applying block-chain technology to the foundations on which they implement various information services. Through case studies of block-chain applications across various industries, this study designs an empirical model of an integrated information service platform that unifies the information systems within a university. A basic road map for university information services is constructed on block-chain technology, from the planning stage to the actual service design stage, and an empirical model of an integrated university information service is then designed on block-chain by applying this framework. A minimal sketch of the hash-chained ledger idea underlying this tamper resistance follows below.
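
The following is a minimal hash-chained ledger sketch showing why records on a block-chain are tamper-evident. It illustrates only the underlying data structure, not the integrated service model designed in the paper, and the record fields are hypothetical.

```python
# Each block stores the hash of its predecessor, so altering any earlier record
# invalidates every later hash. Illustration only; not the paper's architecture.
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list[dict], record: dict) -> None:
    prev = chain[-1] if chain else None
    chain.append({
        "index": len(chain),
        "timestamp": time.time(),
        "record": record,                                  # e.g. a certificate entry (hypothetical)
        "prev_hash": block_hash(prev) if prev else "0" * 64,
    })

def verify(chain: list[dict]) -> bool:
    # Recompute every predecessor hash; any edit to an earlier block breaks the chain.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

ledger: list[dict] = []
append_block(ledger, {"student": "S001", "service": "enrollment certificate"})
append_block(ledger, {"student": "S002", "service": "transcript issuance"})
print(verify(ledger))                      # True
ledger[0]["record"]["student"] = "S999"    # tamper with an earlier record
print(verify(ledger))                      # False: tampering detected
```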

Detecting and Avoiding Dangerous Area for UAVs Using Public Big Data (공공 빅데이터를 이용한 UAV 위험구역검출 및 회피방법)

  • Park, Kyung Seok; Kim, Min Jun; Kim, Sung Ho
    • KIPS Transactions on Software and Data Engineering / v.8 no.6 / pp.243-250 / 2019
  • Because a moving UAV carries considerable potential and kinetic energy, a fall to the ground can cause a severe impact and lead to human casualties. In this paper, areas of high population density along the UAV flight path are therefore defined as dangerous areas. Conventional UAV path flight is a passive form in which the UAV follows a path preset by the user before the flight. Some UAVs include safety features such as an obstacle avoidance system during flight, but it is still difficult to respond to changes in the real-time flight environment. Using public big data for UAV path flight can improve the response to real-time changes by enabling detection of dangerous areas and avoidance of those areas. Therefore, this paper proposes a method to detect and avoid dangerous areas for UAVs by utilizing big data collected in real-time; a simplified sketch of the detect-and-bypass idea follows below. When a route is designated according to the destination by the proposed method, dangerous areas are determined in real-time and the flight follows the optimal bypass path. In further research, we will study ways to improve the quality of the images acquired while flying under the avoidance flight plan.
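
A simplified sketch of the detect-and-bypass idea, assuming the public big data has already been reduced to a per-cell population density grid; the grid, threshold, and breadth-first re-routing are illustrative choices, not the paper's actual algorithm.

```python
# Mark grid cells above a density threshold as dangerous and re-route around them
# with a simple breadth-first search. All values below are hypothetical.
from collections import deque

DENSITY_THRESHOLD = 500          # persons per cell; assumed cutoff for "dangerous"

def dangerous_cells(density_grid):
    return {
        (r, c)
        for r, row in enumerate(density_grid)
        for c, value in enumerate(row)
        if value >= DENSITY_THRESHOLD
    }

def bypass_path(start, goal, density_grid):
    # Breadth-first search over grid cells, skipping dangerous ones.
    rows, cols = len(density_grid), len(density_grid[0])
    blocked = dangerous_cells(density_grid)
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and nxt not in blocked and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None                   # no safe route found

grid = [
    [10, 20, 900, 15],
    [12, 30, 950, 20],
    [14, 25, 40, 18],
]
print(bypass_path((0, 0), (0, 3), grid))  # route detours around the dense column
```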

An Analysis on Determinants of the Capesize Freight Rate and Forecasting Models (케이프선 시장 운임의 결정요인 및 운임예측 모형 분석)

  • Lim, Sang-Seop; Yun, Hee-Sung
    • Journal of Navigation and Port Research / v.42 no.6 / pp.539-545 / 2018
  • In recent years, research on shipping market forecasting with non-linear AI models has attracted significant interest. In previous studies, input variables were selected with reference to past papers or by relying on the intuition of the researchers. This paper addresses this issue by applying a stepwise regression model and a random forest model to the Capesize bulk carrier market, chosen for the simplicity of its supply and demand structure. The preliminary selection of determinants resulted in 16 variables. In the next stage, 8 features from the stepwise regression model and 10 features from the random forest model were screened as important determinants, and the chosen variables were used to test both models; a sketch of the two screening approaches is given below. Based on the analysis, the random forest model outperforms the stepwise regression model. This research is significant because it provides a scientific basis for finding the determinants in shipping market forecasting and for utilizing a machine-learning model in the process. The results can enhance the decisions of chartering desks by offering a guideline for market analysis.
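
A minimal sketch of the two feature-screening approaches on placeholder data; the 16 candidate determinants and the actual freight-rate series are not reproduced, and the forward-stepwise routine is a generic AIC-based variant rather than the paper's exact procedure.

```python
# Screen candidate determinants with (1) random forest feature importances and
# (2) forward stepwise regression. Data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 16)),
                 columns=[f"determinant_{i}" for i in range(16)])
y = 2.0 * X["determinant_0"] - 1.5 * X["determinant_3"] + rng.normal(scale=0.5, size=200)

# Random forest: rank candidate determinants by impurity-based importance.
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
rf_ranking = pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)
print(rf_ranking.head(10))

# Forward stepwise regression: greedily add the regressor that most improves AIC.
def forward_stepwise(X, y):
    selected, remaining, best_aic = [], list(X.columns), np.inf
    while remaining:
        scores = {c: sm.OLS(y, sm.add_constant(X[selected + [c]])).fit().aic for c in remaining}
        cand, aic = min(scores.items(), key=lambda kv: kv[1])
        if aic >= best_aic:
            break
        selected.append(cand)
        remaining.remove(cand)
        best_aic = aic
    return selected

print(forward_stepwise(X, y))
```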

Development of a Dynamic Downscaling Method for the Regional Climate Model (MM5) Using a General Circulation Model (CCSM3) (전지구 모델(CCSM3)을 이용한 지역기후 모델(MM5)의 역학적 상세화 기법 개발)

  • Choi, Jin-Young; Song, Chang-Geun; Lee, Jae-Bum; Hong, Sung-Chul; Bang, Cheol-Han
    • Journal of Climate Change Research / v.2 no.2 / pp.79-91 / 2011
  • In order to study interactions between climate change and air quality, a modeling system including a downscaling scheme was developed in an integrated manner. This research focuses on the development of a downscaling method that uses CCSM3 outputs as the initial and boundary conditions for the regional climate model MM5. Horizontal and vertical interpolation was performed to convert from the latitude/longitude and hybrid vertical coordinates of CCSM3 to the Lambert-Conformal Arakawa-B and sigma vertical coordinates of MM5. Variable diagnosis was carried out to map the differing variables and units between CCSM and MM5. To evaluate the dynamic downscaling performance, spatial distributions were compared between the CCSM/MM5 and NRA/MM5 outputs and statistical analysis was conducted. The temperature and precipitation patterns of CCSM/MM5 in summer and winter were similar to those of the observation data over East Asia and the Korean Peninsula. In addition, the statistical analysis showed that the agreement index (AI) exceeded 0.9 and the correlation coefficient was about 0.9 (both statistics are sketched below). These results indicate that the dynamic downscaling system built in this study can be used for research on the interaction between climate change and air quality.
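
A short sketch of the two evaluation statistics, assuming the agreement index refers to Willmott's index of agreement, the usual choice when comparing model output with observations; the sample arrays are placeholders.

```python
# Willmott's index of agreement and Pearson correlation between model output and
# observations. The arrays below are placeholder values, not the study's data.
import numpy as np

def agreement_index(pred: np.ndarray, obs: np.ndarray) -> float:
    # d = 1 - sum((P - O)^2) / sum((|P - mean(O)| + |O - mean(O)|)^2), bounded in [0, 1]
    obs_mean = obs.mean()
    return 1.0 - np.sum((pred - obs) ** 2) / np.sum(
        (np.abs(pred - obs_mean) + np.abs(obs - obs_mean)) ** 2
    )

def pearson_r(pred: np.ndarray, obs: np.ndarray) -> float:
    return float(np.corrcoef(pred, obs)[0, 1])

obs = np.array([270.1, 275.4, 281.2, 288.9, 293.3])    # e.g. observed temperatures (K)
pred = np.array([271.0, 274.8, 282.0, 287.5, 294.1])   # e.g. downscaled model output (K)
print(agreement_index(pred, obs), pearson_r(pred, obs))
```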

Research Analysis in Automatic Fake News Detection (자동화기반의 가짜 뉴스 탐지를 위한 연구 분석)

  • Jwa, Hee-Jung; Oh, Dong-Suk; Lim, Heui-Seok
    • Journal of the Korea Convergence Society / v.10 no.7 / pp.15-21 / 2019
  • Research in detecting fake information gained a lot of interest after the 2016 US presidential election. Information from unknown sources is produced in the shape of news, and its rapid spread is fueled by public interest in stimulating and provocative issues. In addition, the wide use of mass communication platforms such as social network services makes this phenomenon worse. The Poynter Institute created the International Fact-Checking Network (IFCN) to provide guidelines for fact-checking by skilled professionals and to release a "Code of Ethics" for fact-checking agencies. However, this type of approach is costly because of the large number of experts required to test the authenticity of each article. Therefore, research in automated fake news detection technology that can identify fake news efficiently is gaining more attention. In this paper, we survey fake news detection systems and research that are developing rapidly, mainly thanks to recent advances in deep learning. In addition, we organize the shared tasks and training corpora that have been released in various forms, so that researchers can easily participate in this field, which deserves substantial research effort.

Development of T2DM Prediction Model Using RNN (RNN을 이용한 제2형 당뇨병 예측모델 개발)

  • Jang, Jin-Su; Lee, Min-Jun; Lee, Tae-Ro
    • Journal of Digital Convergence / v.17 no.8 / pp.249-255 / 2019
  • Type 2 diabetes mellitus (T2DM) is a metabolic disorder characterized by hyperglycemia; it causes many complications and requires long-term treatment, resulting in massive medical expenses each year. There have been many studies addressing this problem, but existing studies have been limited in accuracy because they learn from and predict data at a single time point. This study therefore proposes a model using an RNN to increase the accuracy of T2DM prediction. The model is built on the Korean Genome and Epidemiology Study (Ansan and Anseong, Korea), and all of the data over time were used to train the prediction model; a minimal sketch of such a sequential model is given below. To verify the results, the accuracy was compared with existing machine learning methods: LR, k-NN, and SVM. The proposed model achieved an accuracy of 0.92 and an AUC of 0.92, both higher than the other methods. Therefore, predicting the onset of T2DM with the proposed model could lead to a healthier lifestyle and better hyperglycemia control, lowering the risk of diabetes through early alerts of likely diabetes occurrence.
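
A minimal sketch of an RNN-based classifier over longitudinal examination records, showing how repeated measurements (rather than a single time point) enter the model; the GRU architecture, feature count, and data are illustrative assumptions, not the paper's exact configuration.

```python
# Binary onset classifier over sequences of health examinations. All data below
# are random placeholders, not the KoGES variables used in the study.
import torch
import torch.nn as nn

class T2DMRNN(nn.Module):
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time steps, features)
        _, h = self.rnn(x)                # h: (1, batch, hidden), last hidden state
        return self.head(h.squeeze(0))    # logit for T2DM onset

# Placeholder data: 64 subjects, 6 biennial examinations, 8 clinical features each.
x = torch.randn(64, 6, 8)
y = torch.randint(0, 2, (64, 1)).float()

model = T2DMRNN(n_features=8)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(5):                        # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
print(float(loss))
```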

A Study on the Investigation and Removal of the Causes of Blackening in Waterlogged Archaeological Wood (수침고목재의 흑화 원인과 제거방법에 관하여)

  • Yang, Seok-jin
    • Korean Journal of Heritage: History & Science / v.40 / pp.413-430 / 2007
  • This study analyzed the foreign substances in waterlogged archaeological wood and the compounds in the soil where the wood was buried, in order to examine the relationship between the burial environment and those foreign substances. XRF (X-ray fluorescence spectroscopy) and EDX (energy-dispersive X-ray) analyses were conducted to examine the role of iron (Fe) in blackening the waterlogged wood. The XRF results showed that the investigated soil contained Si, Al, and Fe, while in the EDX analysis the wood ash contained more sulfur and Fe than any other elements. Cellulose and hemicellulose were significantly reduced at the surface of the wood, which is the blackened part, and the foreign substances changed the surface color. These problems could be addressed by removing the foreign substances from the waterlogged archaeological wood with EDTA (ethylenediaminetetraacetic acid). The optimum conditions for removing Fe with EDTA were investigated by measuring the concentration of extracted Fe at various concentrations of EDTA-2Na. The optimum pH of EDTA-2Na was found to be 4.1 to 4.3, and the extracted Fe concentration increased with the EDTA concentration; at 0.4 wt% EDTA-2Na, about 60 ppm of Fe was removed, stabilizing after 48 hours. For EDTA-3Na, the optimum pH was 7 to 8 and about 10 ppm of Fe was removed at 0.4 wt%; for EDTA-4Na, the optimum pH was 10 to 11 and about 20 ppm of Fe was removed at 0.4 wt%. In conclusion, the iron in the waterlogged archaeological wood was removed by the EDTA treatment, which increased the whiteness of the surface.