• Title/Summary/Keyword: artificial intelligence-based model

Search results: 1,215

A Study on Korean Poetry Generation System Based on Artificial Intelligence (인공지능 기반 한국어 시 생성 시스템 개발 연구)

  • Myung-sun Kim;Woo-Hyuk Jung;Jihwan Woo
    • Information Systems Review / v.25 no.3 / pp.43-57 / 2023
  • In this study, we developed an AI-based system that generates sentences to assist in creating Korean poetry. Rather than replacing the creative act of composition, regarded as a uniquely human domain, the system focuses on efficiently generating foundational sentences that stimulate human imagination. Drawing on interviews with poets, the researchers extracted sentences from eight distinct datasets, enabling the generation of poetry across eight different genres. This study stands out for developing a method for crafting literary works in Korean, and its significance lies in its potential to support the creation of other literary forms such as essays, prose, and novels.
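As an illustrative aside (the authors' code is not published in the abstract): the sentence-generation step described above could be sketched with the Hugging Face transformers library and a public Korean GPT-2 checkpoint. The checkpoint name and prompt below are assumptions standing in for the paper's genre-specific models.

```python
# Minimal sketch of prompt-based seed-sentence generation for poetry writing.
# Assumption: the public checkpoint "skt/kogpt2-base-v2" stands in for the
# authors' (unpublished) genre-specific generation models.
from transformers import pipeline

generator = pipeline("text-generation", model="skt/kogpt2-base-v2")

prompt = "밤하늘의 별은"  # "The stars in the night sky..." - a genre-flavored seed
candidates = generator(
    prompt,
    max_new_tokens=30,        # short foundational sentences, not full poems
    num_return_sequences=3,   # several candidates for the poet to choose from
    do_sample=True,
    top_p=0.92,
)
for c in candidates:
    print(c["generated_text"])
```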

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • Development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research is actively conducted in fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s this research has focused on cognitive problems related to human intelligence, such as learning and problem solving, and thanks to recent interest in the technology and work on various algorithms the field has advanced more than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to let AI agents make decisions using machine-readable, processable knowledge constructed from the complex, informal human knowledge and rules of various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and is now often combined with statistical AI such as machine learning. Increasingly, knowledge bases also serve to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data, and they support intelligent processing in AI applications such as the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still demands considerable expert effort. Much recent research in knowledge-based AI uses DBpedia, one of the largest knowledge bases, which extracts structured content from the various information in Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but its most useful knowledge comes from Wikipedia infoboxes, user-created summaries of an article's unifying aspects. That knowledge is generated by mapping rules between infobox structures and the DBpedia ontology schema, defined in the DBpedia Extraction Framework; because it is derived from semi-structured, user-created infobox data, it can be expected to be highly accurate. However, since only about 50% of all pages in Korean Wikipedia contain an infobox, DBpedia is limited in terms of knowledge scalability. This paper proposes a machine learning method for extracting knowledge from text documents according to the ontology schema. To demonstrate its appropriateness, we describe a knowledge extraction model that learns from Wikipedia infoboxes according to the DBpedia ontology schema. The model consists of three steps: classifying a document into ontology classes, classifying the sentences suitable for triple extraction, and selecting values and transforming them into RDF triples. Wikipedia infoboxes are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify an input document into infobox categories, i.e., ontology classes; we then classify each sentence as relevant or not to the attributes of that class; finally, we extract knowledge from the relevant sentences and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, covering about 200 classes and about 2,500 relations. We also ran comparative experiments between CRF and Bi-LSTM-CRF models for the knowledge extraction step. The proposed process makes structured knowledge available by extracting it from text documents according to the ontology schema, and it can significantly reduce the expert effort needed to construct instances conforming to the schema.
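As an illustrative aside, here is a minimal sketch of the BIO-tagged sequence-labeling step using the CRF baseline the paper compares against Bi-LSTM-CRF. The toy sentence, features, and the sklearn-crfsuite library are assumptions; the paper trains on Wikipedia dumps with its own feature set.

```python
# Minimal sketch of BIO-tagged sequence labeling for triple extraction,
# using a plain CRF as in the paper's baseline comparison.
import sklearn_crfsuite  # pip install sklearn-crfsuite

def token_features(sent, i):
    """Simple per-token features; the paper's actual feature set is unstated."""
    return {
        "word": sent[i],
        "is_title": sent[i].istitle(),
        "prev": sent[i - 1] if i > 0 else "<BOS>",
        "next": sent[i + 1] if i < len(sent) - 1 else "<EOS>",
    }

# One toy sentence BIO-tagged for a hypothetical "birthPlace" attribute.
sents = [["Kim", "was", "born", "in", "Seoul", "."]]
tags = [["O", "O", "O", "O", "B-birthPlace", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, tags)
print(crf.predict(X))  # spans tagged B-/I- become RDF triple values
```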

Toward a Possibility of the Unified Model of Cognition (통합적 인지 모형의 가능성)

  • Rhee Young-Eui
    • Journal of Science and Technology Studies / v.1 no.2 s.2 / pp.399-422 / 2001
  • Models of human cognition currently discussed in cognitive science are not fully adequate. The symbolic model of traditional artificial intelligence works for reasoning and problem-solving tasks but does not fit pattern recognition such as letter/sound recognition. Connectionism shows the contrary pattern: connectionist systems have proven very strong at pattern recognition but weak at most logical tasks. Brooks' situated action theory denies the notion of representation presupposed by both traditional artificial intelligence and connectionism, and suggests a subsumption model based on perceptions coming from the real world. However, situated action theory has not been well applied to human cognition so far either. Emphasizing these characteristics, I refer to the models as the 'left-brain model', 'right-brain model', and 'robot model', respectively. After examining them in terms of substantial aspects of cognition (mental state, mental procedure, basic element of cognition, rule of cognition, appropriate level of analysis, and architecture of cognition), I draw three arguments of embodiment. I suggest a way of unifying the existing models by examining their theoretical compatibility as found in those arguments.

Predicting the buckling load of smart multilayer columns using soft computing tools

  • Shahbazi, Yaser;Delavari, Ehsan;Chenaghlou, Mohammad Reza
    • Smart Structures and Systems / v.13 no.1 / pp.81-98 / 2014
  • This paper presents the elastic buckling analysis of smart lightweight column structures integrated with a pair of surface piezoelectric layers, using artificial intelligence. Finite element models of the smart lightweight columns are built in ANSYS® software, and the first buckling load of the structure is calculated using eigenvalue buckling analysis. To verify the accuracy of the finite element analysis, a comparison study is carried out against the literature. Parametric studies are then performed for variations in length, width, and thickness of the elastic core and of the piezoelectric outer layers, and the associated buckling-load data sets for artificial intelligence are gathered. Finally, soft computing methods including an artificial neural network (ANN), a fuzzy inference system (FIS), and an adaptive neuro-fuzzy inference system (ANFIS) are applied. A comparative study is then made between these soft computing methods, and the performance of the models is evaluated using statistical measures. The comparison reveals that the ANFIS model with a Gaussian membership function predicts the buckling load of smart lightweight columns with high accuracy, providing better predictions than the other methods, although the results obtained from the feed-forward ANN model are also accurate and reliable.
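As an illustrative aside, a minimal sketch of the ANN branch: a feed-forward regressor mapping column geometry to the first buckling load. The synthetic target below (with a rough Euler-like 1/L² trend) is an assumption standing in for the paper's ANSYS eigenvalue-buckling data sets.

```python
# Minimal sketch: feed-forward ANN regression of buckling load on geometry.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Features: length, width, core thickness, piezo-layer thickness (assumed ranges, SI units)
X = rng.uniform([0.5, 0.02, 0.002, 0.0005], [2.0, 0.10, 0.010, 0.0020], (500, 4))
# Placeholder target with an Euler-like w*t^3/L^2 trend, standing in for FE results.
y = 1e9 * X[:, 1] * X[:, 2] ** 3 / X[:, 0] ** 2 + rng.normal(0, 0.1, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ann = make_pipeline(StandardScaler(), MLPRegressor((16, 16), max_iter=3000, random_state=0))
ann.fit(X_tr, y_tr)
print(f"held-out R^2: {ann.score(X_te, y_te):.3f}")
```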

Multi-communication layered HPL model and its application to GPU clusters

  • Kim, Young Woo;Oh, Myeong-Hoon;Park, Chan Yeol
    • ETRI Journal / v.43 no.3 / pp.524-537 / 2021
  • High-performance Linpack (HPL) is among the most popular benchmarks for evaluating the capabilities of computing systems and has been used as a standard to compare the performance of computing systems since the early 1980s. In the initial system-design stage, it is critical to estimate the capabilities of a system quickly and accurately. However, the original HPL mathematical model based on a single core and single communication layer yields varying accuracy for modern processors and accelerators comprising large numbers of cores. To reduce the performance-estimation gap between the HPL model and an actual system, we propose a mathematical model for multi-communication layered HPL. The effectiveness of the proposed model is evaluated by applying it to a GPU cluster and well-known systems. The results reveal performance differences of 1.1% on a single GPU. The GPU cluster and well-known large system show 5.5% and 4.1% differences on average, respectively. Compared to the original HPL model, the proposed multi-communication layered HPL model provides performance estimates within a few seconds and a smaller error range from the processor/accelerator level to the large system level.
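As an illustrative aside, the classic single-communication-layer HPL estimate the paper refines can be sketched as flops-over-aggregate-peak plus a per-panel communication term. The formula and parameter values below are a textbook-style assumption; the paper's multi-layer model itself is not reproduced in the abstract.

```python
# Toy single-communication-layer HPL time estimate: LU flops over aggregate
# peak plus a per-panel broadcast term.
def hpl_time_estimate(n, p, q, peak_flops, bandwidth, latency, nb):
    """Rough wall time for an n x n HPL run on a p x q process grid."""
    flops = (2 / 3) * n**3 + (3 / 2) * n**2                  # LU factorization work
    compute = flops / (p * q * peak_flops)                   # ideally parallel compute
    panels = n / nb                                          # number of panel steps
    comm = panels * (latency * (p + q) + (n * nb * 8) / bandwidth)  # panel broadcasts
    return compute + comm

t = hpl_time_estimate(n=100_000, p=4, q=4, peak_flops=5e12,
                      bandwidth=12.5e9, latency=2e-6, nb=256)
print(f"estimated {((2 / 3) * 100_000**3) / t / 1e9:.0f} GFLOP/s over {t:.1f} s")
```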

A System Engineering Approach to Predict the Critical Heat Flux Using Artificial Neural Network (ANN)

  • Wazif, Muhammad;Diab, Aya
    • Journal of the Korean Society of Systems Engineering / v.16 no.2 / pp.38-46 / 2020
  • The accurate measurement of critical heat flux (CHF) in flow boiling is important for the safety requirements of nuclear power plants, to prevent sharp degradation of the convective heat transfer between the surface of the fuel rod cladding and the reactor coolant. In this paper, a systems engineering approach is used to develop a model that predicts the CHF using machine learning. The model is built using an artificial neural network (ANN) and is trained, tested, and validated using a pre-existing database covering different flow conditions. The Talos library is used to tune the model by optimizing the hyperparameters and selecting the best network architecture. Once developed, the ANN model can predict the CHF based solely on a set of input parameters (pressure, mass flux, quality, and hydraulic diameter) without resorting to any physics-based model. The developed model is intended to be used to predict the DNBR under a large break loss of coolant accident (LBLOCA) in APR1400. The systems engineering approach proved very helpful in planning and managing the work efficiently and effectively.
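As an illustrative aside, a minimal Keras sketch of an ANN mapping the four stated inputs to CHF. Layer sizes and training settings are placeholders of the kind the paper selects via Talos, and the random data stands in for the pre-existing CHF database.

```python
# Minimal sketch: feed-forward ANN regressing CHF on four flow parameters.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(4,)),           # pressure, mass flux, quality, D_h
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1),                    # predicted CHF
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])

X = np.random.rand(1000, 4).astype("float32")   # placeholder inputs
y = np.random.rand(1000, 1).astype("float32")   # placeholder CHF values
model.fit(X, y, validation_split=0.2, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:3], verbose=0).ravel())
```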

ConvXGB: A new deep learning model for classification problems based on CNN and XGBoost

  • Thongsuwan, Setthanun;Jaiyen, Saichon;Padcharoen, Anantachai;Agarwal, Praveen
    • Nuclear Engineering and Technology / v.53 no.2 / pp.522-531 / 2021
  • We describe a new deep learning model, Convolutional eXtreme Gradient Boosting (ConvXGB), for classification problems based on convolutional neural nets and Chen et al.'s XGBoost. In addition to image data, ConvXGB supports general classification problems via a data preprocessing module. ConvXGB consists of several stacked convolutional layers that learn features of the input automatically, followed by XGBoost in the last layer to predict the class labels. The model is simplified by reducing the number of parameters under appropriate conditions, since it is not necessary to re-adjust the weight values in a backpropagation cycle. Experiments on several data sets from the UCI Repository, including image and general data sets, showed that our model handled the classification problems for all tested data sets slightly better than CNN or XGBoost alone, and was sometimes significantly better.
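As an illustrative aside, a minimal sketch of the ConvXGB idea: stacked convolutional layers produce features and XGBoost serves as the final classification "layer", so the convolutional weights need no backpropagation fine-tuning. Shapes and hyperparameters below are assumptions, not the paper's configuration.

```python
# Sketch: conv layers as a fixed feature extractor, XGBoost as the classifier.
import numpy as np
from tensorflow import keras
from xgboost import XGBClassifier

feature_net = keras.Sequential([
    keras.layers.Input(shape=(28, 28, 1)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.GlobalAveragePooling2D(),     # one feature vector per image
])

X_img = np.random.rand(200, 28, 28, 1).astype("float32")  # placeholder images
y = np.random.randint(0, 10, 200)                          # placeholder labels

features = feature_net.predict(X_img, verbose=0)           # (200, 32) features
clf = XGBClassifier(n_estimators=100, max_depth=4)         # final "layer"
clf.fit(features, y)
print(clf.predict(features[:5]))
```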

Research on the evaluation model for the impact of AI services

  • Soonduck Yoo
    • International Journal of Internet, Broadcasting and Communication / v.15 no.3 / pp.191-202 / 2023
  • This study proposes a framework for evaluating the impact of artificial intelligence (AI) services, based on the concept of AI service impact, along with a model for evaluating that impact and the relevant factors and measurement approaches for each item of the model. The study classifies the impact of AI services into five categories: ethics, safety and reliability, compliance, user rights, and environmental friendliness. It discusses these five categories from a broad perspective and provides 21 detailed factors for evaluating them. In the ethics category, the study introduces three factors (accessibility, openness, and fairness) in addition to the ten items initially developed by KISDI. In the safety and reliability category, it excludes factors such as dependability, policy, compliance, and awareness improvement, as these are better addressed from a technical perspective. The compliance category includes human rights protection, privacy protection, non-infringement, publicness, accountability, safety, transparency, policy compliance, and explainability. For the user rights category, the study excludes publicness, data management, policy compliance, awareness improvement, recoverability, openness, and accuracy. The environmental friendliness category encompasses diversity, publicness, dependability, transparency, awareness improvement, recoverability, and openness. This study lays the foundation for further research and contributes to the establishment of relevant policies by providing a model for evaluating the impact of AI services. Future research should assess the validity of the developed indicators and, based on expert evaluations, provide specific evaluation items for practical use.
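As an illustrative aside, the five-category model could be represented as a simple scorecard. The category names follow the paper, but the factor lists and equal weighting below are abbreviated assumptions, not the paper's full 21-factor instrument.

```python
# Sketch: the five evaluation categories as a weighted scorecard.
CATEGORIES = {
    "ethics": ["accessibility", "openness", "fairness"],
    "safety_and_reliability": ["robustness", "controllability"],
    "compliance": ["privacy_protection", "accountability", "explainability"],
    "user_rights": ["human_rights_protection", "transparency"],
    "environmental_friendliness": ["diversity", "recoverability"],
}

def impact_score(ratings):
    """Average each category's 0-5 factor ratings, then average the categories."""
    per_category = [sum(r.values()) / len(r) for r in ratings.values()]
    return sum(per_category) / len(per_category)

# Example: rate every factor of one AI service at 3.0 and aggregate.
ratings = {cat: {f: 3.0 for f in fs} for cat, fs in CATEGORIES.items()}
print(impact_score(ratings))  # -> 3.0
```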

A Study on Artificial Intelligence Models for Predicting the Causes of Chemical Accidents Using Chemical Accident Status and Case Data (화학물질 사고 현황 및 사례 데이터를 이용한 인공지능 사고 원인 예측 모델에 관한 연구)

  • KyungHyun Lee;RackJune Baek;Hyeseong Jung;WooSu Kim;HeeJeong Choi
    • The Journal of the Convergence on Culture Technology / v.10 no.5 / pp.725-733 / 2024
  • This study develops an artificial intelligence-based model for predicting the causes of chemical accidents, using data on 865 chemical accident situations and cases provided by the Chemical Safety Agency under the Ministry of Environment from January 2014 to January 2024. The data were used to train six artificial intelligence models, whose evaluation metrics (accuracy, precision, recall, and F1 score) were compared. Based on 356 chemical accident cases from 2020 to 2024, additional training data sets were applied using the chemical accident cause investigations and similar-accident prevention measures suggested by the Chemical Safety Agency from 2021 to 2022. Through this process, the Multi-Layer Perceptron (MLP) model achieved an accuracy of 0.6590 and a precision of 0.6821, while the Logistic Regression model improved its accuracy from 0.6647 to 0.7778 and its precision from 0.6790 to 0.7992, confirming that the Logistic Regression model is the most effective for predicting the causes of chemical accidents.
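As an illustrative aside, a minimal sketch of the model-comparison step with two of the six models (MLP and logistic regression) and the four stated metrics. The synthetic features are assumptions standing in for the non-public accident records.

```python
# Sketch: train two classifiers and compare accuracy/precision/recall/F1.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(865, 12))        # encoded accident attributes (placeholder)
y = rng.integers(0, 4, size=865)      # accident-cause classes (placeholder)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("MLP", MLPClassifier(max_iter=1000, random_state=0)),
                  ("LogisticRegression", LogisticRegression(max_iter=1000))]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name,
          f"acc={accuracy_score(y_te, pred):.3f}",
          f"prec={precision_score(y_te, pred, average='macro', zero_division=0):.3f}",
          f"rec={recall_score(y_te, pred, average='macro'):.3f}",
          f"f1={f1_score(y_te, pred, average='macro'):.3f}")
```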

Efficient 3D Scene Labeling using Object Detectors & Location Prior Maps (물체 탐지기와 위치 사전 확률 지도를 이용한 효율적인 3차원 장면 레이블링)

  • Kim, Joo-Hee;Kim, In-Cheol
    • Journal of Institute of Control, Robotics and Systems / v.21 no.11 / pp.996-1002 / 2015
  • In this paper, we present an effective system for 3D scene labeling of objects from RGB-D videos. Our system uses a Markov Random Field (MRF) over a voxel representation of the 3D scene. To estimate the correct label of each voxel, the probabilistic graphical model integrates scores both from sliding window-based object detectors and from object location prior maps; the detectors and the prior maps are pre-trained on manually labeled RGB-D images. The model additionally incorporates geometric constraints between adjacent voxels into the label estimation. We show excellent experimental results on the RGB-D Scenes Dataset built by the University of Washington, in which each indoor scene contains tabletop objects.
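As an illustrative aside, a minimal sketch of the label-inference idea: each voxel's unary score combines a detector score and a location-prior score, and a Potts pairwise term encourages neighboring voxels to agree. The ICM solver below is an assumption; the abstract does not state the paper's MRF inference method.

```python
# Sketch: MRF voxel labeling with detector + prior unaries and Potts smoothing,
# solved with simple iterated conditional modes (ICM).
import numpy as np

def icm_labeling(detector, prior, n_iters=5, smooth=0.5):
    """detector, prior: (D, H, W, K) arrays of per-voxel, per-label scores."""
    unary = detector + prior                  # combined detection + prior evidence
    labels = unary.argmax(axis=-1)            # independent per-voxel initialization
    D, H, W, K = unary.shape
    neighbors = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for _ in range(n_iters):
        for z in range(D):
            for r in range(H):
                for c in range(W):
                    score = unary[z, r, c].copy()
                    for dz, dr, dc in neighbors:          # Potts agreement bonus
                        nz, nr, nc = z + dz, r + dr, c + dc
                        if 0 <= nz < D and 0 <= nr < H and 0 <= nc < W:
                            score[labels[nz, nr, nc]] += smooth
                    labels[z, r, c] = score.argmax()
    return labels

# Toy run on random scores for an 8x8x8 grid with 3 object labels.
rng = np.random.default_rng(0)
voxel_labels = icm_labeling(rng.random((8, 8, 8, 3)), rng.random((8, 8, 8, 3)))
print(voxel_labels.shape)  # (8, 8, 8)
```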