• Title/Summary/Keyword: Rule Base


Effects of microstructure and welding heat input on the toughness of weldable high strength steel weldments (용접구조용 고장력강의 용접부 인성에 미치는 미세 조직과 용접 입열량의 영향)

  • 장웅성;방국수;엄기원
    • Journal of Welding and Joining / v.7 no.3 / pp.44-54 / 1989
  • This study was undertaken to evaluate the allowable welding heat input range for high strength steels manufactured by various processes, and to compare the weldability of TMCP steel under high heat input welding with that of conventional Ti-added normalized steel. The allowable welding heat inputs for conventional 50 kg/mm² steel to guarantee grade D or E ship structural steel were below 150 and 80 kJ/cm, respectively. This limit on welding heat input was closely related to the formation of undesirable microstructures, such as grain boundary ferrite and ferrite side plates, in the coarse-grained HAZ. For the 60 and 80 kg/mm² quenched and tempered steels, the welding heat input had to be restricted to below 60 and 40 kJ/cm, respectively, to secure weldment toughness above the base-metal requirement; this was mainly due to coarsened polygonal ferrite in the weld metal and lower-temperature transformation products in the coarse-grained HAZ. The TMCP steel was suitable as a grade E ship hull steel up to 200 kJ/cm, whereas the Ti-added normalized steel could be applied only below 130 kJ/cm under the same rule. This difference was partly attributable to whether a uniform, fine intragranular ferrite microstructure developed in the HAZ.
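The heat-input limits reported above amount to a small rule base; a minimal sketch of how they could be encoded for a weldability check (the steel labels, grade keys, and function are hypothetical, not from the paper):

```python
# Illustrative rule base encoding the heat-input limits (kJ/cm) reported in
# the abstract. Keys and structure are invented for illustration only.
MAX_HEAT_INPUT_KJ_CM = {
    ("50kg/mm2 conventional", "D"): 150,
    ("50kg/mm2 conventional", "E"): 80,
    ("60kg/mm2 Q&T", "base-metal"): 60,
    ("80kg/mm2 Q&T", "base-metal"): 40,
    ("TMCP", "E"): 200,
    ("Ti-added normalized", "E"): 130,
}

def heat_input_allowed(steel: str, grade: str, heat_input_kj_cm: float) -> bool:
    """Return True if the welding heat input stays within the reported limit."""
    return heat_input_kj_cm <= MAX_HEAT_INPUT_KJ_CM[(steel, grade)]
```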


Development of Expert System for Maintenance of Tunnel (II) (터널의 유지관리를 위한 전문가시스템 개발(II) : 수치해석을 통한 지식베이스 확장)

  • Kim, Do-Houn;Huh, Taik-Nyung;Lim, Yun-Mook
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.4 no.2 / pp.185-191 / 2000
  • The safety of aged tunnels has become a pressing concern. Effective maintenance requires periodic site inspection of tunnel structures and the surrounding ground, and judging the safety of a tunnel is not a simple problem; the role of experienced engineers in maintenance is therefore very important, and an expert system that can perform as those engineers do has been needed. In this study, based on the results of numerical analyses of several cases, new precision inspection rules that can substitute for actual numerical analysis were derived using the commercial program FLAC and regression analysis over parameters such as material properties, lining thickness, overburden, and lateral earth pressure coefficient. These rules were added to the knowledge base used to determine the safety of tunnel linings. To verify the expert system, its results were compared with an existing tunnel diagnosis report. It can be concluded that the new rules represent the actual numerical analysis well under various site conditions, so systematic management for effective maintenance of tunnel structures is expected to become possible.
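The idea of replacing repeated numerical analysis with regression-derived inspection rules can be sketched in a few lines. The least-squares fit below uses synthetic (overburden, safety factor) pairs, and the variable names and 1.5 safety threshold are invented for illustration, not taken from the paper:

```python
# Sketch: fit a line to (parameter, response) pairs from prior numerical
# runs, then turn the fit into a threshold rule. All data are synthetic.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b in pure Python."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical results: lining safety factor vs. overburden depth (m).
depth = [10, 20, 30, 40, 50]
safety = [2.8, 2.4, 2.0, 1.6, 1.2]
a, b = fit_line(depth, safety)

def lining_safe(depth_m, threshold=1.5):
    """Rule: predicted safety factor must exceed the threshold."""
    return a * depth_m + b > threshold
```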


Purification, crystallization, and preliminary X-ray diffraction data analysis for PB1 dimer of P62/SQSTM1

  • Shin, Ho-Chul;Lim, Dahwan;Ku, Bonsu;Kim, Seung Jun
    • Biodesign / v.6 no.4 / pp.100-102 / 2018
  • Autophagy is a degradation pathway that targets many cellular components and plays a particularly important role in protein degradation and recycling. The process is very complex and involves several proteins. One of them, P62/SQSTM1, is related to the N-end rule and induces protein degradation through autophagy. P62/SQSTM1 forms a large oligomer, and this oligomerization is known to play an important role in its mechanism. Oligomerization proceeds in two steps: first, the PB1 domain of P62/SQSTM1 forms the base oligomer; then, when a ligand binds to the ZZ domain, a higher-order oligomer is induced through a disulfide bond between two cysteines. To understand the oligomerization mechanism of P62/SQSTM1, the dimerization of the PB1 domain must be characterized. In this study, crystals of the PB1 dimer were grown and X-ray diffraction data usable to 3.2 Å were collected. We are analyzing the structure using the molecular replacement (MR) method.

A Study on the Psychological Counseling AI Chatbot System based on Sentiment Analysis (감정분석 기반 심리상담 AI 챗봇 시스템에 대한 연구)

  • An, Se Hun;Jeong, Ok Ran
    • Journal of Information Technology Services / v.20 no.3 / pp.75-86 / 2021
  • As artificial intelligence is actively studied, chatbot systems are being applied to various fields. In particular, many psychological counseling chatbots that can comfort modern people have been studied. However, most such chatbots are either rule-based or deep learning-based, and each approach has significant limitations. To overcome them, we propose a novel psychological counseling AI chatbot system. The proposed system consists of a GPT-2 model that generates output sentences for Korean input sentences and an Electra model that performs sentiment analysis and anxiety-cause classification, together with psychological test and collective intelligence functions. While the deep learning-based chatbot conducts the conversation, sentiment analysis of the input sentence simultaneously recognizes the user's emotions and presents psychological tests and collective intelligence solutions, addressing the limitations of counseling carried out by a chatbot alone. Since the sentiment analysis and anxiety-cause classification components link the system's functions, we experimentally evaluate their performance, and we verify the novelty and accuracy of the proposed system. The results show that the AI chatbot system can perform counseling excellently.
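The parallel flow described above (response generation plus sentiment and anxiety-cause classification) can be sketched with trivial keyword rules standing in for the GPT-2 and Electra models; the word lists, cause labels, and field names are illustrative only:

```python
# Toy stand-ins for the sentiment / anxiety-cause classifiers. A real system
# would run trained models here; keyword matching only illustrates the flow.
NEGATIVE_WORDS = {"anxious", "sad", "worried", "lonely"}
CAUSE_KEYWORDS = {"exam": "academic", "job": "career", "friend": "relationship"}

def analyze(utterance: str):
    """Classify sentiment and (if recognizable) the anxiety cause."""
    words = set(utterance.lower().split())
    sentiment = "negative" if words & NEGATIVE_WORDS else "neutral"
    cause = next((c for k, c in CAUSE_KEYWORDS.items() if k in words), None)
    return sentiment, cause

def respond(utterance: str) -> dict:
    """Run analysis alongside generation; suggest a test on negative affect."""
    sentiment, cause = analyze(utterance)
    return {"sentiment": sentiment, "anxiety_cause": cause,
            "suggest_psych_test": sentiment == "negative"}
```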

Development of Tourism Information Named Entity Recognition Datasets for the Fine-tune KoBERT-CRF Model

  • Jwa, Myeong-Cheol;Jwa, Jeong-Woo
    • International Journal of Internet, Broadcasting and Communication / v.14 no.2 / pp.55-62 / 2022
  • A smart tourism chatbot is needed as a user interface to efficiently provide smart tourism services such as recommended travel products, tourist information, personal travel itineraries, and tour guidance to tourists. We have developed a smart tourism app and a smart tourism information system that provide these services, as well as a smart tourism chatbot service consisting of the khaiii morpheme analyzer, rule-based intent classification, and a tourism information knowledge base built on the Neo4j graph database. In this paper, we develop the Korean and English smart tourism Named Entity (NE) datasets required to build an NER model with pre-trained language models (PLMs) for the smart tourism chatbot system. We create the tourism information NER datasets by collecting source data through the smart tourism app, the visitJeju web site of the Jeju Tourism Organization (JTO), and web search, and by preprocessing the data with Korean and English tourism information named entity dictionaries. We then train the KoBERT-CRF NER model on the developed Korean and English tourism information NER datasets. The weight-averaged precision, recall, and F1 scores are 0.94, 0.92, and 0.94 on the Korean and English tourism information NER datasets.
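NER datasets of this kind tag tokens with BIO labels. A sketch (not the authors' code) of decoding a BIO sequence, such as a KoBERT-CRF tagger would emit, into entity spans; the tourism entity types and example tokens are hypothetical:

```python
# Decode a BIO tag sequence into (surface form, entity type) pairs by
# collecting contiguous B-/I- runs.
def bio_decode(tokens, tags):
    entities, current, etype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):            # a new entity starts here
            if current:
                entities.append((" ".join(current), etype))
            current, etype = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == etype:
            current.append(tok)             # continuation of the open entity
        else:                               # "O" or inconsistent tag: flush
            if current:
                entities.append((" ".join(current), etype))
            current, etype = [], None
    if current:
        entities.append((" ".join(current), etype))
    return entities

# Hypothetical tagger output for an English tourism sentence.
tokens = ["Visit", "Seongsan", "Ilchulbong", "in", "Jeju"]
tags = ["O", "B-ATTRACTION", "I-ATTRACTION", "O", "B-LOCATION"]
```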

A Study on the Development of Computer-Aided Automatic Design System for Gears (기어의 자동설계 시스템 개발에 관한 연구)

  • Cho, Hae Yong;Kim, Sung Chung;Choi, Jong Ung;Song, Joong Chun
    • Journal of the Korean Society for Precision Engineering / v.13 no.5 / pp.95-103 / 1996
  • This paper describes a computer-aided design system for spur and helical gears. To establish an appropriate program, an integrated approach based on a rule-based system was adopted. The system is implemented on a personal computer within the commercial CAD package AutoCAD. It includes a main program and five sub-modules, among them a data input module, a tooth profile drawing module, a strength calculation module, and a drawing edit module. In the main program, all sub-modules are loaded and the type of gear and tooth profile are selected. In the data input module, the variables necessary for gear design are selected from the database. In the drawing module, the required gear tooth is drawn on the screen from the calculated results. The developed system provides the gear designer with powerful capabilities for gear design.
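The kind of computation such a data-input/calculation module performs can be illustrated with standard full-depth spur-gear proportions (pitch diameter = module × teeth); the function below is a sketch using textbook formulas, not the paper's AutoCAD-based code:

```python
# Standard full-depth spur-gear proportions from module (mm) and tooth count.
def spur_gear_geometry(module_mm: float, teeth: int) -> dict:
    d = module_mm * teeth                   # pitch diameter d = m * z
    return {
        "pitch_diameter": d,
        "outside_diameter": module_mm * (teeth + 2),   # d + 2 * addendum
        "addendum": module_mm,                         # a = m
        "dedendum": 1.25 * module_mm,                  # common proportion
    }
```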


Development of the Knowledge-based Systems for Anti-money Laundering in the Korea Financial Intelligence Unit (자금세탁방지를 위한 지식기반시스템의 구축 : 금융정보분석원 사례)

  • Shin, Kyung-Shik;Kim, Hyun-Jung;Kim, Hyo-Sin
    • Journal of Intelligence and Information Systems / v.14 no.2 / pp.179-192 / 2008
  • This case study describes the construction of a knowledge-based system using a rule-based approach for detecting illegal transactions related to money laundering in the Korea Financial Intelligence Unit (KoFIU). To better manage the explosive increase in low-risk suspicious transaction reports from financial institutions, the adoption of a knowledge-based system in the KoFIU is essential. Also, since different types of information from various organizations converge in the KoFIU, a knowledge-based system for practical use and data management regarding money laundering is definitely required. The success of such a financial information system largely depends on how well the knowledge base is built for the context. We therefore designed and constructed the knowledge-based system for anti-money laundering by having domain experts from each financial industry work with a knowledge engineer. The outcome of the knowledge base implementation, measured by the ratio of Suspicious Transaction Reports (STRs) forwarded to law enforcement agencies, shows that the system filters STRs efficiently in the primary analysis step and has thus greatly improved the efficiency and effectiveness of the analysis process. Establishing the foundation of the knowledge base within the entire framework of the knowledge-based system, with attention to knowledge creation and management, proved valuable indeed.
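Rule-based STR triage of the kind described can be sketched as a list of predicate rules voting on each transaction; reports that trigger no rule are filtered out before analysts see them. The rule names, thresholds, and country codes below are invented for illustration:

```python
# Each rule is a (name, predicate) pair over a transaction record.
# Thresholds and codes are hypothetical, not KoFIU's actual criteria.
RULES = [
    ("large_cash", lambda t: t["type"] == "cash" and t["amount"] >= 50_000_000),
    ("rapid_movement", lambda t: t["withdrawn_within_hours"] is not None
                                 and t["withdrawn_within_hours"] <= 24),
    ("high_risk_country", lambda t: t["counterparty_country"] in {"XX", "YY"}),
]

def triage(transaction: dict):
    """Return the names of all rules this transaction triggers."""
    return [name for name, rule in RULES if rule(transaction)]
```
A transaction with an empty triage list would be filtered in the primary analysis step; the triggered rule names give analysts a starting explanation for the rest.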


A Hybrid Knowledge Representation Method for Pedagogical Content Knowledge (교수내용지식을 위한 하이브리드 지식 표현 기법)

  • Kim, Yong-Beom;Oh, Pill-Wo;Kim, Yung-Sik
    • Korean Journal of Cognitive Science / v.16 no.4 / pp.369-386 / 2005
  • Although an Intelligent Tutoring System (ITS) offers an individualized learning environment that overcomes the limited functionality of existing CAI and takes many learner variables into account, it has seen little adoption in schools because of the inefficiency of the required investment and the absence of techniques for representing pedagogical content knowledge. To solve these problems, we need methods for representing knowledge in an ITS and for reusing the knowledge base. Regarding pedagogical content knowledge, knowledge in education differs from knowledge in the general sense. In this paper, we primarily address a multi-complex structure of knowledge and the explanation of learning paths using it. The multi-complex structure is organized into nodes and clusters, is used as a knowledge base, and grows into an adaptive knowledge base through self-learning. We therefore propose the Extended Neural Logic Network (X-Neuronet), which is based on the Neural Logic Network with logical inference and topological flexibility in its cognitive structure and incorporates pedagogical content knowledge and object-oriented concepts, and we verify its validity. X-Neuronet defines knowledge as a directed combination with inertia and weights, and provides basic concepts for expression, logic operators for operation and processing, node values and connection weights, a propagation rule, and a learning algorithm.


Development of Insulation Sheet Materials and Their Sound Characterization

  • Ni, Qing-Qing;Lu, Enjie;Kurahashi, Naoya;Kurashiki, Ken;Kimura, Teruo
    • Advanced Composite Materials / v.17 no.1 / pp.25-40 / 2008
  • Research and development on soundproof materials for preventing noise has attracted great attention because of its social impact, and noise insulation materials are especially important in this field. Since the insulation ability of most materials follows the mass rule, heavy materials such as concrete, lead, and steel board dominate current noise insulation. To overcome their weak points, fiber-reinforced composite materials, which are lightweight and offer other high-performance characteristics, are now being used. In this paper, innovative insulation sheet materials made of carbon and/or glass fabrics and a nano-silica hybrid PU resin are developed. The parameters related to sound performance, such as the material and texture of the base fabric, the resin hybridization method, and the silica particle size, are investigated, and the wave analysis code PZFlex is used to simulate some of the experimental results. It is found that both bundle density and fabric texture in the base fabric play an important role in soundproof performance. Compared with the base fabric alone, the transmission loss of the sheet materials increased by more than 10 dB even though the sample thickness was only about 0.7 mm. The transmission loss changed with the diameter of the silica particles in the coating material, showing that the soundproofing effect depends on the hybridization method and particle size; fillers occupying appropriate positions and of optimum size may achieve a better soundproofing effect. The effect of particle content is confirmed, but there is a limit to how much filler can be added, so optimizing the silica content is important for improving sound insulation. Nano-particles are observed to give better soundproof performance. The sound insulation behavior is interpreted through comparison of the experimental and analytical results, confirming that time-domain finite wave analysis (PZFlex) is effective for predicting and designing soundproof materials. Both experimental and analytical results indicate that the developed materials combine light weight, flexibility, good mechanical properties, and excellent soundproof performance.
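The mass rule the abstract invokes is commonly written as the field-incidence mass-law approximation TL ≈ 20·log10(f·m) − 47 dB, with m the surface density in kg/m² and f the frequency in Hz. A quick numeric sketch (assuming that standard form) shows why a 0.7 mm lightweight sheet cannot gain transmission loss from mass alone:

```python
import math

# Field-incidence mass-law approximation for transmission loss (dB).
# m: surface density in kg/m^2, f: frequency in Hz.
def mass_law_tl(surface_density_kg_m2: float, freq_hz: float) -> float:
    return 20 * math.log10(surface_density_kg_m2 * freq_hz) - 47

# Doubling the mass buys only 20*log10(2) ~ 6 dB at any frequency, which is
# why the >10 dB gain from the thin coated sheets cannot come from mass alone.
gain_per_doubling = mass_law_tl(2.0, 1000) - mass_law_tl(1.0, 1000)
```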

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s this research has focused on cognitive problems such as learning and problem solving related to human intelligence, and the field has achieved more technological advances than ever thanks to recent interest and work on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to let AI agents make decisions using machine-readable, processable knowledge constructed from complex, informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and has recently been used together with statistical artificial intelligence such as machine learning. Today, knowledge bases also express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data, and they support intelligent processing in applications such as the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires a lot of expert effort. Much recent research in knowledge-based AI uses DBpedia, one of the largest knowledge bases, which extracts structured content from the various information in Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which are user-created summaries of some unifying aspect of an article. This knowledge is created through mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework, so DBpedia can expect high accuracy by generating knowledge from semi-structured, user-created infobox data. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our model consists of three steps: classifying the document into ontology classes, classifying the sentences suitable for triple extraction, and selecting values and transforming them into RDF triple structures. Wikipedia infobox structures are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document into infobox categories, which correspond to ontology classes; we then classify the appropriate sentences according to the attributes belonging to that class; finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, covering about 200 classes and about 2,500 relations. We also ran comparative experiments with CRF and Bi-LSTM-CRF models for the knowledge extraction step. Through the proposed process, structured knowledge can be obtained by extracting knowledge from text documents according to the ontology schema, and the methodology can significantly reduce the expert effort needed to construct instances according to the schema.
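The three-step pipeline above (document → ontology class, sentence → attribute, value → RDF triple) can be sketched with trivial stand-in classifiers; the real system uses trained CRF / Bi-LSTM-CRF models, and the names below (dbr:/dbo: identifiers, keyword rules) are illustrative only:

```python
# Stand-ins for the three pipeline stages. Real models classify documents
# into DBpedia ontology classes and tag value spans with BIO labels.

def classify_document(text: str) -> str:
    """Stage 1 stand-in: assign an ontology class from a keyword cue."""
    return "Person" if "born" in text else "Thing"

def extract_value(sentence: str, attribute: str) -> str:
    """Stage 2/3 stand-in: take the last word as the attribute value span."""
    return sentence.rsplit(" ", 1)[-1].rstrip(".")

def to_triple(subject: str, text: str):
    """Assemble an RDF-style (subject, predicate, object) triple."""
    cls = classify_document(text)
    if cls == "Person":
        value = extract_value(text, "birthPlace")
        return (subject, "dbo:birthPlace", value)
    return None

triple = to_triple("dbr:Hong_Gildong", "He was born in Seoul.")
```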