• Title/Summary/Keyword: Rule Base System

Search Results: 399

Path Planning of Autonomous Guided Vehicle Using Fuzzy Control & Genetic Algorithm (유전자 알고리즘과 퍼지 제어를 적용한 자율운송장치의 경로 계획)

  • Kim, Yong-Gug;Lee, Yun-Bae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.4 no.2
    • /
    • pp.397-406
    • /
    • 2000
  • The genetic algorithm is used as a means of search, optimization, and machine learning; its structure is simple, yet it is applied in various areas. It also lends itself to active and effective controllers that can flexibly cope with changing circumstances, and research on action-base systems that evolve by themselves is being considered as well. Fuzzy controller design has the problem of depending entirely on an expert's heuristic knowledge to form the membership functions and control rules. In this paper, to make the fuzzy controller perform self-organization, we tune the membership functions to an optimum using a genetic algorithm (GA) and improve control efficiency through self-correction and generation of the control rules.

  • PDF
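
As a rough illustration of the idea, the sketch below tunes the centers of triangular membership functions with a toy genetic algorithm. The encoding, fitness function, and operators here are illustrative assumptions, not the paper's actual controller or objective:

```python
import random

def tri(x, a, b, c):
    """Triangular membership function: rises over [a, b], falls over [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fitness(centers, samples):
    """Toy objective (assumption): how well the fuzzy partition reproduces
    y = x on [0, 1] via weighted-average defuzzification."""
    err = 0.0
    for x in samples:
        num = sum(tri(x, c - 0.5, c, c + 0.5) * c for c in centers)
        den = sum(tri(x, c - 0.5, c, c + 0.5) for c in centers)
        y = num / den if den else 0.0
        err += (y - x) ** 2
    return -err  # the GA maximizes fitness

def ga_tune(n_sets=5, pop=20, gens=40, seed=1):
    """Tune membership-function centers: selection, one-point crossover,
    and Gaussian mutation on the sorted list of centers."""
    rng = random.Random(seed)
    samples = [i / 20 for i in range(21)]
    population = [sorted(rng.random() for _ in range(n_sets)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda ind: fitness(ind, samples), reverse=True)
        parents = population[: pop // 2]          # keep the fitter half
        children = []
        for _ in range(pop - len(parents)):
            p1, p2 = rng.sample(parents, 2)
            cut = rng.randrange(1, n_sets)        # one-point crossover
            child = p1[:cut] + p2[cut:]
            i = rng.randrange(n_sets)             # mutation: jitter one center
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.05)))
            children.append(sorted(child))
        population = parents + children
    return max(population, key=lambda ind: fitness(ind, samples))
```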

The Look-up table Plus-Minus Tuning Method of Fuzzy Control Systems (퍼지제어 시스템의 제어값표 가감 동조방법)

  • Choi, Han-Soo;Jeong, Heon
    • The Transactions of the Korean Institute of Power Electronics
    • /
    • v.3 no.4
    • /
    • pp.388-398
    • /
    • 1998
  • In constructing fuzzy control systems, there are many parameters, such as the rule base, membership functions, inference method, defuzzification, and I/O scaling factors. To control a system properly using fuzzy logic, we have to consider the correlations among those parameters. This paper deals with self-tuning of fuzzy control systems. The fuzzy controller has input and output scaling factors as parameters that affect the control output, and we propose a look-up-table-based self-tuning fuzzy controller. As the self-tuning method we propose PMTM (Plus-Minus Tuning Method), which tunes the initial look-up table into an appropriate table by adding and subtracting values.

  • PDF
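
A minimal sketch of the general plus-minus idea follows; the quantization, step size, and update rule here are assumptions for illustration, not the paper's exact PMTM specification:

```python
def control_output(table, e, de, lo=-1.0, hi=1.0):
    """Quantize (error, change-of-error) into table indices and read
    the control value from the look-up table."""
    n = len(table)
    def idx(v):
        v = min(max(v, lo), hi)
        return min(int((v - lo) / (hi - lo) * n), n - 1)
    return table[idx(e)][idx(de)]

def pmtm_update(table, e_idx, de_idx, error, step=1.0):
    """Plus-minus tuning (sketch): nudge the table cell that produced the
    last control action up or down according to the sign of the remaining
    system error."""
    if error > 0:
        table[e_idx][de_idx] += step   # output was too low  -> add
    elif error < 0:
        table[e_idx][de_idx] -= step   # output was too high -> subtract
    return table
```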

Accounting Information Processing Model Using Big Data Mining (빅데이터마이닝을 이용한 회계정보처리 모형)

  • Kim, Kyung-Ihl
    • Journal of Convergence for Information Technology
    • /
    • v.10 no.7
    • /
    • pp.14-19
    • /
    • 2020
  • This study suggests an accounting information processing model based on the internet standard XBRL (eXtensible Business Reporting Language), an application of XML technology. Because document characteristics differ among companies, it is very important to the purpose of accounting that the system provide useful information to the decision maker. This study develops a data mining model based on the XML hierarchy, which is stored as XBRL in the X-Hive database. The data mining analysis is carried out with association-rule mining, and, based on XBRL, the DC-Apriori data mining method is suggested, combining the Apriori algorithm with XQuery. Finally, the validity and effectiveness of the suggested model are investigated through experiments.
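
The association-rule step can be illustrated with a minimal Apriori implementation; DC-Apriori itself additionally combines this with XQuery over the XBRL store, which is not reproduced here:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal Apriori sketch: return every itemset whose support (the
    fraction of transactions containing it) is at least min_support."""
    n = len(transactions)
    tx = [set(t) for t in transactions]
    freq = {}
    candidates = list({frozenset([i]) for t in tx for i in t})
    k = 1
    while candidates:
        counts = {c: sum(1 for t in tx if c <= t) for c in candidates}
        level = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        freq.update(level)
        k += 1
        prev = set(level)
        # candidate generation + pruning: every (k-1)-subset must be frequent
        candidates = [c for c in {a | b for a in prev for b in prev}
                      if len(c) == k
                      and all(frozenset(s) in prev for s in combinations(c, k - 1))]
    return freq

def confidence(freq, antecedent, consequent):
    """Association-rule confidence: support(A and B) / support(A)."""
    a = frozenset(antecedent)
    return freq[a | frozenset(consequent)] / freq[a]
```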

Design of fuzzy digital PI+D controller using simplified indirect inference method (간편 간접추론방법을 이용한 퍼지 디지털 PI+D 제어기의 설계)

  • Chai, Chang-Hyun
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.6 no.1
    • /
    • pp.35-41
    • /
    • 2000
  • This paper describes the design of a fuzzy digital PID controller using a simplified indirect inference method (SIIM). First, the fuzzy digital PID controller is derived from the conventional continuous-time linear digital PID controller. Then the fuzzification, control-rule base, and defuzzification using the SIIM in the design of the fuzzy controller are discussed in detail. The resulting controller is a discrete-time fuzzy version of the conventional PID controller, which has the same linear structure but whose gains are nonlinear functions of the input signals. The proposed controller enhances the self-tuning control capability, particularly when the process to be controlled is nonlinear. When the SIIM is applied, the fuzzy inference results can be calculated by splitting the fuzzy variables into their action components, and they are determined as functional forms of the corresponding variables. The proposed method therefore enables high-speed inference and adapts easily as the number of fuzzy input variables increases. Computer simulation results demonstrate that the proposed method provides better control performance than the one proposed by D. Misir et al.

  • PDF
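
A minimal sketch of the per-variable decomposition idea: each input is inferred against its own rule set and contributes an independent action component. The N/Z/P rule sets and gains below are illustrative assumptions, not the paper's exact SIIM formulation:

```python
def tri(x, a, b, c):
    """Triangular membership function over [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def siim_term(x, rule_sets):
    """Infer one input variable against its own rules and defuzzify by
    weighted average, yielding that variable's action component."""
    num = den = 0.0
    for (a, b, c), out in rule_sets:
        w = tri(x, a, b, c)
        num += w * out
        den += w
    return num / den if den else 0.0

def fuzzy_pid(e, de, sum_e, ke=1.0, kd=0.5, ki=0.1):
    """Discrete fuzzy PI+D sketch: u = ke*f(e) + kd*f(de) + ki*sum_e,
    with Negative/Zero/Positive terms over [-1, 1] (illustrative)."""
    rule_sets = [((-2.0, -1.0, 0.0), -1.0),   # Negative -> push down
                 ((-1.0,  0.0, 1.0),  0.0),   # Zero     -> hold
                 (( 0.0,  1.0, 2.0),  1.0)]   # Positive -> push up
    return ke * siim_term(e, rule_sets) + kd * siim_term(de, rule_sets) + ki * sum_e
```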

Two-stage ML-based Group Detection for Direct-sequence CDMA Systems

  • Buzzi, Stefano;Lops, Marco
    • Journal of Communications and Networks
    • /
    • v.5 no.1
    • /
    • pp.33-42
    • /
    • 2003
  • In this paper a two-stage maximum-likelihood (ML) detection structure for group detection in DS/CDMA systems is presented. The first stage of the receiver is a linear filter aimed at suppressing the effect of the unwanted (i.e., out-of-group) users' signals, while the second stage is a non-linear block implementing an ML detection rule on the set of desired users' signals. As to the linear stage, we consider both the decorrelating and the minimum mean square error approaches. Interestingly, the proposed detection structure turns out to be a generalization of Varanasi's group detector, to which it reduces when the system is synchronous, the signatures are linearly independent, and the first stage of the receiver is a decorrelator. The issue of blind adaptive receiver implementation is also considered, and implementations of the proposed receiver based on the LMS algorithm, the RLS algorithm, and subspace-tracking algorithms are presented. These adaptive receivers do not rely on any knowledge of the out-of-group users' signals and are thus particularly suited for rejection of out-of-cell interference at the base station. Simulation results confirm that the proposed structure achieves very satisfactory performance in comparison with previously derived receivers, and that the proposed blind adaptive algorithms also perform satisfactorily.
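
The two-stage structure can be sketched as below, assuming a synchronous system with BPSK bits, a known signature cross-correlation matrix R, and a simplistic Euclidean metric in the ML stage; the paper's asynchronous and blind-adaptive variants are not reproduced:

```python
from itertools import product

def solve(R, y):
    """Solve R x = y by Gauss-Jordan elimination (small systems only)."""
    n = len(R)
    A = [row[:] + [yi] for row, yi in zip(R, y)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))  # partial pivoting
        A[i], A[p] = A[p], A[i]
        for r in range(n):
            if r != i:
                f = A[r][i] / A[i][i]
                A[r] = [a - f * b for a, b in zip(A[r], A[i])]
    return [A[i][n] / A[i][i] for i in range(n)]

def group_detect(R, y, desired):
    """Two-stage sketch: (1) decorrelate the matched-filter outputs y,
    suppressing out-of-group users; (2) exhaustive minimum-distance
    search over the +/-1 bits of the desired users only (toy metric)."""
    z = solve(R, y)                      # stage 1: decorrelating filter
    best, best_metric = None, float("inf")
    for bits in product((-1.0, 1.0), repeat=len(desired)):
        m = sum((z[u] - b) ** 2 for u, b in zip(desired, bits))
        if m < best_metric:
            best, best_metric = bits, m
    return best
```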

Design of Serendipity Service Based on Near Field Communication Technology (NFC 기반 세렌디피티 시스템 설계)

  • Lee, Kyoung-Jun;Hong, Sung-Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.293-304
    • /
    • 2011
  • The world of ubiquitous computing is one in which we will be surrounded by an ever-richer set of networked devices and services, and the mobile phone has become one of its key issues. Mobile phones have been permeating our everyday lives more and more thoroughly, and they are the fastest technology in human history to be adopted by people. In Korea, the number of mobile phones registered with telecom companies exceeds the population of the country, and last year several times more mobile phones were sold than personal computers. Advances in mobile phone technology now draw attention across every field of technology. The combination of wireless communication technology (Wi-Fi) and the mobile phone (smart phone) has created a new world of ubiquitous computing in which people can access the network anywhere, at high speed, and easily. In such a world, people cannot expect to have specific applications available for every conceivable combination of information they might wish for. They want their information in an easier and faster way than in the world we had before, where a desktop, a cable connection, limited applications, and limited speed constrained what they could achieve. Instead, many of their interactions will be through highly generic tools that allow end-user discovery, configuration, interconnection, and control of the devices around them. Serendipity is an application of this architecture that helps people obtain such information. The word 'serendipity', introduced into scientific fields in the eighteenth century, means making new discoveries by accident and sagacity. Brought into the field of ubiquitous computing and the smart phone, it will change the way information is obtained.
Serendipity may enable professional practitioners to function more effectively in the unpredictable, dynamic environment that informs the reality of information seeking. This paper designs a serendipity service based on NFC (Near Field Communication) technology. When users of NFC smart phones obtain information and services by touching NFC tags, the serendipity service will be a core service that yields unexpected but valuable findings. This paper proposes the architecture, scenario, and interface of the serendipity service using tag touch data, serendipity cases, a serendipity rule base, and a user profile.
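
A toy sketch of how a serendipity rule base might combine tag touch data with a user profile to pick an unexpected suggestion; the rule format and field names are entirely hypothetical:

```python
def serendipity_recommend(tag_data, profile, rule_base):
    """First-match rule evaluation (sketch): each rule pairs a touched
    tag category and a user interest with a serendipitous suggestion."""
    for tag_category, interest, suggestion in rule_base:
        if (tag_data.get("category") == tag_category
                and interest in profile.get("interests", [])):
            return suggestion
    return None  # no rule fired: fall back to ordinary tag content
```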

An Automatic Coding System of Korean Standard Industry/Occupation Code Using Example-based Learning (예제기반의 학습을 이용한 한국어 표준 산업/직업 자동 코딩 시스템)

  • Lim Heui-Seok
    • The Journal of the Korea Contents Association
    • /
    • v.5 no.4
    • /
    • pp.169-179
    • /
    • 2005
  • Standard industry and occupation codes are usually assigned manually in the Korean census. Manual coding is a very labor-intensive and expensive task. Furthermore, inconsistent coding results from differences in the abilities of the human experts and in their working environments. This paper proposes an automatic code classification system that converts natural-language responses on survey questionnaires into the corresponding numeric codes by using a manually constructed rule base and example-based machine learning. The system was trained with 400,000 records to which standard codes had been assigned. It was evaluated with 10-fold cross validation and was tested with three code sets: a population occupation set, an industry set, and an industry survey set. The proposed system showed 76.63%, 82.24%, and 99.68% accuracy for the respective code sets.

  • PDF

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • Development of technologies in artificial intelligence has been rapidly accelerating with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. The field of artificial intelligence has achieved more technological advances than ever, owing to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable artificial intelligence agents to make decisions by using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it is used together with statistical artificial intelligence such as machine learning. The purpose of a knowledge base nowadays is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. These knowledge bases are used for intelligent processing in various fields of artificial intelligence, such as the question-answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires much effort from experts. In recent years, much research and technology in knowledge-based artificial intelligence has used DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present a user-created summary of some unifying aspect of an article.
This knowledge is created by the mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of the accuracy of its knowledge, since the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: classification of documents into ontology classes, classification of the appropriate sentences from which to extract triples, and value selection and transformation into the RDF triple structure. The structures of Wikipedia infoboxes are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these infobox templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump using a method that adds BIO tags to sentences, and we trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we carried out comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process.
Through the proposed process, it is possible to utilize structured knowledge extracted from text documents according to the ontology schema. In addition, this methodology can significantly reduce the effort required from experts to construct instances according to the ontology schema.
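
The final step, turning BIO-tagged sentences into RDF-style triples, can be sketched as follows; the tag labels and the subject identifier are illustrative, not the paper's actual schema:

```python
def bio_to_spans(tokens, tags):
    """Collect (label, text) spans from BIO tags such as B-birthPlace,
    I-birthPlace, O. A new B- tag closes any open span."""
    spans, cur_label, cur_toks = [], None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if cur_label:
                spans.append((cur_label, " ".join(cur_toks)))
            cur_label, cur_toks = tag[2:], [tok]
        elif tag.startswith("I-") and cur_label == tag[2:]:
            cur_toks.append(tok)
        else:
            if cur_label:
                spans.append((cur_label, " ".join(cur_toks)))
            cur_label, cur_toks = None, []
    if cur_label:
        spans.append((cur_label, " ".join(cur_toks)))
    return spans

def spans_to_triples(subject, spans):
    """Turn extracted spans into (subject, predicate, object) triples."""
    return [(subject, label, text) for label, text in spans]
```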

Automatic Generation of Web-based Expert Systems (웹 기반 전문가시스템의 자동생성체계)

  • 송용욱
    • Journal of Intelligence and Information Systems
    • /
    • v.6 no.1
    • /
    • pp.1-16
    • /
    • 2000
  • This paper analyzes approaches to Web-based expert systems by comparing their pros and cons, and proposes a methodology for implementing Web-based backward inference engines with a reduced burden on Web servers. There are several alternatives for implementing expert systems in the WWW environment: CGI, Web servers embedding inference engines, external viewers, Java applets, and HTML. Each of the alternatives has advantages and disadvantages of its own in terms of development, deployment, testing, scalability, portability, maintenance, and mass service. In particular, inference engines implemented in HTML possess a relatively large number of advantages compared with those implemented using other techniques. This paper explains the methodology for representing rules and variables for backward inference in HTML and JavaScript, and suggests a framework for the design and development of HTML-based expert systems. A methodology to convert a traditional rule base into an Experts Diagram and then generate a new HTML-based expert system from the Experts Diagram is also addressed.

  • PDF
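
The paper realizes backward inference in HTML/JavaScript pages; the Python sketch below shows only the backward-chaining logic itself, with a hypothetical (premises, conclusion) rule format:

```python
def backward_chain(goal, rules, facts, asked=None):
    """Backward chaining (sketch): a goal holds if it is a known fact, or
    if some rule concludes it and all of that rule's premises hold.
    rules is a list of (premises, conclusion) pairs."""
    if asked is None:
        asked = set()
    if goal in facts:
        return True
    if goal in asked:              # avoid infinite recursion on cycles
        return False
    asked = asked | {goal}
    return any(conclusion == goal and
               all(backward_chain(p, rules, facts, asked) for p in premises)
               for premises, conclusion in rules)
```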

ITS : Intelligent Tissue Mineral Analysis Medical Information System (ITS : 지능적 Tissue Mineral Analysis 의료 정보 시스템)

  • Cho, Young-Im
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.15 no.2
    • /
    • pp.257-263
    • /
    • 2005
  • There are some problems in TMA. There is no database in Korea that can independently and specifically analyze TMA results. Even though there are some medical databases, some of them are low-level databases related to TMA, so they cannot provide medical services to patients or doctors. Moreover, because TMA results are based on a database of American health and mineral standards, they can be misleading with respect to Asian, and especially Korean, mineral standards. The purpose of this paper is to develop the first Intelligent TMA Information System (ITS), which resolves the problems mentioned above. ITS can analyze TMA data with a multiple-stage decision tree classifier. It is also constructed with multiple fuzzy rule bases and hence analyzes the complex data from a Korean database by fuzzy inference methods.
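
A toy sketch of one fuzzification stage of such a system: a single mineral measurement is graded against linguistic terms and the best-matching term is returned. The term parameters below are entirely illustrative, not real mineral reference ranges:

```python
def tri(x, a, b, c):
    """Triangular membership function over [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_mineral(level, terms):
    """Fuzzify one measurement against {term: (a, b, c)} definitions and
    return the linguistic term with the highest membership grade."""
    grades = {name: tri(level, a, b, c) for name, (a, b, c) in terms.items()}
    return max(grades, key=grades.get)
```

In a multiple-stage classifier, the winning term at one stage would select which rule base evaluates the next measurement.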