• Title/Summary/Keyword: Rule Extraction


A Study of the extraction algorithm of the disaster sign data from web (재난 전조 정보 추출 알고리즘 연구)

  • Lee, Changyeol;Kim, Taehwan;Cha, Sangyeul
    • Journal of the Society of Disaster Information
    • /
    • v.7 no.2
    • /
    • pp.140-150
    • /
    • 2011
  • Life environments are changing rapidly, and large-scale disasters are increasing due to global warming. Although disaster recovery resources are deployed to disaster sites, prevention of disasters is the most effective countermeasure. Disaster sign data is based on Heinrich's law. Automatic extraction of disaster sign data from the web is the focus of this paper. We define the automatic extraction processes and the applied information, such as accident nouns, disaster filtering nouns, disaster sign nouns, and rules. Using these processes, we implemented a disaster sign data management system. In the future, the applied information must be continuously updated, because it is only the result of extracting and analyzing some disaster data.
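The noun-dictionary filtering described in the abstract can be sketched as follows. The noun lists and the co-occurrence rule are illustrative assumptions, not the paper's actual dictionaries:

```python
# Sketch of dictionary-and-rule filtering for disaster sign sentences.
# The noun lists and the rule below are illustrative, not the paper's data.
ACCIDENT_NOUNS = {"collapse", "flood", "fire"}
FILTER_NOUNS = {"movie", "game"}         # discard off-topic sentences
SIGN_NOUNS = {"crack", "leak", "smoke"}  # precursors in Heinrich's sense

def extract_sign_sentences(sentences):
    signs = []
    for s in sentences:
        words = set(s.lower().split())
        if words & FILTER_NOUNS:
            continue                     # rule: drop filtered sentences
        if words & ACCIDENT_NOUNS and words & SIGN_NOUNS:
            signs.append(s)              # rule: accident noun + sign noun
    return signs

docs = ["A crack appeared before the bridge collapse",
        "New fire movie released",
        "Smoke was seen near the fire station"]
print(extract_sign_sentences(docs))
```

In a real system the dictionaries would be the continuously updated "applied information" the abstract mentions.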

Effect of Rule Identification in Acquiring Rules from Web Pages (웹 페이지의 내재 규칙 습득 과정에서 규칙식별 역할에 대한 효과 분석)

  • Kang, Ju-Young;Lee, Jae-Kyu;Park, Sang-Un
    • Journal of Intelligence and Information Systems
    • /
    • v.11 no.1
    • /
    • pp.123-151
    • /
    • 2005
  • In the world of Web pages, there are oceans of documents in natural language texts and tables. To extract rules from Web pages and maintain consistency between them, we have developed the framework of XRML (eXtensible Rule Markup Language). XRML allows the identification of rules on Web pages and generates the identified rules automatically. For this purpose, we have designed the Rule Identification Markup Language (RIML), which is similar to the formal Rule Structure Markup Language (RSML), both as parts of XRML. RIML is designed to identify rules not only from texts but also from tables on Web pages, and to transform them into formal rules in RSML syntax automatically. While designing RIML, we considered the features of shared variables and values, omitted terms, and synonyms. Using these features, rules can be identified or changed once, automatically generating their corresponding RSML rules. We have conducted an experiment to evaluate the effect of the RIML approach with real-world Web pages of Amazon.com, BarnesandNoble.com, and Powells.com. We found that 97.7% of the rules can be detected on the Web pages, and the completeness of generated rule components is 88.5%. This is good proof that XRML can facilitate the extraction and maintenance of rules from Web pages while building expert systems in the Semantic Web environment.

  • PDF
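The identify-then-generate idea behind RIML can be sketched minimally: markup identifies the variables and values of a rule embedded in page text, and a formal if-then rule is generated from them. The tag names and the RSML-like output format here are hypothetical stand-ins, not the actual XRML vocabulary:

```python
import re

# Hypothetical RIML-style markers embedded in page text; the real XRML
# tag vocabulary differs -- this only illustrates identify-then-generate.
page = ('Orders over <var name="total">$25</var> ship '
        '<val name="shipping">free</val>.')

def identify_rule(html):
    var = re.search(r'<var name="(\w+)">([^<]+)</var>', html)
    val = re.search(r'<val name="(\w+)">([^<]+)</val>', html)
    # Generate a formal RSML-like if-then rule from the identified parts.
    return f'IF {var.group(1)} >= {var.group(2)} THEN {val.group(1)} = {val.group(2)}'

print(identify_rule(page))  # IF total >= $25 THEN shipping = free
```

The point of the design is that the rule is identified once, in place, and the formal representation is regenerated automatically whenever the page changes.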

An Implementation and Evaluation of a Rule-Based Reverse-Engineering Tool (규칙기반 역공학 도구의 구현 및 평가)

  • Bae Jin Young
    • Journal of the Korea Society of Computer and Information
    • /
    • v.9 no.3
    • /
    • pp.135-141
    • /
    • 2004
  • As software has become more diverse and larger in scale, software maintenance has become more complex and difficult, and consequently its cost takes up the largest portion of the software life cycle. We design a rule-based reverse-engineering tool for restructuring software into an object-oriented system, using class information. The maintainer can issue interactive queries in the Prolog language. For class extraction and restructuring, we use a similarity formula based on the relationships between variables and functions, in order to extract the most appropriate classes; the visibility of an extracted class can be identified automatically. Because the maintainer can also pose queries in a logical language, the tool supports practical maintenance. The purpose of this paper is therefore to present and evaluate this reverse-engineering tool.

  • PDF
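The similarity-driven class extraction can be sketched as a score over shared variable accesses: functions that touch the same variables are candidates for the same class. The Jaccard-style formula and the access data are illustrative assumptions; the paper's actual formula and its Prolog query interface are not reproduced:

```python
# Sketch of a variable/function similarity measure for class extraction.
# This Jaccard-style score over shared variable accesses is an assumed
# stand-in for the tool's actual similarity formula.
def similarity(vars_a, vars_b):
    a, b = set(vars_a), set(vars_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Map each function to the variables it accesses (illustrative data).
accesses = {"push": ["top", "buf"], "pop": ["top", "buf"], "draw": ["x", "y"]}
print(similarity(accesses["push"], accesses["pop"]))   # 1.0
print(similarity(accesses["push"], accesses["draw"]))  # 0.0
```

Grouping functions whose pairwise similarity exceeds a threshold yields the extracted classes.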

Application of Market Basket Analysis to Personalized advertisements on Internet Storefront (인터넷 상점에서 개인화 광고를 위한 장바구니 분석 기법의 활용)

  • 김종우;이경미
    • Korean Management Science Review
    • /
    • v.17 no.3
    • /
    • pp.19-30
    • /
    • 2000
  • Customization and personalization services are considered a critical success factor for a successful Internet store or web service provider. As representative personalization techniques, personalized recommendation techniques are studied and commercialized to suggest products or services to a customer of an Internet storefront, based on the customer's demographics or on an analysis of the customer's past purchasing behavior. The underlying theories of recommendation techniques are statistics, data mining, artificial intelligence, and/or rule-based matching. In the rule-based approach to personalized recommendation, marketing rules for personalization are usually collected from marketing experts and used for inference with customer data. However, it is difficult to extract marketing rules from marketing experts, and also difficult to validate and maintain the constructed knowledge base. In this paper, we propose a marketing rule extraction technique for personalized recommendation on Internet storefronts using market basket analysis, a well-known data mining technique. Using market basket analysis, marketing rules for cross sales are extracted and used for personalized advertisement selection when a customer visits an Internet store. An experiment has been performed to evaluate the effectiveness of the proposed approach compared with a preference scoring approach and random selection.

  • PDF
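Market basket analysis extracts "buyers of X also buy Y" rules by support and confidence. A minimal sketch, with illustrative baskets and thresholds (the paper's experimental data and parameters are not reproduced):

```python
from itertools import combinations
from collections import Counter

# Minimal association-rule sketch for cross-sale advertising.
# Thresholds and baskets are illustrative, not from the paper's experiment.
def cross_sale_rules(baskets, min_support=0.5, min_confidence=0.6):
    n = len(baskets)
    item_count = Counter(i for b in baskets for i in set(b))
    pair_count = Counter(p for b in baskets
                         for p in combinations(sorted(set(b)), 2))
    rules = []
    for (a, b), c in pair_count.items():
        if c / n < min_support:
            continue                            # pair too rare
        for x, y in ((a, b), (b, a)):           # both rule directions
            conf = c / item_count[x]
            if conf >= min_confidence:
                rules.append((x, y, conf))      # "buyers of x also buy y"
    return rules

baskets = [["milk", "bread"], ["milk", "bread", "eggs"], ["bread"], ["milk"]]
print(cross_sale_rules(baskets))
```

When a customer's current basket contains the antecedent of a rule, the consequent item's advertisement is selected.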

Reduction of Fuzzy Rules and Membership Functions and Its Application to Fuzzy PI and PD Type Controllers

  • Chopra Seema;Mitra Ranajit;Kumar Vijay
    • International Journal of Control, Automation, and Systems
    • /
    • v.4 no.4
    • /
    • pp.438-447
    • /
    • 2006
  • A fuzzy controller's design depends mainly on the rule base and the membership functions over the controller's input and output ranges. This paper presents two approaches to these design issues. A simple and efficient approach, fuzzy subtractive clustering, is used to identify the rule base needed to realize fuzzy PI and PD type controllers. This technique provides a mechanism to obtain a reduced rule set covering the whole input/output space as well as membership functions for each input variable. However, some membership functions projected from different clusters have a high degree of similarity; the number of membership functions for each input variable is therefore further reduced using a similarity measure. In this paper, the fuzzy subtractive clustering approach is shown to reduce 49 rules to 8 rules, and the number of membership functions to 4 and 6 for the input variables (error and change in error), while maintaining almost the same level of performance. Simulation on a wide range of linear and nonlinear processes is carried out, and results are compared with fuzzy PI and PD type controllers without clustering in terms of several performance measures, such as peak overshoot, settling time, rise time, integral absolute error (IAE), and integral-of-time-multiplied absolute error (ITAE); in each case the proposed scheme shows nearly identical performance.
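The merging of similar membership functions can be sketched with triangular MFs and a sampled overlap (Jaccard-style) similarity. Both the similarity measure and the merge-by-averaging rule are illustrative assumptions, not the paper's exact definitions:

```python
# Sketch of reducing membership functions via a similarity measure.
# Triangular MFs are (left, peak, right). The overlap measure and the
# parameter-averaging merge rule are illustrative choices.
def tri(x, m):
    l, p, r = m
    if x <= l or x >= r:
        return 0.0
    return (x - l) / (p - l) if x <= p else (r - x) / (r - p)

def overlap(m1, m2, samples=200):
    lo, hi = min(m1[0], m2[0]), max(m1[2], m2[2])
    inter = union = 0.0
    for i in range(samples):
        x = lo + (hi - lo) * i / (samples - 1)
        u1, u2 = tri(x, m1), tri(x, m2)
        inter += min(u1, u2)
        union += max(u1, u2)
    return inter / union if union else 0.0

def merge_similar(mfs, threshold=0.7):
    out = []
    for m in mfs:
        for i, o in enumerate(out):
            if overlap(m, o) >= threshold:   # nearly the same fuzzy set
                out[i] = tuple((a + b) / 2 for a, b in zip(m, o))
                break
        else:
            out.append(m)
    return out

mfs = [(-1, 0, 1), (-0.9, 0.1, 1.1), (2, 3, 4)]
print(merge_similar(mfs))  # the first two merge; three MFs become two
```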

Improvement of ECG P wave Detection Performance Using CIR (Contextual Information Rule-base) Algorithm (Contextual information을 이용한 P파 검출에 관한 연구)

  • 이지연;김익근
    • Journal of Biomedical Engineering Research
    • /
    • v.17 no.2
    • /
    • pp.235-240
    • /
    • 1996
  • The automated ECG diagnostic systems used in hospitals show low P-wave detection performance when faced with diseases such as conduction block. The purpose of this study was therefore to improve detection performance for conduction block, where P-wave detection is poor. The first procedure was removal of baseline drift by subtracting the median-filtered signal (0.4-second window) from the original signal. The algorithm then detected the R peak and the T end point and cancelled the QRS-T complex to obtain 'P prototypes'. The next step was magnification of the P prototypes with dispersion and detection of 'P candidates' in the magnified signal, followed by extraction of contextual information concerning P-waves. Finally, the CIR was applied to the P candidates to confirm P-waves; the rule base consisted of three rules that discriminate and confirm P-waves. This algorithm was evaluated using raw data from 500 patients. P-wave detection performance increased by 6.8% compared with the QRS-T complex cancellation method without the rule base.

  • PDF
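The first step, baseline-drift removal by subtracting a running median, can be sketched directly. The window below is in samples (the paper specifies a 0.4-second window, which depends on the sampling rate):

```python
import statistics

# Sketch of baseline-drift removal: subtract a running median from the
# signal. Window is in samples; the paper uses a 0.4 s window.
def remove_baseline(signal, window=5):
    half = window // 2
    out = []
    for i in range(len(signal)):
        seg = signal[max(0, i - half): i + half + 1]
        out.append(signal[i] - statistics.median(seg))
    return out

# A pure linear drift is flattened to ~0 away from the edges.
drift = [0.1 * i for i in range(11)]
print([round(v, 2) for v in remove_baseline(drift)])
```

Because the median tracks the slow baseline but not narrow deflections, P and QRS features survive the subtraction while the drift is removed.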

Automatic Malware Detection Rule Generation and Verification System (악성코드 침입탐지시스템 탐지규칙 자동생성 및 검증시스템)

  • Kim, Sungho;Lee, Suchul
    • Journal of Internet Computing and Services
    • /
    • v.20 no.2
    • /
    • pp.9-19
    • /
    • 2019
  • Services and users on the Internet are increasing rapidly, and cyber attacks are increasing with them. As a result, information leakage and financial damage are occurring. Government, public agencies, and companies use security systems with signature-based detection rules to respond to known malicious code. However, it takes a long time to generate and validate signature-based detection rules. In this paper, we propose and develop a signature-based detection rule generation and verification system, using a signature extraction scheme based on the LDA (latent Dirichlet allocation) algorithm and traffic analysis techniques. Experimental results show that detection rules are generated and verified much more quickly than before.
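The core idea of automatic signature extraction can be illustrated with a deliberately simplified stand-in for the paper's LDA-based scheme: count fixed-length byte n-grams across malicious payloads and keep those shared by most samples as candidate signature strings. The n-gram length, coverage threshold, and payloads are illustrative:

```python
from collections import Counter

# Simplified stand-in for LDA-based signature extraction: keep byte
# n-grams common to most malicious payloads as candidate signatures.
# The n-gram length and coverage threshold are illustrative choices.
def candidate_signatures(payloads, n=4, min_coverage=0.8):
    counts = Counter()
    for p in payloads:
        grams = {p[i:i + n] for i in range(len(p) - n + 1)}
        counts.update(grams)                 # count once per payload
    need = min_coverage * len(payloads)
    return sorted(g for g, c in counts.items() if c >= need)

mal = [b"GET /evil.php?x=1", b"GET /evil.php?x=2", b"POST /evil.php"]
print(candidate_signatures(mal))
```

Candidates extracted this way would then be verified against benign traffic before becoming detection rules, mirroring the generation-then-verification pipeline the abstract describes.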

Trading rule extraction in stock market using the rough set approach

  • Kim, Kyoung-jae;Huh, Jin-nyoung;Ingoo Han
    • Proceedings of the Korea Intelligent Information Systems Society Conference
    • /
    • 1999.10a
    • /
    • pp.337-346
    • /
    • 1999
  • In this paper, we propose a rough set approach to extract trading rules that can discriminate between bullish and bearish markets. The rough set approach is valuable for extracting trading rules: first, it makes no assumption about the distribution of the data; second, it not only handles noise well but also eliminates irrelevant factors. In addition, the rough set approach is appropriate for detecting stock market timing, because it does not generate a trade signal when the market pattern is uncertain. The experimental results are encouraging and prove the usefulness of the rough set approach for stock market analysis with respect to profitability.

  • PDF
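The "no signal when the pattern is uncertain" property corresponds to rule extraction from the rough-set lower approximation: only condition patterns whose outcome is never ambiguous yield rules. A minimal sketch with invented market-day attributes:

```python
from collections import defaultdict

# Rough-set sketch: days described by condition attributes with a
# bull/bear decision. Rules come only from the lower approximation,
# i.e. patterns whose outcome is never ambiguous; ambiguous patterns
# produce no trading signal. The attributes are illustrative.
def certain_rules(days):
    outcomes = defaultdict(set)
    for conditions, decision in days:
        outcomes[conditions].add(decision)
    # Keep patterns with exactly one observed outcome.
    return {c: d.pop() for c, d in outcomes.items() if len(d) == 1}

days = [(("rsi_low", "vol_up"), "bull"),
        (("rsi_low", "vol_up"), "bull"),
        (("rsi_high", "vol_up"), "bear"),
        (("rsi_high", "vol_down"), "bull"),
        (("rsi_high", "vol_down"), "bear")]   # ambiguous: no rule
print(certain_rules(days))
```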

ACA Based Image Steganography

  • Sarkar, Anindita;Nag, Amitava;Biswas, Sushanta;Sarkar, Partha Pratim
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.2 no.5
    • /
    • pp.266-276
    • /
    • 2013
  • LSB-based steganography is a simple and well-known information hiding technique. In most LSB-based techniques, a secret message is embedded into a specific LSB position in the cover pixels; the main threat to LSB-based steganography is steganalysis. This paper proposes an asynchronous cellular automata (ACA) based steganographic method, where secret bits are embedded into a selected position inside the cover pixel by ACA rule 51 and a secret key. As a result, it is very difficult for malicious users to retrieve a secret message from a cover image without knowing the secret key, even if the extraction algorithm is known. In addition, another layer of security is provided by the almost random (rule-based) selection of the cover pixel for embedding, using ACA and a different secret key. Finally, the experimental results show that the proposed method is secure against the well-known RS steganalysis attack.

  • PDF
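The keyed-position idea can be sketched as follows: a secret key drives which bit plane of each cover byte receives a secret bit, so extraction without the key fails even when the algorithm is known. A keyed PRNG here stands in for the paper's ACA rule-51 selection, which is not reproduced:

```python
import random

# LSB-style embedding sketch: a secret key seeds a PRNG that picks the
# bit plane of each cover byte that receives a secret bit. This keyed
# selection is a stand-in for the paper's ACA rule-51 mechanism.
def embed(cover, bits, key):
    rng = random.Random(key)
    stego = list(cover)
    for i, b in enumerate(bits):
        plane = rng.randrange(2)           # bit plane 0 or 1, key-driven
        stego[i] = (stego[i] & ~(1 << plane)) | (b << plane)
    return stego

def extract(stego, n, key):
    rng = random.Random(key)               # same key -> same plane choices
    return [(stego[i] >> rng.randrange(2)) & 1 for i in range(n)]

cover = [120, 33, 250, 7]
bits = [1, 0, 1, 1]
stego = embed(cover, bits, key=42)
print(extract(stego, 4, key=42))  # [1, 0, 1, 1]
```

Because the same seed reproduces the same plane sequence, the legitimate receiver recovers the bits exactly, while a wrong key reads from the wrong planes.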

Extraction of Fuzzy Rules from Data using Rough Set (Rough Set을 이용한 퍼지 규칙의 생성)

  • 조영완;노흥식;위성윤;이희진;박민용
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1996.10a
    • /
    • pp.327-332
    • /
    • 1996
  • Rough set theory, suggested by Pawlak, can describe the degree of relation between the condition and decision attributes of data that carry no linguistic information. In this paper, using this ability of rough set theory, we define an occupancy degree, a measure that represents the degree of relational quantity between the condition and decision attributes of a data table. We also propose a method that can find an optimal fuzzy rule table and membership functions of the input and output variables from data without linguistic information, and we examine the validity of the method by modeling data generated by fuzzy rules.

  • PDF
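A measure like the occupancy degree can be sketched as, for each condition-attribute value, the fraction of its rows that share the majority decision; values near 1 indicate a strong condition-decision relation worth turning into a fuzzy rule. The exact definition in the paper may differ; this is an illustrative proxy on invented data:

```python
from collections import Counter

# Sketch of an occupancy-degree style measure: for each condition value,
# the fraction of its rows sharing the majority decision. Illustrative
# proxy only; the paper's exact definition may differ.
def occupancy(rows):
    by_cond = {}
    for cond, dec in rows:
        by_cond.setdefault(cond, []).append(dec)
    return {c: Counter(ds).most_common(1)[0][1] / len(ds)
            for c, ds in by_cond.items()}

rows = [("low", "off"), ("low", "off"), ("low", "on"), ("high", "on")]
print(occupancy(rows))  # {'low': 0.666..., 'high': 1.0}
```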