• Title/Summary/Keyword: 스키마 추출 (schema extraction)


Explanation-based Data Mining in Data Warehouse (데이터 웨어하우스 환경에서의 설명기반 데이터 마이닝)

  • 김현수;이창호
    • Proceedings of the Korea Intelligent Information System Society Conference / 1999.03a / pp.115-123 / 1999
  • Long-term operation of information systems across industry has accumulated massive volumes of data, and a variety of data mining techniques have been studied to extract useful knowledge from it. The emergence of the data warehouse in particular provides the data environment that such mining requires. However, mining results that have not passed through an expert's judgment and interpretation can produce an endless stream of findings that are trivial, spurious, or irrelevant. Even when a mining result is statistically significant, its validity and usefulness therefore need a verification process and an established methodology. The hardest part of data mining is that, to avoid inductive error, a person must interpret and judge the results directly and suggest new directions of exploration. This paper proposes a technique that verifies the results of association rule discovery by judging whether they can be explained, and presents an architecture that generalizes the verified knowledge into new hypotheses and verifies the resulting association rules against the data warehouse. We first motivate the need for explanations of data mining results, briefly review data warehouses and data mining techniques, define association rule discovery and its method, and present the data warehouse schema of the target domain. We then propose Relational Predicate Logic as a knowledge representation for domain knowledge and for the results of association rule discovery. To explain a discovered association rule, we show that it can be derived from the domain knowledge expressed in Relational Predicate Logic. Building on these explanations, we present an iterative Explanation-based Data Mining Architecture that generalizes the verified knowledge, deductively generates new hypotheses, verifies them through association rule discovery, and thereby obtains new knowledge. The contribution of this work is an integrated inductive and deductive approach to data mining: inductive errors in knowledge generated by mining are checked by showing that the knowledge is explainable from domain knowledge, and the explanations are in turn used to deductively generate new hypothesis knowledge that is verified by hypothesis testing.
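
As a rough illustration of the verification step described in this abstract, the sketch below mines simple association rules and keeps only those that a hand-written domain-knowledge predicate can "explain"; the transactions, predicates, and thresholds are all illustrative assumptions, not the paper's implementation or its Relational Predicate Logic notation.

```python
# Illustrative sketch: filter discovered association rules by whether a
# domain-knowledge predicate can "explain" them (hypothetical example data).
from itertools import combinations

# Hypothetical transactions from a retail data warehouse.
transactions = [
    {"diapers", "beer"}, {"diapers", "beer", "milk"},
    {"milk", "bread"}, {"diapers", "milk"}, {"beer", "snacks"},
]

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

def mine_rules(min_support=0.4, min_confidence=0.6):
    items = set().union(*transactions)
    for a, b in combinations(sorted(items), 2):
        pair = {a, b}
        if support(pair) >= min_support and support(pair) / support({a}) >= min_confidence:
            yield (a, b)

# Domain knowledge written as simple predicates over the rule's items;
# a rule counts as "explained" (verified) if some predicate covers it.
domain_knowledge = [
    lambda x, y: {x, y} <= {"diapers", "beer"},   # e.g. "young parents buy both"
]

for x, y in mine_rules():
    explained = any(p(x, y) for p in domain_knowledge)
    print(f"{x} -> {y}: {'explained (verified)' if explained else 'needs expert review'}")
```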


XML Schema Model of Great Staff Music Score using the Integration Method (통합 방식을 이용한 대보표 악보의 XML 스키마 모델)

  • 김정희;곽호영
    • Journal of the Korea Institute of Information and Communication Engineering / v.7 no.2 / pp.302-313 / 2003
  • Currently, DTD (Document Type Definition) definitions of music scores have been widely studied for various applications, and methods for automatically transforming a defined DTD into an XML Schema are under development. Studies on the structure of DTD definitions have focused on expressing music information in individual element formats. In this paper, based on the fact that the measure is the basic component of a score, we propose a method that expresses the music information as continuous string values, and we model the corresponding XML Schema. We also present a mechanism for extracting the music information from an XML instance expressed with the proposed method. As a result, an XML Schema that takes continuous string values could be defined, and an instance obtained by the proposed method improves efficiency over the previous approach through simpler XPath expressions and fewer search steps. In addition, people can write the notation directly, and the instance size decreases.
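
To make the XPath-style extraction concrete, here is a minimal sketch (not taken from the paper) that reads measure contents stored as continuous string values from a small, made-up XML instance; the element names and note encoding are assumptions.

```python
# Illustrative sketch: pull measure strings out of a hypothetical score
# instance with XPath-style queries (element names invented for the example).
import xml.etree.ElementTree as ET

xml_instance = """
<score>
  <part id="P1">
    <measure no="1">C4q D4q E4h</measure>
    <measure no="2">F4h G4h</measure>
  </part>
</score>
"""

root = ET.fromstring(xml_instance)
# A single, short path reaches every measure's string value directly.
for m in root.findall("./part/measure"):
    print(m.get("no"), m.text)
```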

A Comparison and Analysis System for Protein Tertiary Structure Databases that Extends LOCK (LOCK을 확장한 3차원 단백질 구조비교 및 분석시스템의 설계 및 구현)

  • Jung Kwang Su;Han Yu;Park Sung Hee;Ryu Keun Ho
    • The KIPS Transactions:PartD / v.12D no.2 s.98 / pp.247-258 / 2005
  • Protein structure is highly related to protein function, and comparing protein structures is very important for identifying structural motifs, families, and their functions. In this paper, we construct an integrated database system that holds protein structure data together with the related literature. Structure queries submitted through the web interface are compared with the target structures in the database, and the results are shown to the user for further analysis. To construct this system, we analyze the flat files of the Protein Data Bank, select the necessary structure data, and store them in a new format. The literature data related to these structures are stored in a relational database so that the various kinds of data can be queried easily. In our structure comparison system, the matched structural pattern and the RMSD value are computed and shown to the user together with the related documentation data. This system provides faster comparison and a convenient analysis environment.
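
As a side note on the RMSD value mentioned above, the following sketch (not the paper's code) computes RMSD between two equally sized sets of already superposed atom coordinates; the coordinates are made up.

```python
# Illustrative sketch: root-mean-square deviation between two aligned
# coordinate sets (toy 3-atom example; superposition is assumed done).
import numpy as np

def rmsd(a: np.ndarray, b: np.ndarray) -> float:
    # a, b: (N, 3) arrays of corresponding atom positions
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

query  = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0], [3.0, 0.2, 0.0]])
target = np.array([[0.1, 0.0, 0.0], [1.4, 0.1, 0.0], [3.1, 0.0, 0.1]])
print(f"RMSD = {rmsd(query, target):.3f} Å")
```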

Object-Oriented Database Schemata and Query Processing for XML Data (XML 데이타를 위한 객체지향 데이터베이스 스키마 및 질의 처리)

  • Jeong, Tae-Seon;Park, Sang-Won;Han, Sang-Yeong;Kim, Hyeong-Ju
    • Journal of KIISE:Databases / v.29 no.2 / pp.89-98 / 2002
  • As XML has become an emerging standard for information exchange on the World Wide Web, it has gained attention in the database community as a data model from which information can be extracted. Recently, many researchers have addressed the problem of storing XML data and processing XML queries using traditional database engines, and most of them have used relational database systems. In this paper, we show that OODBSs can be another solution. Our technique generates an OODB schema from DTDs and processes XML queries. In particular, we show that the semi-structured part of XML data can be represented through 'inheritance' and that this representation can be used to improve query processing.
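
A very rough sketch, our own illustration rather than the paper's mapping, of how an optional sub-element in a DTD might be represented through inheritance in an object-oriented schema; the element and class names are assumptions.

```python
# Illustrative sketch: represent a DTD element with an optional child
# ("<!ELEMENT book (title, price?)>") as a small inheritance hierarchy,
# so instances without the optional part need no null-valued attribute.
from dataclasses import dataclass

@dataclass
class Book:              # core content common to every <book>
    title: str

@dataclass
class PricedBook(Book):  # subclass for <book> elements that also carry <price>
    price: float

shelf = [Book("XML Basics"), PricedBook("Query Processing", 35.0)]
# A query over priced books touches only the subclass extent.
print([b.title for b in shelf if isinstance(b, PricedBook)])
```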

A Study on the Method of Extracting Shape and Attribute Information for Port IFC Viewing (항만 IFC Viewing을 위한 형상 및 속성 정보 추출 방법에 관한 연구)

  • Kim, Keun-Ho;Park, Nam-Kyu;Joo, Cheol-Beom;Kim, Sung-Hoon
    • Journal of KIBIM / v.11 no.3 / pp.67-74 / 2021
  • An IFC file depends on the IFC schema. Because of this dependency, most software that uses IFC reads and interprets IFC files with an early-binding method based on the standard IFC schema. With most open-source toolkits, early binding against the standard schema has the problem that information in an IFC file belonging to an extended IFC schema cannot be expressed. Previous studies have also suggested schema extensions, such as adding attribute information to the schema, rather than methods for interpreting the IFC file itself. In this study, a method of extracting shape and attribute information was developed by analyzing IFC files produced with the Port schema, an extended IFC schema. Three objects were created by following the reference relationships between the Port schema definitions and the IFC entities, and the three objects were finally combined into one object. We confirmed that the shape and property data were expressed properly when the combined object was delivered to the viewer. This is possible because the method matches the IFC schema against the IFC file, which depends on the schema but does not rely on early binding. However, the method has drawbacks: the many objects generated at the same time consume a large amount of memory. Further research on this issue is needed.
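
As an illustration of reading an IFC file without early binding against a fixed schema, the sketch below parses IFC (STEP) instance lines into generic entity records; the sample lines, including the Port-extension entity name, are invented, and this is not the method from the paper.

```python
# Illustrative sketch: read IFC (STEP) instance lines into generic
# (id, type, attribute-string) records without any schema binding.
import re

sample = """
#10=IFCPROJECT('2O_VyHVavCYPkbDiUkcLsd',#2,'Port project',$,$,$,$,(#20),#30);
#40=IFCQUAYWALL('1kTvXnbbzCWw8lcMd1dR4o',#2,'Quay wall',$,$,#50,#60,$);
"""

pattern = re.compile(r"#(\d+)\s*=\s*([A-Z0-9_]+)\((.*)\);")

entities = {}
for line in sample.strip().splitlines():
    m = pattern.match(line.strip())
    if m:
        eid, etype, attrs = m.groups()
        entities[int(eid)] = (etype, attrs)

# Even entity types outside the standard schema (e.g. the hypothetical Port
# extension entity above) are retained instead of being dropped by early binding.
for eid, (etype, _) in entities.items():
    print(eid, etype)
```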

Development of Information Extraction System from Multi Source Unstructured Documents for Knowledge Base Expansion (지식베이스 확장을 위한 멀티소스 비정형 문서에서의 정보 추출 시스템의 개발)

  • Choi, Hyunseung;Kim, Mintae;Kim, Wooju;Shin, Dongwook;Lee, Yong Hun
    • Journal of Intelligence and Information Systems / v.24 no.4 / pp.111-136 / 2018
  • In this paper, we propose a methodology for extracting answer information for queries from the various types of unstructured documents collected from multiple sources on the web, in order to expand a knowledge base. The proposed methodology consists of the following steps: 1) for "subject-predicate" separated queries, collect relevant documents from Wikipedia, Naver encyclopedia, and Naver news, and classify the suitable documents; 2) determine whether each sentence is suitable for information extraction and derive a confidence score; 3) based on predicate features, extract the information from the suitable sentences and derive the overall confidence of the extraction result. To evaluate the performance of the information extraction system, we selected 400 queries from SK Telecom's artificial intelligence speaker; compared with the baseline, the proposed model shows higher performance indices. The contribution of this study is a sequence tagging model based on a bidirectional LSTM-CRF that uses the predicate features of the query, with which we developed a robust model that maintains high recall even on the various types of unstructured documents collected from multiple sources. Information extraction for knowledge base expansion must take into account the heterogeneous characteristics of source-specific document types, and the proposed methodology proved to extract information effectively from various document types compared to the baseline, whereas previous research suffers when extracting information from document types that differ from the training data. In addition, by predicting the suitability of documents and sentences for information extraction before the extraction step, this study prevents unnecessary extraction attempts on documents that do not contain the answer. This matters because, in knowledge base expansion, the targets are unstructured documents on the real web, so there is no guarantee that a document contains the correct answer; when question answering is performed on the real web, previous machine reading comprehension studies show low precision because they frequently attempt to extract an answer even from documents without one. The suitability-prediction policy therefore contributes to maintaining extraction performance in a real web environment. The limitations of this study and future research directions are as follows. First, data preprocessing: the unit of knowledge extraction is determined through morphological analysis based on the open-source KoNLPy Python package, and extraction can be performed improperly when the morphological analysis is wrong, so a more advanced morphological analyzer is needed to improve the results. Second, entity ambiguity: the information extraction system of this study cannot distinguish entities that share a name but refer to different things; if several people with the same name appear in the news, the system may not extract information about the intended one, and future research needs measures for disambiguating such entities. Third, evaluation query data: we selected 400 user queries collected from SK Telecom's interactive artificial intelligence speaker and built an evaluation data set of 800 documents (400 questions × 7 articles per question: 1 Wikipedia, 3 Naver encyclopedia, 3 Naver news), judging for each whether it contains the correct answer. To ensure the external validity of the study, it is desirable to evaluate the system with more queries, which is a costly manual activity; future research should evaluate the system on more queries and develop a Korean benchmark data set for information extraction over queries on multi-source web documents, so that results can be evaluated more objectively.
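
For orientation, the following is a minimal sketch of a bidirectional LSTM sequence tagger of the kind the abstract mentions; the CRF layer and the predicate features are omitted, and the vocabulary size, tag set, and shapes are illustrative assumptions.

```python
# Illustrative sketch: a bidirectional LSTM sequence tagger over token ids,
# emitting per-token BIO-style answer-span scores (CRF layer omitted).
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden_dim=256, num_tags=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(hidden_dim * 2, num_tags)  # tags: O, B-ANS, I-ANS

    def forward(self, token_ids):
        h, _ = self.lstm(self.embed(token_ids))
        return self.out(h)  # (batch, seq_len, num_tags) emission scores

model = BiLSTMTagger()
tokens = torch.randint(0, 10000, (1, 12))   # one 12-token sentence
print(model(tokens).argmax(dim=-1))         # predicted tag per token
```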

An Extension of the DBMax for Data Warehouse Performance Administration (데이터 웨어하우스 성능 관리를 위한 DBMax의 확장)

  • Kim, Eun-Ju;Young, Hwan-Seung;Lee, Sang-Won
    • The KIPS Transactions:PartD / v.10D no.3 / pp.407-416 / 2003
  • As the usage of database systems dramatically increases and the amount of data pouring into them grows, performance administration techniques for using database systems effectively are becoming more important. In data warehouses especially, performance management is much more significant, mainly because of the large volume of data and complex queries. The objectives and characteristics of data warehouses differ from those of other operational systems, so adequate techniques for performance monitoring and tuning are needed. In this paper, we extend the functionality of DBMax, a performance administration tool for Oracle database systems, so that it can be applied to data warehouse systems. First, we analyze requirements based on the summary management and ETL functions supported for data warehouse performance improvement in Oracle 9i. Then we design an architecture for extending DBMax's functionality and implement it. Specifically, we support SQL tuning by providing details of the schema objects and statistics for summary management and ETL processes, and we provide a new function that recommends useful materialized views based on the workload extracted from DBMax log files and analyzes the usage of existing materialized views.
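
As a toy illustration of advising materialized views from a logged workload (not DBMax's actual advisor logic), the sketch below counts repeated aggregate queries and prints candidate view definitions; the workload and the frequency threshold are assumptions.

```python
# Illustrative sketch: scan a workload of logged SQL statements and suggest
# materialized-view candidates for repeated aggregate queries (toy heuristic).
import re
from collections import Counter

workload = [
    "SELECT region, SUM(amount) FROM sales GROUP BY region",
    "SELECT region, SUM(amount) FROM sales GROUP BY region",
    "SELECT product, COUNT(*) FROM sales GROUP BY product",
    "SELECT * FROM customers WHERE id = 42",
]

agg = Counter(
    sql.strip().lower()
    for sql in workload
    if re.search(r"\bgroup\s+by\b", sql, re.IGNORECASE)
)

for sql, freq in agg.most_common():
    if freq >= 2:  # repeated often enough to be worth precomputing
        print(f"Candidate materialized view (seen {freq}x):")
        print(f"  CREATE MATERIALIZED VIEW mv_candidate AS {sql};")
```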

Automatic Generation of DB Images for Testing Enterprise Systems (전사적 응용시스템 테스트를 위한 DB이미지 생성에 관한 연구)

  • Kwon, Oh-Seung;Hong, Sa-Neung
    • Journal of Intelligence and Information Systems / v.17 no.4 / pp.37-58 / 2011
  • In general, testing DB applications is much more difficult than testing other types of software. One of the decisive reasons is that the DB state, as much as the input data, influences and determines the procedures and results of program testing. Creating and maintaining proper DB states for testing not only takes a lot of time and effort, but also requires extensive IT expertise and business knowledge. Despite the difficulties, there is not enough research, and there are not enough tools, to provide the needed help. This article reports the results of research on the automatic creation and maintenance of DB states for testing DB applications. At its core, this investigation develops an automation tool that collects relevant information from a variety of sources such as logs, schemas, tables, and messages, combines the collected information intelligently, and creates pre- and post-images of database tables suitable for application tests. The proposed procedures and tool are expected to be greatly helpful for overcoming inefficiencies and difficulties not only in unit and integration tests but also in regression tests. Practically, the tool and procedures proposed in this research allow developers to improve their productivity by reducing the time and effort required for creating and maintaining appropriate DB states, and they enhance the quality of DB applications since they are conducive to a wider variety of test cases and support regression testing. Academically, this research deepens our understanding of, and introduces a new approach to, testing enterprise systems by analyzing patterns of SQL usage and defining a grammar to express and process the patterns.
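
To illustrate the pre- and post-image idea (with SQLite standing in for the enterprise database; this is not the tool described in the paper), the sketch below snapshots a table before and after a test action and diffs the two images.

```python
# Illustrative sketch: snapshot a table before and after a test run and
# diff the two images (sqlite3 stand-in for an enterprise database).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO account VALUES (1, 100), (2, 50)")

def snapshot(table):
    return {row[0]: row for row in conn.execute(f"SELECT * FROM {table}")}

pre_image = snapshot("account")

# The "application under test": transfer 30 from account 1 to account 2.
conn.execute("UPDATE account SET balance = balance - 30 WHERE id = 1")
conn.execute("UPDATE account SET balance = balance + 30 WHERE id = 2")

post_image = snapshot("account")

for key in pre_image:
    if pre_image[key] != post_image[key]:
        print(f"row {key}: {pre_image[key]} -> {post_image[key]}")
```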

A 3-Layered Information Integration System based on MDRs and Ontology (MDR과 온톨로지를 결합한 3계층 정보 통합 시스템)

  • Baik, Doo-Kwon;Choi, Yo-Han;Park, Sung-Kong;Lee, Jeong-Oog;Jeong, Dong-Won
    • The KIPS Transactions:PartD / v.10D no.2 / pp.247-260 / 2003
  • To share and standardize information, especially in database environments, an MDR (Metadata Registry) can be used to integrate various heterogeneous databases within a particular domain. However, because of discrepancies in data element representation between organizations, global information integration is not easy, and users searching for integrated information on the web have limited access to schema information for the underlying source databases. To solve these problems, in this paper we present a 3-Layered Information Integration System (LI2S) based on MDRs and an ontology. The purpose of the proposed architecture is to define an information integration model that combines the nature of the MDR standard specification with the ontology's ability to express concepts and relations. Adopting agent technology in the proposed model plays a key role in supporting a hierarchical and independent information integration architecture. The ontology serves as a semantic network from which concepts are extracted from the user query and relationships between MDRs are established for data elements, while the MDR and a knowledge base are used to resolve the discrepancies in data element representation between MDRs. Based on this architectural concept, LI2S was designed and implemented.

A Dynamic Management Method for FOAF Using RSS and OLAP cube (RSS와 OLAP 큐브를 이용한 FOAF의 동적 관리 기법)

  • Sohn, Jong-Soo;Chung, In-Jeong
    • Journal of Intelligence and Information Systems / v.17 no.2 / pp.39-60 / 2011
  • Since the introduction of Web 2.0 technology, social network services have been recognized as a foundation of important future information technology. The advent of Web 2.0 changed who creates content: in the earlier web, content creators were service providers, whereas now they are the service users themselves. Users share experiences with other users and improve content quality, which has increased the importance of social networks. As a result, diverse forms of social network services have emerged from the relations and experiences of users. A social network is a network that constructs and expresses social relations among people who share interests and activities. Today's social network services are not confined to showing user interactions; they have developed to a level where content generation and evaluation interact with each other. As the volume of content generated from social network services and the number of connections between users have drastically increased, social network extraction has become more complicated, and the following problems arise. First, the representational power of objects in the social network is insufficient. Second, the diverse connections among users cannot be expressed. Third, it is difficult to reflect dynamic change in the social network caused by changes in user interests. Lastly, there is no method capable of integrating and processing data efficiently in a heterogeneous distributed computing environment. The first and last problems can be solved by using FOAF, an ontology-based vocabulary for describing user profiles for the construction of social networks; solving the second and third problems, however, requires a novel technique for reflecting dynamic changes of user interests and relations. In this paper, we propose a method that overcomes these problems of existing social network extraction methods by applying FOAF (for describing user profiles) and RSS (a web content syndication mechanism) to an OLAP system in order to dynamically update and manage FOAF. We rely on data interoperability, an important characteristic of FOAF, and use RSS, which provides a standard vocabulary for distributing site and content information in RDF/XML form, to reflect changes over time and in user interests. We collect personal information and relations of users through FOAF, collect user contents through RSS, and insert the collected data into a database organized as a star schema. The proposed system generates an OLAP cube from the data in the database, and the 'Dynamic FOAF Management Algorithm' processes the generated cube. The algorithm consists of two functions: find_id_interest(), which extracts user interests during the input period, and find_relation(), which extracts users matching those interests. Finally, the proposed system reconstructs the FOAF by reflecting the extracted relationships and interests of users. To justify the suggested idea, we present the implemented result together with its analysis. We used the C# language and an MS-SQL database, with FOAF and RSS data collected from livejournal.com. The implemented result shows that users' foaf:interest entries increased by an average of 19 percent over four weeks, and, in proportion to that change, users' foaf:knows entries grew by an average of 9 percent over the same period. Because FOAF and RSS, which are widely supported in Web 2.0 and social network services, are used as the base data, the method has a definite advantage in utilizing user data distributed across diverse web sites and services regardless of language and computer type. Using the method suggested in this paper, better services can be provided that cope with the rapid change of user interests through the automatic application of FOAF.
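
As a toy sketch of what the two functions of the Dynamic FOAF Management Algorithm might do, the code below runs find_id_interest() and find_relation() over a flat (user, interest, week) fact table instead of a real OLAP cube; the table layout and behavior are assumptions, not the paper's C# implementation.

```python
# Illustrative sketch: extract interests seen in a time window and the users
# who share them, mimicking find_id_interest() / find_relation() on a toy
# fact table (user, interest, week).
from collections import defaultdict

facts = [
    ("alice", "jazz", 1), ("alice", "hiking", 2),
    ("bob",   "jazz", 2), ("carol", "hiking", 3),
]

def find_id_interest(user, start_week, end_week):
    """Interests a user expressed during the given period."""
    return {i for u, i, w in facts if u == user and start_week <= w <= end_week}

def find_relation(user, start_week, end_week):
    """Other users sharing at least one of those interests."""
    interests = find_id_interest(user, start_week, end_week)
    related = defaultdict(set)
    for u, i, w in facts:
        if u != user and i in interests:
            related[u].add(i)
    return dict(related)

print(find_id_interest("alice", 1, 3))   # {'jazz', 'hiking'}
print(find_relation("alice", 1, 3))      # {'bob': {'jazz'}, 'carol': {'hiking'}}
```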