• Title/Summary/Keyword: Knowledge-Based Data Mining


Financial Footnote Analysis for Financial Ratio Predictions based on Text-Mining Techniques (재무제표 주석의 텍스트 분석 통한 재무 비율 예측 향상 연구)

  • Choe, Hyoung-Gyu;Lee, Sang-Yong Tom
    • Knowledge Management Research
    • /
    • v.21 no.2
    • /
    • pp.177-196
    • /
    • 2020
  • Since the adoption of K-IFRS (Korean International Financial Reporting Standards), the volume of financial footnotes has increased. However, because of their stereotypical phrasing and lack of conciseness, extracting core information from footnotes remains difficult. To address this problem, this study analyzed financial footnotes with text-mining techniques to improve financial ratio predictions. Using financial statement data from 2013 to 2018, we predicted earnings per share (EPS) for the following quarter. Measured prediction errors were significantly reduced when text-mined footnote data were used jointly with the quantitative data. We attribute this result to the fact that discretionary financial figures, which are hard to predict from quantitative financial data alone, are more strongly correlated with footnote texts.
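
A minimal sketch of the general approach described in this abstract: combining TF-IDF features from footnote text with quantitative figures in a single regression model to predict next-quarter EPS. The file name, column names, and the gradient-boosting regressor are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch: joint use of footnote text and quantitative financials for EPS prediction.
# Column names ("footnote_text", "eps_next_q", ...) and the regressor choice are
# illustrative assumptions, not the paper's exact setup.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

df = pd.read_csv("quarterly_statements.csv")           # hypothetical file
numeric_cols = ["revenue", "operating_income", "total_assets", "debt_ratio"]

features = ColumnTransformer([
    ("text", TfidfVectorizer(max_features=2000), "footnote_text"),
    ("num", "passthrough", numeric_cols),
])
model = Pipeline([("features", features), ("reg", GradientBoostingRegressor())])

X_train, X_test, y_train, y_test = train_test_split(
    df[["footnote_text"] + numeric_cols], df["eps_next_q"], shuffle=False)
model.fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```

The same pipeline without the "text" branch gives the quantitative-only baseline the abstract compares against.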

A Six Sigma Methodology Using Data Mining : A Case Study of "P" Steel Manufacturing Company (데이터 마이닝 기반의 6 시그마 방법론 : 철강산업 적용사례)

  • Jang, Gil-Sang
    • The Journal of Information Systems
    • /
    • v.20 no.3
    • /
    • pp.1-24
    • /
    • 2011
  • Recently, six sigma has been widely adopted in a variety of industries as a disciplined, data-driven problem-solving methodology supported by a handful of powerful statistical tools, with the aim of reducing variation through continuous process improvement. Data mining, in turn, has been widely used to discover unknown knowledge from large volumes of data using modeling techniques such as neural networks, decision trees, and regression analysis. This paper proposes a six sigma methodology based on data mining for effectively and efficiently handling the massive data involved in driving six sigma projects. The proposed methodology is applied to the hot stove system, a major energy-consuming process at the "P" steel company, to improve heat efficiency by reducing energy consumption. The results yield optimal operating conditions and a 15% reduction in hot stove energy cost.
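
A small sketch of the kind of data-mining step such a six sigma project embeds in its Analyze phase: fitting a tree model on operating variables to screen for the vital few factors driving energy consumption. The variable names and the decision-tree choice are assumptions, not the paper's actual model.

```python
# Sketch: screening hot-stove operating variables against energy consumption.
# Variable names and the decision-tree choice are illustrative assumptions.
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

df = pd.read_csv("hot_stove_operations.csv")            # hypothetical file
X = df[["blast_temp", "fuel_flow", "air_ratio", "cycle_time"]]
y = df["energy_per_ton"]

tree = DecisionTreeRegressor(max_depth=4).fit(X, y)
importance = pd.Series(tree.feature_importances_, index=X.columns)
print(importance.sort_values(ascending=False))           # candidate vital-few factors
```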

Anonymizing Graphs Against Weight-based Attacks with Community Preservation

  • Li, Yidong;Shen, Hong
    • Journal of Computing Science and Engineering
    • /
    • v.5 no.3
    • /
    • pp.197-209
    • /
    • 2011
  • The increasing popularity of graph data, such as social and online communities, has initiated a prolific research area in knowledge discovery and data mining. As more real-world graphs are released publicly, there is growing concern about privacy breaches for the entities involved. An adversary may reveal the identities of individuals in a published graph using the topological structure and/or basic graph properties as background knowledge. Many previous studies addressing such attacks as identity disclosure, however, concentrate on preserving privacy in simple graph data only. In this paper, we consider the identity disclosure problem in weighted graphs. The motivation is that a weighted graph can carry much more unique information than its simple version, which makes disclosure easier. We first formalize a general anonymization model to deal with weight-based attacks. Then two concrete attacks are discussed, based on the weight properties of a graph: the sum and the set of adjacent weights for each vertex. We also propose a complete solution to the weight anonymization problem that protects a graph from both attacks. In addition, we investigate the impact of the proposed methods on community detection, a very popular application in the graph mining field. Our approaches are efficient and practical, and have been validated by extensive experiments on both synthetic and real-world datasets.
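
A small sketch of the two per-vertex weight fingerprints the abstract names (the sum and the set of adjacent weights), showing why a vertex with a unique fingerprint is re-identifiable in a published graph. The example graph is made up; the paper's anonymization algorithms are not shown.

```python
# Sketch: the two weight-based fingerprints discussed above -- the sum and the
# multiset of adjacent edge weights per vertex. A vertex whose fingerprint is
# unique in the published graph can be re-identified by an adversary who knows it.
import networkx as nx
from collections import Counter

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 2.0), ("a", "c", 5.0),
                           ("b", "c", 1.0), ("c", "d", 5.0)])

weight_sum = {v: sum(d["weight"] for _, _, d in G.edges(v, data=True)) for v in G}
weight_set = {v: tuple(sorted(d["weight"] for _, _, d in G.edges(v, data=True)))
              for v in G}

sum_counts = Counter(weight_sum.values())
unique = [v for v in G if sum_counts[weight_sum[v]] == 1]
print("vertices unique under the weight-sum attack:", unique)
```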

Automated Conceptual Data Modeling Using Association Rule Mining (연관규칙 마이닝을 활용한 개념적 데이터베이스 설계 자동화 기법)

  • Son, Yoon-Ho;Kim, In-Kyu;Kim, Nam-Gyu
    • The Journal of Information Systems
    • /
    • v.18 no.4
    • /
    • pp.59-86
    • /
    • 2009
  • Data modeling can be regarded as a series of processes that abstract real-world business concerns. The conceptual modeling phase is often regarded as the most difficult stage of the entire modeling process, because quite different conceptual models may be produced even for similar business domains, depending on users' varying requirements and data modelers' diverse perceptions of those requirements. This implies that an object considered an entity in one domain may be considered an attribute in another, and vice versa. However, many traditional knowledge-based automated database design systems fail to construct appropriate Entity-Relationship Diagrams (ERDs) for a given set of requirements because of the rigid assumption that an object should be classified as an entity if it was classified as an entity in previous applications. In this paper, we propose an alternative automation system that generates ERDs from business descriptions using association rule mining. Our system differs from traditional ones in that it performs data modeling based only on business descriptions written by domain workers, without relying on any kind of knowledge base. Since the proposed system can produce several versions of ERDs from the same business descriptions simultaneously, users can choose the ERD that best fits their business environment and requirements. We performed a case study on personnel management at a university to evaluate the practicability of the proposed system; the experiment section summarizes its results.
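
A minimal sketch of the association-rule step that underlies this approach: treating each business-description sentence as a transaction of candidate terms and mining co-occurrence rules, which can suggest candidate entity/attribute groupings. The tokenized sentences, thresholds, and the use of mlxtend's Apriori are assumptions, not the authors' implementation.

```python
# Sketch: mining term co-occurrence rules from business-description sentences,
# the kind of evidence that can suggest entity/attribute groupings for an ERD.
# The sample transactions, thresholds, and mlxtend Apriori choice are assumptions.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

sentences = [
    ["employee", "name", "salary", "department"],
    ["department", "manager", "budget"],
    ["employee", "department", "project"],
    ["project", "budget", "deadline"],
]
te = TransactionEncoder()
onehot = pd.DataFrame(te.fit_transform(sentences), columns=te.columns_)
itemsets = apriori(onehot, min_support=0.4, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
print(rules[["antecedents", "consequents", "support", "confidence"]])
```

A rule such as {employee} -> {department} with high confidence hints that "department" may attach to the "employee" entity as an attribute or a related entity, which is the kind of alternative the system leaves to the user.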

PubMine: An Ontology-Based Text Mining System for Deducing Relationships among Biological Entities

  • Kim, Tae-Kyung;Oh, Jeong-Su;Ko, Gun-Hwan;Cho, Wan-Sup;Hou, Bo-Kyeng;Lee, Sang-Hyuk
    • Interdisciplinary Bio Central
    • /
    • v.3 no.2
    • /
    • pp.7.1-7.6
    • /
    • 2011
  • Background: Published manuscripts are the main source of biological knowledge. Since manual examination is practically impossible given the huge volume of literature data (approximately 19 million abstracts in PubMed), intelligent text mining systems are of great utility for knowledge discovery. However, most current text mining tools have limited applicability because they i) provide abstract-based rather than sentence-based search, ii) use ontology terms improperly or not at all, iii) are designed for specific subjects only, or iv) respond too slowly for web services and real-time applications. Results: We introduce an advanced text mining system called PubMine that supports intelligent knowledge discovery based on diverse bio-ontologies. PubMine improves query accuracy and flexibility with advanced search capabilities: fuzzy search, wildcard search, proximity search, range search, and Boolean combinations. Furthermore, PubMine allows users to extract multi-dimensional relationships between genes, diseases, and chemical compounds using OLAP (On-Line Analytical Processing) techniques. The HUGO gene symbols and the MeSH ontology for diseases, chemical compounds, and anatomy are included in the current version of PubMine, which is freely available at http://pubmine.kobic.re.kr. Conclusions: PubMine is a unique bio-text mining system that provides flexible searches and analysis of biological entity relationships. We believe that PubMine will serve as a key bioinformatics utility owing to its rapid responses, which enable web services for the community, and its flexibility in accommodating general ontologies.
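
A rough sketch of the OLAP-style roll-up the abstract mentions: aggregating sentence-level entity co-occurrences into a gene-by-disease view. The co-occurrence records below are hypothetical and the pandas pivot is only an analogy for PubMine's cubes, not its implementation.

```python
# Sketch: OLAP-style roll-up of sentence-level co-occurrences into a
# gene x disease view, analogous in spirit to PubMine's relationship cubes.
# The records below are hypothetical, not PubMine output.
import pandas as pd

cooccurrences = pd.DataFrame([
    {"gene": "TP53",  "disease": "breast cancer", "year": 2009, "pmid": 1},
    {"gene": "TP53",  "disease": "lung cancer",   "year": 2010, "pmid": 2},
    {"gene": "BRCA1", "disease": "breast cancer", "year": 2010, "pmid": 3},
    {"gene": "TP53",  "disease": "breast cancer", "year": 2011, "pmid": 4},
])
cube = cooccurrences.pivot_table(index="gene", columns="disease",
                                 values="pmid", aggfunc="count", fill_value=0)
print(cube)                       # counts of supporting sentences/abstracts
print(cube.sum(axis=1))           # roll-up over diseases per gene
```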

Finding Frequent Itemsets based on Open Data Mining in Data Streams (데이터 스트림에서 개방 데이터 마이닝 기반의 빈발항목 탐색)

  • Chang, Joong-Hyuk;Lee, Won-Suk
    • The KIPS Transactions:PartD
    • /
    • v.10D no.3
    • /
    • pp.447-458
    • /
    • 2003
  • The basic assumption of conventional data mining methodology is that the data set of a knowledge discovery process must be fixed and available before the process can proceed. Consequently, this assumption holds only when the static knowledge embedded in a specific data set is the target of data mining. In addition, conventional data mining methods require considerable computing time to produce mining results from a large data set. For these reasons, it is almost impossible to apply such methods to real-time analysis over a data stream, where new transactions are continuously generated and an up-to-date mining result that includes the newly generated transactions is needed as quickly as possible. In this paper, a new mining concept, open data mining in a data stream, is proposed for this purpose. In open data mining, whenever a transaction is newly generated, the updated mining result over all transactions, including the newly generated one, is obtained instantly. To implement this mechanism efficiently, it is necessary to incorporate the delayed insertion of newly identified information from recent transactions as well as the pruning of insignificant information from the mining result of past transactions. The proposed algorithm is analyzed through a series of experiments in order to identify its various characteristics.
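
The delayed-insertion and pruning ideas resemble the general family of one-pass approximate frequent-pattern tracking. Below is a Lossy-Counting-style simplification for single items, given only as an assumed illustration of that family; the paper's algorithm handles itemsets and differs in its details.

```python
# Sketch: one-pass approximate frequent-item tracking over a stream, illustrating
# delayed insertion of new items and pruning of insignificant ones.
# This is a Lossy-Counting-style single-item simplification, not the paper's
# itemset algorithm.
def lossy_counting(stream, epsilon=0.01):
    counts, bucket = {}, 1
    width = int(1 / epsilon)                      # transactions per bucket
    for n, item in enumerate(stream, start=1):
        if item in counts:
            counts[item][0] += 1
        else:
            counts[item] = [1, bucket - 1]        # [count, max undercount]
        if n % width == 0:                        # end of bucket: prune
            counts = {k: v for k, v in counts.items() if v[0] + v[1] > bucket}
            bucket += 1
    return counts

counts = lossy_counting(iter("abracadabra" * 50), epsilon=0.1)
print({item: c for item, (c, _) in counts.items()})
```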

Mobile Device and Virtual Storage-Based Approach to Automatically and Pervasively Acquire Knowledge in Dialogues (모바일 기기와 가상 스토리지 기술을 적용한 자동적 및 편재적 음성형 지식 획득)

  • Yoo, Kee-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.2
    • /
    • pp.1-17
    • /
    • 2012
  • The Smartphone, one of the most widely used mobile devices, can be applied very effectively to capture knowledge on the spot when combined with the pervasive functionality of cloud computing. The knowledge-capturing process can also be automated effectively if the topic of the knowledge is identified automatically. Therefore, this paper suggests an interdisciplinary approach to automatically acquire knowledge on the spot by combining text mining-based topic identification with a cloud computing-based Smartphone. The Smartphone is used not only as a recorder that captures the knowledge possessor's dialogue, which serves as the knowledge source, but also as a sensor that collects the knowledge possessor's context data characterizing the specific situation surrounding him or her. The support vector machine, a well-known and high-performing text mining algorithm, is applied to extract the topic of the knowledge. By relating the topic to the context data, a business rule can be formulated, and by aggregating the rule, the topic, the context data, and the dictated dialogue, a set of knowledge is automatically acquired.
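
A minimal sketch of the SVM-based topic identification step applied to dictated dialogue text. The training snippets, topic labels, and the TF-IDF + LinearSVC choice are assumptions; the paper's features and kernel may differ.

```python
# Sketch: SVM topic identification over dictated dialogue text, the core
# text-mining step described above. Topics and snippets are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

dialogues = ["the client asked for a revised delivery schedule",
             "we agreed to extend the maintenance contract by one year",
             "the valve pressure must be checked before each shift",
             "update the delivery schedule after the holiday"]
topics = ["delivery", "contract", "maintenance", "delivery"]

clf = make_pipeline(TfidfVectorizer(), LinearSVC()).fit(dialogues, topics)
print(clf.predict(["please confirm the new delivery schedule with the client"]))
```

The predicted topic would then be joined with the context data (location, time, counterpart) collected by the phone to form the business rule the abstract describes.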

Genome data mining for everyone

  • Lee, Gir-Won;Kim, Sang-Soo
    • BMB Reports
    • /
    • v.41 no.11
    • /
    • pp.757-764
    • /
    • 2008
  • The genomic sequences of a huge number of species have been determined. Typically, these genome sequences and the associated annotation data are accessed through Internet-based genome browsers that offer a user-friendly interface. Intelligent use of the data should expedite biological knowledge discovery. Such activity is collectively called data mining and involves queries that can be simple, complex, and even combinational. Various tools have been developed to make genome data mining available to computational and experimental biologists alike. In this mini-review, some tools that have proven successful will be introduced along with examples taken from published reports.

Robustness of Data Mining Tools under Varying Levels of Noise: Case Study in Predicting a Chaotic Process

  • Kim, Steven H.;Lee, Churl-Min;Oh, Heung-Sik
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.23 no.1
    • /
    • pp.109-141
    • /
    • 1998
  • Many processes in the industrial realm exhibit stochastic and nonlinear behavior. Consequently, an intelligent system must be able to model nonlinear production processes as well as probabilistic phenomena. For a knowledge-based system to control a manufacturing process, an important capability is that of prediction: forecasting the future trajectory of a process as well as the consequences of a control action. This paper examines the robustness of data mining tools under varying levels of noise when predicting nonlinear processes, including chaotic behavior. The evaluated models include the perceptron neural network using backpropagation (BPN), the recurrent neural network (RNN), and case-based reasoning (CBR). The concepts are crystallized through a case study in predicting a chaotic process in the presence of various patterns of noise.
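
A compact sketch of this kind of experiment: generating a chaotic logistic-map series, adding observation noise at a few levels, and measuring one-step-ahead prediction error of a neural model. The single MLP stands in for the BPN/RNN/CBR comparison and is an assumed simplification, not the paper's setup.

```python
# Sketch: one-step-ahead prediction of a chaotic logistic map under increasing
# observation noise, a simplified stand-in for the BPN/RNN/CBR comparison above.
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor

def logistic_map(n, x0=0.3, r=3.9):
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1 - x[i - 1])
    return x

series = logistic_map(1200)
for noise in (0.0, 0.01, 0.05):
    noisy = series + np.random.normal(0, noise, series.shape)
    X, y = noisy[:-1].reshape(-1, 1), noisy[1:]
    X_tr, X_te, y_tr, y_te = X[:1000], X[1000:], y[:1000], y[1000:]
    mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, mlp.predict(X_te))
    print(f"noise={noise:.2f}  test MSE={mse:.4f}")
```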


Towards Effective Analysis and Tracking of Mozilla and Eclipse Defects using Machine Learning Models based on Bugs Data

  • Hassan, Zohaib;Iqbal, Naeem;Zaman, Abnash
    • Soft Computing and Machine Intelligence
    • /
    • v.1 no.1
    • /
    • pp.1-10
    • /
    • 2021
  • Analysis and tracking of bug reports is a challenging field in software repository mining. It is one of the fundamental ways to explore the large amounts of data acquired from defect tracking systems and to discover patterns and valuable knowledge about the bug triaging process. Bug data is publicly accessible from defect tracking systems such as Bugzilla and JIRA, and with robust machine learning (ML) techniques it is quite possible to process and analyze massive amounts of data to extract underlying patterns, knowledge, and insights. Therefore, it is an interesting area in which to propose innovative and robust solutions for analyzing and tracking bug reports originating from different open source projects, including Mozilla and Eclipse. This research study presents an ML-based classification model to analyze and track bug defects for enhancing software engineering management (SEM) processes. In this work, Artificial Neural Network (ANN) and Naive Bayesian (NB) classifiers are implemented using open-source bug datasets from Mozilla and Eclipse. Different evaluation measures are employed to analyze the experimental results, and a comparative analysis of ANN and NB is given. The experimental results indicate that the ANN achieved higher accuracy than NB. The proposed research study will enhance SEM processes and contribute to the body of knowledge in the data mining field.
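
A minimal sketch of the ANN-vs-NB comparison described above, applied to bug-report text. The CSV layout, column names, label scheme, and the TF-IDF feature choice are hypothetical assumptions, not the study's actual dataset or features.

```python
# Sketch: comparing an ANN (MLP) and Naive Bayes on bug-report summaries,
# mirroring the ANN-vs-NB comparison above. CSV layout, column names, and
# the label scheme are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.neural_network import MLPClassifier

bugs = pd.read_csv("eclipse_bug_reports.csv")        # hypothetical export
X = TfidfVectorizer(max_features=5000).fit_transform(bugs["summary"])
y = bugs["severity"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
for name, model in [("ANN", MLPClassifier(hidden_layer_sizes=(64,), max_iter=300)),
                    ("NB", MultinomialNB())]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, "accuracy:", accuracy_score(y_te, pred),
          "macro-F1:", f1_score(y_te, pred, average="macro"))
```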