• Title/Summary/Keyword: document storing


Preparation of Soil Input Files to a Crop Model Using the Korean Soil Information System (흙토람 데이터베이스를 활용한 작물 모델의 토양입력자료 생성)

  • Yoo, Byoung Hyun; Kim, Kwang Soo
    • Korean Journal of Agricultural and Forest Meteorology, v.19 no.3, pp.174-179, 2017
  • Soil parameters are required inputs to crop models, which estimate crop yield under given environmental conditions. The Korean Soil Information System (KSIS), which provides detailed soil profile records for 390 soil series in HTML (HyperText Markup Language) format, would be useful for preparing soil input files. The Korean Soil Information System Processing Tool (KSISPT) was developed to aid the generation of soil input data from the KSIS database. The tool was implemented in Java as a set of modules for parsing the HTML documents of the KSIS, storing the data required for a soil input file, calculating additional soil parameters, and writing the soil input file to a local disk. Using this automated soil data preparation tool, about 940 soil input files were created for each of the DSSAT and ORYZA 2000 models. In combination with a soil series distribution map at 30 m resolution, spatial analysis of crop yield under climate change could be carried out, which would help the development of adaptation strategies.
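
A minimal sketch of the kind of HTML-parsing step this abstract describes, using the jsoup library. The table class, column order, and output format below are illustrative assumptions, not the actual KSIS page layout or KSISPT code:

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

import java.io.File;
import java.io.PrintWriter;

// Hypothetical sketch: parse an HTML soil-profile page and write a
// flat text input file for a crop model.
public class SoilHtmlParser {
    public static void main(String[] args) throws Exception {
        Document doc = Jsoup.parse(new File("soil_series.html"), "UTF-8");
        try (PrintWriter out = new PrintWriter("soil_input.txt")) {
            // Assumption: each horizon is one table row: depth, clay %, sand %, pH
            for (Element row : doc.select("table.profile tr")) {
                var cells = row.select("td");
                if (cells.size() < 4) continue;   // skip header rows
                out.printf("%s %s %s %s%n",
                        cells.get(0).text(),      // horizon depth (cm)
                        cells.get(1).text(),      // clay content (%)
                        cells.get(2).text(),      // sand content (%)
                        cells.get(3).text());     // pH
            }
        }
    }
}
```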

Efficient Deferred Incremental Refresh of XML Query Cache Using ORDBMS (ORDBMS를 사용한 XML 질의 캐쉬의 효율적인 지연 갱신)

  • Hwang, Dae-Hyun; Kang, Hyun-Chul
    • The KIPS Transactions: Part D, v.13D no.1 s.104, pp.11-22, 2006
  • As more and more XML documents are handled, research on storing and managing XML documents in databases is being actively conducted. Employing an RDBMS or ORDBMS as a repository of XML documents is currently regarded as the most practical approach. The results of queries over XML documents stored in a database can be cached for query performance, though this incurs the cost of keeping the cache consistent with updates to the underlying data. In this paper, we assume that an ORDBMS is used as the repository for both the XML query cache and its underlying XML documents, and that the XML query cache is refreshed in a deferred way using an update log. When the same XML document is updated multiple times, deferred refresh of the XML query cache may become inefficient. We propose an algorithm that removes or filters such duplicate updates. Based on that, the optimal SQL statements to be executed for XML query cache consistency are generated. Through experiments, we show the efficiency of the proposed deferred refresh of the XML query cache.
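
The duplicate-update filtering idea can be illustrated with a short sketch. The log record shape and the last-write-wins policy here are assumptions for illustration, not the paper's actual algorithm or log schema:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Minimal sketch: collapse an update log so only the latest update per
// document survives; earlier ones are superseded, and refreshing the
// cache with them would be wasted work.
public class UpdateLogFilter {
    record Update(String docId, String newContent) {}

    static List<Update> filter(List<Update> log) {
        Map<String, Update> latest = new LinkedHashMap<>();
        for (Update u : log) {
            latest.remove(u.docId());   // drop the superseded entry...
            latest.put(u.docId(), u);   // ...and re-insert to keep log order
        }
        return new ArrayList<>(latest.values());
    }
}
```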

Developing a Module to Store 3DF-GML Instance Documents in a Database (3DF-GML 인스턴스 문서의 데이터베이스 저장을 위한 모듈 개발)

  • Lee, Kang-Jae; Jang, Gun-Up; Lee, Ji-Yeong
    • Spatial Information Research, v.19 no.6, pp.87-100, 2011
  • Recently, a variety of GML application schemas have been designed in many fields. GML application schemas are specific to an application domain of interest and specify object types using the primitive object types defined in the GML standard. GML instance documents are created based on such application schemas and generally grow very large in order to represent huge numbers of geographic objects. It is therefore essential to store such GML instance documents in a relational database for efficient management and use. A relational database is relatively convenient to use, is widely applied in various fields, and is fundamentally more efficient than a file structure for handling large datasets. Many studies on storing GML documents have been carried out so far, but there are few on the storage of GML instance documents. Therefore, in this study, we developed a storage module that stores GML instance documents in a relational database.
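
A hedged sketch of what such a storage module might do, assuming a DOM pass over the instance document and a single hypothetical feature table. The element name, connection URL, and schema are illustrative, not the module's actual design:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.File;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

// Hypothetical sketch: load features from a GML instance document
// into one relational table via JDBC batch inserts.
public class GmlLoader {
    public static void main(String[] args) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().parse(new File("city.gml"));
        NodeList features = doc.getElementsByTagName("Building");  // assumed feature type
        try (Connection con = DriverManager.getConnection("jdbc:postgresql:gml", "user", "pw");
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO feature(gml_id, xml_fragment) VALUES (?, ?)")) {
            for (int i = 0; i < features.getLength(); i++) {
                Element f = (Element) features.item(i);
                ps.setString(1, f.getAttribute("gml:id"));
                ps.setString(2, f.getTextContent());   // simplified: text content only
                ps.addBatch();
            }
            ps.executeBatch();   // one round trip for the whole batch
        }
    }
}
```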

An XML Data Management System and Its Application to Genome Databases (XML 데이타 관리시스템과 유전체 데이타베이스에의 응용)

  • 이경희; 김태경; 김선신; 이충세; 조완섭
    • Journal of KIISE: Databases, v.31 no.4, pp.432-443, 2004
  • As XML data has come into wide use on the Internet, it has become necessary to store and retrieve XML data using DBMSs. However, relational DBMSs suffer from the model mismatch between the graph structure of XML data and the tables of relational databases. We propose Xing, an ORDBMS-based, DTD-dependent XML data management system. Xing stores XML data in a DTD-dependent form in an object database. Since the object database schema has a graph structure and supports multi-valued attributes, mapping the XML data model and queries onto the object data model and OQL is straightforward. For rapid storage of large quantities of XML data, we use a SAX parser with a customized Xing-tree, which requires little memory compared with a DOM-tree. Xing also returns query results in XML document form. We have implemented the Xing system on top of the UniSQL object-relational DBMS for validity checking and performance comparison. For XML genome data from GenBank, experimental evaluation shows that Xing can provide significant performance improvement (up to 10 times) compared with the relational approach.
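
The memory argument for SAX over DOM can be seen in a small sketch. Since the Xing-tree itself is not shown in the abstract, a plain element-path stack stands in for it here; only the current path is held in memory, never the whole document:

```java
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;
import java.io.File;

// Illustrative streaming pass over a large XML file with SAX.
public class StreamingLoader extends DefaultHandler {
    private final StringBuilder path = new StringBuilder();

    @Override
    public void startElement(String uri, String local, String qName, Attributes atts) {
        path.append('/').append(qName);
        // In Xing, this is roughly where a node would be appended to the
        // DTD-dependent object-database structure.
        System.out.println("enter " + path);
    }

    @Override
    public void endElement(String uri, String local, String qName) {
        path.setLength(path.lastIndexOf("/"));   // pop the current element
    }

    public static void main(String[] args) throws Exception {
        SAXParserFactory.newInstance().newSAXParser()
                .parse(new File("genome.xml"), new StreamingLoader());
    }
}
```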

Query Processing using Information of Parent Nodes in Partitioned Inverted Index Tables (분할된 역 인덱스 테이블에서 부모노드의 정보를 이용한 질의 처리)

  • Kim, Myung-Soo; Hwang, Byung-Yeon
    • Journal of Korea Multimedia Society, v.11 no.7, pp.905-913, 2008
  • Many heterogeneous XML documents are in wide use with the growing adoption of XML, and the importance of research on data structures for more efficient document management has been growing steadily. We propose a query processing technique that uses parent node information in a partitioned inverted index table. Search efficiency over such heterogeneous documents is greatly influenced by the number of queries processed and the size of the target data sets, so considering these two factors is very important when designing a data structure. First, our technique stores each node's parent information in the inverted index table; using this information, we can reduce the number of queries processed by half. Also, the size of the target data sets can be lessened by using a partitioned inverted index table. XML documents collected from the Internet are used to demonstrate the new method, and its efficiency is compared with existing search methods.
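
A simplified illustration of the core idea: keep the parent node's label inside each posting so a parent/child path query such as "section/title" needs one index lookup plus an in-memory check instead of two lookups. The record fields are illustrative, not the paper's table schema:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of a parent-aware inverted index over XML element labels.
public class ParentAwareIndex {
    record Posting(int docId, int nodeId, String parentLabel) {}

    private final Map<String, List<Posting>> index = new HashMap<>();

    void add(String label, Posting p) {
        index.computeIfAbsent(label, k -> new ArrayList<>()).add(p);
    }

    // One lookup on the child label, then a parent-label filter,
    // instead of a second index query for the parent step.
    List<Posting> query(String label, String parentLabel) {
        return index.getOrDefault(label, List.of()).stream()
                .filter(p -> p.parentLabel().equals(parentLabel))
                .toList();
    }
}
```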


Text Data Analysis Model Based on Web Application (웹 애플리케이션 기반의 텍스트 데이터 분석 모델)

  • Jin, Go-Whan
    • The Journal of the Korea Contents Association, v.21 no.11, pp.785-792, 2021
  • Since the Fourth Industrial Revolution, advances in technologies such as artificial intelligence and big data have brought changes to society as a whole, and the amount of data that can be collected in the course of applying these technologies tends to grow rapidly. In academia in particular, the existing literature is analyzed to grasp research trends; such analyses organize the flow of research, its methodologies, and its themes, and, by identifying the subjects currently under discussion, contribute greatly to setting the direction of future research. However, without programming expertise it is difficult even to approach the data collection needed for such literature analysis. In this paper, we propose a text mining-based topic modeling web application model. With the proposed model, even users who lack specialized knowledge of data analysis methods can collect, store, and text-analyze research papers, and researchers can analyze prior work and research trends. The model is expected to reduce the time and effort required for data analysis.
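
As a deliberately small stand-in for the paper's text-analysis step, the sketch below counts term frequencies over stored abstracts; an actual topic model such as LDA would replace this counting pass, and the sample documents are invented:

```java
import java.util.*;
import java.util.stream.Collectors;

// Toy keyword-frequency pass over a collection of stored abstracts.
public class TermFrequency {
    public static void main(String[] args) {
        List<String> docs = List.of(
                "xml storage in relational databases",
                "query processing over xml storage");
        Map<String, Long> freq = docs.stream()
                .flatMap(d -> Arrays.stream(d.toLowerCase().split("\\W+")))
                .collect(Collectors.groupingBy(w -> w, Collectors.counting()));
        freq.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
                .limit(5)                                  // top-5 terms
                .forEach(e -> System.out.println(e.getKey() + " " + e.getValue()));
    }
}
```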

Intelligent Character Recognition System for Account Payable by using SVM and RBF Kernel

  • Farooq, Muhammad Umer; Kazi, Abdul Karim; Latif, Mustafa; Alauddin, Shoaib; Kisa-e-Zehra, Kisa-e-Zehra; Baig, Mirza Adnan
    • International Journal of Computer Science & Network Security, v.22 no.11, pp.213-221, 2022
  • Intelligent Character Recognition System for Account Payable (ICRS AP) Automation represents the process of capturing text from scanned invoices, extracting the key fields, and storing the captured fields in a properly structured document format. ICRS plays a critical role in streamlining invoice data; the fields of interest include Vendor Name, Purchase Order Number, Due Date, Total Amount, and Payee Name. As companies attempt to cut costs and upgrade their processes, accounts payable (A/P), a paper-intensive procedure, is a prime candidate for digitization. For most companies dealing with an enormous number of invoices, manual invoice matching procedures quickly show their limitations. Receiving a paper invoice and matching it to a purchase order (PO) and general ledger (GL) code can be difficult for businesses, and a lack of automation leads to more serious issues such as accruals for financial close, excessive labor costs, and a lack of insight into corporate expenditures. The proposed system offers tighter control over invoice processing to support better and more appropriate decisions. AP automation solutions provide tighter controls, quicker clearances, smart payments, and real-time access to transactional data, allowing financial managers to make wiser decisions for the bottom line of their organizations. The system extracts fields such as Vendor Name, Purchase Order Number, Due Date, Total Amount, and Payee Name based on their x-axis and y-axis position coordinates.
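
The coordinate-based extraction step can be sketched as a region lookup over OCR tokens. The region coordinates and field name below are invented examples, and the paper's SVM/RBF classification stage is not reproduced:

```java
import java.util.List;

// Sketch: given OCR tokens with positions, collect the tokens that
// fall inside a labeled region of the invoice.
public class FieldExtractor {
    record Token(String text, int x, int y) {}
    record Region(String field, int x1, int y1, int x2, int y2) {}

    static String extract(List<Token> tokens, Region r) {
        return tokens.stream()
                .filter(t -> t.x() >= r.x1() && t.x() <= r.x2()
                          && t.y() >= r.y1() && t.y() <= r.y2())
                .map(Token::text)
                .reduce("", (a, b) -> a.isEmpty() ? b : a + " " + b);
    }

    public static void main(String[] args) {
        var tokens = List.of(new Token("PO-4711", 420, 80));
        var poRegion = new Region("PurchaseOrderNumber", 400, 60, 600, 100);
        System.out.println(extract(tokens, poRegion));   // PO-4711
    }
}
```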

X2RD: Storing and Querying XML Data Using XPath To Relational Database (X2RD: XPath를 이용한 XML 데이터의 관계형 데이터베이스로의 저장과 질의)

  • Oh, Sang-Yoon
    • Journal of the Korea Society of Computer and Information, v.14 no.3, pp.57-64, 2009
  • XML has become a de facto standard for structured documents and data on the Web, and the deluge of XML data over the network will only grow as XML-based standards such as Web Services and the Semantic Web gain popularity. There have been efforts to store and query XML documents in relational database systems, with recent work focusing on how to provide such operations using XPath and XQuery. In this paper, we survey those research efforts and propose a new scheme, X2RD, for storing and querying XML documents in a relational database using XPath queries. The scheme uses a 'shred' method for storage and translates XPath queries into SQL. We also present empirical experiments using an RDBMS.
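
A toy version of the shred-and-translate idea, assuming a generic node(id, parent_id, tag, value) table rather than X2RD's actual schema, and handling only child-axis XPath steps (no predicates or descendant axis):

```java
// Translate a child-axis-only XPath such as /library/book/title into a
// self-join SQL query over the shredded node table.
public class XPathToSql {
    static String translate(String xpath) {
        String[] steps = xpath.substring(1).split("/");   // drop leading '/'
        StringBuilder from = new StringBuilder("node n0");
        StringBuilder where = new StringBuilder(
                "n0.parent_id IS NULL AND n0.tag = '" + steps[0] + "'");
        for (int i = 1; i < steps.length; i++) {
            from.append(" JOIN node n").append(i)
                .append(" ON n").append(i).append(".parent_id = n").append(i - 1).append(".id");
            where.append(" AND n").append(i).append(".tag = '").append(steps[i]).append("'");
        }
        int last = steps.length - 1;
        return "SELECT n" + last + ".value FROM " + from + " WHERE " + where;
    }

    public static void main(String[] args) {
        System.out.println(translate("/library/book/title"));
    }
}
```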

A Study on Object Recognition Technique based on Artificial Intelligence (인공지능 기반 객체인식 기법에 관한 연구)

  • Yang, Hwan Seok
    • Convergence Security Journal, v.22 no.5, pp.3-9, 2022
  • Recently, in order to build cyber-physical systems (CPS), a key technology of the Fourth Industrial Revolution, the construction of virtual control systems for physical models and control circuit simulation is increasingly required in various industries. Converting documents that were never produced electronically via direct manual input takes a great deal of time and money, so digitizing the large number of drawings that exist only in print through AI-based object recognition is very important. In this paper, in order to recognize objects in drawings accurately and to use the results in various applications, we propose a recognition technique based on artificial intelligence that analyzes the characteristics of the objects in a drawing. To improve recognition performance, each object is recognized and its information is saved in an intermediate file, and the recognition rate for the next target is improved by deleting the already-recognized result from the drawing. In addition, the recognition results are stored in a standardized document format so that they can be used in various fields of the control system. The excellent performance of the proposed technique was confirmed through experiments.
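
The recognize-then-delete loop might be organized as below; Drawing, Recognizer, and DetectedObject are hypothetical interfaces standing in for the paper's actual model and drawing format:

```java
import java.util.ArrayList;
import java.util.List;

// Loose sketch: run one recognition pass per object class, store the
// hits as the intermediate result, and erase them from the drawing so
// later passes see less clutter.
public class IterativeRecognition {
    interface Drawing { void erase(DetectedObject o); }
    interface Recognizer { List<DetectedObject> detect(Drawing d); }
    record DetectedObject(String type, int x, int y, int w, int h) {}

    static List<DetectedObject> run(Drawing drawing, List<Recognizer> passes) {
        List<DetectedObject> results = new ArrayList<>();   // intermediate store
        for (Recognizer pass : passes) {
            List<DetectedObject> found = pass.detect(drawing);
            results.addAll(found);
            found.forEach(drawing::erase);   // remove hits before the next pass
        }
        return results;
    }
}
```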

Mapping Categories of Heterogeneous Sources Using Text Analytics (텍스트 분석을 통한 이종 매체 카테고리 다중 매핑 방법론)

  • Kim, Dasom; Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.22 no.4, pp.193-215, 2016
  • In recent years, the proliferation of diverse social networking services has led users to use many mediums simultaneously, depending on their individual purposes and tastes. While collecting information about particular themes, they typically employ various mediums such as social networking services, Internet news, and blogs. In terms of management, however, each document circulated through these mediums is placed in a different category on the basis of each source's policy and standards, hindering any attempt to conduct research on a specific category across different kinds of sources. For example, documents about "applying for foreign travel" may be classified under "Information Technology," "Travel," or "Life and Culture" according to the particular standard of each source. Likewise, with different definitions and levels of specification, similar categories can be named and structured differently from source to source. To overcome these limitations, this study proposes a method for mapping categories between sources across various mediums while keeping each medium's existing category system unchanged. Specifically, by re-classifying individual documents from the viewpoint of the other sources and storing the results of this classification as extra attributes, the study proposes a logical layer through which users can search for a specific document across multiple heterogeneous sources with different category names as if they belonged to a single source. In addition, 6,000 news articles were collected from two Internet news portals, and experiments were conducted to compare accuracy across sources, between supervised and semi-supervised learning, and between homogeneous and heterogeneous learning data. It is particularly interesting that, in some categories, the classification accuracy of semi-supervised learning using heterogeneous learning data proved higher than that of supervised or semi-supervised learning using homogeneous learning data. This study is significant in the following respects. First, it proposes a logical design for a system that integrates and manages heterogeneous mediums with different classification systems while keeping each existing physical classification system unchanged. The results exhibit very different classification accuracies depending on the heterogeneity of the learning data, which should spur further studies on improving the methodology's performance through an analysis of the characteristics of each category. Moreover, with growing demand for searching, collecting, and analyzing documents from diverse mediums, Internet search is no longer restricted to one medium; yet, because each medium has a different category structure and naming, searching a specific category across heterogeneous mediums is very difficult in practice. The proposed methodology is also significant in that, when users select a desired site, it lets them query all documents according to that site's category standards while leaving each site's characteristics and structure intact. The methodology needs to be complemented in the following respects. First, since only an indirect comparison and evaluation of its performance was made, future studies should test its accuracy more directly; that is, after re-classifying documents of a target source on the basis of an existing source's category system, the accuracy of the classification should be verified through evaluation by actual users. The classification accuracy also needs to be increased by making the methodology more sophisticated. Furthermore, the characteristics of the categories in which heterogeneous semi-supervised learning showed higher classification accuracy than supervised learning deserve attention, as they may help in obtaining heterogeneous documents from diverse mediums and in devising ways to enhance document classification accuracy.
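
The proposed logical layer, reduced to a sketch: each document keeps its own source's original category untouched and carries re-classified labels for the other sources as extra attributes. All names here are illustrative, not the paper's data model:

```java
import java.util.List;
import java.util.Map;

// Sketch of cross-source category search over re-classified documents.
public class CategoryMapping {
    record Doc(String id, String source, String originalCategory,
               Map<String, String> mappedCategory) {}   // viewpoint source -> category

    // Search every source as if it used the query source's category names.
    static List<Doc> search(List<Doc> docs, String source, String category) {
        return docs.stream()
                .filter(d -> category.equals(d.source().equals(source)
                        ? d.originalCategory()
                        : d.mappedCategory().get(source)))
                .toList();
    }
}
```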