• Title/Summary/Keyword: Document schema


XML-based Retrieval System for E-Learning Contents using mobile device PDA (모바일기기 PDA를 이용한 E-Learning Contents에 대한 XML기반 검색 시스템)

  • Park, Yong-Bin;Yang, Hae-Sool
    • Journal of the Korea Academia-Industrial cooperation Society / v.10 no.4 / pp.818-823 / 2009
  • The Web greatly contributes to providing a variety of information; in particular, it is an important medium for human resource development and education. E-Learning over the Web plays an important role for enterprises and educational institutions, and above all, fast and flexible searches are required to manage and retrieve the large number of educational contents on the Web. However, most of the available information is composed in HTML, which imposes many restrictions on searching. As a solution to these restrictions, XML, a standard for Web documents, and its various search functions are being extended and studied in many directions. AJAX, in turn, is not a single new technology but a combination of existing ones: it brings together Web 2.0 techniques and complementary Web technologies that were available before but not considered in this way. This paper proposes an AJAX-based search system able to search both XML and non-XML e-learning contents.
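As a rough illustration of the kind of server-side search such a system could expose to an AJAX front end, the following Python sketch searches XML and non-XML contents; the directory layout, element names, and result shape are assumptions rather than the paper's design.

```python
# Minimal sketch (not the paper's implementation): a server-side search helper
# that an AJAX front end could call.  It searches XML e-learning contents with
# element lookups and falls back to plain keyword matching for non-XML contents.
# The directory layout and element names below are illustrative assumptions.
from pathlib import Path
from xml.etree import ElementTree as ET

def search_contents(content_dir: str, keyword: str) -> list[dict]:
    """Return matching lesson titles from XML and plain-text contents."""
    hits = []
    for path in Path(content_dir).iterdir():
        if path.suffix == ".xml":
            try:
                root = ET.parse(path).getroot()
            except ET.ParseError:
                continue
            # Hypothetical schema: <lesson><title>...</title>...</lesson>
            for title in root.findall(".//title"):
                if keyword.lower() in (title.text or "").lower():
                    hits.append({"file": path.name, "title": title.text})
        else:
            # Non-XML contents: simple keyword scan over the raw text.
            text = path.read_text(errors="ignore")
            if keyword.lower() in text.lower():
                hits.append({"file": path.name, "title": path.stem})
    return hits

if __name__ == "__main__":
    # An AJAX client would typically receive this list serialized as JSON.
    print(search_contents("contents", "schema"))
```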

Construction of a Verified Virtual NC Simulator for the Cutting Machines at Shipyard Using the Digital Manufacturing Technology (디지털 매뉴팩쳐링 기법을 이용한 절단기기의 검증된 가상 NC 시뮬레이터 구축)

  • Jung, Ho-Rim;Yim, Hyun-June;Lee, Jang-Hyun;Choi, Yang-Ryul;Kim, Ho-Gu;Shin, Jong-Gye
    • Journal of the Society of Naval Architects of Korea / v.42 no.1 s.139 / pp.64-72 / 2005
  • Digital manufacturing is a technology for simulating the real manufacturing process using a virtual model that represents the physical schema and the behavior of the real manufacturing system, including resources, processes, and product information. It can therefore optimize the manufacturing system or prevent bottleneck processes through simulation before the manufacturing plan is executed. This study presents a method to apply digital manufacturing technology to the steel cutting process in a shipyard. The system modeling of the cutting shop is carried out using IDEF and UML, a visual modeling language used to document the artifacts of a complex system. In addition, virtual NC simulators of the cutting machines are constructed to emulate the real operation of the cutting machines and their NC codes. The simulators are able to verify the cutting shape and estimate the precise cycle time of the planned NC codes. The validity of the virtual model is checked by comparing the real cutting time and shape with the simulated results. It is expected that the virtual NC simulators can be used for accurate estimation of the cutting time and shape in advance of the real cutting work.
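A toy illustration of the cycle-time estimation such a simulator performs might look like the following sketch; the G-code subset, default feed rate, and rapid-traverse rate are assumptions, not values from the paper.

```python
# Minimal sketch (assumptions, not the paper's simulator): estimate the cycle
# time of a planned NC program by accumulating travel distance at each feed
# rate.  Only a small G-code subset (G00 rapid, G01 linear cut) is handled.
import math, re

RAPID_FEED = 10000.0  # mm/min, assumed rapid-traverse rate

def estimate_cycle_time(nc_lines: list[str]) -> float:
    """Return estimated machining time in minutes for simple G00/G01 moves."""
    x = y = 0.0
    feed = 1000.0           # mm/min, default cutting feed (assumption)
    total_min = 0.0
    for line in nc_lines:
        words = dict(re.findall(r"([GXYF])([-+]?\d*\.?\d+)", line.upper()))
        if "F" in words:
            feed = float(words["F"])
        if "G" not in words:
            continue
        nx = float(words.get("X", x))
        ny = float(words.get("Y", y))
        dist = math.hypot(nx - x, ny - y)
        rate = RAPID_FEED if int(float(words["G"])) == 0 else feed
        total_min += dist / rate
        x, y = nx, ny
    return total_min

if __name__ == "__main__":
    program = ["G00 X0 Y0", "G01 X100 Y0 F500", "G01 X100 Y50"]
    print(f"estimated cycle time: {estimate_cycle_time(program):.3f} min")
```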

A Supporting System for Developing Standard B2B Electronic Documents Based on UN/CEFACT Submission Forms (UN/CEFACT 제출 양식 기반의 기업간 표준 전자문서 개발 지원 시스템)

  • Ahn, Kyung-Lim;Park, Chan-Kwon;Kim, Hyoung-Do
    • The Journal of Society for e-Business Studies / v.11 no.4 / pp.49-66 / 2006
  • As business-to-business electronic commerce becomes more active, the usage rate of standard electronic documents is rapidly increasing, and the types and forms of standard documents exchanged between businesses have also changed. Instead of the EDI documents mainly used in the initial phase, XML documents have recently been used actively. However, most framework standards for XML documents specify only basic syntax rules, messaging protocols, and standard documents. As a result, it has usually been difficult to achieve efficiency and effectiveness in developing new standard electronic documents. Reflecting the experience of developing UN/EDIFACT, UN/CEFACT provides a methodology and library for reusing standard data items as components when defining electronic documents. However, much additional effort is required to apply the methodology and library to the development process. To improve this situation, this paper proposes a system that supports the development process by reusing various resources of registries/repositories, focusing on UN/CEFACT submission forms for standard electronic documents.
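The reuse idea can be pictured with a small sketch: a document definition is assembled from components held in a registry/repository, and anything missing would become a new submission-form entry. The registry structure, component names, and data types below are hypothetical and only loosely modeled on the UN/CEFACT core component approach.

```python
# Minimal sketch under assumptions: reuse "core component" entries from a
# hypothetical registry to assemble a new standard document definition, in the
# spirit of a core component library (not its actual format).
from dataclasses import dataclass, field

@dataclass
class CoreComponent:
    name: str           # e.g. "Buyer. Identifier" (hypothetical)
    data_type: str      # e.g. "Identifier. Type" (hypothetical)
    definition: str = ""

@dataclass
class DocumentDefinition:
    name: str
    components: list[CoreComponent] = field(default_factory=list)

# A tiny in-memory stand-in for a registry/repository of reusable items.
REGISTRY = {
    "Buyer. Identifier": CoreComponent("Buyer. Identifier", "Identifier. Type"),
    "Order. Issue Date": CoreComponent("Order. Issue Date", "Date Time. Type"),
}

def assemble_document(name: str, wanted: list[str]) -> DocumentDefinition:
    """Build a document definition by pulling requested components from the registry."""
    doc = DocumentDefinition(name)
    for item in wanted:
        if item in REGISTRY:
            doc.components.append(REGISTRY[item])   # reuse, do not redefine
        else:
            # Items missing from the registry would become new submission-form entries.
            doc.components.append(CoreComponent(item, "Text. Type", "newly submitted"))
    return doc

if __name__ == "__main__":
    order = assemble_document("PurchaseOrder",
                              ["Buyer. Identifier", "Order. Issue Date", "Delivery. Note"])
    for c in order.components:
        print(c.name, "->", c.data_type)
```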


Design and Implementation of a Browser for Educational PDA Contents (교육용 PDA 컨텐츠 브라우저의 설계 및 구현)

  • 신재룡
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.8 / pp.1223-1233 / 2002
  • Recently, various electronic books (E-Books) based on the PDA (personal digital assistant), which can easily be used anytime and anywhere, have been developed. The volume and weight of an E-Book are much less than those of traditional books. For that reason, it is easy to carry and can serve contents through diverse functions such as searching, bookmarks, a dictionary, and the playing of color images, sound, or moving pictures. On account of these advantages, many E-Book products have emerged in the market. However, products for educational contents are scarce, because they require not only the normal functions but also additional functions such as problem solving. Therefore, it is necessary to develop a browser and an editor for educational contents. In this paper, we express educational contents in XML and define the document structure with an XML schema. Then, we design and implement an editor and a browser that can manage educational contents on a PDA.
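A minimal sketch of the validation step such a browser might perform before rendering a lesson is shown below, using lxml; the schema and element names (lesson, problem, question, answer) are illustrative assumptions, not the paper's actual content model.

```python
# Minimal sketch (illustrative, not the paper's schema): validate an
# educational-content document against an XML Schema before the browser
# renders it.  Element names below are assumptions.
from lxml import etree

CONTENT_XSD = b"""<?xml version="1.0"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="lesson">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="title" type="xs:string"/>
        <xs:element name="problem" maxOccurs="unbounded">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="question" type="xs:string"/>
              <xs:element name="answer" type="xs:string"/>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>"""

def load_lesson(xml_bytes: bytes):
    """Parse and validate a lesson document; raise if it violates the schema."""
    schema = etree.XMLSchema(etree.fromstring(CONTENT_XSD))
    parser = etree.XMLParser(schema=schema)
    return etree.fromstring(xml_bytes, parser)

if __name__ == "__main__":
    doc = b"""<lesson><title>Fractions</title>
              <problem><question>1/2 + 1/4 = ?</question><answer>3/4</answer></problem>
              </lesson>"""
    lesson = load_lesson(doc)
    print(lesson.findtext("title"), "-", len(lesson.findall("problem")), "problem(s)")
```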

Developing a Module to Store 3DF-GML Instance Documents in a Database (3DF-GML 인스턴스 문서의 데이터베이스 저장을 위한 모듈 개발)

  • Lee, Kang-Jae;Jang, Gun-Up;Lee, Ji-Yeong
    • Spatial Information Research / v.19 no.6 / pp.87-100 / 2011
  • Recently, a variety of GML application schemas have been designed in many fields. GML application schemas are specific to the application domain of interest and specify object types using the primitive object types defined in the GML standard. GML instance documents are created based on such application schemas and generally require large volumes to represent huge numbers of geographic objects. Thus, it is essential to store such GML instance documents in a relational database for efficient management and use. Relational databases are relatively convenient to use, are widely applied in various fields, and are fundamentally more efficient than file structures for handling large datasets. Many studies on storing GML documents have been carried out so far, but there are few studies on the storage of GML instance documents. Therefore, in this study, we developed a storage module to store GML instance documents in a relational database.
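The mapping from an instance document to relational tables can be sketched as follows; the simplified feature/geometry structure, table layout, and use of SQLite are assumptions made for illustration, not the module described in the paper.

```python
# Minimal sketch under assumptions: stream a (simplified) GML instance document
# and store each feature's id, type, and coordinate string in a relational
# table.  Real 3DF-GML documents are far richer; this only shows the mapping idea.
import sqlite3
from xml.etree import ElementTree as ET

GML_NS = "{http://www.opengis.net/gml}"

def store_features(gml_path: str, db_path: str = "features.db") -> int:
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS feature (
                       gml_id TEXT PRIMARY KEY,
                       type   TEXT,
                       coords TEXT)""")
    count = 0
    # iterparse keeps memory usage low for large instance documents.
    for _, elem in ET.iterparse(gml_path, events=("end",)):
        gml_id = elem.get(f"{GML_NS}id")
        if gml_id is None:
            continue
        pos_list = elem.find(f".//{GML_NS}posList")
        coords = pos_list.text.strip() if pos_list is not None and pos_list.text else ""
        con.execute("INSERT OR REPLACE INTO feature VALUES (?, ?, ?)",
                    (gml_id, elem.tag, coords))
        count += 1
        elem.clear()   # free the subtree we just stored
    con.commit()
    con.close()
    return count
```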

An XML Data Management System and Its Application to Genome Databases (XML 데이타 관리시스템과 유전체 데이타베이스에의 응용)

  • 이경희;김태경;김선신;이충세;조완섭
    • Journal of KIISE:Databases / v.31 no.4 / pp.432-443 / 2004
  • As XML data have come to be widely used on the Internet, it is necessary to store and retrieve XML data using DBMSs. However, relational DBMSs suffer from the model difference between the graph structure of XML data and the table form of relational databases. We propose Xing, an ORDBMS-based, DTD-dependent XML data management system. Xing stores XML data in a DTD-dependent form in an object database. Since the object database schema has a graph structure and supports multi-valued attributes, mapping the XML data model and queries onto the object data model and OQL is a simple problem. For rapid storage of large quantities of XML data, we use a SAX parser with a customized Xing-tree, which requires little memory compared with a DOM tree. Xing also returns query results in XML document form. We have implemented the Xing system on top of the UniSQL object-relational DBMS for validity checking and performance comparison. For XML genome data from GenBank, an experimental evaluation shows that Xing provides a significant performance improvement (up to 10 times) compared with the relational approach.
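The memory-saving idea behind parsing with SAX into a lightweight tree (rather than a full DOM) can be sketched as below; this is an assumption-laden analogue, not the actual Xing-tree structure.

```python
# Minimal sketch (an analogue of the "Xing-tree" idea, not the paper's
# implementation): build a lightweight tree with a SAX parser instead of a
# full DOM, keeping only tag names, attributes, and text.
import xml.sax

class LightNode:
    __slots__ = ("tag", "attrs", "text", "children")
    def __init__(self, tag, attrs):
        self.tag, self.attrs, self.text, self.children = tag, dict(attrs), "", []

class LightTreeHandler(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.root = None
        self._stack = []
    def startElement(self, name, attrs):
        node = LightNode(name, attrs)
        if self._stack:
            self._stack[-1].children.append(node)
        else:
            self.root = node
        self._stack.append(node)
    def characters(self, content):
        if self._stack:
            self._stack[-1].text += content
    def endElement(self, name):
        self._stack.pop()

def parse_light(path: str) -> LightNode:
    """Parse an XML file with SAX and return the lightweight root node."""
    handler = LightTreeHandler()
    xml.sax.parse(path, handler)
    return handler.root
```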

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services / v.14 no.6 / pp.71-84 / 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from computer system inspection and process optimization to customized optimization for users. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amounts of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, realizing flexible storage expansion for processing a massive amount of unstructured log data, and executing the many functions needed to categorize and analyze the stored unstructured log data, is difficult in existing computing environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management systems. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide flexible expansion of computing resources and can flexibly expand resources such as storage space and memory under conditions such as extended storage or a rapid increase in log data. Moreover, to overcome the processing limits of existing analysis tools when real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because HDFS (Hadoop Distributed File System) stores data by replicating the blocks of the aggregated log data, the proposed system offers automatic restore functions so that it can continue to operate after recovering from a malfunction. Finally, by establishing a distributed database using the NoSQL-based MongoDB, the proposed system provides methods for effectively processing unstructured log data. Relational databases such as MySQL have complex schemas that are inappropriate for processing unstructured log data; further, the strict schemas of relational databases make it difficult to expand nodes when the stored data must be distributed across various nodes as the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases offer, but it can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with a structure appropriate for processing unstructured data. NoSQL data models are usually classified as key-value, column-oriented, and document-oriented types. Of these, MongoDB, a representative document-oriented database with a free schema structure, is used in the proposed system. MongoDB is adopted because it makes it easy to process unstructured log data through its flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage.
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies the data according to the type of log data and distributes them to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, the Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require real-time analysis are stored in the MySQL module and provided in real time by the log graph generator module. The log data aggregated per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation against a log data processing system that uses only MySQL, covering log data insertion and query performance, demonstrates the proposed system's superiority. Moreover, an optimal chunk size is confirmed through a log data insert performance evaluation of MongoDB for various chunk sizes.
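A minimal sketch of the log collector's routing step might look like the following; the connection string, field names, and classification rule are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch under assumptions (field names and log format are illustrative,
# not the paper's): a log collector that routes bank log records either to
# MongoDB (schema-free bulk storage) or to MySQL-bound real-time handling.
import json
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # assumed connection string
logs = client["banklogs"]["raw"]

REALTIME_TYPES = {"error", "security"}               # assumed classification rule

def collect(raw_line: str) -> str:
    """Classify one raw log line and store non-real-time records in MongoDB."""
    try:
        record = json.loads(raw_line)                # structured enough to parse
    except json.JSONDecodeError:
        record = {"message": raw_line}               # keep unstructured lines as-is
    record.setdefault("collected_at", datetime.now(timezone.utc))
    log_type = record.get("type", "general")
    if log_type in REALTIME_TYPES:
        return "mysql"        # real-time path (MySQL insert omitted in this sketch)
    logs.insert_one(record)   # flexible schema: documents need not share fields
    return "mongodb"

if __name__ == "__main__":
    print(collect('{"type": "transaction", "branch": "A01", "amount": 1200}'))
    print(collect("plain text line from a legacy component"))
```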

Efficient Linear Path Query Processing using Information Retrieval Techniques for Large-Scale Heterogeneous XML Documents (정보 검색 기술을 이용한 대규모 이질적인 XML 문서에 대한 효율적인 선형 경로 질의 처리)

  • 박영호;한욱신;황규영
    • Journal of KIISE:Databases / v.31 no.5 / pp.540-552 / 2004
  • We propose XIR-Linear, a novel method for processing partial match queries on large-scale heterogeneous XML documents using information retrieval (IR) techniques. XPath queries are written as path expressions over a tree structure representing an XML document, and an XPath query in its major form is a partial match query. The objective of XIR-Linear is to support this type of query efficiently for large-scale documents with heterogeneous schemas. XIR-Linear is based on schema-level methods using relational tables and drastically improves their efficiency and scalability using an inverted index technique. The method indexes the labels in label paths as keywords in text, and finds the label paths that match the queries far more efficiently than the string matching used in conventional methods. We demonstrate the efficiency and scalability of XIR-Linear by comparing it with XRel and XParent using XML documents crawled from the Internet. The results show that XIR-Linear is more efficient than both XRel and XParent by several orders of magnitude for linear path expressions as the number of XML documents increases.
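The inverted-index idea can be sketched as follows: each label of every label path is indexed as a keyword, and a partial match (//-style) query is answered by intersecting posting lists and then checking label order. This is an illustrative analogue, not the XIR-Linear implementation.

```python
# Minimal sketch (analogue of the inverted-index idea, not XIR-Linear itself):
# index each label of every label path so that a partial match query such as
# //book//title is answered by intersecting posting lists.
from collections import defaultdict

def build_label_index(label_paths: list[str]) -> dict[str, set[int]]:
    """Map each label to the set of path ids whose label path contains it."""
    index = defaultdict(set)
    for pid, path in enumerate(label_paths):
        for label in path.strip("/").split("/"):
            index[label].add(pid)
    return index

def partial_match(query_labels: list[str], label_paths: list[str],
                  index: dict[str, set[int]]) -> list[str]:
    """Return label paths containing the query labels in order (a // query)."""
    if not query_labels:
        return []
    candidates = set.intersection(*(index.get(l, set()) for l in query_labels))
    hits = []
    for pid in candidates:
        labels = label_paths[pid].strip("/").split("/")
        # verify the labels occur in the required order
        pos = 0
        for q in query_labels:
            try:
                pos = labels.index(q, pos) + 1
            except ValueError:
                break
        else:
            hits.append(label_paths[pid])
    return hits

if __name__ == "__main__":
    paths = ["/lib/book/title", "/lib/book/author/name", "/shop/item/title"]
    idx = build_label_index(paths)
    print(partial_match(["book", "title"], paths, idx))   # -> ['/lib/book/title']
```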

Design and Implementation of XQL Query Processing System Using XQL-SQL Query Translation (XQL-SQL 질의 변환을 통한 XQL 질의 처리 시스템의 설계 및 구현)

  • Kim, Chun-Sig;Kim, Kyung-Won;Lee, Ji-Hun;Jang, Bo-Sun;Sohn, Ki-Rack
    • The KIPS Transactions:PartD / v.9D no.5 / pp.789-800 / 2002
  • XML is a standard format for web data and is currently the prevailing language for exchanging data. Most commercial data are stored in relational databases. It is quite important to convert these conventionally stored data into a form suitable for exchange, or to obtain query results effectively by applying XQL to XML data stored in a relational database. Thus, a proper query processing mechanism for XML data and a way to maintain large amounts of XML data are absolutely required. Up to now, much research on the storage and retrieval of XML data has been carried out and is still under way, but an effective storage and retrieval system for path queries such as XQL has yet to be contrived. In this paper, a schema to store XML data is designed, in which a DFS-numbering method is used to store the data effectively. An effective path query processing method is also designed and implemented on top of a traditional relational database engine: when a user issues an XQL query, an XQL processor converts it into SQL, the database system executes the SQL, and an XML generator uses the resulting records to build an XML document.
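The storage and translation idea can be sketched as follows: elements receive DFS entry/exit numbers, are stored in one relational table, and a simple descendant path query is answered with an interval-containment SQL join. The table layout and the use of SQLite are assumptions for illustration, not the paper's exact schema or XQL-to-SQL rules.

```python
# Minimal sketch under assumptions (not the paper's exact schema): assign DFS
# numbers to XML elements, store them in a relational table, and answer a simple
# path query with SQL, roughly in the spirit of an XQL-to-SQL translation.
import sqlite3
from xml.etree import ElementTree as ET

def store_with_dfs_numbers(xml_text: str, con: sqlite3.Connection) -> None:
    con.execute("""CREATE TABLE IF NOT EXISTS element (
                       dfs_start INTEGER PRIMARY KEY,  -- DFS entry number
                       dfs_end   INTEGER,              -- DFS exit number
                       tag       TEXT,
                       content   TEXT)""")
    counter = 0
    def visit(node):
        nonlocal counter
        counter += 1
        start = counter
        for child in node:
            visit(child)
        counter += 1
        con.execute("INSERT INTO element VALUES (?, ?, ?, ?)",
                    (start, counter, node.tag, (node.text or "").strip()))
    visit(ET.fromstring(xml_text))
    con.commit()

def descendants_named(con: sqlite3.Connection, ancestor: str, tag: str):
    """SQL equivalent of the path query //ancestor//tag via interval containment."""
    return con.execute("""SELECT d.content FROM element a JOIN element d
                          ON d.dfs_start > a.dfs_start AND d.dfs_end < a.dfs_end
                          WHERE a.tag = ? AND d.tag = ?""", (ancestor, tag)).fetchall()

if __name__ == "__main__":
    con = sqlite3.connect(":memory:")
    store_with_dfs_numbers(
        "<lib><book><title>XML</title></book><cd><title>Jazz</title></cd></lib>", con)
    print(descendants_named(con, "book", "title"))   # -> [('XML',)]
```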

Digital Forensic Investigation of HBase (HBase에 대한 디지털 포렌식 조사 기법 연구)

  • Park, Aran;Jeong, Doowon;Lee, Sang Jin
    • KIPS Transactions on Computer and Communication Systems / v.6 no.2 / pp.95-104 / 2017
  • As smart device technology grows and Social Network Services (SNS) become more common, the amount of data that is difficult to process with existing RDBMSs is increasing. As a result, NoSQL databases are gaining popularity as an alternative for processing massive, unstructured data generated in real time. Although research on the digital investigation of databases has centered on RDBMSs, the demand for digital investigation techniques for NoSQL databases is increasing as more businesses introduce NoSQL databases into their systems. New digital forensic investigation techniques are needed because a NoSQL database has no schema to normalize and its storage method differs depending on the type of database and the operating environment. Research on document-based NoSQL databases has been done, but it is not directly applicable to other types of NoSQL databases. Therefore, this paper presents the mode of operation and data model, an understanding of the operating environment, the collection and analysis of artifacts, and a technique for recovering deleted data in HBase, a column-based NoSQL database. The proposed digital forensic investigation technique for HBase is also verified through an experimental scenario.
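A first artifact-collection pass over a running HBase instance could be sketched with the happybase Thrift client as below; the output format and the idea of dumping every cell with its timestamp are illustrative assumptions, not the procedure proposed in the paper.

```python
# Minimal sketch under assumptions (not the paper's tool): enumerate HBase
# tables over Thrift with the happybase client and dump each row's cells with
# timestamps, the kind of raw artifact collection a forensic pass might start from.
import csv
import happybase

def dump_hbase_artifacts(host: str, out_path: str = "hbase_artifacts.csv") -> None:
    conn = happybase.Connection(host)           # requires the HBase Thrift server
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["table", "row", "column", "value", "timestamp"])
        for name in conn.tables():
            table = conn.table(name)
            # include_timestamp=True returns (value, timestamp) pairs per cell
            for row_key, cells in table.scan(include_timestamp=True):
                for column, (value, ts) in cells.items():
                    writer.writerow([name.decode(), row_key.decode(),
                                     column.decode(),
                                     value.decode("utf-8", "replace"), ts])
    conn.close()

if __name__ == "__main__":
    dump_hbase_artifacts("localhost")
```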