• Title/Summary/Keyword: Data Definition Language


Transformation Methodology to Logical Model from Conceptual Model of XML (XML의 개념적 모델로부터 논리적 모델로의 변환 기법)

  • Kim, Young-Ung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.16 no.6
    • /
    • pp.305-310
    • /
    • 2016
  • Today, XML is a de facto standard language for representing and exchanging data. To define a conceptual model of XML, we need to define the representation rules expressed in the diagram and propose a transformation algorithm that converts the diagram into a logical model of XML. This paper proposes a transformation methodology for generating a logical model from the conceptual model of XML. We use CMXML as the conceptual model and generate an XML schema definition as the logical model. For this, we define transformation rules and data structures for XML Schema, and propose a transformation algorithm.
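
As a rough illustration of the kind of conceptual-to-logical mapping this entry describes, the Python sketch below turns a hypothetical dictionary-based entity description into an XML Schema fragment. The entity structure, names, and types are invented for illustration; they are not the CMXML notation or the transformation rules defined in the paper.

    # Minimal sketch (not the paper's CMXML rules): map a hypothetical
    # conceptual entity description onto an XML Schema complexType string.

    def entity_to_xsd(entity):
        """entity: {'name': str, 'attributes': [(attr_name, xsd_type), ...]}"""
        lines = ['<xs:element name="%s">' % entity["name"],
                 '  <xs:complexType>',
                 '    <xs:sequence>']
        for attr_name, xsd_type in entity["attributes"]:
            lines.append('      <xs:element name="%s" type="xs:%s"/>'
                         % (attr_name, xsd_type))
        lines += ['    </xs:sequence>',
                  '  </xs:complexType>',
                  '</xs:element>']
        return "\n".join(lines)

    if __name__ == "__main__":
        # Hypothetical conceptual entity, not taken from the paper.
        book = {"name": "Book",
                "attributes": [("title", "string"), ("price", "decimal")]}
        print(entity_to_xsd(book))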

An Extension of SWCL to Represent Logical Implication Knowledge under Semantic Web Environment (의미웹 환경에서 조건부함축 제약 지식표현을 위한 SWCL의 확장)

  • Kim, Hak-Jin
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.39 no.3
    • /
    • pp.7-22
    • /
    • 2014
  • With the publication of RDF and OWL, the Semantic Web was confirmed as a technology through which information on the Internet can be processed by machines. The focus of Semantic Web research has since moved to how to provide more useful information to users for their decision making, beyond simple use of the structured data in ontologies. SWRL, which makes logical inference possible through rules, and SWCL, which formulates constraints in the Semantic Web environment, are some of the many efforts toward that goal. A constraint represents a connection or a relationship between individual data in an ontology. Building on SWCL, this paper extends the language by adding one more type of constraint, the implication constraint, to its repertoire. When users employ binary variables to represent logical relationships in mathematical models, it requires additional knowledge of the solver to solve the models. The use of implication constraints eases this difficulty. Their need, definition, and relevant technical description are presented using the optimal common attribute selection problem in product design.
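
For reference, the last sentences of this abstract allude to the way logical implications are usually encoded with binary variables in mathematical models, which the proposed implication constraint is meant to spare users from. The standard operations-research encodings (not necessarily the formulation used by SWCL or by this paper) are:

    x, y \in \{0,1\}:\qquad (x \Rightarrow y) \iff y \ge x
    x \in \{0,1\},\ M \text{ sufficiently large}:\qquad a^{\top} z \le b + M(1 - x) \ \text{enforces}\ \big(x = 1 \Rightarrow a^{\top} z \le b\big)

When x = 1 the big-M term vanishes and the inequality must hold; when x = 0 the constraint is relaxed, which is the solver-specific modeling burden the abstract refers to.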

Sensor Data Processing System for USN Practical Service (USN 응용서비스를 위한 센서 데이터 처리 시스템)

  • Lee, Sang-Jo;Kim, Yong-Woon;Yoo, Sang-Keun;Kim, Hyoung-Jun;Jung, Hoe-Kyung
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2007.06a
    • /
    • pp.699-702
    • /
    • 2007
  • As the ubiquitous environment rapidly emerges due to the development of network and information communication technology, ubiquitous computing is drawing attention as a technology that will lead the information technology industry of the future. To this end, the data recognized by each sensor must be collected, processed, and transferred to application services so that they can be used to provide services to users. However, sending sensor data has weak points: the absence of application-level data accessibility and interoperability across different platforms and protocols, and a lack of metadata and interfaces for sensors. In this paper, we designed and implemented a sensor data processing system that sends sensor data to application services via web services, based on a description language defined to describe sensor services and data; a minimal sketch of this pattern follows this entry.

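The following Python sketch only illustrates the general pattern this entry describes, namely describing a sensor reading in XML and forwarding it to an application service over a web interface. The element names and the endpoint URL are placeholders, not the description language or service interface defined in the paper.

    # Minimal sketch of posting an XML-described sensor reading to a
    # web-service endpoint. Element names and the URL are hypothetical.
    import urllib.request
    import xml.etree.ElementTree as ET

    def build_reading(sensor_id, quantity, value, unit):
        reading = ET.Element("sensorReading", {"sensorId": sensor_id})
        ET.SubElement(reading, "quantity").text = quantity
        ET.SubElement(reading, "value").text = str(value)
        ET.SubElement(reading, "unit").text = unit
        return ET.tostring(reading, encoding="utf-8", xml_declaration=True)

    def send_reading(payload, url="http://example.com/usn/service"):  # placeholder URL
        req = urllib.request.Request(
            url, data=payload, headers={"Content-Type": "application/xml"})
        with urllib.request.urlopen(req) as resp:
            return resp.status

    if __name__ == "__main__":
        xml_payload = build_reading("temp-01", "temperature", 23.4, "Cel")
        print(xml_payload.decode("utf-8"))
        # send_reading(xml_payload)  # would POST to the (hypothetical) service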

Suffix Array Based Path Query Processing Scheme for Semantic Web Data (시맨틱 웹 데이터에서 접미사 배열 기반의 경로 질의 처리 기법)

  • Kim, Sung-Wan
    • Journal of the Korea Society of Computer and Information
    • /
    • v.17 no.10
    • /
    • pp.107-116
    • /
    • 2012
  • The application of semantic technologies, which aim to let computers understand and automatically process the meaning of interlinked data on the Web, is spreading. In the Semantic Web, understanding and accessing the associations between data, that is, the meaning between data, is as important as accessing the data itself. The W3C recommended RDF (Resource Description Framework) as a standard format to represent both Semantic Web data and their associations, and also proposed several RDF query languages to support query processing for RDF data. However, further research on query language definitions that consider semantic associations, and on query processing techniques, is still required. In this paper, using the suffix array-based indexing scheme previously introduced for RDF query processing, we propose a query processing approach to handle the ρ-path query, which is the representative type of semantic association. To evaluate the query processing performance of the proposed approach, we implemented two different types of query processing approaches and measured the average query processing times. The experiments show that the proposed approach achieved 1.8 to 2.5 times and 3.8 to 11 times better performance, respectively, than the other two.
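
To make the indexing idea above concrete, the Python sketch below builds a suffix array over one serialized path string and locates a sub-path by binary search. The '/'-separated path encoding, the sentinel character, and the naive construction are assumptions for illustration; the paper's actual index layout and ρ-path semantics are not reproduced here.

    # Minimal sketch: a suffix array over a serialized path string, used to
    # locate a sub-path pattern by binary search.
    import bisect

    def build_suffix_array(text):
        # Naive O(n^2 log n) construction; fine for a sketch.
        return sorted(range(len(text)), key=lambda i: text[i:])

    def find_occurrences(text, sa, pattern):
        suffixes = [text[i:] for i in sa]           # lexicographically sorted
        lo = bisect.bisect_left(suffixes, pattern)
        hi = bisect.bisect_right(suffixes, pattern + "\uffff")
        return sorted(sa[lo:hi])

    if __name__ == "__main__":
        # Toy serialization of an RDF path (resource/property chains).
        text = "paperA/citedBy/paperB/writtenBy/authorC$"
        sa = build_suffix_array(text)
        print(find_occurrences(text, sa, "citedBy/paperB"))  # -> [7]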

Managing and Modeling Strategy of Geo-features in Web-based 3D GIS

  • Kim, Kyong-Ho;Choe, Seung-Keol;Lee, Jong-Hun;Yang, Young-Kyu
    • Proceedings of the KSRS Conference
    • /
    • 1999.11a
    • /
    • pp.75-79
    • /
    • 1999
  • Geo-features play a key role in object-oriented or feature-based geo-processing systems. So the strategy for how to model and how to manage geo-features forms the main architecture of the entire system and also determines its efficiency and functionality. Unlike conventional 2D geo-processing systems, geo-features in 3D GIS have many aspects to consider in modeling for efficient manipulation, analysis, and visualization. When the system runs on the Web, how to leverage the level of detail and the level of automation of modeling should also be considered, in addition to support for client-side data interoperability. We built a set of 3D geo-features, and each geo-feature contains a set of aspatial data and 3D geo-primitives. The 3D geo-primitives contain the fundamental modeling data such as the height of a building and the burial depth of a gas pipeline. We separated the additional modeling data on the geometry and appearance of the model from the fundamental modeling data to make the tables in the database more concise and to allow users more freedom to represent the geo-object. To let users build and exchange their own data, we devised a file format called VGFF 2.0, which stands for Virtual GIS File Format. It describes three-dimensional geo-information in XML (eXtensible Markup Language). The DTD (Document Type Definition) of VGFF 2.0 is parsed using the DOM (Document Object Model); a minimal parsing sketch follows this entry. We also developed authoring tools so that users can make their own 3D geo-features and models and save the data in the VGFF 2.0 format. We expect VGFF 2.0 to evolve into a 3D counterpart of SVG (Scalable Vector Graphics), especially for 3D GIS on the Web.

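The parsing sketch referenced in the entry above is shown below in Python. Because the VGFF 2.0 DTD itself is defined in the paper and not in this abstract, the element and attribute names here are invented placeholders; only the DOM-based reading pattern is illustrated.

    # Minimal sketch of DOM-based parsing of a VGFF-like XML document.
    # Element and attribute names are hypothetical, not the real VGFF 2.0 DTD.
    from xml.dom.minidom import parseString

    SAMPLE = """<?xml version="1.0"?>
    <geoFeature id="bldg-001" type="building">
      <aspatial><name>City Hall</name></aspatial>
      <geoPrimitive><height unit="m">24.5</height></geoPrimitive>
    </geoFeature>"""

    def read_height(xml_text):
        doc = parseString(xml_text)
        node = doc.getElementsByTagName("height")[0]
        return float(node.firstChild.data), node.getAttribute("unit")

    if __name__ == "__main__":
        value, unit = read_height(SAMPLE)
        print(value, unit)  # 24.5 m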

Managing Scheme for 3-dimensional Geo-features using XML

  • Kim, Kyong-Ho;Choe, Seung-Keol;Lee, Jong-Hun;Yang, Young-Kyu
    • Proceedings of the Korean Association of Geographic Information Studies Conference
    • /
    • 1999.12a
    • /
    • pp.47-51
    • /
    • 1999
  • Geo-features play a key role in object-oriented or feature-based geo-processing systems. So the strategy for how to model and how to manage geo-features forms the main architecture of the entire system and also determines its efficiency and functionality. Unlike conventional 2D geo-processing systems, geo-features in 3D GIS have many aspects to consider in modeling for efficient manipulation, analysis, and visualization. When the system runs on the Web, how to leverage the level of detail and the level of automation of modeling should also be considered, in addition to support for client-side data interoperability. We built a set of 3D geo-features, and each geo-feature contains a set of aspatial data and 3D geo-primitives. The 3D geo-primitives contain the fundamental modeling data such as the height of a building and the burial depth of a gas pipeline. We separated the additional modeling data on the geometry and appearance of the model from the fundamental modeling data to make the tables in the database more concise and to allow users more freedom to represent the geo-object. To let users build and exchange their own data, we devised a file format called VGFF 2.0, which stands for Virtual GIS File Format. It describes three-dimensional geo-information in XML (eXtensible Markup Language). The DTD (Document Type Definition) of VGFF 2.0 is parsed using the DOM (Document Object Model). We also developed authoring tools so that users can make their own 3D geo-features and models and save the data in the VGFF 2.0 format; a minimal serialization sketch follows this entry. We expect VGFF 2.0 to evolve into a 3D counterpart of SVG (Scalable Vector Graphics), especially for 3D GIS on the Web.

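The serialization sketch referenced in the entry above is shown below; it is the authoring direction that complements the parsing sketch given after the previous entry. Element names remain placeholders rather than the actual VGFF 2.0 vocabulary.

    # Minimal sketch of serializing a simple geo-feature into a VGFF-like
    # XML string. Element names are placeholders, not the VGFF 2.0 DTD.
    import xml.etree.ElementTree as ET

    def geo_feature_to_xml(feature_id, feature_type, name, height_m):
        feat = ET.Element("geoFeature", {"id": feature_id, "type": feature_type})
        aspatial = ET.SubElement(feat, "aspatial")
        ET.SubElement(aspatial, "name").text = name
        prim = ET.SubElement(feat, "geoPrimitive")
        ET.SubElement(prim, "height", {"unit": "m"}).text = str(height_m)
        return ET.tostring(feat, encoding="unicode")

    if __name__ == "__main__":
        print(geo_feature_to_xml("bldg-001", "building", "City Hall", 24.5))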

Design of Standard Data Model for the Informatization of Signboards (간판의 정보화를 위한 표준 데이터 모델 설계)

  • Kwon, Sang Il;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.38 no.3
    • /
    • pp.197-209
    • /
    • 2020
  • Signboards are installed in different types and sizes depending on shop characteristics. However, local governments have difficulty managing signboards because of the frequent opening and closing of stores and insufficient management personnel. In this study, a methodology was proposed to standardize and efficiently manage signboard information. To this end, the signboard display methods in the enforcement ordinance related to outdoor advertising were analyzed to define the attribute elements of standard signboard data. In addition, the physical information of signboards was obtained through signboard recognition technology developed in a prior study, and the attribute elements of the signboard standard data were defined using information that can be read with the naked eye, the building integration information of the Ministry of the Interior and Safety, and street name addresses. In order to standardize the signboard information by spatial characteristics, data product specifications and metadata were defined according to the national spatial information standard. Lastly, the standard data for signboards were produced in XML (Extensible Markup Language) format for compatibility, and an XSD (XML Schema Definition) was defined for XML integrity so that data validity could be verified. Through this, a standard data model for the informatization of signboards was designed.
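
The validation step the entry above describes can be illustrated in Python as follows. The element names and the toy schema are invented placeholders (the paper's actual signboard standard data model is not reproduced); only the XML-against-XSD validation mechanism is shown, using the third-party lxml library.

    # Minimal sketch of validating a signboard XML record against an XSD.
    # Element names and the schema are hypothetical; requires lxml.
    from lxml import etree

    XSD = b"""<?xml version="1.0"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="signboard">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="shopName" type="xs:string"/>
            <xs:element name="widthM" type="xs:decimal"/>
            <xs:element name="heightM" type="xs:decimal"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>"""

    RECORD = b"""<signboard>
      <shopName>Example Bakery</shopName>
      <widthM>3.2</widthM>
      <heightM>0.9</heightM>
    </signboard>"""

    if __name__ == "__main__":
        schema = etree.XMLSchema(etree.fromstring(XSD))
        record = etree.fromstring(RECORD)
        print(schema.validate(record))  # True when the record matches the schema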

Study on the Standardization of Korean Distribution Terminology through its Usage Survey (유통분야 전문용어 사용실태 조사를 통한 용어 표준화 연구)

  • Han, Kyu-Chul;Lee, Sang-Youn
    • Journal of Distribution Science
    • /
    • v.13 no.4
    • /
    • pp.77-87
    • /
    • 2015
  • Purpose - This study aims to investigate the current state of distribution terminology usage by retailers and consumers nationwide, and to suggest a practical improvement plan for its standardization. The Korean distribution industry is closely related to consumers' daily lives. In reality, however, there exists a gap among producers, distributors, and consumers in terms of the definition, understanding, and perception of the terminology. Therefore, standardizing this terminology is essential for smoother communication. This paper suggests the necessity of comprehensive research and survey activities on the actual conditions of Korean distribution terminology usage by organizations and their respective management situations, and further, the necessity of probing the problems and their remedies in line with the objective and mission of the "Fundamental Law of the Korean Language." Research design, data, and methodology - This study's scope is limited to wholesale and retail, including some information systems. First, the study covers most written material, including lexicons and glossaries of distribution terminology, university textbooks and teaching materials for national certificates of qualification, and related laws and ordinances. Second, the survey covers retailers' management situations by store format. The retailers used as the sample for the survey include department stores, discount stores, SSM, and convenience stores. Altogether, 20 specialists were interviewed in their respective sectors or retail formats. Finally, the project team surveyed a sample of 1,300 consumers nationwide on 50 distribution terms mainly used by consumers, covering awareness, understanding, usage, and attitude. Results - In total, 1,249 terms were drawn from the literature research, including distribution terminology used in the related literature, glossaries and lexicons, distribution terminology in textbooks, and legal terminology. A classification table comprises four large categories: general distribution, distribution marketing, distribution information, and merchandise. The results of the three-step research, comprising the literature survey, the field survey of retailers, and the consumer survey, were recommended to be screened by academia (retail associations, faculty, etc.), retailers (major retail management by store format), retail specialists and consultants, consumers, and Korean linguists. In total, 1,300 questionnaires on 50 distribution terms closely associated with consumers were distributed to subjects nationwide. Conclusions - The desired and expected results of this study are summarized from three perspectives as follows. First, from the retailers' perspective, new concepts and coinages in the distribution industry stem from advanced countries in America and Europe. However, the original meanings and definitions become diluted and distorted with changes in language users' situations and contexts. This study provides basic guidelines for the standardization of distribution terms used across various retail formats in most daily-life situations that consumers encounter. Second, from the nation's perspective, this study suggests optimal choices of distribution terminology in the context of the laws and ordinances of the ministries concerned. Last, from the consumers' perspective, this paper enables consumers to understand and use distribution terms properly in their daily lives.

A Korean Homonym Disambiguation System Based on Statistical Model Using Weights

  • Kim, Jun-Su;Lee, Wang-Woo;Kim, Chang-Hwan;Ock, Cheol-young
    • Proceedings of the Korean Society for Language and Information Conference
    • /
    • 2002.02a
    • /
    • pp.166-176
    • /
    • 2002
  • A homonym can be disambiguated by other words in the context, such as the nouns and predicates used with the homonym. This paper uses semantic information (co-occurrence data) obtained from the definitions of a part-of-speech (POS) tagged UMRD-S. In this research, we analyzed the results of an experiment on a homonym disambiguation system based on a statistical model to which Bayes' theorem is applied, and suggested a model that combines a sense-rate weight and a distance weight for adjacent words to improve accuracy. Applying the homonym disambiguation system using semantic information to homonyms appearing in dictionary definition sentences showed an average accuracy of 98.32% for the 200 most frequent homonyms. We selected 49 (31 substantives and 18 predicates) of the 200 homonyms used in the experiment, and performed an experiment on 50,703 sentences, each containing one of the 49 homonyms, extracted from the Sejong Project tagged corpus (i.e., a corpus of morphologically analyzed words) of 3.5 million words. Experiments that assigned a sense-rate weight (prior probability) and a distance weight to the five words before and after the homonym to be disambiguated showed 2.93% better accuracy than disambiguation systems based on existing statistical models. A simplified scoring sketch follows this entry.

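As referenced above, the sketch below shows, in a highly simplified form, how a sense prior ("sense rate") and distance-weighted co-occurrence evidence can be combined in a Bayes-style score. The probabilities and the 1/(1+distance) weighting are illustrative assumptions, not the weights estimated in the paper, and the toy example uses English words for readability.

    # Minimal sketch of a Bayes-style homonym scorer combining a sense prior
    # ("sense rate") with distance-weighted co-occurrence evidence.
    import math

    def score_sense(prior, cooc_prob, context, smoothing=1e-4):
        """context: list of (word, distance_from_homonym) pairs."""
        score = math.log(prior)
        for word, distance in context:
            weight = 1.0 / (1.0 + distance)        # closer words count more
            p = cooc_prob.get(word, smoothing)     # smoothed co-occurrence prob.
            score += weight * math.log(p)
        return score

    if __name__ == "__main__":
        # Toy senses for the homonym "bank"; probabilities are made up.
        senses = {
            "river_bank": (0.3, {"water": 0.2, "fish": 0.1}),
            "money_bank": (0.7, {"loan": 0.25, "account": 0.2}),
        }
        context = [("water", 1), ("fish", 3)]
        best = max(senses, key=lambda s: score_sense(senses[s][0], senses[s][1], context))
        print(best)  # river_bank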

A Meta-Model for the Storage of XML Schema using Model-Mapping Approach (모델 매핑 접근법을 이용한 XML 스키마 저장 메타모델에 대한 연구)

  • Lim, Hoon-Tae;Lim, Tae-Soo;Hong, Keun-Hee;Kang, Suk-Ho
    • IE interfaces
    • /
    • v.17 no.3
    • /
    • pp.330-337
    • /
    • 2004
  • Since XML (eXtensible Markup Language) was highlighted as an information interchange format, there has been an increasing demand for incorporating XML with databases. Most approaches focus on RDB (Relational Databases) because of legacy systems, but these approaches depend on the database system. Many studies have focused on DTD (Document Type Definition); however, XML Schema is more comprehensive and efficient in many respects. We propose a meta-model for XML Schema that is independent of the database. There are three processes to build our meta-model: DOM (Document Object Model) tree analysis, object modeling, and storing objects into a fixed DB schema using a model-mapping approach. We propose four mapping rules for object modeling, which conform to the ODMG (Object Data Management Group) 3.0 standard. We expect that the model will be especially useful in building XML-based e-business applications.
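
As an illustration of the DOM-tree-analysis step the entry above describes, the Python sketch below walks a toy XML Schema document and flattens its element declarations into generic (name, type, parent) rows that could populate a fixed storage table. The table layout and the toy schema are assumptions; the paper's four ODMG 3.0-conformant mapping rules are not reproduced here.

    # Minimal sketch of DOM tree analysis over an XML Schema document,
    # flattening element declarations into (name, type, parent) rows for a
    # fixed storage schema. Not the paper's ODMG 3.0 mapping rules.
    from xml.dom.minidom import parseString

    XSD = """<?xml version="1.0"?>
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="order">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="item" type="xs:string"/>
            <xs:element name="quantity" type="xs:integer"/>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>"""

    def element_rows(xsd_text):
        doc = parseString(xsd_text)
        rows = []
        for node in doc.getElementsByTagName("xs:element"):
            parent_decl = None
            ancestor = node.parentNode
            while ancestor is not None and ancestor.nodeType == ancestor.ELEMENT_NODE:
                if ancestor.tagName == "xs:element":
                    parent_decl = ancestor.getAttribute("name")
                    break
                ancestor = ancestor.parentNode
            rows.append((node.getAttribute("name"),
                         node.getAttribute("type") or "complex",
                         parent_decl))
        return rows

    if __name__ == "__main__":
        for row in element_rows(XSD):
            print(row)
        # ('order', 'complex', None)
        # ('item', 'xs:string', 'order')
        # ('quantity', 'xs:integer', 'order')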