• Title/Summary/Keyword: information mapping


Skeleton Code Generation for Transforming an XML Document with DTD using Metadata Interface (메타데이터 인터페이스를 이용한 DTD 기반 XML 문서 변환기의 골격 원시 코드 생성)

  • Choe Gui-Ja;Nam Young-Kwang
    • The KIPS Transactions:PartD
    • /
    • v.13D no.4 s.107
    • /
    • pp.549-556
    • /
    • 2006
  • In this paper, we propose a system for generating skeleton programs that directly transform an XML document into another document whose structure is defined in the target DTD, within a GUI environment. With the generated code, users can easily update or insert their own code into the program, so that they can convert the document in the way they want and connect it with other classes or library files. Since most currently available code generation systems or methods for transforming XML documents use XSLT or XQuery, it is very difficult or impossible for users to manipulate the source code for further updates or refinements. Because the code generated here is organized along the XPaths of the target DTD, the resulting code is quite readable. The code generation procedure is simple: once the user maps the related elements, represented as trees in the GUI interface, the source document is transformed into the target document and its corresponding Java source program is generated, where the DTD is either given or extracted automatically by parsing the XML document. The mappings are classified into 1:1, 1:N, and N:1, according to the structure and semantics of the DTD elements. The functions for changing the structure of elements designated by the user are amalgamated into the metadata interface. A real-world example of transforming articles stored as XML files into a bibliographical XML document is shown together with the transformed result and its code.
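As a rough illustration of the kind of skeleton such a generator could emit, here is a minimal Python sketch (the paper generates Java; the element names, paths, and mapping table below are hypothetical stand-ins, not the paper's output):

```python
# Hypothetical sketch: a mapping table of source XPaths to target element
# paths drives the copy from the source document into the target structure.
import xml.etree.ElementTree as ET

# 1:1 mappings shown; 1:N and N:1 mappings would need list- or
# join-valued entries and user-written merge/split code.
MAPPING = {
    "./article/title": "bibliography/entry/title",
    "./article/author": "bibliography/entry/author",
    "./article/year": "bibliography/entry/year",
}

def transform(src_root: ET.Element) -> ET.Element:
    tgt_root = ET.Element("bibliography")
    for src_path, tgt_path in MAPPING.items():
        for node in src_root.findall(src_path):
            parent = tgt_root
            # create (or reuse) the chain of target elements below the root
            for tag in tgt_path.split("/")[1:-1]:
                found = parent.find(tag)
                parent = found if found is not None else ET.SubElement(parent, tag)
            leaf = ET.SubElement(parent, tgt_path.split("/")[-1])
            leaf.text = node.text  # user-inserted restructuring code goes here
    return tgt_root

src = ET.fromstring("<root><article><title>T1</title>"
                    "<author>A1</author><year>2006</year></article></root>")
print(ET.tostring(transform(src), encoding="unicode"))
```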

A Study on a Service Ontology Design Scheme Using UML and OCL (UML 및 OCL을 이용한 서비스 온톨로지 설계 방안에 관한 연구)

  • Lee Yun-Su;Chung In-Jeoung
    • The KIPS Transactions:PartD
    • /
    • v.12D no.4 s.100
    • /
    • pp.627-636
    • /
    • 2005
  • The Intelligent Web Service has been proposed for the purpose of automatic discovery, invocation, composition, interoperation, execution monitoring, and recovery of web services through Semantic Web and agent technology. To accomplish this Intelligent Web Service, an ontology is a necessity for reasoning about and processing knowledge by computer. However, creating a service ontology for the intelligent web service has two problems: it consumes a great deal of time and cost, depending on the heuristics of the service developer, and it is hard to achieve a complete mapping between the service and the service ontology. Moreover, the markup languages used to describe service ontologies are currently hard for service developers to learn in a short time. This paper proposes an efficient way of designing and creating a service ontology using the MDA methodology. The proposed solution reuses the created model by designing and constructing a web service model in UML based on MDA. After converting the platform-independent web service model to a platform-specific model of OWL-S, a service ontology description language, it converts the model to an OWL-S service ontology using XMI. The proposed solution has two benefits: service developers can easily construct the service ontology, and both the service and the service ontology can be created from one model. Moreover, it reduces time and cost by creating the service ontology automatically from a model, and it copes smoothly with changes in the outer environment, such as a platform change. This paper presents an example to show the validity of designing the web service model and creating the service ontology, and verifies whether the created service ontology is valid.
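For illustration only, a minimal Python sketch of the model-to-ontology step: a plain dict stands in for the UML/XMI export of the platform-independent service model, and the output imitates an OWL-S-style service profile. Namespace handling is omitted, and all names here are assumptions rather than the paper's tooling:

```python
# Hypothetical sketch: turn a platform-independent service model into
# OWL-S-profile-like XML. Real OWL-S output would carry proper RDF/OWL
# namespaces; they are omitted here for brevity.
import xml.etree.ElementTree as ET

service_model = {                      # stand-in for the UML/XMI export
    "name": "BookFinderService",
    "inputs": ["BookTitle"],
    "outputs": ["BookInfo"],
}

def to_owls_profile(model: dict) -> ET.Element:
    profile = ET.Element("Profile", {"rdf:ID": model["name"] + "Profile"})
    ET.SubElement(profile, "serviceName").text = model["name"]
    for param in model["inputs"]:
        ET.SubElement(profile, "hasInput", {"rdf:resource": "#" + param})
    for param in model["outputs"]:
        ET.SubElement(profile, "hasOutput", {"rdf:resource": "#" + param})
    return profile

print(ET.tostring(to_owls_profile(service_model), encoding="unicode"))
```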

Automatic Merging of Distributed Topic Maps based on T-MERGE Operator (T-MERGE 연산자에 기반한 분산 토픽맵의 자동 통합)

  • Kim Jung-Min;Shin Hyo-Pil;Kim Hyoung-Joo
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.9
    • /
    • pp.787-801
    • /
    • 2006
  • Ontology merging describes the process of integrating two ontologies into a new ontology. How this is best done is a subject of ongoing research in the Semantic Web, data integration, knowledge management systems, and other ontology-related application systems. Earlier research on ontology merging, however, has focused on developing effective ontology matching approaches while neglecting methods for analyzing and resolving the problems that arise when merging two ontologies given correspondences between them. In this paper, we propose a specific ontology merging process and a generic operator, T-MERGE, for integrating two source ontologies into a new ontology. We also define a taxonomy of merging conflicts, derived from differing representations between input ontologies, together with a method for detecting and resolving them. Our T-MERGE operator encapsulates the process of detecting and resolving conflicts and of merging two entities based on given correspondences between them. We define a data structure, MergeLog, for logging the execution of the T-MERGE operator. MergeLog is used to report detailed merging results to users and to recover from errors. For our experiments, we used oriental philosophy ontologies, western philosophy ontologies, the Yahoo western philosophy dictionary, and the Naver philosophy dictionary as input ontologies. Our experiments show that the automatic merging module has advantages in terms of time and effort compared with manual merging by an expert.
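A minimal sketch of a T-MERGE-like step, with ontologies simplified to dicts of topic-to-related-topics (the structures and the rename policy are illustrative assumptions, not the paper's definitions):

```python
# Hypothetical sketch: merge ontology B into a copy of ontology A under
# given correspondences, detect name clashes, and append every action to
# a MergeLog-style list for review or error recovery.
def t_merge(onto_a, onto_b, correspondences):
    merged, merge_log = {k: set(v) for k, v in onto_a.items()}, []
    for b_topic, relations in onto_b.items():
        a_topic = correspondences.get(b_topic)
        if a_topic is not None:                  # corresponding entities
            merged[a_topic] |= relations
            merge_log.append(("merged", b_topic, "into", a_topic))
        elif b_topic in merged:                  # same name, no mapping given
            merged[b_topic + "#B"] = set(relations)  # one resolution policy
            merge_log.append(("conflict", b_topic, "name clash", "renamed"))
        else:
            merged[b_topic] = set(relations)
            merge_log.append(("copied", b_topic, "", ""))
    return merged, merge_log

a = {"philosophy": {"ethics"}, "ethics": set()}
b = {"philosophy": {"logic"}, "logic": set()}
merged, log = t_merge(a, b, {"philosophy": "philosophy"})
print(merged)  # relations of 'philosophy' are unioned, 'logic' is copied
print(log)     # the MergeLog records each merge decision
```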

Buffer Cache Management for Low Power Consumption (저전력을 위한 버퍼 캐쉬 관리 기법)

  • Lee, Min;Seo, Eui-Seong;Lee, Joon-Won
    • Journal of KIISE:Computer Systems and Theory
    • /
    • v.35 no.6
    • /
    • pp.293-303
    • /
    • 2008
  • As the computing environment moves to wireless and handheld systems, power efficiency is becoming more important. This is especially the case in embedded hand-held systems, where the power consumed by the memory system takes the second largest portion overall. To save energy consumed in the memory system, we can utilize the low-power modes of SDRAM. In the case of RDRAM, nap mode consumes less than 5% of the power consumed in active or standby mode. However, the hardware controller by itself cannot use this facility efficiently unless the operating system cooperates. In this paper we focus on how to minimize the number of active units of SDRAM. The operating system allocates its physical pages so that only a few units of SDRAM need to be activated, and the unneeded units can be put into nap mode. This work can be considered a generalized and system-wide version of the PAVM (Power-Aware Virtual Memory) research. We take all physical memory into account, especially the buffer cache, which accounts for about half of total memory usage on average. Because of the buffer cache's size and importance, a PAVM approach cannot be robust without taking it into account. In this paper, we analyze RAM usage and propose a power-aware page allocation policy. In particular, the pages mapped into process address spaces and the buffer cache pages are considered. The relationship and interactions between these two kinds of pages are analyzed and exploited for energy saving.
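A toy sketch of the allocation idea, assuming physical frames are grouped into power-manageable units: the allocator prefers a unit that is already active, so untouched units can remain in nap mode (the unit size and free-list layout are invented for illustration):

```python
# Hypothetical sketch of power-aware page allocation.
PAGES_PER_UNIT = 4

class PowerAwareAllocator:
    def __init__(self, units):
        # free page frame numbers, grouped by memory unit
        self.free = {u: list(range(u * PAGES_PER_UNIT, (u + 1) * PAGES_PER_UNIT))
                     for u in range(units)}
        self.active = set()            # units that must stay powered

    def alloc_page(self):
        # prefer an already-active unit; otherwise wake an idle one
        for unit in sorted(self.free, key=lambda u: u not in self.active):
            if self.free[unit]:
                self.active.add(unit)  # unit leaves nap mode
                return self.free[unit].pop()
        raise MemoryError("out of physical pages")

alloc = PowerAwareAllocator(units=4)
frames = [alloc.alloc_page() for _ in range(5)]
print(frames, "active units:", sorted(alloc.active))  # 5 pages, only 2 units
```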

Development Life Cycle-Based Association Analysis of Requirements for Risk Management of Medical Device Software (의료기기 소프트웨어 위험관리를 위한 개발생명주기 기반 위험관리 요구사항 연관성 분석)

  • Kim, DongYeop;Park, Ye-Seul;Lee, Jung-Won
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.12
    • /
    • pp.543-548
    • /
    • 2017
  • In recent years, the importance of the safety of medical device software has been emphasized because of the function and role of software among the components of a medical device, and because the operation of medical device software is directly related to the life and safety of the user. To this end, various standards have been established that define activities to effectively ensure the safety of medical devices, together with their respective requirements. The activities that the standards provide to ensure the safety of medical device software are largely divided into the development life cycle of medical device software and the risk management process. These two activities should run concurrently with the development process, but a limitation is that the risk management requirements to be performed at each stage of the medical device software development life cycle are not classified. As a result, developers must analyze the associations between the standards themselves to carry out risk management activities while developing medical devices. Therefore, in this paper, we analyze the relationship between the medical device software development life cycle and the risk management process, and extract risk management requirement items. Mapping the extracted risk management requirement items to the development life cycle based on the analyzed associations enables efficient and systematic risk management during the development of medical device software.
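As a shape for such a mapping, a minimal sketch with hypothetical stage names and requirement items (placeholders, not the items extracted in the paper):

```python
# Hypothetical sketch: risk management requirement items attached to
# development life cycle stages, so each stage can be queried directly.
LIFECYCLE_RISK_MAP = {
    "planning":       ["RM-01: establish the risk management plan"],
    "requirements":   ["RM-02: identify hazards from software requirements"],
    "design":         ["RM-03: define risk control measures in the architecture"],
    "implementation": ["RM-04: implement and trace risk control measures"],
    "verification":   ["RM-05: verify the effectiveness of risk controls"],
}

def requirements_for(stage):
    # returns the risk management items to perform at the given stage
    return LIFECYCLE_RISK_MAP.get(stage, [])

for stage in LIFECYCLE_RISK_MAP:
    print(stage, "->", requirements_for(stage))
```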

On Mapping Growing Degree-Days (GDD) from Monthly Digital Climatic Surfaces for South Korea (월별 전자기후도를 이용한 생장도일 분포도 제작에 관하여)

  • Kim, Jin-Hee;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.10 no.1
    • /
    • pp.1-8
    • /
    • 2008
  • The concept of growing degree-days (GDD) is widely accepted as a tool to relate plant growth, development, and maturity to temperature. Information on GDD can be used to predict the yield and quality of several crops, the flowering date of fruit trees, and insect activity related to agriculture and forestry. When GDD is expressed on a spatial basis, it helps identify the limits of geographical areas suitable for production of various crops and evaluate areas agriculturally suitable for new or nonnative plants. The national digital climate maps (NDCM, the fine-resolution, gridded climate data for climatological normal years) are provided not on a daily basis but on a monthly basis, preventing direct GDD calculation. We applied a widely used GDD estimation method based on monthly data to a part of the NDCM (for Hapcheon County) to produce spatial GDD data for each month with three different base temperatures (0, 5, and 10°C). Synthetically generated daily temperatures from the NDCM were used to calculate GDD over the same area, and the deviations were calculated for each month. The monthly-data based GDD was close to the reference GDD based on daily data only for the case of base temperature 0°C. There was a consistent overestimation in GDD with the other base temperatures. Hence, we estimated spatial GDD with base temperature 0°C over the entire nation for the current (1971-2000, observed) and three future (2011-2040, 2041-2070, and 2071-2100, predicted) climatological normal years. Our estimation indicates that the annual GDD in Korea may increase by 38% in 2071-2100 compared with that in 1971-2000.
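A worked sketch of the two computations being compared, using made-up daily temperatures rather than NDCM data: the daily-based reference GDD and a simple monthly approximation agree exactly at base 0°C but diverge at higher bases, which mirrors the study's finding that only the 0°C case was reliable:

```python
# Hypothetical sketch comparing daily-based and monthly-based GDD.
def gdd_daily(daily_means, base):
    # reference definition: sum of the positive excess over the base
    return sum(max(0.0, t - base) for t in daily_means)

def gdd_monthly(monthly_mean, days_in_month, base):
    # simple monthly approximation: every day is taken as the monthly mean
    return days_in_month * max(0.0, monthly_mean - base)

april = [3, 5, 8, 12, 15] * 6            # 30 invented daily means (°C)
mean = sum(april) / len(april)           # monthly mean: 8.6 °C
for base in (0, 5, 10):
    print(base, gdd_daily(april, base), gdd_monthly(mean, 30, base))
# base 0: both give 258.0; at bases 5 and 10 the two values diverge,
# because the daily sum only counts days above the base temperature
```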

Landslide Susceptibility Mapping and Verification Using the GIS and Bayesian Probability Model in Boun (지리정보시스템(GIS) 및 베이지안 확률 기법을 이용한 보은지역의 산사태 취약성도 작성 및 검증)

  • Choi, Jae-Won;Lee, Sa-Ro;Min, Kyung-Duk;Woo, Ik
    • Economic and Environmental Geology
    • /
    • v.37 no.2
    • /
    • pp.207-223
    • /
    • 2004
  • The purpose of this study is to reveal the spatial relationships between landslides and a geospatial data set, to map landslide susceptibility using those relationships, and to verify the susceptibility map using the landslide occurrence data of the Boun area in 1998. Landslide locations were detected from aerial photography and field survey, and topography, soil, forest, and land cover data sets were then constructed as a spatial database using GIS. Various spatial parameters were used as landslide occurrence factors: slope, aspect, curvature, and type of topography; texture, material, drainage, and effective thickness of soil; type, age, diameter, and density of wood; lithology; distance from lineament; and land cover. To calculate the relationship between landslides and the geospatial database, a Bayesian probability method, weight of evidence, was applied, and the contrast value, W+ - W-, was calculated. The landslide susceptibility index was calculated by summation of the contrast values, and landslide susceptibility maps were generated using the index. The landslide susceptibility map can be used to reduce the associated hazards and to plan land cover and construction.
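A minimal sketch of the weight-of-evidence arithmetic with invented cell counts: W+ and W- are estimated from the landslide/factor cross counts, and their difference is the contrast used as the susceptibility weight:

```python
# Hypothetical sketch of the weight-of-evidence computation.
import math

def weights_of_evidence(n_lf, n_l, n_f, n_total):
    """n_lf: landslide cells inside the factor class, n_l: all landslide
    cells, n_f: cells in the factor class, n_total: all cells."""
    p_b_given_l = n_lf / n_l                           # P(B|L)
    p_b_given_not_l = (n_f - n_lf) / (n_total - n_l)   # P(B|~L)
    w_plus = math.log(p_b_given_l / p_b_given_not_l)
    w_minus = math.log((1 - p_b_given_l) / (1 - p_b_given_not_l))
    return w_plus, w_minus, w_plus - w_minus           # contrast C = W+ - W-

w_plus, w_minus, contrast = weights_of_evidence(
    n_lf=60, n_l=100, n_f=2000, n_total=10000)
print(round(w_plus, 3), round(w_minus, 3), round(contrast, 3))
# summing the contrasts over all factor layers gives the susceptibility index
```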

Accuracy of Parcel Boundary Demarcation in Agricultural Area Using UAV-Photogrammetry (무인 항공사진측량에 의한 농경지 필지 경계설정 정확도)

  • Sung, Sang Min;Lee, Jae One
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.34 no.1
    • /
    • pp.53-62
    • /
    • 2016
  • In recent years, UAV photogrammetry based on an ultra-light UAS (Unmanned Aerial System) equipped with a low-cost compact navigation device and camera has attracted great attention through fast and accurate acquisition of geo-spatial data. In particular, UAV photogrammetry is gradually replacing traditional aerial photogrammetry because it can rapidly produce DEMs (Digital Elevation Models) and orthophotos from the large number of high-resolution images collected by a low-cost camera, using image processing software combined with computer vision techniques. With these advantages, UAV photogrammetry has therefore been applied to large-scale mapping and cadastral surveying, which require accurate position information. This paper presents experimental results of an accuracy performance test with images of 4 cm GSD from a fixed-wing UAS to demarcate parcel boundaries in an agricultural area. The boundary points extracted from the UAS orthoimage showed errors of less than 8 cm compared with terrestrial cadastral surveying. This means that UAV images satisfy the tolerance limit of distance error in cadastral surveying at the scale of 1:500. Moreover, the area deviation is negligibly small, about 0.2% (3.3 m²), against the true area of 1,969 m² determined by cadastral surveying. UAV photogrammetry is therefore a promising technology for demarcating parcel boundaries.
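A small sketch of the area-deviation check with hypothetical parcel coordinates (only the 1,969 m² reference area and the roughly 0.2% deviation figure come from the abstract): polygon area by the shoelace formula, compared against the cadastral value:

```python
# Hypothetical sketch: area of a digitized parcel vs. the cadastral area.
def shoelace_area(pts):
    # pts: (x, y) vertices in metres, in order around the parcel
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

cadastral_area = 1969.0                    # m^2, terrestrial survey (abstract)
uav_parcel = [(0, 0), (49.6, 0), (49.6, 39.77), (0, 39.77)]  # invented vertices
uav_area = shoelace_area(uav_parcel)
deviation = abs(uav_area - cadastral_area) / cadastral_area * 100
print(f"{uav_area:.1f} m^2, deviation {deviation:.2f}%")
```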

Building a Robust 3D Statistical Shape Model of the Mandible (견고한 3차원 하악골 통계 형상 모델 생성)

  • Yoo, Ji-Hyun;Hong, Helen
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.2
    • /
    • pp.118-127
    • /
    • 2008
  • In this paper, we propose a method for constructing a robust 3D statistical shape model from mandible CT datasets. Our method consists of the following four steps. First, we decompose a 3D input shape into patches. Second, to generate a shape corresponding to a floating shape, all shapes in the training set are parameterized onto a disk matching the patch topology. Third, we generate the corresponding shape by one-to-one mapping between the reference and floating shapes, solving the failure to generate corresponding points near the patch boundary. Finally, the corresponding shapes are aligned with the reference shape, and the statistical shape model is generated by principal component analysis. To evaluate the accuracy of our 3D statistical shape model of the mandible, we perform visual inspection and a similarity measure using the average distance difference between the floating and corresponding shapes. In addition, we measure the compactness of the statistical shape model using the modes of variation. Experimental results show that our 3D statistical shape model, generated from mandible CT datasets with various characteristics, has a high similarity between the floating and corresponding shapes and is represented by a small number of modes.
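A minimal numpy sketch of the final modelling step, with random vectors standing in for the aligned mandible shapes: PCA of the flattened training shapes yields a mean shape plus modes of variation, and compactness is the number of modes needed to reach a variance threshold:

```python
# Hypothetical sketch of building a statistical shape model with PCA.
import numpy as np

rng = np.random.default_rng(0)
n_shapes, n_points = 20, 50
shapes = rng.normal(size=(n_shapes, n_points * 3))  # aligned, flattened shapes

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
# PCA via SVD of the centered data matrix; rows of `modes` are components
_, sing_vals, modes = np.linalg.svd(centered, full_matrices=False)
variances = sing_vals ** 2 / (n_shapes - 1)

# compactness: the fewest modes explaining 95% of the shape variance
cum = np.cumsum(variances) / variances.sum()
k = int(np.searchsorted(cum, 0.95)) + 1
print(f"{k} of {len(variances)} modes explain 95% of the variance")

# any training shape is approximated as mean + sum_i b_i * modes[i]
b = centered[0] @ modes[:k].T                       # coefficients of shape 0
reconstruction = mean_shape + b @ modes[:k]
```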

A Content-Aware Load Balancing Technique Based on Histogram Transformation in a Cluster Web Server (클러스터 웹 서버 상에서 히스토그램 변환을 이용한 내용 기반 부하 분산 기법)

  • Hong Gi Ho;Kwon Chun Ja;Choi Hwang Kyu
    • Journal of Internet Computing and Services
    • /
    • v.6 no.2
    • /
    • pp.69-84
    • /
    • 2005
  • As Internet users increase rapidly, cluster web server systems have attracted many researchers and Internet service providers. The cluster web server has been developed to efficiently support a larger number of users as well as to provide a highly scalable and available system. To achieve high performance in the cluster web server, efficient load distribution is important, and recently many content-aware request distribution techniques have been proposed. In this paper, we propose a new content-aware load balancing technique that can evenly distribute the workload to each node in the cluster web server. The proposed technique is based on a hash histogram transformation, in which each URL entry of the web log file is hashed, and the access frequency and file size are accumulated as a histogram. Each user request is assigned to a node by the (hash value, server node) mapping in the histogram transformation. In the proposed technique, the histogram is updated periodically, so an even distribution of user requests can be maintained continuously. In addition to load balancing, our technique can exploit the cache effect to improve performance. The simulation results show that the performance of our technique is considerably better than that of the traditional round-robin method, and we can improve performance by more than 10% compared with the existing workload-aware load balancing (WARD) method.
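A minimal sketch of the histogram idea with an invented web log: URLs hash into buckets, each bucket accumulates access frequency times file size, and a greedy (hash value, node) assignment evens out the accumulated load:

```python
# Hypothetical sketch of histogram-based content-aware request distribution.
import hashlib

N_BUCKETS, N_NODES = 8, 3

def bucket(url):
    return int(hashlib.md5(url.encode()).hexdigest(), 16) % N_BUCKETS

# (url, access_count, file_size_bytes) accumulated from the web log
web_log = [("/a.html", 900, 2_000), ("/b.jpg", 50, 400_000),
           ("/c.css", 700, 1_000), ("/d.mp4", 5, 8_000_000)]

histogram = [0] * N_BUCKETS
for url, freq, size in web_log:
    histogram[bucket(url)] += freq * size

# greedy (hash value -> server node) mapping: heaviest bucket first,
# always onto the currently lightest node; rebuilt when the histogram
# is periodically updated
node_load, bucket_to_node = [0] * N_NODES, {}
for b in sorted(range(N_BUCKETS), key=lambda i: -histogram[i]):
    n = node_load.index(min(node_load))
    bucket_to_node[b] = n
    node_load[n] += histogram[b]

def route(url):
    return bucket_to_node[bucket(url)]

print(node_load, "request for /a.html goes to node", route("/a.html"))
```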
