• Title/Summary/Keyword: vocabulary data


A Trustworthiness Improving Link Evaluation Technique for LOD considering the Syntactic Properties of RDFS, OWL, and OWL2 (RDFS, OWL, OWL2의 문법특성을 고려한 신뢰향상적 LOD 연결성 평가 기법)

  • Park, Jaeyeong;Sohn, Yonglak
    • Journal of KIISE:Databases
    • /
    • v.41 no.4
    • /
    • pp.226-241
    • /
    • 2014
  • LOD (Linked Open Data) is composed of RDF triples based on ontologies, which are identified, linked, and accessed under the principles of linked data. Publishing LOD data sets extends the LOD cloud and ultimately advances the web of data. However, if ontologically identical things in different LOD data sets are identified by different URIs, it is difficult to establish their sameness and to provide trustworthy links between them. To solve this problem, we propose a Trustworthiness Improving Link Evaluation (TILE) technique. TILE evaluates links in four steps. Step 1 considers the inference properties of syntactic elements in an LOD data set and generates the RDF triples that previously existed only implicitly. In Step 2, TILE appoints predicates, compares their objects across triples, and evaluates the links between the triples' subjects. In Step 3, TILE evaluates each predicate's syntactic properties from the standpoints of subject description and vocabulary definition and adjusts the evaluation results of Step 2 accordingly. The syntactic elements TILE considers include RDFS, OWL, and OWL2, all recommended by the W3C. Finally, TILE has the publisher of the LOD data set review the evaluation results and decide whether to re-evaluate or finalize the links. The publishers' responsibility is thereby reflected in the trustworthiness of the links among the published data.
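The first two steps above can be sketched in a few lines of Python. This is a toy illustration, not the paper's implementation: Step 1 is shown only for the symmetry and transitivity of owl:sameAs, and the Jaccard-style score in Step 2 is an assumed stand-in for the paper's evaluation measure. All names (`infer_same_as`, `link_score`, the `ex:` URIs) are hypothetical.

```python
# Toy sketch of TILE Steps 1-2: materialize implicitly existing triples,
# then evaluate a candidate link by comparing objects of an appointed predicate.

SAME_AS = "owl:sameAs"

def infer_same_as(triples):
    """Step 1 sketch: add the owl:sameAs triples implied by symmetry
    and transitivity, making implicit triples explicit."""
    closed = {(s, o) for s, p, o in triples if p == SAME_AS}
    changed = True
    while changed:
        changed = False
        for a, b in list(closed):
            if (b, a) not in closed:                            # symmetry
                closed.add((b, a)); changed = True
            for c, d in list(closed):
                if b == c and a != d and (a, d) not in closed:  # transitivity
                    closed.add((a, d)); changed = True
    return set(triples) | {(a, SAME_AS, b) for a, b in closed}

def link_score(triples, subj1, subj2, predicate):
    """Step 2 sketch: score the link between two subjects by the overlap
    of the objects they carry for an appointed predicate."""
    o1 = {o for s, p, o in triples if s == subj1 and p == predicate}
    o2 = {o for s, p, o in triples if s == subj2 and p == predicate}
    if not o1 or not o2:
        return 0.0
    return len(o1 & o2) / len(o1 | o2)   # Jaccard overlap of object sets

triples = {
    ("ex:a", SAME_AS, "ex:b"),
    ("ex:b", SAME_AS, "ex:c"),
    ("ex:a", "ex:name", "chair"),
    ("ex:c", "ex:name", "chair"),
}
expanded = infer_same_as(triples)
print(("ex:a", SAME_AS, "ex:c") in expanded)   # implicit triple now explicit
print(link_score(expanded, "ex:a", "ex:c", "ex:name"))
```

A production system would use an RDF store with an OWL reasoner rather than hand-rolled closure, but the shape of the computation is the same.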

The future of bioinformatics

  • Gribskov, Michael
    • Proceedings of the Korean Society for Bioinformatics Conference
    • /
    • 2003.10a
    • /
    • pp.1-1
    • /
    • 2003
  • It is clear that computers will play a key role in the biology of the future. Even now, it is virtually impossible to keep track of the key proteins, their names and associated gene names, physical constants (e.g., binding constants, reaction constants), and known physical and genetic interactions without computational assistance. In this sense, computers act as an auxiliary brain, allowing one to keep track of thousands of complex molecules and their interactions. With the advent of gene expression array technology, many experiments are simply impossible without this computer assistance. In the future, as we seek to integrate the reductionist description of life provided by genomic sequencing into complex and sophisticated models of living systems, computers will play an increasingly important role in both analyzing data and generating experimentally testable hypotheses. The future of bioinformatics is thus being driven by potent technological and scientific forces. On the technological side, new experimental technologies such as microarrays, protein arrays, high-throughput expression, and three-dimensional structure determination provide rapidly increasing amounts of detailed experimental information on a genomic scale. On the computational side, faster computers, ubiquitous computing systems, and high-speed networks provide a powerful but rapidly changing environment of potentially immense power. The challenges we face are enormous: How do we create stable data resources when both the science and the computational technology change rapidly? How do we integrate and synthesize information from many disparate subdisciplines, each with its own vocabulary and viewpoint? How do we 'liberate' the scientific literature so that it can be incorporated into electronic resources? How do we take advantage of advances in computing and networking to build the international infrastructure needed to support a complete understanding of biological systems?
The seeds of the solutions to these problems exist, at least partially, today. These solutions emphasize ubiquitous high-speed computation; database interoperation, federation, and integration; and the development of research networks that capture scientific knowledge rather than just the ABCs of genomic sequence. I will discuss a number of these solutions, with examples from existing resources, as well as areas where solutions do not currently exist, with a view to defining what bioinformatics and biology will look like in the future.


A Study on the Thesaurus-based Ontology System for the Semantic Web (시소러스를 기반으로 한 온톨로지 시스템 구현에 관한 연구)

  • Jeong, Do-Heon;Kim, Tae-Su
    • Journal of the Korean Society for information Management
    • /
    • v.20 no.3
    • /
    • pp.155-175
    • /
    • 2003
  • The purpose of this study was to construct an ontology system for the semantic web environment by utilizing an ontology schema derived from the facet-based Art and Architecture Thesaurus (AAT). The schema is based on the Web Ontology Language (OWL), widely considered the standard ontology language for the W3C-centered semantic web environment. The concepts were limited to terms within the AAT's Furniture Facet, and the system was tested on the Chair concept, a lower-level facet with diverse conceptual relationships and a broad vocabulary base. The ontology system can search for concepts while controlling the search results by always providing a preferred term for synonymous terms. In addition, the system presents the user with, first, the relationships among the terms centered on the query and, second, related terms along with their classification properties. As an application example of the ontology system, an information system was constructed that takes in instance values and reproduces them as an RDF file. During this process, multiple ontologies were utilized, and the stored instance values' metadata elements were used.
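The preferred-term control described here follows the standard thesaurus USE/UF relation: every synonym resolves to one preferred term, which in turn carries its broader and related terms. A minimal sketch, with invented sample entries (not taken from the AAT):

```python
# Thesaurus-style term control: non-preferred synonyms (UF) resolve to a
# preferred term (USE), which carries its broader/related terms.
# The entries below are illustrative placeholders, not real AAT data.

THESAURUS = {
    "armchair": {
        "use_for": ["arm chair", "elbow chair"],   # non-preferred synonyms
        "broader": ["chair"],
        "related": ["easy chair"],
    },
    "chair": {
        "use_for": ["chairs"],
        "broader": ["seating furniture"],
        "related": [],
    },
}

# Invert the UF lists so any entry term maps to its preferred term.
PREFERRED = {t: t for t in THESAURUS}
for preferred, entry in THESAURUS.items():
    for variant in entry["use_for"]:
        PREFERRED[variant] = preferred

def search(term):
    """Resolve a query to its preferred term and return its relationships."""
    preferred = PREFERRED.get(term.lower())
    if preferred is None:
        return None
    entry = THESAURUS[preferred]
    return {"preferred": preferred,
            "broader": entry["broader"],
            "related": entry["related"]}

print(search("elbow chair"))   # resolves to the preferred term "armchair"
```

In the study's setting the same lookup would be expressed in OWL (e.g., annotation or object properties for the thesaurus relations) rather than Python dictionaries.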

Utilizing Local Bilingual Embeddings on Korean-English Law Data (한국어-영어 법률 말뭉치의 로컬 이중 언어 임베딩)

  • Choi, Soon-Young;Matteson, Andrew Stuart;Lim, Heui-Seok
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.10
    • /
    • pp.45-53
    • /
    • 2018
  • Recently, studies about bilingual word embedding have been gaining much attention. However, bilingual word embedding with Korean is not actively pursued due to the difficulty in obtaining a sizable, high quality corpus. Local embeddings that can be applied to specific domains are relatively rare. Additionally, multi-word vocabulary is problematic due to the lack of one-to-one word-level correspondence in translation pairs. In this paper, we crawl 868,163 paragraphs from a Korean-English law corpus and propose three mapping strategies for word embedding. These strategies address the aforementioned issues including multi-word translation and improve translation pair quality on paragraph-aligned data. We demonstrate a twofold increase in translation pair quality compared to the global bilingual word embedding baseline.

Ontology-based u-Healthcare System for Patient-centric Service (환자중심서비스를 위한 온톨로지 기반의 u-Healthcare 시스템)

  • Jung, Yong Gyu;Lee, Jeong Chan;Jang, Eun Ji
    • Journal of Service Research and Studies
    • /
    • v.2 no.2
    • /
    • pp.45-51
    • /
    • 2012
  • U-healthcare is the real-time monitoring of personal biometric information using portable devices, home networks, and ICT-based healthcare systems, connected automatically with hospitals and doctors to overcome the constraints of time and space. Because u-healthcare delivers health services anytime and anywhere, it is becoming a new type of medical service for patient management and disease prevention. In this paper, the recent shift toward prevention-oriented care is analyzed through a requirements analysis of technology development trends, so that healthcare information systems can respond to it early. Building on existing healthcare systems such as PACS, OCS, EMR, and emergency medical systems, we present the design of a patient-centered integrated client system. Since an ontology captures the relationships between the meanings of the terms used, the information models in the system provide a common vocabulary with various levels of formality. We propose an ontology-based system for patient-centered services that incorporates clustering: the data are clustered to define the relationships among these ontologies and thus organize the data more systematically.


Integrated Semantic Querying on Distributed Bioinformatics Databases Based on GO (분산 생물정보 DB 에 대한 GO 기반의 통합 시맨틱 질의 기법)

  • Park Hyoung-Woo;Jung Jun-Won;Kim Hyoung-Joo
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.12 no.4
    • /
    • pp.219-228
    • /
    • 2006
  • Many biomedical research groups have been trying to share their outputs to increase the efficiency of research. As part of these efforts, a common ontology named the Gene Ontology (GO), which comprises a controlled vocabulary for the functions of genes, was built. However, data from many research groups are distributed, and most systems do not support integrated semantic queries over them. Furthermore, the semantics of the associations between concepts from external classification systems and GO are still not clarified, which makes integrated semantic queries infeasible. In this paper, we present an ontology matching and integration system, called AutoGOA, which first resolves the semantics of these associations semi-automatically and then constructs an integrated ontology containing concepts from GO and the external classification systems. We also describe a web-based application, named GOGuide II, which allows the user to browse, query, and visualize the integrated data.
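The semi-automatic association step can be pictured as label matching with a manual-review fallback: unambiguous matches are accepted automatically, while multiple hits are set aside for a curator. The GO entries below are invented placeholders (not real GO identifiers), and `associate` is an assumed name, not AutoGOA's API.

```python
# Sketch of semi-automatic concept association: match an external
# classification concept to GO terms by label or synonym; flag ambiguity.

GO_TERMS = {
    "GO:0001": {"label": "dna binding", "synonyms": ["dna attachment"]},
    "GO:0002": {"label": "rna binding", "synonyms": []},
    "GO:0003": {"label": "binding", "synonyms": ["dna binding"]},
}

def associate(external_label):
    """Return (matches, needs_review): GO ids whose label or synonym equals
    the external concept label; more than one hit needs manual review."""
    label = external_label.lower()
    matches = [go_id for go_id, term in GO_TERMS.items()
               if label == term["label"] or label in term["synonyms"]]
    return matches, len(matches) > 1

print(associate("RNA binding"))   # unique match, accepted automatically
print(associate("DNA binding"))   # two candidates, routed to a curator
```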

Hybrid Schema Matching (HSM): Schema Matching Algorithm for Integrating Geographic Information (Hybrid Schema Matching (HSM): 지리정보 통합을 위한 하이브리드 스키마 매칭 알고리즘)

  • Lee, Jiyoon;Lee, Sukhoon;Kim, Jangwon;Jeong, Dongwon;Baik, Doo-Kwon
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.2 no.3
    • /
    • pp.173-186
    • /
    • 2013
  • Web-based map services provide the various kinds of geographic information users want through continuous data updates. However, each service provides different information for the same geographic object, which causes several problems; above all, the various pieces of information cannot be integrated and provided together. To resolve this problem, this paper proposes a system that can integrate diverse geographic information and provide users with rich geographic information. We propose a hybrid schema matching (HSM) algorithm that mixes an adapter-based semantic processing method, a static semantic management-based approach, and a dynamic semantic management-based approach. A comparative evaluation shows the effectiveness of the proposed algorithm. The algorithm improves the accuracy of schema matching by registering and managing the schemas of new semantic information. It enables vocabulary-based schema matching across various schemas and thus offers high usability. Finally, the proposed algorithm is cost-effective because it progressively extends the relationships between schema meanings.
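The static/dynamic mix can be sketched as a fixed synonym table backed by string similarity, with confirmed pairs registered at runtime so later matches become exact. The attribute names, synonym table, and threshold below are illustrative assumptions, not details from the paper.

```python
import difflib

# Vocabulary-based schema matching sketch: a static synonym table plus
# string similarity, with dynamic registration of newly confirmed pairs.

SYNONYMS = {("lat", "latitude"), ("lon", "longitude"), ("addr", "address")}

def match(attr_a, attr_b, threshold=0.9):
    """True if two attribute names denote the same concept: exact match,
    known synonym pair, or sufficiently similar strings."""
    a, b = attr_a.lower(), attr_b.lower()
    if a == b or (a, b) in SYNONYMS or (b, a) in SYNONYMS:
        return True
    return difflib.SequenceMatcher(None, a, b).ratio() >= threshold

def register(attr_a, attr_b):
    """Dynamically register a confirmed pair so future matching is exact."""
    SYNONYMS.add((attr_a.lower(), attr_b.lower()))

print(match("lat", "latitude"))        # True via the static synonym table
print(match("poi_name", "poi_nm"))     # similarity alone falls below threshold
register("poi_name", "poi_nm")
print(match("poi_name", "poi_nm"))     # True after dynamic registration
```

The dynamic step is what the abstract calls registering and managing the schemas of new semantic information: each confirmed pair cheaply extends the matcher's vocabulary.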

A Study on the Online Service of Cultural Heritage Contents (문화유산 콘텐츠 온라인 서비스에 관한 연구)

  • Park, Ok Nam
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.19 no.1
    • /
    • pp.195-224
    • /
    • 2019
  • Online services have been emphasized in various studies for the use and diffusion of cultural heritage content. This study investigates the current state of content organization and information services for online cultural heritage and suggests directions for improvement. Case studies and expert interviews were conducted covering contents, search systems, additional services, and expansion services. The study suggests an integrated information retrieval service for cultural heritage contents as well as the provision of high-quality and diverse types of content. It also emphasizes flexible searching through the content hierarchy and the expansion of access points through controlled vocabularies and authority data. As additional services, the study proposes curation-based, user-customized services, open and shared data sets, and user participation.

Comparative Analysis of Fashion Characteristics on the Cover of Domestic Licensed Fashion Magazines - Focused on ELLE, VOGUE, W - (국내 라이선스 패션잡지 표지에 나타난 패션특성의 비교분석 - ELLE, VOGUE, W를 중심으로 -)

  • Lee, Hyunji;Lee, Kyunghee
    • Fashion & Textile Research Journal
    • /
    • v.21 no.1
    • /
    • pp.1-12
    • /
    • 2019
  • The purpose of this study is to examine the fashion characteristics of fashion magazine covers by comparing and analyzing the formative characteristics of fashion, the visual design characteristics, and the illustration vocabulary on the covers of three fashion magazines. The data analysis criteria consisted of the formative elements of fashion (fashion design elements, fashion coordination elements) and visual design elements (color, illustration lexical layout, model photograph type). The data analysis methods were statistical analysis, stepwise lexical analysis, and content analysis. The results of the study are as follows. First, the formative characteristics of fashion on the covers show that ELLE has feminine and elegant characteristics, VOGUE modern, chic, and mannish characteristics, and W avant-garde and neutral characteristics. Second, in visual design, ELLE and VOGUE convey a modern, simple sensibility through monotonous background colors and a small number of background colors, while W shows an original image through the use of various colors. Third, the illustration lexical analysis of the covers revealed four core keywords common to the three magazines: trend, star, event, and life. ELLE differentiates itself through innovation, VOGUE through discrimination, and W through reconstruction.

KorPatELECTRA: A Pre-trained Language Model for Korean Patent Literature to Improve Performance in the Field of Natural Language Processing (Korean Patent ELECTRA)

  • Jang, Ji-Mo;Min, Jae-Ok;Noh, Han-Sung
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.2
    • /
    • pp.15-23
    • /
    • 2022
  • In the field of patents, NLP (Natural Language Processing) is a challenging task due to the linguistic specificity of patent literature, so there is an urgent need for a language model optimized for Korean patent literature. Recently, there have been continuous attempts in NLP to build pre-trained language models for specific domains to improve performance on various tasks in those fields. Among them, ELECTRA is a pre-trained language model from Google that follows BERT and increases training efficiency with a new method called RTD (Replaced Token Detection). This paper proposes KorPatELECTRA, pre-trained on a large amount of Korean patent literature. Optimal pre-training was achieved by preprocessing the training corpus according to the characteristics of the patent literature and applying a patent-specific vocabulary and tokenizer. To confirm its performance, KorPatELECTRA was tested on NER (Named Entity Recognition), MRC (Machine Reading Comprehension), and patent classification tasks using actual patent data, and it achieved the best performance on all three tasks compared with general-purpose language models.
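The RTD objective mentioned above can be illustrated by how its training pairs are built: a generator corrupts some tokens, and the discriminator must label every position as original or replaced. This toy sketch replaces the learned generator with random sampling; the tiny vocabulary and `make_rtd_example` name are assumptions, not KorPatELECTRA internals.

```python
import random

# Toy construction of ELECTRA-style RTD training pairs: corrupt a fraction
# of tokens, and record per-position labels for the discriminator.

VOCAB = ["method", "apparatus", "claim", "device", "signal", "process"]

def make_rtd_example(tokens, mask_prob=0.3, rng=random.Random(42)):
    """Return (corrupted_tokens, labels) where labels[i] is 1 if token i
    was replaced by a sampled vocabulary token, else 0."""
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            replacement = rng.choice([v for v in VOCAB if v != tok])
            corrupted.append(replacement)
            labels.append(1)
        else:
            corrupted.append(tok)
            labels.append(0)
    return corrupted, labels

tokens = ["the", "claimed", "apparatus", "comprises", "a", "signal", "process"]
corrupted, labels = make_rtd_example(tokens)
print(corrupted)
print(labels)   # the discriminator is trained to predict this sequence
```

In real ELECTRA the replacements come from a small masked-language-model generator rather than uniform sampling, which is what makes the detection task hard enough to be useful.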