• Title/Summary/Keyword: Semantic errors


Necessity of Standardization and Standardized Method for Substances Accounting of Environmental Liability Insurance (환경책임보험 배출 물질 정산의 표준화 필요성 및 산출방법 표준화)

  • Park, Myeongnam; Kim, Chang-wan; Shin, Dongil
    • Journal of the Korean Institute of Gas / v.22 no.5 / pp.1-17 / 2018
  • Environmental pollution incidents have occurred frequently since the 2000s, such as the crude oil spill off the Taean peninsula and the Gumi hydrofluoric acid leak. In the wake of these accidents, a consensus was formed to enact legislation in 2014 on liability for the compensation of environmental pollution damage and relief from it, and the law has been in force since January 2016. The environmental liability insurance system introduced into the domestic insurance industry therefore needs to be managed through a standardized formulation of a new insurance model for environmental risk. This study treats that insurance model, whose risk types differ in nature, as an ontology-based knowledge service. Verification of emitted substances across the six coverage media for environmental pollution, such as chemicals, waste, marine, and soil, is expressed through an ontology that enables semantic interoperability. An insurance model was designed and presented by deriving the relationships among the quantities and amounts recorded in existing domain expertise; to rule out inconsistent outcomes, abstract concepts were conceptualized from the customer's perspective, and a plan for an ontology-based decision support system is proposed to reduce the cost and resources consumed every year. Standardizing the verification criteria for emitted-substance quantities is expected to minimize errors and reduce the time and resources required for verification.
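Below is a minimal sketch of how the coverage media and an emitted-substance record described above could be expressed in an ontology. It is an illustration only, not the paper's model: the use of rdflib, the namespace, and all class and property names (CoverageMedium, EmissionRecord, emittedQuantityKg, relatesToMedium) are assumptions introduced here.

```python
# Minimal sketch (assumption: rdflib; class/property names are hypothetical,
# not taken from the paper) of an ontology for environmental liability
# insurance coverage media and emitted-substance accounting.
from rdflib import Graph, Namespace, Literal, RDF, RDFS
from rdflib.namespace import OWL, XSD

ELI = Namespace("http://example.org/eli#")   # hypothetical namespace
g = Graph()
g.bind("eli", ELI)

# Coverage media mentioned in the abstract, modeled as subclasses.
g.add((ELI.CoverageMedium, RDF.type, OWL.Class))
for medium in ("Chemical", "Waste", "Marine", "Soil"):
    g.add((ELI[medium], RDF.type, OWL.Class))
    g.add((ELI[medium], RDFS.subClassOf, ELI.CoverageMedium))

# A substance-accounting record tied to a medium with a quantified emission.
g.add((ELI.EmissionRecord, RDF.type, OWL.Class))
g.add((ELI.emittedQuantityKg, RDF.type, OWL.DatatypeProperty))
g.add((ELI.relatesToMedium, RDF.type, OWL.ObjectProperty))

# Example individual: a chemical emission of 120.5 kg (illustrative values).
g.add((ELI.record001, RDF.type, ELI.EmissionRecord))
g.add((ELI.record001, ELI.relatesToMedium, ELI.Chemical))
g.add((ELI.record001, ELI.emittedQuantityKg, Literal(120.5, datatype=XSD.double)))

print(g.serialize(format="turtle"))
```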

Development and Application of Development Principles for Clinical Information Model (임상정보모델 개발원칙의 개발과 적용)

  • Ahn, Sun-Ju
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.8 / pp.2899-2905 / 2010
  • To ensure semantic interoperability of clinical information within an electronic health record system, development principles that reflect the objectives and functions of a clinical information model are required. The aim of this study is to develop such principles and to evaluate the Clinical Contents Model against them. To develop the principles, surveys on 1) definition, 2) function, and 3) quality criteria were conducted from November 2008 to March 2009, and 4) the components of advanced models were analyzed. The study proceeded in three phases. In the development phase, keywords and key paragraphs were extracted from the references, and principles were derived based on clinical or functional importance and frequency. In the application phase, three clinical information model experts assessed 30 Clinical Contents Models by applying the principles. In the feedback phase, the Clinical Contents Models in which errors were found were revised. As a result, 18 development principles were derived in three categories: structure, process, and contents. When the Clinical Contents Models were assessed against the principles, 17 models were found not to follow them. The feedback process raised the need for further education on the principles and for a regular quality improvement strategy to apply them. The proposed development principles support consistent model development among clinical information model developers and can be used as evaluation criteria.
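The study's output is a set of 18 principles in three categories (structure, process, contents) used as evaluation criteria. The sketch below shows, under stated assumptions, how such category-grouped principles could be applied as a compliance checklist; the individual principle texts and the check results are hypothetical, since the abstract does not enumerate them.

```python
# Minimal sketch: applying category-grouped development principles as an
# evaluation checklist. The principle texts and results are hypothetical;
# the abstract only states that 18 principles exist in three categories.
from dataclasses import dataclass, field

@dataclass
class Principle:
    category: str   # "structure", "process", or "contents"
    text: str

@dataclass
class EvaluationResult:
    model_name: str
    violations: list = field(default_factory=list)

    @property
    def compliant(self) -> bool:
        return not self.violations

# Hypothetical examples; the real study derived 18 principles.
PRINCIPLES = [
    Principle("structure", "Each model has a unique, unambiguous name."),
    Principle("process", "Changes to a model are versioned and reviewed."),
    Principle("contents", "Value sets are bound to standard terminologies."),
]

def evaluate(model_name: str, satisfied: dict) -> EvaluationResult:
    """Check a clinical information model against each principle.

    `satisfied` maps a principle's text to True/False as judged by reviewers
    (in the study, three experts assessed 30 Clinical Contents Models).
    """
    result = EvaluationResult(model_name)
    for p in PRINCIPLES:
        if not satisfied.get(p.text, False):
            result.violations.append(f"[{p.category}] {p.text}")
    return result

# Example: a model that misses one contents-level principle.
r = evaluate("BloodPressureObservation",
             {PRINCIPLES[0].text: True, PRINCIPLES[1].text: True,
              PRINCIPLES[2].text: False})
print(r.compliant, r.violations)
```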

Automatic Merging of Distributed Topic Maps based on T-MERGE Operator (T-MERGE 연산자에 기반한 분산 토픽맵의 자동 통합)

  • Kim Jung-Min; Shin Hyo-Pil; Kim Hyoung-Joo
    • Journal of KIISE: Software and Applications / v.33 no.9 / pp.787-801 / 2006
  • Ontology merging is the process of integrating two ontologies into a new ontology. How best to do this is a subject of ongoing research in the Semantic Web, data integration, knowledge management systems, and other ontology-related applications. Earlier research on ontology merging, however, has focused on developing effective ontology matching approaches and has largely overlooked analyzing and solving the problems that arise when merging two ontologies given correspondences between them. In this paper, we propose a specific ontology merging process and a generic operator, T-MERGE, for integrating two source ontologies into a new ontology. We also define a taxonomy of merging conflicts, derived from differing representations between the input ontologies, together with a method for detecting and resolving them. The T-MERGE operator encapsulates the detection and resolution of conflicts and the merging of two entities based on given correspondences between them. We define a data structure, MergeLog, for logging the execution of the T-MERGE operator; MergeLog is used to report the detailed results of a merge to users or to recover from errors. For our experiments, we used oriental philosophy ontologies, western philosophy ontologies, the Yahoo western philosophy dictionary, and the Naver philosophy dictionary as input ontologies. The experiments show that the automatic merging module has advantages over manual merging by an expert in terms of time and effort.
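To make the detect-resolve-log flow of a T-MERGE-style operator concrete, here is a simplified sketch. It is not the paper's implementation: the entity structure, the single illustrated conflict type (name mismatch), the prefer-first resolution policy, and the MergeLog shape are all assumptions.

```python
# Simplified sketch of a T-MERGE-style merge step: given a correspondence
# between entities of two ontologies, merge them, detect illustrative
# conflicts, resolve them, and log everything for reporting or recovery.
# The conflict taxonomy and resolution policy here are assumptions.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    properties: dict

@dataclass
class MergeLog:
    entries: list = field(default_factory=list)

    def record(self, message: str) -> None:
        self.entries.append(message)

def t_merge(a: Entity, b: Entity, log: MergeLog) -> Entity:
    """Merge two corresponding entities into one, logging conflicts."""
    if a.name != b.name:
        log.record(f"conflict: name mismatch '{a.name}' vs '{b.name}' "
                   f"-> kept '{a.name}'")          # resolution policy: prefer a
    merged_props = dict(b.properties)
    for key, value in a.properties.items():
        if key in merged_props and merged_props[key] != value:
            log.record(f"conflict: property '{key}' differs -> kept '{value}'")
        merged_props[key] = value
    log.record(f"merged '{a.name}' and '{b.name}'")
    return Entity(a.name, merged_props)

log = MergeLog()
x = Entity("Philosopher", {"label": "철학자", "born": "unknown"})
y = Entity("Thinker", {"label": "philosopher"})
merged = t_merge(x, y, log)
print(merged)
print("\n".join(log.entries))
```

A real conflict taxonomy, as the paper describes, covers more cases than a name mismatch; this sketch only shows where detection, resolution, and logging would sit in the operator.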

Finding Influential Users in the SNS Using Interaction Concept : Focusing on the Blogosphere with Continuous Referencing Relationships (상호작용성에 의한 SNS 영향유저 선정에 관한 연구 : 연속적인 참조관계가 있는 블로고스피어를 중심으로)

  • Park, Hyunjung; Rho, Sangkyu
    • The Journal of Society for e-Business Studies / v.17 no.4 / pp.69-93 / 2012
  • Various influence-related relationships in Social Network Services (SNS) among users, posts, and users-and-posts can be expressed using links. This research evaluates the influence of specific users or posts by analyzing the link structure of the relevant social network graphs in order to identify influential users. We applied the concept of mutual interaction proposed for ranking semantic web resources, rather than the voting notion of PageRank or HITS, to the blogosphere, one of the early SNS. Through many experiments with network models, in which the performance and validity of each alternative approach can be analyzed, we showed the applicability and strengths of our approach. Tuning the link weights of these network models enabled us to control experimental errors arising from link-weight differences and to compare how easily the alternatives could be implemented. An additional example of how to feed the content scores of commercial or spam posts into the graph-based method is also given on a small network model. As a starting point for the study of identifying influential users in SNS, this research is distinctive from previous work in the following respects. First, various influence-related properties that are deemed important but have been disregarded, such as scraping, commenting, subscribing to RSS feeds, and trusting friends, can be considered simultaneously. Second, the framework reflects the general phenomenon in which objects interacting with more influential objects increase their own influence. Third, regarding the extent to which a blogger causes other bloggers to act after him or her as the most important factor of influence, we treat sequential referencing relationships from a viewpoint different from that of PageRank or HITS (Hypertext Induced Topic Selection).
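The core idea is mutual reinforcement over a weighted link graph: an object's influence grows with the influence of the objects interacting with it. The sketch below iterates such a weighted propagation to a normalized fixed point; the example links, their weights, and the update rule are illustrative assumptions and do not reproduce the paper's link types or formulas.

```python
# Sketch of mutual-reinforcement scoring on a weighted directed graph:
# each node's score is repeatedly recomputed from the scores of nodes
# linking to it, weighted by link strength, then normalized. The graph,
# weights, and update rule are illustrative assumptions.
from collections import defaultdict

# edges: (source, target, weight); e.g. scrap/comment/subscribe/trust links
edges = [
    ("user_a", "post_1", 1.0),
    ("user_b", "post_1", 0.5),   # a comment carries less weight (assumption)
    ("post_1", "user_c", 1.0),   # post_1 references user_c
    ("user_c", "post_2", 1.0),
    ("post_2", "user_a", 0.5),
]

nodes = {n for e in edges for n in e[:2]}
incoming = defaultdict(list)
for src, dst, w in edges:
    incoming[dst].append((src, w))

scores = {n: 1.0 for n in nodes}
for _ in range(50):                       # iterate toward convergence
    new_scores = {}
    for n in nodes:
        new_scores[n] = sum(scores[src] * w for src, w in incoming[n])
    norm = sum(new_scores.values()) or 1.0
    scores = {n: s / norm for n, s in new_scores.items()}

for n, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{n}: {s:.3f}")
```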

Personalized Recommendation System for IPTV using Ontology and K-medoids (IPTV환경에서 온톨로지와 k-medoids기법을 이용한 개인화 시스템)

  • Yun, Byeong-Dae; Kim, Jong-Woo; Cho, Yong-Seok; Kang, Sang-Gil
    • Journal of Intelligence and Information Systems / v.16 no.3 / pp.147-161 / 2010
  • As broadcasting and communication have recently converged, communication features have been joined to TV and TV viewing has changed in many ways. IPTV (Internet Protocol Television) provides information services, movie content, and broadcasts over the internet, combining live programs with VOD (Video on Demand), and has become a new business built on communication networks. It has also raised new technical issues, such as imaging technology for the service, networking technology that avoids video interruptions, and security technologies that protect copyright. Through the IPTV network, users can watch the programs they want whenever they want. However, finding programs in IPTV is difficult through either search or menu navigation: menu navigation takes a long time to reach a desired program, search fails when the title, genre, or actors' names are unknown, and entering characters with a remote control is cumbersome. A bigger problem is that users are often unaware of the services available to them. To resolve these difficulties in selecting VOD services in IPTV, a personalized recommendation service is proposed that improves user satisfaction and uses viewing time efficiently. This paper addresses these shortcomings through a filtering and recommendation system that provides programs suited to each individual without wasting time. The proposed recommendation system collects TV program information and, from each user's IPTV viewing records, the user's preferred genres and sub-genres, channels, watched programs, and viewing times. Similarities between programs are compared using a TV program ontology, because the distance between programs can be measured by similarity comparison. The TV program ontology used here is extracted from TV-Anytime metadata, which captures semantic properties, and the ontology expresses contents and features numerically. Vocabulary similarity is determined through WordNet: the words describing the programs are expanded into their upper and lower classes for word-similarity computation, and the average over the descriptive keywords is taken. Based on the resulting distance measure, similar programs are grouped by the K-medoids partitioning method, which divides objects into clusters with similar characteristics. K-medoids selects K representative objects (medoids); each object is tentatively assigned to its nearest medoid to form clusters, and when the initial n objects are partitioned into K clusters, the optimal medoids are found through repeated trials after the initial tentative selection. Through this process, similar programs are clustered. When programs are selected by cluster analysis, weights are applied to the recommendation as follows. Within each cluster, programs near the representative object are recommended to the user; the distance is computed with the same similarity measure and serves as the base score that determines the ranking of recommended programs. A weight derived from the number of programs in the user's watch list is also used: the more watched programs a cluster contains, the higher its weight, which is defined as the cluster weight. From this, representative sub-programs of each cluster are selected and the final ranking of TV programs is determined. However, the cluster-representative TV programs still include errors, so weights reflecting the user's program viewing preferences are added to determine the final ranks, and on this basis the proposed method recommends contents the user prefers. An experiment was carried out in a controlled environment using the proposed method, and the results show its superiority compared to existing approaches.
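The pipeline above combines ontology/WordNet similarity, K-medoids clustering, and cluster weights from the watch list. The toy sketch below covers only the clustering and weighting step on a small precomputed distance matrix; the distances, program names, and the exact scoring formula are assumptions rather than the paper's data or equations.

```python
# Toy sketch of the clustering-and-weighting step: cluster programs with a
# simple K-medoids pass over a precomputed distance matrix, then score each
# program by closeness to its medoid times a cluster weight derived from how
# many watched programs fall in that cluster. All numbers and the scoring
# formula are illustrative assumptions.
import random

programs = ["drama_a", "drama_b", "news_a", "news_b", "doc_a"]
# Symmetric toy distances (smaller = more similar), e.g. from ontology/WordNet.
D = {
    ("drama_a", "drama_b"): 0.1, ("drama_a", "news_a"): 0.8,
    ("drama_a", "news_b"): 0.9, ("drama_a", "doc_a"): 0.6,
    ("drama_b", "news_a"): 0.85, ("drama_b", "news_b"): 0.9,
    ("drama_b", "doc_a"): 0.65, ("news_a", "news_b"): 0.15,
    ("news_a", "doc_a"): 0.5, ("news_b", "doc_a"): 0.55,
}

def dist(x, y):
    return 0.0 if x == y else D.get((x, y), D.get((y, x)))

def k_medoids(items, k, iters=20, seed=0):
    """Partition items around k medoids using the precomputed distances."""
    rng = random.Random(seed)
    medoids = rng.sample(items, k)
    for _ in range(iters):
        clusters = {m: [] for m in medoids}
        for it in items:                       # assign to nearest medoid
            clusters[min(medoids, key=lambda m: dist(it, m))].append(it)
        new_medoids = [min(members, key=lambda c: sum(dist(c, o) for o in members))
                       for members in clusters.values() if members]
        if set(new_medoids) == set(medoids):   # converged
            return clusters
        medoids = new_medoids
    return clusters

watched = ["drama_a", "drama_b"]               # user's viewing history (toy)
clusters = k_medoids(programs, k=2)

scores = {}
for medoid, members in clusters.items():
    cluster_weight = 1 + sum(p in watched for p in members)   # assumption
    for p in members:
        if p not in watched:
            scores[p] = cluster_weight / (1 + dist(p, medoid))

print(sorted(scores.items(), key=lambda kv: -kv[1]))
```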