• Title/Summary/Keyword: Global Semantic Set

6 search results

An Algorithm for Ontology Merging and Alignment using Local and Global Semantic Set (지역 및 전역 의미집합을 이용한 온톨로지 병합 및 정렬 알고리즘)

  • Kim, Jae-Hong;Lee, Sang-Jo
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.41 no.4
    • /
    • pp.23-30
    • /
    • 2004
  • Ontologies play an important role in the Semantic Web by providing well-defined meaning to ontology consumers. But because ontologies are authored in a bottom-up, distributed manner, a large number of overlapping ontologies are created and used for similar domains. Ontology sharing and reuse have therefore become a prominent topic, and ontology merging and alignment are solutions to the problem. Previously proposed ontology merging and alignment algorithms detect conflicts between concepts using only local syntactic information from concept names. They also depend on a semi-automatic approach, which is tedious for ontology engineers. Consequently, the quality of merging and alignment tends to be unsatisfactory. To remedy the defects of the previous algorithms, we propose a new algorithm for ontology merging and alignment that uses the local and global semantic sets of a concept. We evaluated our algorithm on several pairs of ontologies written in OWL, and achieved around 91% precision in merging and alignment. We expect that, with the widespread use of web ontologies, the need for ontology sharing and reuse will grow, and our proposed algorithm can significantly reduce the time required for ontology development. Moreover, our algorithm can easily be applied to fields such as ontology mapping, where semantic information exchange is a requirement.
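The abstract does not define the semantic sets precisely, so the following is only a minimal sketch of the idea of comparing concepts by set overlap rather than by concept-name syntax alone. The set definitions (local = direct neighbors, global = ancestors and descendants) and the weighted Jaccard combination are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch: score two ontology concepts by the overlap of their
# local and global semantic sets instead of comparing concept names.

def jaccard(a, b):
    """Jaccard similarity of two sets (0.0 when both are empty)."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def concept_similarity(local_a, global_a, local_b, global_b, w_local=0.5):
    """Weighted mix of local-set and global-set overlap (assumed weighting)."""
    return (w_local * jaccard(local_a, local_b)
            + (1 - w_local) * jaccard(global_a, global_b))

# Two concepts that share most of their semantic neighborhoods align strongly
# even if their names differ:
sim = concept_similarity({"wheel", "engine"}, {"vehicle", "machine"},
                         {"wheel", "motor"},  {"vehicle", "machine"})
```

A merging pass would then align concept pairs whose similarity exceeds a chosen threshold.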

The State-of-the-art of Discovering New Suppliers to Build a Supply Chain (공급망 형성을 위한 협업기업 발굴방법의 최신 동향 분석)

  • Kim, Kyung-Doc;Jo, Bo-Ram;Shin, Moon-Soo;Ryu, Kwang-Yeol;Cho, Hyun-Bo
    • IE interfaces
    • /
    • v.25 no.1
    • /
    • pp.21-30
    • /
    • 2012
  • In the past, buyers and suppliers found common interests off-line in exhibitions and conferences, or through personal connections. These activities were time-consuming and costly. With the advent of the information era, these activities moved to online marketplaces, where buyers search for suppliers with a set of keywords believed to be representative of their requirements. The fundamental assumption is that all potential candidates are registered in a certain database. Recently, however, buyers have wanted to diversify their suppliers, driven by cost competitiveness and frequent new product development. To this end, instead of choosing suppliers from an existing supplier pool, discovering suppliers from all over the world should be emphasized. To enable buyers to describe their requirements and suppliers to capture their manufacturing capabilities via online marketplaces, the semantic differences between buyers' and suppliers' terms must be resolved. This paper summarizes various supplier discovery frameworks and prototype systems, which can be employed to expose domestic small and medium-sized enterprises to global buyers in the near future.

Applying Rescorla-Wagner Model to Multi-Agent Web Service and Performance Evaluation for Need Awaring Reminder Service (Rescorla-Wagner 모형을 활용한 다중 에이전트 웹서비스 기반 욕구인지 상기 서비스 구축 및 성능분석)

  • Kwon, Oh-Byung;Choi, Keon-Ho;Choi, Sung-Chul
    • Journal of Intelligence and Information Systems
    • /
    • v.11 no.3
    • /
    • pp.1-23
    • /
    • 2005
  • Personalized reminder systems have to identify the user's current needs dynamically and proactively based on the user's current context. However, need-identification methodologies and feasible architectures for personalized reminder systems have so far been rare. Hence, this paper proposes a proactive need awaring mechanism that applies agent and semantic web technologies and an RFID-based context subsystem to a personalized reminder system, one of the supporting systems for a robust ubiquitous service environment. The Rescorla-Wagner model is adopted as the underlying need awaring theory. We have created a prototype system called NAMA(Need Aware Multi-Agent)-RFID to demonstrate the feasibility of the methodology and of the mobile settings framework proposed in this paper. NAMA considers the context, a user profile with preferences, and information about currently available services to discover the user's current needs, and then links the user to a set of services implemented as web services. Moreover, to test whether the proposed system scales, a simulation was performed and the results are described.

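The Rescorla-Wagner learning rule the paper adopts is a standard, well-documented model, so it can be sketched directly: on each trial, the associative strength of every present cue moves toward the maximum strength the outcome supports, in proportion to the prediction error. How NAMA maps cues and outcomes onto contexts and needs is not specified in the abstract; the cue names below are invented for illustration.

```python
# One Rescorla-Wagner trial: dV_X = alpha_X * beta * (lambda - V_total),
# applied to every cue present on the trial.
def rescorla_wagner_step(V, present, alpha, beta, lam):
    """Update associative strengths in V for the cues in `present`."""
    total = sum(V[c] for c in present)   # combined prediction of all cues
    error = lam - total                  # prediction error
    for c in present:
        V[c] += alpha[c] * beta * error
    return V

# Repeated pairings of a context cue with an outcome drive its associative
# strength toward lam, mirroring how a need association would be learned.
V = {"context": 0.0}
alpha = {"context": 0.3}
for _ in range(50):
    rescorla_wagner_step(V, ["context"], alpha, beta=0.5, lam=1.0)
# V["context"] approaches lam = 1.0
```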

Knowledge Representation and Reasoning using Metalogic in a Cooperative Multiagent Environment

  • Kim, Koono
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.7
    • /
    • pp.35-48
    • /
    • 2022
  • In this study, we propose a proof-theoretic method for expressing and reasoning about knowledge in a multiagent environment. Because this method determines logical consequences mechanically, it developed into a core field from early AI research onward. However, since a proposition cannot always be proved from an arbitrary set of closed sentences, the range of expression is limited to sentences in clause form so that logical consequence remains determinable. In addition, the resolution principle, a simple and powerful inference rule applicable only to clause-form sentences, is applied. Since proof theory can be expressed with meta-predicates, it can be extended to the metalogic of proof theory. Metalogic can be superior in practicality and efficiency, based on improved expressive power over the epistemic logic of model theory. To demonstrate this, the semantic method of epistemic logic and the metalogic method of proof theory are each applied to the Muddy Children problem. As a result, we show that expressing and reasoning about knowledge and common knowledge using metalogic in a cooperative multiagent environment is more efficient.
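The resolution principle mentioned above can be shown in a minimal propositional sketch: from clauses {P, Q} and {¬P, R} one may derive the resolvent {Q, R}. The string encoding of literals ("~x" for negation) is an illustrative choice, not the paper's representation.

```python
# Propositional binary resolution over clauses encoded as frozensets of
# string literals, where "~x" denotes the negation of "x".

def negate(lit):
    """Return the complementary literal."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses: cancel one complementary pair,
    then union the remaining literals."""
    out = set()
    for lit in c1:
        if negate(lit) in c2:
            out.add(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

resolvents = resolve(frozenset({"P", "Q"}), frozenset({"~P", "R"}))
# resolvents == {frozenset({"Q", "R"})}
```

Deriving the empty clause by repeated resolution is what establishes unsatisfiability, which is why the paper restricts expression to clause form.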

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, growing demand for big data analysis has driven vigorous development of related technologies and tools. In addition, the development of IT and the rising penetration rate of smart devices are producing a large amount of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis have continuously increased. This means big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each demander of analysis. However, growing interest in big data analysis has activated computer programming education and produced many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading, so big data analysis is expected to be performed by the demanders of analysis themselves. Along with this, interest in various kinds of unstructured data is continually increasing, with much attention focused on text data in particular. The emergence of new web platforms and techniques has brought about mass production of text data and active attempts to analyze it, and the results of text analysis are utilized in many fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as a cluster. It is evaluated as a very useful technique in that it reflects the semantic elements of documents.
Traditional topic modeling is based on the distribution of key terms across the entire document collection. Thus, the entire collection must be analyzed at once to identify the topic of each document. This makes the analysis take a long time when topic modeling is applied to many documents. In addition, it has a scalability problem: processing time increases exponentially with the number of analysis objects. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method supports topic modeling on a large number of documents with limited system resources and can improve processing speed. It can also significantly reduce analysis time and cost, because documents can be analyzed in each location without first combining the analysis objects. Despite these advantages, however, the method has two major problems. First, the relationship between local topics derived from each unit and global topics derived from the entire collection is unclear: local topics can be identified for each document, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology must be established; that is, taking the global topics as the ideal answer, the deviation of each local topic from its global topic needs to be measured. Owing to these difficulties, this method has been studied less than other approaches to topic modeling. In this paper, we propose a topic modeling approach that solves the above two problems.
First, we divide the entire document cluster (global set) into sub-clusters (local sets), and generate a reduced entire document cluster (RGS, reduced global set) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. Along with this, we verify the accuracy of the proposed methodology by detecting whether each document is assigned to the same topic in the global and local results. Using 24,000 news articles, we conducted experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirmed that the proposed methodology provides results similar to topic modeling over the entire collection. We also propose a reasonable method for comparing the results of both approaches.
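The mapping step between local and global topics can be sketched as assigning each local topic to the most similar global (RGS) topic by comparing their term distributions. The abstract does not state which similarity measure the authors use; cosine similarity and the toy term-probability vectors below are assumptions, and in practice the vectors would come from LDA runs on each document sub-cluster.

```python
# Hypothetical sketch of local-to-global topic mapping: each local topic
# (a term-probability vector) is matched to its nearest global topic.
import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def map_local_to_global(local_topics, global_topics):
    """For each local topic, return the index of the closest global topic."""
    return [max(range(len(global_topics)),
                key=lambda g: cosine(lt, global_topics[g]))
            for lt in local_topics]

global_topics = [[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]]  # toy term distributions
local_topics  = [[0.6, 0.3, 0.1], [0.0, 0.3, 0.7]]
mapping = map_local_to_global(local_topics, global_topics)
# mapping == [0, 1]
```

Accuracy could then be measured as the share of documents whose mapped local topic agrees with their global topic assignment.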

A Program Transformational Approach for Rule-Based Hangul Automatic Programming (규칙기반 한글 자동 프로그램을 위한 프로그램 변형기법)

  • Hong, Seong-Su;Lee, Sang-Rak;Sim, Jae-Hong
    • The Transactions of the Korea Information Processing Society
    • /
    • v.1 no.1
    • /
    • pp.114-128
    • /
    • 1994
  • It is very difficult for a nonprofessional programmer in Korea to write a program in a very-high-level language such as V, REFINE, GIST, or SETL, because the semantic primitives of these languages are based on predicate calculus, sets, mappings, or restricted natural language, and it takes time to become familiar with them. In this paper, we suggest a method to reduce these difficulties by programming with declarative constructs, procedural constructs, and aggregate constructs, and we design and implement an experimental knowledge-based automatic programming system called HAPS (Hangul Automatic Program System). The input of HAPS is a specification such as a Hangul abstract algorithm and datatype or Hangul procedural constructs, and its output is a C program. Its operation is based on rule-based program transformation techniques, and its problem area is general. The control structure of HAPS accepts the program specification, transforms it according to the appropriate rule in the rule base, and stores the transformed specification in the global database. HAPS repeats this procedure until the target C program is fully constructed.

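The transform-until-done control loop described for HAPS can be sketched as repeatedly applying rewrite rules to the specification until a fixpoint is reached. The rule base below (mapping a toy English-keyword spec to C fragments) is invented for illustration; the real system works on Hangul specifications with a far richer rule base and structured representations rather than plain strings.

```python
# Illustrative HAPS-style control loop: apply rules from the rule base to
# the specification until no rule changes it, yielding the target C text.

RULES = [
    ("declare integer", "int"),   # hypothetical spec-to-C rewrite rules
    ("write", "printf"),
]

def transform(spec, rules):
    """Rewrite `spec` with `rules` until a fixpoint (no rule fires)."""
    changed = True
    while changed:
        changed = False
        for pattern, replacement in rules:
            new = spec.replace(pattern, replacement)
            if new != spec:
                spec, changed = new, True
    return spec

out = transform("declare integer x; write(x);", RULES)
# out == "int x; printf(x);"
```

A real implementation would match rules against a parsed specification tree and record each intermediate result in the global database, as the abstract describes.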