• Title/Summary/Keyword: 구문 관계 정보 (syntactic relation information)

Ranked Web Service Retrieval by Keyword Search (키워드 질의를 이용한 순위화된 웹 서비스 검색 기법)

  • Lee, Kyong-Ha;Lee, Kyu-Chul;Kim, Kyong-Ok
    • The Journal of Society for e-Business Studies / v.13 no.2 / pp.213-223 / 2008
  • The efficient discovery of services from a large-scale collection of services has become an important issue [7, 24]. We studied a syntactic, rather than semantic, method for Web service discovery. We regarded service discovery as a retrieval problem over the proprietary XML formats in which service descriptions are stored in a registry DB. We modeled services and queries as probabilistic values and devised similarity-based retrieval techniques. The benefits of our approach are as follows. First, our system supports ranked service retrieval by keyword search. Second, it considers both the UDDI data and the WSDL definitions of services at query evaluation time. Last, our technique can be easily implemented on an off-the-shelf DBMS and can also exploit the maintenance features of the DBMS.
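
The abstract names the ingredients of the retrieval model (services and queries as probabilistic values, similarity-based ranking over UDDI/WSDL text) without giving the scoring function. As a minimal sketch only: the Python below ranks services by a smoothed query-likelihood score over each service's combined UDDI/WSDL text; the toy service records, the Jelinek-Mercer smoothing, and the parameter `lam` are my illustrative assumptions, not the paper's actual model.

```python
import math
from collections import Counter

# Hypothetical service records: each concatenates a service's UDDI
# metadata and WSDL text into one bag of words.
services = {
    "StockQuoteService": "stock quote price lookup finance",
    "WeatherService": "weather forecast temperature city",
    "TickerService": "stock ticker finance exchange rate",
}

def score(query, doc_tokens, collection_tokens, lam=0.5):
    """Jelinek-Mercer smoothed query-likelihood score for one service."""
    tf, cf = Counter(doc_tokens), Counter(collection_tokens)
    s = 0.0
    for term in query.split():
        p_doc = tf[term] / len(doc_tokens)
        p_col = cf[term] / len(collection_tokens)
        s += math.log(lam * p_doc + (1 - lam) * p_col + 1e-12)
    return s

all_tokens = [t for text in services.values() for t in text.split()]
query = "stock finance"
ranked = sorted(services, key=lambda k: score(query, services[k].split(), all_tokens),
                reverse=True)
print(ranked)  # services ordered by estimated relevance to the keyword query
```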

A Study on Processing XML Documents (XML 문서 처리에 관한 연구)

  • Kim, Tae Gwon
    • Journal of KIISE / v.43 no.4 / pp.489-496 / 2016
  • XML can effectively express structured or semi-structured data as well as relational databases. XQuery is a query language for retrieving information from such XML documents. In this paper, an XQuery composer is designed and implemented; an API is provided for XQuery processors, and a proper processor is registered with it. The composer shows query results immediately as processed by the registered processor. As the composer contains a parser for XQuery, it can compose XQuery effectively using a set of dialog boxes designed around the XQuery grammar. Each dialog box is associated with a clause region, the region of the parsing tree on which an algebra operator works. The composer makes it easy to build path expressions for an XML document because it graphically displays the element tree derived from the DTD. Path expressions are composed automatically by marking elements in the structural hierarchy and by partially specifying the predicate of an element.
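
To illustrate the kind of automatic path-expression composition the abstract describes (marking elements in a graphically displayed element tree, optionally with a predicate), here is a small Python sketch; the element hierarchy and the `path_expression` helper are hypothetical stand-ins for what the composer would derive from a DTD.

```python
# Hypothetical element hierarchy, as the composer might derive it from a DTD:
# each element is mapped to its parent (None marks the document root).
parent = {
    "title": "book", "author": "book", "year": "book",
    "book": "catalog", "catalog": None,
}

def path_expression(element, predicate=None):
    """Build an XPath-style path expression by walking up the element tree."""
    steps = []
    node = element
    while node is not None:
        steps.append(node)
        node = parent[node]
    path = "/" + "/".join(reversed(steps))
    return f"{path}[{predicate}]" if predicate else path

print(path_expression("author"))               # /catalog/book/author
print(path_expression("book", "year > 2000"))  # /catalog/book[year > 2000]
```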

A Multi-Strategic Mapping Approach for Distributed Topic Maps (분산 토픽맵의 다중 전략 매핑 기법)

  • Kim Jung-Min;Shin Hyo-phil;Kim Hyoung-Joo
    • Journal of KIISE:Software and Applications / v.33 no.1 / pp.114-129 / 2006
  • Ontology mapping is the task of finding semantic correspondences between two ontologies. In order to improve the effectiveness of ontology mapping, we need to consider the characteristics and constraints of the data models used to implement the ontologies. Earlier research on ontology mapping, however, has proven inefficient because those approaches must transform the input ontologies into graphs and take into account all the nodes and edges of the graphs, which requires a great amount of processing time. In this paper, we propose a multi-strategic mapping approach to find correspondences between ontologies based on the syntactic or semantic characteristics and constraints of topic maps. Our multi-strategic approach includes topic name-based, topic property-based, hierarchy-based, and association-based mapping, and it uses a hybrid method in which a combined similarity is derived from the results of the individual mapping strategies. In addition, we do not need to generate cross-pairs of all topics from the two ontologies, because unmatched topic pairs can be pruned using the characteristics and constraints of the topic maps. For our experiments, we used oriental philosophy ontologies, western philosophy ontologies, the Yahoo western philosophy dictionary, and the Yahoo German literature dictionary as input ontologies. Our experiments show that the automatically generated mapping results conform to the outputs generated manually by domain experts, which is very promising for further work.
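
A hedged sketch of the hybrid combination step described above: each individual mapping strategy produces a similarity score for a candidate topic pair, and one combined similarity is derived from them. The weights and the acceptance threshold below are illustrative assumptions; the abstract does not publish the paper's actual weighting.

```python
# Hypothetical per-strategy similarity scores for one candidate topic pair.
scores = {"name": 0.9, "property": 0.6, "hierarchy": 0.7, "association": 0.4}
# Illustrative weights; the paper's actual weighting is not given in the abstract.
weights = {"name": 0.4, "property": 0.2, "hierarchy": 0.25, "association": 0.15}

def combined_similarity(scores, weights):
    """Derive one combined similarity from the individual strategy results."""
    total = sum(weights[s] for s in scores)
    return sum(weights[s] * v for s, v in scores.items()) / total

sim = combined_similarity(scores, weights)
print(f"combined similarity: {sim:.3f}")
if sim >= 0.7:  # acceptance threshold is also an assumption
    print("topic pair accepted as a mapping")
```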

A Development of the Automatic Predicate-Argument Analyzer for Construction of Semantically Tagged Korean Corpus (한국어 의미 표지 부착 말뭉치 구축을 위한 자동 술어-논항 분석기 개발)

  • Cho, Jung-Hyun;Jung, Hyun-Ki;Kim, Yu-Seop
    • The KIPS Transactions:PartB / v.19B no.1 / pp.43-52 / 2012
  • Semantic role labeling is the research area that analyzes the semantic relationships between the elements of a sentence, and it is considered one of the most important semantic-analysis problems in natural language processing, along with word sense disambiguation. However, due to the lack of relevant linguistic resources, Korean semantic role labeling research has not been sufficiently developed. In this paper, we propose an automatic predicate-argument analyzer as a first step toward constructing a Korean PropBank, the kind of resource that has been widely utilized in semantic role labeling. The analyzer has two main components: a semantic lexical dictionary and an automatic predicate-argument extractor. The dictionary holds the case-frame information of verbs, and the extractor is a module that decides the semantic class of each argument of a given predicate in a syntactically annotated corpus. The analyzer developed in this research will support the construction of a Korean PropBank and will ultimately play a major role in Korean semantic role labeling.
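
The extractor assigns a semantic class to each argument of a predicate by consulting the case-frame dictionary. The Python below is a minimal, hypothetical rendering of that lookup; the frame entries, the PropBank-style labels, and the normalized postpositions are assumptions for illustration only.

```python
# Hypothetical case-frame dictionary: predicate -> postposition -> semantic class.
# Postpositions are normalized to one canonical allomorph (이/가 -> 가, 을/를 -> 를).
case_frames = {
    "먹다": {"가": "ARG0", "를": "ARG1"},                 # 'eat': eater, thing eaten
    "주다": {"가": "ARG0", "에게": "ARG2", "를": "ARG1"},  # 'give'
}

def label_arguments(predicate, args):
    """Assign a semantic class to each (head word, postposition) argument."""
    frame = case_frames.get(predicate, {})
    return [(word, frame.get(post, "ARGM")) for word, post in args]

# e.g. the parsed arguments of "철수가 밥을 먹다" ('Cheolsu eats rice')
print(label_arguments("먹다", [("철수", "가"), ("밥", "를")]))
# [('철수', 'ARG0'), ('밥', 'ARG1')]
```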

Korean Probabilistic Dependency Grammar Induction by Morpheme (형태소 단위의 한국어 확률 의존문법 학습)

  • Choi, Seon-Hwa;Park, Hyuk-Ro
    • The KIPS Transactions:PartB / v.9B no.6 / pp.791-798 / 2002
  • In this thesis, we present a new method for inducing a probabilistic dependency grammar (PDG) from a text corpus. As words in Korean are composed of a set of more basic morphemes, various dependency relations exist within a word, so if the induction process does not take these in-word dependency relations into account, the accuracy of the resulting grammar may be poor. In comparison with previous PDG induction methods, the main difference of the proposed method lies in the fact that it takes into account in-word dependency relations as well as inter-word dependency relations. To assess the performance of the proposed method, we conducted an experiment using a manually tagged corpus of 25,000 sentences compiled by the Korea Advanced Institute of Science and Technology (KAIST). The grammar induction produced 2,349 dependency rules. A parser using these dependency rules showed 69.77% accuracy, measured as the number of correct dependency relations relative to the total number of dependency relations in the best-1 parse trees of the sample sentences. The result shows that taking in-word dependency relations into account during grammar induction yields a more accurate dependency grammar.
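
The induced grammar is probabilistic, so each dependency rule presumably carries a relative-frequency estimate. A minimal sketch, assuming simple maximum-likelihood estimation over morpheme-level (head, dependent) pairs; the example pairs are invented.

```python
from collections import Counter

# Invented (head, dependent) pairs harvested from parse trees; morpheme-level
# units (e.g. VV stem, EP ending) let in-word dependencies be counted too.
observed = [("먹/VV", "밥/NNG"), ("먹/VV", "철수/NNP"), ("먹/VV", "밥/NNG"),
            ("었/EP", "먹/VV")]

pair_counts = Counter(observed)
head_counts = Counter(head for head, _ in observed)

def rule_prob(head, dep):
    """Maximum-likelihood estimate P(dependent | head) for one rule."""
    return pair_counts[(head, dep)] / head_counts[head]

print(rule_prob("먹/VV", "밥/NNG"))  # 2 of 3 dependents of 먹/VV -> 0.666...
```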

An Automatic Issues Analysis System using Big-data (빅데이터를 이용한 자동 이슈 분석 시스템)

  • Choi, Dongyeol;Ahn, Eungyoung
    • The Journal of the Korea Contents Association / v.20 no.2 / pp.240-247 / 2020
  • There have been many efforts to understand the trends of rapidly changing IT environments. From a management viewpoint, these days social systems need to be prepared in advance by using Big-data. This research concerns the implementation of an Issue Analysis System for Big-data based on Artificial Intelligence. This paper aims to confirm the feasibility of new Big-data processing technology through the proposed Issue Analysis System. We propose a technique for semantic reasoning and pattern analysis based on AI and show that the proposed method is feasible for handling Big-data. We verify that the proposed method can be useful in dealing with Big-data by applying recent security issues to the system. The experiments show the potential of the proposed method as a base technology for handling Big-data for various purposes.

An Efficient Debugging Method for Java Programs (자바 프로그램을 위한 효율적인 디버깅 방법)

  • 고훈준;유원희
    • Proceedings of the Korea Society for Industrial Systems Conference / 2002.06a / pp.170-176 / 2002
  • Java is a representative object-oriented language used on various platforms and in various fields. The structure of the Java language is simpler than that of traditional procedural languages because of the characteristics of object-oriented languages, but complicated Java programs are still difficult to debug. Debugging has always been a costly part of software development. Syntax errors in Java programs are easily found by current debugging systems, but logical errors are difficult to locate. Traditional techniques for locating logical errors in Java programs are still the conventional methods used for procedural languages. Unfortunately, these traditional methods are often inadequate for isolating specific program errors, and debugger users may spend considerable time debugging sequentially as programs grow large and complicated. Easily locating errors in a Java program is important in software development. In this paper, we apply the algorithmic debugging method, with which users can debug programs easily, to Java programs. This method executes a program, builds an execution tree from the calling relations of its functions, and locates errors on the execution tree. The algorithmic debugging method can therefore reduce the number of debugging steps compared with the conventional sequential method.
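
Algorithmic debugging, as the abstract describes it, executes the program, builds an execution tree from the calling relations, and locates the error on that tree. The sketch below (in Python rather than Java, to keep it short) shows the core search: an oracle judges each call's result, and the first rejected call whose children are all accepted is reported as buggy. The `max3` run is an invented example.

```python
# One node of the execution tree: a function call and its observed result.
class CallNode:
    def __init__(self, call, result, children=()):
        self.call, self.result, self.children = call, result, list(children)

def find_buggy_call(node, oracle):
    """Return the first call whose result the oracle rejects while the
    results of all of its child calls are accepted: that call holds the bug."""
    if oracle(node.call, node.result):  # this call behaved correctly
        return None
    for child in node.children:
        buggy = find_buggy_call(child, oracle)
        if buggy is not None:
            return buggy
    return node  # wrong result, but every child was correct

# Invented run of a broken max3: both max2 calls are right, max3 is not.
tree = CallNode("max3(1,5,3)", 3,
                [CallNode("max2(1,5)", 5), CallNode("max2(5,3)", 5)])
oracle = lambda call, result: not (call == "max3(1,5,3)" and result == 3)
print(find_buggy_call(tree, oracle).call)  # max3(1,5,3)
```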

Context-based Web Application Design (컨텍스트 기반의 웹 애플리케이션 설계 방법론)

  • Park, Jin-Soo
    • The Journal of Society for e-Business Studies / v.12 no.2 / pp.111-132 / 2007
  • Developing and managing Web applications are more complex than ever because of their growing functionalities, advancing Web technologies, increasing demands for integration with legacy applications, and changing content and structure. All these factors call for a more inclusive and comprehensive Web application design method. In response, we propose a context-based Web application design methodology that is based on several classification schemes including a Webpage classification, which is useful for identifying the information delivery mechanism and its relevant Web technology; a link classification, which reflects the semantics of various associations between pages; and a software component classification, which is helpful for pinpointing the roles of various components in the course of design. The proposed methodology also incorporates a unique Web application model comprised of a set of information clusters called compendia, each of which consists of a theme, its contextual pages, links, and components. This view is useful for modular design as well as for management of ever-changing content and structure of a Web application. The proposed methodology brings together all the three classification schemes and the Web application model to arrive at a set of both semantically cohesive and syntactically loose-coupled design artifacts.
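
One way to picture the paper's "compendium" unit of design: a theme bundled with its contextual pages, links, and software components. The data structure below is my hypothetical rendering, not the paper's notation; the field contents are invented examples.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Compendium:
    """One information cluster: a theme with its pages, links, and components."""
    theme: str
    pages: List[str] = field(default_factory=list)        # classified Web pages
    links: List[Tuple[str, str, str]] = field(default_factory=list)  # (src, dst, link type)
    components: List[str] = field(default_factory=list)   # classified components

catalog = Compendium(
    theme="Product catalog",
    pages=["product_list.html", "product_detail.html"],
    links=[("product_list.html", "product_detail.html", "drill-down")],
    components=["SearchService", "CatalogDAO"],
)
print(catalog.theme, "-", len(catalog.pages), "pages,", len(catalog.links), "links")
```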

A Method for Detection and Correction of Pseudo-Semantic Errors Due to Typographical Errors (철자오류에 기인한 가의미 오류의 검출 및 교정 방법)

  • Kim, Dong-Joo
    • Journal of the Korea Society of Computer and Information / v.18 no.10 / pp.173-182 / 2013
  • Typographical mistakes made while drafting electronic documents are more common than any other type of error. The majority of these mistyping errors remain simple typos, but a considerable number of them develop into grammatical errors and semantic errors. Among these, the pseudo-semantic errors caused by typographical errors show more noticeable discrepancies with the senses of the surrounding context words in a sentence than pure semantic errors do. Because of this prominent contextual discrepancy, such errors can be detected and corrected by a simple algorithm based on co-occurrence frequency. I propose a detection and correction method based on co-occurrence frequency for semantic errors caused by typos. In the proposed method, co-occurrence frequency is counted only for words in an immediate dependency relation, and the cosine similarity measure is used to detect pseudo-semantic errors. From the presented experimental results, the proposed method is expected to improve the detection rate of the overall proofreading system by about 2~3%.
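
The detection step compares co-occurrence profiles, counted only over immediate dependency relations, using cosine similarity. A minimal sketch under those assumptions; the Korean words ('사과' apple vs. the typo candidate '사고' accident) and the counts are invented to show how the better-fitting candidate wins.

```python
import math
from collections import Counter

# Invented co-occurrence vectors, counted only over word pairs standing in an
# immediate dependency relation, as the method restricts them.
cooc = {
    "사과": Counter({"먹다": 12, "깎다": 7, "달다": 5}),    # 'apple'
    "사고": Counter({"나다": 15, "당하다": 9, "먹다": 1}),  # 'accident'
}

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Dependency context actually observed around the (possibly mistyped) word.
context = Counter({"먹다": 1})
for candidate in ("사과", "사고"):
    print(candidate, round(cosine(context, cooc[candidate]), 3))
# the candidate whose co-occurrence profile fits the context better wins
```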

On "Dimension" Nouns In Korean (한국어 "크기" 명사 부류에 대하여)

  • Song, Kuen-Young;Hong, Chai-Song
    • Annual Conference on Human and Language Technology / 2001.10d / pp.260-266 / 2001
  • Based on the theory of 'classes d'objets' (object classes) for the semantico-syntactic classification of French nouns, this paper establishes semantic and formal criteria for the class of "dimension" nouns in Korean and explores how they can be exploited in natural language processing. Some Korean nouns express the magnitude that various attributes of an object or phenomenon take on in a particular dimension; examples include '길이' (length), '깊이' (depth), '넓이' (area), '높이' (height), '키' (stature), '무게' (weight), '온도' (temperature), and '기온' (air temperature). These nouns are closely related to the notion of measurement and share certain syntactic properties: they occur in fixed syntactic patterns with verbs expressing measurement, such as '측정하다' (to measure) and '재다' (to gauge), and with quantity expressions. In this paper, Korean nouns satisfying these conditions are called "dimension" nouns, and verbs such as '측정하다' and '재다' that characteristically combine with them are called the appropriate predicates of the class. "Dimension" nouns can further be divided into subtypes according to the kinds of unit nouns they can combine with and the kinds of degree adjectives they agree with. Therefore, if the class of "dimension" nouns is delimited externally, mainly by their syntactic combination with predicates, and the detailed syntactic properties of the individual nouns in the class are encoded in an electronic dictionary, a comprehensive semantico-syntactic classification and description of Korean "dimension" nouns becomes possible. Research on "dimension" nouns must also proceed hand in hand with research on the class of unit nouns that characterize them. This study provides rigorous, formal criteria for delimiting and classifying Korean "dimension" nouns, together with a systematic presentation of their semantico-syntactic information. This information can be applied to Korean automatic processing, where it can be used immediately in the automatic analysis and generation of constructions containing "dimension" nouns; it is also being incorporated directly into the Sejong electronic dictionary currently under construction.
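
The electronic-dictionary encoding the abstract calls for could record, per "dimension" noun, its appropriate predicates, compatible unit nouns, and agreeing degree adjectives. The entry format below is a hypothetical sketch of such a record, not the Sejong dictionary's actual schema.

```python
# Hypothetical electronic-dictionary entry for one "dimension" noun, recording
# its appropriate predicates, compatible unit nouns, and degree adjectives.
dimension_nouns = {
    "길이": {  # 'length'
        "predicates": ["측정하다", "재다"],     # 'measure', 'gauge'
        "unit_nouns": ["미터", "센티미터"],     # 'meter', 'centimeter'
        "degree_adjectives": ["길다", "짧다"],  # 'long', 'short'
    },
}

def is_dimension_use(noun, verb):
    """Test whether a noun-verb pair matches the class's appropriate predicates."""
    entry = dimension_nouns.get(noun)
    return bool(entry) and verb in entry["predicates"]

print(is_dimension_use("길이", "재다"))  # True
print(is_dimension_use("길이", "먹다"))  # False
```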
