• Title/Summary/Keyword: Semantic Constraints


Change Acceptable In-Depth Searching in LOD Cloud for Efficient Knowledge Expansion (효과적인 지식확장을 위한 LOD 클라우드에서의 변화수용적 심층검색)

  • Kim, Kwangmin; Sohn, Yonglak
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.171-193 / 2018
  • The LOD (Linked Open Data) cloud is a practical implementation of the semantic web. We suggest a new method that provides identity links conveniently in the LOD cloud and allows changes in LODs to be reflected in search results without omissions. An LOD publishes detailed descriptions of entities in RDF triple form. An RDF triple is composed of a subject, a predicate, and an object, and presents a detailed description of an entity. Links in the LOD cloud, called identity links, are realized by asserting that entities of different RDF triples are identical. Currently, an identity link is provided by explicitly creating a link triple whose subject and object are associated with the source and target entities; link triples are then appended to the LOD. With identity links, knowledge acquired from one LOD can be expanded with knowledge from other LODs. The goal of the LOD cloud is to provide users with this opportunity for knowledge expansion. Appending link triples to an LOD, however, is seriously difficult because identity links between entities must be discovered one by one despite the enormous scale of the LOD cloud. Newly added entities cannot be reflected in search results until identity links heading for them are serialized and published to the LOD cloud. Instead of creating enormous numbers of identity links, we propose that each LOD prepare its own link policy. A link policy specifies a set of target LODs to link to and the constraints necessary to discover identity links to entities in those target LODs. On searching, it becomes possible to access newly added entities and reflect them in search results without omissions by referencing the link policies. A link policy specifies a set of predicate pairs for discovering identity between associated entities in the source and target LODs. For the link policy specification, we suggest a set of vocabularies that conform to RDFS and OWL. Identity between entities is evaluated according to the similarity of the source and target entities' objects associated with a predicate pair in the link policy. We implemented a system, the Change Acceptable In-Depth Searching System (CAIDS). With CAIDS, a user's search request starts from the depth_0 LOD, i.e., surface searching. Referencing the link policies of LODs, CAIDS proceeds to in-depth searching over the LODs of the next depths. To supplement the identity links derived from link policies, CAIDS uses explicit link triples as well. CAIDS's in-depth searching progresses by following the identity links. The content of an entity obtained from the depth_0 LOD is expanded with the contents of entities of other LODs that have been discovered to be identical to the depth_0 entity. Expanding the content of a depth_0 entity without the user being aware of those other LODs realizes knowledge expansion, the goal of the LOD cloud. The more identity links in the LOD cloud, the wider the content expansion. We suggest a new way to create identity links abundantly and supply them to the LOD cloud. Experiments with CAIDS were performed against the DBpedia LODs of Korea, France, Italy, Spain, and Portugal. They show that CAIDS provides appropriate expansion ratios and inclusion ratios as long as the degree of similarity between source and target objects is 0.8-0.9. The expansion ratio, for each depth, is the ratio of the entities discovered at that depth to the entities of the depth_0 LOD. The inclusion ratio, for each depth, is the ratio of the entities discovered only with explicit links to the entities discovered only with link policies.
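To make the link-policy idea concrete, here is a minimal Python sketch (using rdflib) of similarity-based identity discovery between a source and a target entity. The predicate pair, the entities, and the `similarity` helper are hypothetical illustrations, not the paper's actual CAIDS vocabulary or similarity measure:

```python
# A minimal sketch of link-policy-driven identity discovery over two
# in-memory RDF graphs; entities and predicate pair are hypothetical.
from difflib import SequenceMatcher

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDFS


def similarity(a: str, b: str) -> float:
    """String similarity in [0, 1]; the paper's measure may differ."""
    return SequenceMatcher(None, a, b).ratio()


def discover_identity(src: Graph, tgt: Graph,
                      src_entity: URIRef, tgt_entity: URIRef,
                      src_pred: URIRef, tgt_pred: URIRef,
                      threshold: float = 0.8) -> bool:
    """Assert owl:sameAs when the objects paired by the link policy's
    predicate pair are similar enough (0.8-0.9 per the experiments)."""
    for src_obj in src.objects(src_entity, src_pred):
        for tgt_obj in tgt.objects(tgt_entity, tgt_pred):
            if similarity(str(src_obj), str(tgt_obj)) >= threshold:
                src.add((src_entity, OWL.sameAs, tgt_entity))
                return True
    return False


# Toy usage: two DBpedia-style entities whose labels nearly match.
KO = Namespace("http://ko.dbpedia.org/resource/")
FR = Namespace("http://fr.dbpedia.org/resource/")

src_g, tgt_g = Graph(), Graph()
src_g.add((KO.Seoul, RDFS.label, Literal("Seoul")))
tgt_g.add((FR.Seoul, RDFS.label, Literal("Séoul")))
print(discover_identity(src_g, tgt_g, KO.Seoul, FR.Seoul,
                        RDFS.label, RDFS.label))  # -> True
```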
With similarity degrees under 0.8, expansion becomes excessive and contents become distorted. A similarity degree of 0.8-0.9 also yields an appropriate number of retrieved RDF triples. The experiments further evaluated the confidence degree of contents expanded by in-depth searching. The confidence degree of content is directly coupled with the identity ratio of an entity, which means the degree of identity to the corresponding entity of the depth_0 LOD. The identity ratio of an entity is obtained by multiplying the source LOD's confidence by the source entity's identity ratio. By tracing identity links in advance, an LOD's confidence is evaluated according to the number of identity links incoming to the entities in that LOD. While evaluating the identity ratio, the concept of identity agreement, which means that multiple identity links head to a common entity, is also considered. With identity agreement, the experimental results show that the identity ratio decreases as depth deepens but rebounds as the depth deepens further. For each entity, as the number of identity links increases, the identity ratio rebounds earlier and finally reaches 1. We found that more than 8 identity links per entity would lead users to trust the expanded contents. The link-policy-based in-depth searching method we propose is expected to contribute abundant identity links to the LOD cloud.
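Stated as a recurrence (the notation is ours, inferred from the description above; the identity ratio at depth 0 is taken to be 1):

```latex
% Identity ratio of an entity e_d reached at depth d from entity e_{d-1}:
\[
  \mathrm{identity}(e_d)
    = \mathrm{confidence}(\mathrm{LOD}_{d-1}) \times \mathrm{identity}(e_{d-1}),
  \qquad
  \mathrm{identity}(e_0) = 1 .
\]
```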

Prosodic Phrasing and Focus in Korea

  • Baek, Judy Yoo-Kyung
    • Proceedings of the KSPS conference / 1996.10a / pp.246-246 / 1996
  • Purpose: Some properties of prosodic phrasing and some acoustic and phonological effects of contrastive focus on the tonal pattern of Seoul Korean are explored, based on a brief experiment analyzing the fundamental frequency (=F0) contour of the author's speech. Data Base and Analysis Procedures: The examples were chosen to contain mostly nasal and liquid consonants, since it is difficult to track the formants in stops and fricatives during their consonantal intervals, and stops may yield an unwanted increase in the F0 value due to their burst into the following vowel. All examples were recorded three times, and the spectrum of the most stable repetition was generated, from which the F0 contour of each sentence was obtained; peaks with a value higher than 250 Hz were interpreted as a high tone (=H). The result is then discussed within the prosodic hierarchy framework of Selkirk (1986) and compared with the tonal pattern of the Northern Kyungsang (=N.K.) dialect of Korean reported in Kenstowicz & Sohn (1996). Prosodic Phrasing: In N.K. Korean, H never appears on both the object and the verb in a neutral sentence, which indicates that the object and the verb form a single Phonological Phrase (=$\phi$), given that there is only one pitch peak for each $\phi$. However, Seoul Korean shows that both the object and the verb have an H of their own, indicating that they are not contained in one $\phi$. This violates the Optimality-Theoretic constraint Wrap-XP (=Enclose a lexical head and its arguments in one $\phi$), while N.K. Korean obeys the constraint by grouping a VP in a single $\phi$. This asymmetry can be resolved through a constraint that favors the separate grouping of each lexical category and is ranked higher than Wrap-XP in Seoul Korean but vice versa in N.K. Korean: Align-$x^{lex}$ (=Align the left edge of a lexical category with that of a $\phi$).
(1) nuna-ka manll-ll mEk-nIn-ta ('sister-NOM garlic-ACC eat-PRES-DECL')
  a. (LLH) (LLH) (HLL) ----Seoul Korean
  b. (LLH) (LLL LHL) ----N.K. Korean
Focus and Phrasing: Two major effects of contrastive focus on phonological phrasing are found in Seoul Korean: (a) the peak of an Intonational Phrase (=IP) falls on the focused element; and (b) focus has the effect of deleting all the following prosodic structure. A focused element always attracts the peak of the IP, showing an increase of approximately 30 Hz compared with the peak of a non-focused IP. When a subject is focused, no H appears on either the object or the verb, and a focused object is never followed by a verb with H. The post-focus deletion of prosodic boundaries is forced through the interaction of StressFocus (=If F is a focus and DF is its semantic domain, the highest prominence in DF will be within F) and Rightmost-IP (=The peak of an IP projects from the rightmost $\phi$). First, StressFocus requires the peak of the IP to fall on the focused element. Then, to avoid violating Rightmost-IP, all the boundaries after the focused element must be deleted, minimizing the number of $\phi$'s intervening from the right edge of the IP. (2) (omitted) Conclusion: In general, there seem to be no direct alignment constraints between the syntactically focused element and the edge of the $\phi$ determined in phonology; all the alignment effects come from a single requirement that the peak of the IP projects from the rightmost $\phi$, as proposed in Truckenbrodt (1995).
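As a rough illustration of the analysis procedure, the fragment below labels local F0 peaks above the 250 Hz threshold as H, per the description above. The contour values are invented and the peak-picking is simplified to local maxima; the author's actual procedure worked from spectra:

```python
# A simplified sketch of the tone-labeling step: local F0 maxima above
# 250 Hz count as high tones (H). The contour values are invented.
def label_high_tones(f0_contour, threshold_hz=250.0):
    """Return indices of local F0 peaks interpreted as H."""
    peaks = []
    for i in range(1, len(f0_contour) - 1):
        prev_f0, f0, next_f0 = f0_contour[i - 1 : i + 2]
        if f0 > threshold_hz and f0 >= prev_f0 and f0 >= next_f0:
            peaks.append(i)
    return peaks

# Toy contour (Hz) sampled along a phrase: one peak above threshold.
contour = [180.0, 210.0, 265.0, 240.0, 200.0, 230.0, 220.0]
print(label_high_tones(contour))  # -> [2]
```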


A Study on Knowledge Entity Extraction Method for Individual Stocks Based on Neural Tensor Network (뉴럴 텐서 네트워크 기반 주식 개별종목 지식개체명 추출 방법에 관한 연구)

  • Yang, Yunseok; Lee, Hyun Jun; Oh, Kyong Joo
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.25-38 / 2019
  • Selecting high-quality information that meets users' interests and needs from the overflowing mass of content is becoming ever more important as content generation continues. In this flood of information, efforts are being made to better reflect the user's intention in search results, rather than treating an information request as a simple string. Large IT companies such as Google and Microsoft also focus on developing knowledge-based technologies, including search engines, that provide users with satisfaction and convenience. Finance, in particular, is one of the fields where text data analysis is expected to be useful and promising, because new information is constantly generated, and the earlier the information arrives, the more valuable it is. Automatic knowledge extraction can be effective in areas such as finance where the information flow is vast and new information continues to emerge. However, automatic knowledge extraction faces several practical difficulties. First, it is hard to build corpora from different fields with the same algorithm, and it is difficult to extract good-quality triples. Second, producing labeled text data manually becomes harder as the extent and scope of knowledge grow and patterns are constantly updated. Third, performance evaluation is difficult due to the characteristics of unsupervised learning. Finally, defining the problem of automatic knowledge extraction is not easy because of the ambiguous conceptual characteristics of knowledge. To overcome the limits described above and improve the semantic performance of stock-related information search, this study extracts knowledge entities using a neural tensor network and evaluates their performance. Unlike other studies, the purpose of this study is to extract knowledge entities related to individual stock items. Various but relatively simple data processing methods are applied in the presented model to solve the problems of previous research and to enhance the effectiveness of the model. From these processes, this study has three significances. First, it presents a practical and simple automatic knowledge extraction method that can be applied directly. Second, it shows that performance evaluation is possible through a simple problem definition. Finally, it increases the expressiveness of the knowledge by generating input data on a sentence basis without complex morphological analysis. The results of the empirical analysis and an objective performance evaluation method are also presented. For the empirical study confirming the usefulness of the presented model, experts' reports about 30 individual stocks, the top 30 items by frequency of publication from May 30, 2017 to May 21, 2018, are used. The total number of reports is 5,600; 3,074 reports, about 55% of the total, are designated as the training set, and the remaining 45% as the testing set. Before constructing the model, all reports in the training set are classified by stock, and their entities are extracted using a named entity recognition tool (KKMA). For each stock, the top 100 entities by appearance frequency are selected and vectorized using one-hot encoding. After that, using the neural tensor network, one score function per stock is trained.
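A per-stock score function can be sketched with a common neural-tensor-network scoring form, a bilinear tensor term plus a linear term fed through a nonlinearity. This is a minimal PyTorch sketch; the dimensions, slice count, initialization, and the per-stock embedding are illustrative assumptions, not the paper's settings:

```python
# A minimal sketch of one per-stock NTN score function over one-hot entity
# vectors; dimensions, slice count, and initialization are illustrative.
import torch
import torch.nn as nn


class NTNScorer(nn.Module):
    """Scores an entity vector against one stock's learned representation
    using a bilinear tensor term plus a linear term (a common NTN form)."""

    def __init__(self, dim: int = 100, slices: int = 4):
        super().__init__()
        self.W = nn.Parameter(torch.randn(slices, dim, dim) * 0.01)  # bilinear tensor
        self.V = nn.Parameter(torch.randn(slices, 2 * dim) * 0.01)   # linear weights
        self.b = nn.Parameter(torch.zeros(slices))                   # slice biases
        self.u = nn.Parameter(torch.randn(slices))                   # output weights
        self.stock_vec = nn.Parameter(torch.randn(dim) * 0.01)       # stock embedding

    def forward(self, entity: torch.Tensor) -> torch.Tensor:
        e, s = entity, self.stock_vec
        bilinear = torch.einsum("d,kde,e->k", e, self.W, s)  # e^T W_k s, per slice k
        linear = self.V @ torch.cat([e, s]) + self.b
        return self.u @ torch.tanh(bilinear + linear)        # scalar score


# One-hot encoding of one entity among a stock's top-100 entities.
one_hot = torch.zeros(100)
one_hot[7] = 1.0
print(NTNScorer()(one_hot).item())
```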
Thus, when a new entity from the testing set appears, its score can be calculated with every score function, and the stock whose function yields the highest score is predicted as the item related to that entity. To evaluate the presented model, we confirm its predictive power and determine whether the score functions are well constructed by calculating the hit ratio over all reports in the testing set. As a result of the empirical study, the presented model shows 69.3% hit accuracy on the testing set of 2,526 reports. This hit ratio is meaningfully high despite some constraints on the research. Looking at the prediction performance per stock, only three stocks, LG ELECTRONICS, KiaMtr, and Mando, show performance far below average; this result may be due to interference from other similar items and the generation of new knowledge. In this paper, we propose a methodology to find the key entities, or combinations of them, needed to search for related information in accordance with the user's investment intention. Graph data is generated using only the named entity recognition tool and applied to the neural tensor network without a learning corpus or word vectors for the field. The empirical test confirms the effectiveness of the presented model as described above. However, some limits remain to be addressed; most notably, the especially poor performance on a few stocks shows the need for further research. Finally, through the empirical study, we confirmed that the learning method presented here can be used to match new text information semantically with the related stocks.
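Prediction and evaluation then reduce to an argmax over the per-stock score functions. A hedged sketch follows; the scorer dictionary and labeled test pairs are placeholders for the paper's trained functions and report-derived entities:

```python
# A sketch of prediction and hit-ratio evaluation: score a test entity with
# every stock's score function and pick the argmax; names are placeholders.
import torch


def predict_stock(entity_vec: torch.Tensor, scorers: dict) -> str:
    """Return the stock whose score function gives the highest score."""
    return max(scorers, key=lambda stock: scorers[stock](entity_vec).item())


def hit_ratio(test_pairs, scorers: dict) -> float:
    """Fraction of (entity_vec, true_stock) pairs predicted correctly."""
    hits = sum(predict_stock(vec, scorers) == stock
               for vec, stock in test_pairs)
    return hits / len(test_pairs)


# Toy usage with two fake score functions.
fake_scorers = {
    "StockA": lambda v: torch.tensor(float(v[0])),
    "StockB": lambda v: torch.tensor(float(v[1])),
}
pairs = [(torch.tensor([1.0, 0.0]), "StockA"),
         (torch.tensor([0.0, 1.0]), "StockB")]
print(hit_ratio(pairs, fake_scorers))  # -> 1.0
```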