• Title/Summary/Keyword: verb

Search Results: 342

Functional MRI of Language Area (언어영역의 기능적 자기공명영상)

  • 유재욱;나동규;변홍식;노덕우;조재민;문찬홍;나덕렬;장기현
    • Investigative Magnetic Resonance Imaging / v.3 no.1 / pp.53-59 / 1999
  • Purpose: To evaluate the usefulness of functional MR imaging (fMRI) for language mapping and the determination of language lateralization. Materials and Methods: Functional maps of the language area were obtained during word generation tasks and a decision task in ten volunteers (7 right-handed, 3 left-handed). MR examinations were performed on a 1.5T scanner with the EPI BOLD technique. Each task consisted of three resting periods and two activation periods, each lasting 30 seconds; total acquisition time was 162 sec. The SPM program was used for post-processing of the images. Statistical comparisons were performed using t-statistics on a pixel-by-pixel basis after global normalization by ANCOVA. Activation areas were analyzed topographically (p < 0.001), and activated pixels in each hemisphere were compared quantitatively by a lateralization index. Results: Significant activation signals were demonstrated in 9 of the 10 volunteers. Activation signals were found in the premotor and motor cortices and the inferior frontal, inferior parietal, and mid-temporal lobes during the stimulation tasks. In the seven right-handed volunteers, activation of the language areas was lateralized to the left side. The verb generation task produced stronger activation in the language areas and a higher lateralization index than the noun generation task or the decision task. Conclusion: fMRI could be a useful non-invasive method for language mapping and the determination of language dominance. (A minimal sketch of the lateralization-index calculation follows this entry.)

  • PDF
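
The abstract does not spell out the lateralization-index formula; a common definition is LI = (L − R) / (L + R), where L and R are the counts of suprathreshold voxels in the left and right hemispheres, so LI > 0 suggests left-hemisphere dominance. The Python sketch below is a minimal illustration under that assumption; the masks, threshold, and random map are toy stand-ins, not the paper's data.

```python
import numpy as np

def lateralization_index(stat_map, threshold, left_mask, right_mask):
    """Count suprathreshold voxels per hemisphere and compute
    LI = (L - R) / (L + R); LI > 0 suggests left-hemisphere dominance."""
    active = stat_map > threshold
    left = int(np.count_nonzero(active & left_mask))
    right = int(np.count_nonzero(active & right_mask))
    if left + right == 0:
        return 0.0
    return (left - right) / (left + right)

# Toy example: a random 3D "statistical map" split down the midline.
rng = np.random.default_rng(0)
stat_map = rng.normal(size=(64, 64, 32))
left_mask = np.zeros(stat_map.shape, dtype=bool)
left_mask[:32] = True          # first half of the x-axis plays the left hemisphere
right_mask = ~left_mask
print(lateralization_index(stat_map, threshold=3.1, left_mask=left_mask, right_mask=right_mask))
```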

The Method of Using the Automatic Word Clustering System for the Evaluation of Verbal Lexical-Semantic Network (동사 어휘의미망 평가를 위한 단어클러스터링 시스템의 활용 방안)

  • Kim Hae-Gyung;Yoon Ae-Sun
    • Journal of the Korean Society for Library and Information Science / v.40 no.3 / pp.175-190 / 2006
  • For the past several years there has been much interest in lexical-semantic networks. However, it seems very difficult to evaluate their effectiveness and correctness and to devise methods for applying them to various problem domains. In order to offer fundamental ideas about how to evaluate and utilize lexical-semantic networks, we developed two automatic word clustering systems, called system A and system B. 68,455,856 words were used to train both systems. We compared the clustering results of system A with those of system B, which is extended with the lexical-semantic network: system B reconstructs the feature vectors using elements of the lexical-semantic network of 3,656 '-ha' verbs. The target data is the multilingual 'WordNet-CoroNet'. When we compared the accuracy of systems A and B, we found that system B achieved an accuracy of 46.6%, better than system A's 45.3%. (A hedged sketch of this feature-vector extension follows this entry.)
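
The abstract describes extending word feature vectors with elements drawn from a lexical-semantic network before clustering. The sketch below only illustrates that idea under toy assumptions: the verbs, co-occurrence features, and semantic classes are made up, and scikit-learn's KMeans stands in for the paper's clustering systems.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy base features (e.g. co-occurrence counts) for a handful of verbs,
# plus extra columns derived from a lexical-semantic network (class membership).
verbs = ["study-hada", "research-hada", "eat-hada", "drink-hada"]
base_features = np.array([
    [3, 0, 1, 0],
    [2, 1, 1, 0],
    [0, 4, 0, 2],
    [0, 3, 1, 3],
], dtype=float)
semantic_features = np.array([
    [1, 0],   # cognition class
    [1, 0],
    [0, 1],   # ingestion class
    [0, 1],
], dtype=float)

def cluster(features, k=2):
    """Cluster the verbs and return the cluster label per verb."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)

print("system A (base only):     ", cluster(base_features))
print("system B (base + network):", cluster(np.hstack([base_features, semantic_features])))
```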

Analyzing the Sentence Structure for Automatic Identification of Metadata Elements based on the Logical Semantic Structure of Research Articles (연구 논문의 의미 구조 기반 메타데이터 항목의 자동 식별 처리를 위한 문장 구조 분석)

  • Song, Min-Sun
    • Journal of the Korean Society for Information Management / v.35 no.3 / pp.101-121 / 2018
  • This study proposes a method of sentence analysis by which sentences can be automatically identified and processed as the appropriate items in a system, according to the composition of the sentences contained in the data corresponding to the logical semantic-structure metadata of research papers. To this end, the structure of sentences corresponding to 'Research Objectives' and 'Research Outcomes' among the semantic-structure metadata was analyzed in terms of the number of words, the types of linking words, the roles of frequently occurring words, and the types of verb endings. As a result, the number of words in the sentences was 38 in 'Research Objectives' and 212 in 'Research Outcomes'. The linking-word types in 'Research Objectives' occurred in the order Causality, Sequence, Equivalence, and In-other-words/Summary, while those in 'Research Outcomes' appeared in the order Causality, Equivalence, Sequence, and In-other-words/Summary. Analyzed words such as '역할(Role)', '요인(Factor)', and '관계(Relation)' played similar roles in both the objective and outcome parts, but the role of '연구(Study)' differed slightly. Finally, the verb endings that appeared most frequently were '~고자' and '~였다' in 'Research Objectives', and '~었다', '~있다', and '~였다' in 'Research Outcomes'. This study is significant as fundamental research that can be used to automatically identify and input metadata elements reflecting the common logical semantics of research papers, in order to support researchers' scholarly sensemaking. (A minimal ending-counting sketch follows this entry.)
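
As a rough illustration of the verb-ending analysis described above, the sketch below tallies sentence-final endings such as '~고자' and '~였다'. The ending list, tokenization, and example sentences are assumptions for the sketch, not the paper's actual procedure.

```python
from collections import Counter

# Endings highlighted in the abstract; this is a toy subset, not the full inventory.
ENDINGS = ["고자", "였다", "었다", "있다"]

def count_endings(sentences):
    """Count which of the listed endings each sentence finishes with."""
    counts = Counter()
    for sent in sentences:
        sent = sent.strip().rstrip(".")
        for ending in ENDINGS:
            if sent.endswith(ending):
                counts[ending] += 1
                break
    return counts

objectives = ["본 연구는 메타데이터 항목을 자동으로 식별하고자", "분석 방법을 제안하였다"]
print(count_endings(objectives))  # Counter({'고자': 1, '였다': 1})
```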

An Emotion Scanning System on Text Documents (텍스트 문서 기반의 감성 인식 시스템)

  • Kim, Myung-Kyu;Kim, Jung-Ho;Cha, Myung-Hoon;Chae, Soo-Hoan
    • Science of Emotion and Sensibility / v.12 no.4 / pp.433-442 / 2009
  • People increasingly tend to buy products through the Internet rather than from stores. Some consumers leave feedback online, such as reviews, replies, comments, and blog posts, after purchasing products, and people also tend to gather information through the Internet. Companies and public institutions therefore face a situation in which they need to collect and analyze reviews and public opinion, because many consumers are interested in others' opinions when they are about to make a purchase. However, most reviews on the web are numerous, short, and redundant. Under these circumstances, emotion scanning systems for text documents on the web are gaining attention. For extracting a writer's opinions or subjective ideas from text, labeled resources such as GI (General Inquirer) and LKB (Lexical Knowledge Base of near-synonym differences) exist for English, but no such resource is yet available for Korean. In this paper, we labeled positive, negative, and neutral attributes for four parts of speech (noun, adjective, verb, and adverb) in a Korean dictionary. We extracted construction patterns of emotional words and relationships among words in sentences from a large training set, and learned them. Based on this knowledge, comments and reviews about products are classified into two polarity classes, positive and negative, using SO-PMI, with the optimal condition found from combinations of the four parts of speech. Lastly, the system provides a flexible user interface for adding or editing the emotional words, the construction patterns related to emotions, and the relationships among words. (A hedged SO-PMI sketch follows this entry.)

  • PDF
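
SO-PMI is commonly computed as the sum of a word's pointwise mutual information with a set of positive seed words minus its PMI with negative seeds; the sign then gives the polarity. The sketch below is a minimal illustration under that assumption, with toy English documents, seed lists, and smoothing; it is not the paper's Korean lexicon or its optimal POS combination.

```python
import math

POSITIVE_SEEDS = {"good", "excellent"}
NEGATIVE_SEEDS = {"bad", "poor"}

docs = [
    "battery life is good and the screen is excellent",
    "shipping was bad and the packaging was poor",
    "good camera but poor battery",
]
tokenized = [set(d.split()) for d in docs]

def pmi(word, seed):
    """Pointwise mutual information estimated from document-level co-occurrence."""
    n = len(tokenized)
    p_w = sum(word in d for d in tokenized) / n
    p_s = sum(seed in d for d in tokenized) / n
    p_ws = sum(word in d and seed in d for d in tokenized) / n
    return math.log2((p_ws + 1e-6) / (p_w * p_s + 1e-6))  # small constant avoids log(0)

def so_pmi(word):
    """Positive-seed association minus negative-seed association."""
    return (sum(pmi(word, s) for s in POSITIVE_SEEDS)
            - sum(pmi(word, s) for s in NEGATIVE_SEEDS))

for w in ["battery", "screen", "packaging"]:
    print(w, round(so_pmi(w), 2))
```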

Tense and Aspect in English (영어 시제와 상)

  • Kim, Jeong-O
    • The Journal of the Korea Contents Association / v.13 no.3 / pp.502-510 / 2013
  • The purpose of this paper is to investigate the general definition of tense and the concepts of aspect. I consider the correlation between lexical and grammatical aspect. Tense is a type of verb inflection that indicates time, and it is a grammatical category; English has a present tense and a past tense. If we recognize aspect as a grammatical category, the subject of description is confined to grammatical expression. In contrast, when aspect is considered a category of meaning, both lexical and grammatical representations become targets of description. Here, aspect is treated as a grammatical category. In particular, English aspect comprises only the progressive and the perfect; in this case, the progressive aspect corresponds to the progressive tense and the perfect aspect to the perfect tense. In Chapter 2, I examine the definitions of tense given by various scholars and define the usage of each tense. In Chapter 3, I present the definition of aspect, which is both a grammatical and a semantic category. Tense and aspect are simple grammatical categories, but they cover a wide spectrum. As the definitions of tense and aspect become clearer, this will be helpful to students receiving English education. In addition, since definitions of tense and aspect are needed in a variety of areas, more research is required.

Part-of-speech Tagging for Hindi Corpus in Poor Resource Scenario

  • Modi, Deepa;Nain, Neeta;Nehra, Maninder
    • Journal of Multimedia Information System / v.5 no.3 / pp.147-154 / 2018
  • Natural language processing (NLP) is an emerging research area in which we study how machines can be used to understand and manipulate text written in natural languages. We can perform different tasks on natural languages by analyzing them through various annotation tasks such as parsing, chunking, part-of-speech tagging, and lexical analysis. These annotation tasks depend on the morphological structure of a particular natural language. The focus of this work is part-of-speech (POS) tagging of the Hindi language. Part-of-speech tagging, also known as grammatical tagging, is the process of assigning a grammatical category to each word of a given text; these categories can be noun, verb, time, date, number, etc. Hindi is the most widely used and official language of India, and it is also among the top five most spoken languages of the world. For English and other languages a diverse range of POS taggers is available, but these taggers cannot be applied to Hindi, as Hindi is one of the most morphologically rich languages and its morphological structure differs significantly from theirs. Thus, in this work a POS tagger system is presented for the Hindi language. For Hindi POS tagging, a hybrid approach is presented that combines probability-based and rule-based approaches: for known-word tagging a unigram probability model is used, whereas unknown words are tagged using various lexical and contextual features. Finite-state automata are constructed to demonstrate the different rules, which are then implemented with regular expressions. A tagset is also prepared for this task, containing 29 standard part-of-speech tags plus two unique tags, a date tag and a time tag, which support all possible formats. Regular expressions are used to implement all pattern-based tags such as time, date, number, and special symbols. The aim of the presented approach is to increase the correctness of automatic Hindi POS tagging while limiting the need for a large manually built corpus: the probability-based model improves automatic tagging, and the rule-based model reduces dependence on an already trained corpus. The approach is based on a very small labeled training set (around 9,000 words) and yields a best precision of 96.54% and an average precision of 95.08%, along with a best accuracy of 91.39% and an average accuracy of 88.15%. (A minimal hybrid-tagger sketch follows this entry.)
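
To illustrate the hybrid known-word/unknown-word scheme described above, the sketch below combines a unigram lookup with a few regex rules for date, time, and number tags. The training pairs, tag names, and rules are toy assumptions, not the paper's tagset or corpus.

```python
import re
from collections import Counter, defaultdict

# Toy training pairs; tags and words are illustrative only.
train = [("घर", "NOUN"), ("जाना", "VERB"), ("घर", "NOUN"), ("अच्छा", "ADJ")]

# Unigram model: the most frequent tag seen for each known word.
counts = defaultdict(Counter)
for word, tag_label in train:
    counts[word][tag_label] += 1
unigram = {w: c.most_common(1)[0][0] for w, c in counts.items()}

# Regex rules for pattern-based tags (date, time, number).
RULES = [
    (re.compile(r"^\d{1,2}[/-]\d{1,2}[/-]\d{2,4}$"), "DATE"),
    (re.compile(r"^\d{1,2}:\d{2}$"), "TIME"),
    (re.compile(r"^\d+$"), "NUMBER"),
]

def tag(word, default="NOUN"):
    if word in unigram:                  # known word: unigram model
        return unigram[word]
    for pattern, label in RULES:         # unknown word: pattern rules
        if pattern.match(word):
            return label
    return default                       # fallback tag

print([(w, tag(w)) for w in ["घर", "12/05/2018", "10:30", "42", "किताब"]])
```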

Splitting Algorithms and Recovery Rules for Zero Anaphora Resolution in Korean Complex Sentences (한국어 복합문에서의 제로 대용어 처리를 위한 분해 알고리즘과 복원규칙)

  • Kim, Mi-Jin;Park, Mi-Sung;Koo, Sang-Ok;Kang, Bo-Yeong;Lee, Sang-Jo
    • Journal of KIISE: Software and Applications / v.29 no.10 / pp.736-746 / 2002
  • Zero anaphora occurs frequently in Korean complex sentences and makes their interpretation difficult. This paper proposes splitting algorithms and zero-anaphora recovery rules for handling zero anaphora, and presents a resolution methodology. The paper covers quotations, conjunctive sentences, and embedded sentences among the complex sentences found in newspaper articles, excluding embedded sentences with auxiliary verbs. Quotations are handled with the equivalent-noun-phrase deletion rule according to the subject person constraint, nominalized embedded sentences with the equivalent-noun-phrase deletion rule, adnominal embedded sentences with the relative-noun-phrase deletion rule, and conjunctive sentences by applying the conjunction-reduction rule in reverse. A classified table of the endings involved in forming complex sentences is used to split them, and the syntactic rules that apply when elements are omitted are applied in reverse to recover the zero anaphora. The presented rules achieved 83.53% perfect resolution and 11.52% partial resolution. (A toy splitting-and-recovery sketch follows this entry.)
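
As a rough illustration of splitting a conjunctive sentence and recovering its zero subject by applying conjunction reduction in reverse, the sketch below uses a tiny hard-coded ending list and a particle check; both are assumptions standing in for the paper's classified ending table and syntactic rules.

```python
# Toy ending table; the paper uses a classified table of complex-sentence endings.
CONJUNCTIVE_ENDINGS = ["고 ", "지만 "]
SUBJECT_PARTICLES = ("은", "는", "이", "가")

def split_and_recover(sentence):
    """Split on a conjunctive ending and copy the first clause's subject
    into the second clause if that clause has no overt subject."""
    for ending in CONJUNCTIVE_ENDINGS:
        if ending in sentence:
            first, second = sentence.split(ending, 1)
            first += ending.strip()
            subject = next((tok for tok in first.split() if tok.endswith(SUBJECT_PARTICLES)), None)
            if subject and not any(tok.endswith(SUBJECT_PARTICLES) for tok in second.split()):
                second = subject + " " + second      # recovered zero subject
            return [first, second]
    return [sentence]

# "철수는 학교에 가고 (Ø) 공부를 했다" -> the zero subject of clause 2 is recovered as 철수는
print(split_and_recover("철수는 학교에 가고 공부를 했다"))
```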

A Study on the Teaching and Learning of Korean Modality Expressions (한국어의 양태 표현 교육 연구 : 한국어 '-(으)ㄹ 수 있다'와 중국어 '능(能)'의 대조를 중심으로)

  • Jiang, Fei
    • Korean Educational Research Journal / v.40 no.1 / pp.17-42 / 2019
  • Modality is the psychological attitude of the speaker and is expressed in the sentences of every language. It can be broadly categorized into perceptional modality and obligatory modality. This study summarizes the related literature and theoretical branches of Korean linguistic studies, and proposes and classifies a modal concept for the Korean language, aimed at aiding Chinese learners of Korean. It further describes the characteristics and expressions of modality in both Chinese and Korean. The study aims to develop an effective teaching-learning program on the basis of a contrastive analysis between the Korean modal expression "-(으)ㄹ 수 있다" and the corresponding Chinese auxiliary verb "能". Modality is a syntactic item that reflects a speaker's subjective attitude, and many grammatical items in Korean language textbooks and teaching materials are modal in nature. Further, modal expressions in Korean are not only numerous but also rich in meaning and function. Based on the contrastive analysis, this study designs an effective teaching plan for Chinese learners of Korean. The designed program uses specific conversational situations as the basis of learning and adapts the Korean modal system to classroom teaching. It is expected to be effective in classroom teaching for demonstrating and learning modality in the Korean language.

  • PDF

Case Study on the Writing of the Papers of Journal of the Korean Association for Science Education (한국과학교육학회지 논문의 글쓰기 사례 연구)

  • Han, JaeYoung
    • Journal of the Korean Association for Science Education / v.35 no.4 / pp.649-663 / 2015
  • This study investigated the current state of writing in science education research papers, focusing on translationese and basic Korean grammar, and sought ways of improving the use of the Korean language. Science education research has characteristics of both social science and natural science and comprises more quantitative than qualitative research, which can influence the writing of research papers. 'Translationese' here means conventional expressions originating from foreign languages other than Korean. Basic Korean grammar covers 'agreement,' 'spelling, word spacing, and punctuation,' 'causative suffixes,' and 'use of English or loanwords,' while translationese is divided into 'English,' 'Japanese,' and 'English and Japanese.' The sentences in nine research papers in the 'Journal of the Korean Association for Science Education' were analyzed, and the problematic sentences were discussed and supplied with alternatives. High-frequency cases include '-jeok,' 'use of English,' 'expression of the plural,' 'passive voice of verbs with -hada,' '-go inneun,' '-eul tonghayeo,' '-e daehayeo,' 'gajida,' 'the genitive case marker -eui,' 'passive voice with an inanimate subject,' and 'the causative suffix -sikida.' Based on the results, the characteristics of writing in science education research papers were described as 'writing of quantitative research,' 'objective writing of academic research,' and 'writing of research of foreign origin.' To improve the writing of science education research papers, researchers should pay attention to basic Korean grammar and translationese and become familiar with concrete examples of problematic cases. The results of this study could be used in teaching Korean writing and grammar.

A Comparative Study on Using SentiWordNet for English Twitter Sentiment Analysis (영어 트위터 감성 분석을 위한 SentiWordNet 활용 기법 비교)

  • Kang, In-Su
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.4 / pp.317-324 / 2013
  • Twitter sentiment analysis classifies a tweet (message) into a positive or negative sentiment class. This study deals with SentiWordNet (SWN)-based Twitter sentiment analysis. SWN is a sentiment dictionary in which each sense of an English word has a positive and a negative sentiment strength. There is a variety of SWN-based sentiment feature extraction methods, which typically first determine the sentiment orientation (SO) of each term in a document and then decide the SO of the document from those terms' SO values. For example, for the SO of a term, some calculate the maximum or average of the sentiment scores of its senses, while others compute the average of the difference between the positive and negative sentiment scores. For the SO of a document, many researchers employ the maximum or average of the terms' SO values. In addition, the above procedure may be applied to the whole set of parts of speech (adjective, adverb, noun, and verb) or a subset of it. This work provides a comparative study of SWN-based sentiment feature extraction schemes, with a performance evaluation on a well-known Twitter dataset. (A toy sketch of one such aggregation scheme follows this entry.)
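
To make the aggregation options concrete, the sketch below computes a term's SO as the average (or maximum) of its senses' positive-minus-negative scores and a document's SO as the average of its terms' SO values. The sense scores are a made-up stand-in for SentiWordNet, and the chosen aggregations are only one of the combinations the paper compares.

```python
# Toy sense-score dictionary in place of SentiWordNet: (pos, neg) per sense.
TOY_SWN = {
    "good":  [(0.75, 0.00), (0.50, 0.125)],
    "bad":   [(0.00, 0.625), (0.125, 0.50)],
    "phone": [(0.00, 0.00)],
}

def term_so(word, aggregate="average"):
    """Term SO: max or average over senses of (positive - negative) score."""
    senses = TOY_SWN.get(word)
    if not senses:
        return 0.0
    diffs = [pos - neg for pos, neg in senses]
    return max(diffs) if aggregate == "max" else sum(diffs) / len(diffs)

def document_so(tokens, aggregate="average"):
    """Document SO: average of the non-zero term SO values."""
    scores = [term_so(t, aggregate) for t in tokens]
    scores = [s for s in scores if s != 0.0] or [0.0]
    return sum(scores) / len(scores)

tweet = "good phone but bad battery".split()
score = document_so(tweet)
print("SO =", round(score, 3), "->", "positive" if score > 0 else "negative")
```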