Title/Summary/Keyword: co-occurrence words

Fillers in the Hong Kong Corpus of Spoken English (HKCSE)

  • Seto, Andy
    • Asia Pacific Journal of Corpus Research / v.2 no.1 / pp.13-22 / 2021
  • The present study employed an analytical framework characterised by a synthesis of quantitative and qualitative analyses, together with specially designed computer software, SpeechActConc, to examine speech acts in business communication. The naturally occurring data from the audio recordings and prosodic transcriptions of the business sub-corpora of the HKCSE (prosodic) were manually annotated with a speech act taxonomy to determine the frequency of fillers, the co-occurring patterns of fillers with other speech acts, and the linguistic realisations of fillers. The discoursal function of fillers, to sustain the discourse or to hold the floor, has diverse linguistic realisations, ranging from a sound (e.g. 'uhuh') and a word (e.g. 'well') to sounds (e.g. 'um er') and words, namely phrases ('sort of') and clauses (e.g. 'you know'). Some are even combinations of sound(s) and word(s) (e.g. 'and um', 'yes er um', 'sort of erm'). Among the top five frequent linguistic realisations of fillers, 'er' and 'um' are the most common, found in all six genres with relatively higher percentages of occurrence. The remaining more frequent realisations consist of a clause ('you know'), a word ('yeah') and a sound ('erm'). These common forms are syntactically simpler than the less frequent realisations found in the genres. The co-occurring patterns of fillers and other speech acts are diverse; the speech acts that most commonly co-occur with fillers include informing and answering. The findings show that fillers are not only frequently used by speakers in spontaneous conversation but are also mostly realised as sounds, i.e. non-linguistic realisations.
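The SpeechActConc software is not reproduced here, but the two counts the abstract centres on (filler frequency and speech acts co-occurring with fillers) can be sketched in a few lines of Python; the annotated turns below are hypothetical stand-ins for the HKCSE (prosodic) annotations.

```python
from collections import Counter

# Hypothetical turns annotated with speech-act labels; stand-ins for the
# HKCSE (prosodic) annotations produced with the SpeechActConc software.
annotated_turns = [
    ["filler", "informing"],
    ["answering", "filler"],
    ["informing"],
    ["filler", "informing", "answering"],
]

filler_frequency = 0
co_occurring_acts = Counter()
for turn in annotated_turns:
    if "filler" in turn:
        filler_frequency += 1
        # Count every other speech act appearing in the same turn as a filler.
        co_occurring_acts.update(act for act in turn if act != "filler")

print("turns containing fillers:", filler_frequency)
print("speech acts co-occurring with fillers:", co_occurring_acts.most_common())
```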

Measurement of Document Similarity using Word and Word-Pair Frequencies (단어 및 단어쌍 별 빈도수를 이용한 문서간 유사도 측정)

  • 김혜숙;박상철;김수형
    • Proceedings of the IEEK Conference / 2003.07d / pp.1311-1314 / 2003
  • In this paper, we propose a method to measure document similarity. As a preprocessing step, we first exploit a single-term method that extracts nouns with a lexical analyzer, matching one index term to one noun. With this method, however, the measured similarity can be high even between irrelevant documents. For this reason, a term-phrase method has been reported, which uses the co-occurrence of two words as an index term for measuring document similarity. In this paper, we combine these two methods to compensate for the problems of each. Six types of features are extracted from the two input documents and fed into a neural network to calculate the final value of document similarity. The reliability of our method is demonstrated by a document retrieval experiment.
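The combination the abstract describes can be illustrated with a minimal sketch. This is not the paper's implementation (which extracts Korean nouns with a lexical analyzer and combines six features with a neural network); it uses plain whitespace tokens, hypothetical documents, and a simple average of the two similarity scores in place of the learned combination.

```python
from collections import Counter
from itertools import combinations
from math import sqrt

def term_vector(doc):
    # Single-term method: frequency of each token (the paper extracts nouns
    # with a Korean lexical analyzer; whitespace tokens are used here).
    return Counter(doc.lower().split())

def pair_vector(doc):
    # Term-phrase method: within-document co-occurrence of unordered word pairs.
    return Counter(combinations(sorted(set(doc.lower().split())), 2))

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

doc1 = "word pair frequencies measure document similarity"
doc2 = "document similarity measured from word frequencies"

# The paper feeds six features into a neural network; a plain average of the
# two similarity scores stands in for that learned combination.
similarity = 0.5 * (cosine(term_vector(doc1), term_vector(doc2))
                    + cosine(pair_vector(doc1), pair_vector(doc2)))
print(f"combined similarity: {similarity:.3f}")
```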

Korean Mobile Spam Filtering System Considering Characteristics of Text Messages (문자메시지의 특성을 고려한 한국어 모바일 스팸필터링 시스템)

  • Sohn, Dae-Neung;Lee, Jung-Tae;Lee, Seung-Wook;Shin, Joong-Hwi;Rim, Hae-Chang
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.7 / pp.2595-2602 / 2010
  • This paper introduces a mobile spam filtering system that considers the style of short text messages sent to mobile phones when detecting spam. The proposed system not only relies on the occurrence of content words, as previously suggested, but additionally leverages style information to reduce critical cases in which legitimate messages containing spam words are misclassified as spam. Moreover, the accuracy of spam classification is improved by normalizing the messages through the correction of word spacing and spelling errors. Experimental results using real-world Korean text messages show that the proposed system is effective for Korean mobile spam filtering.
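A minimal sketch of the general idea, combining content-word occurrences with simple style features in one classifier. The messages, features, and labels below are illustrative, and the paper's Korean-specific spacing and spelling normalization step is omitted.

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny illustrative dataset; the real system works on Korean SMS text and
# first normalizes word spacing and spelling, which is omitted here.
messages = ["free loan call now", "lunch at noon?", "WIN $$$ prize!!", "see you tomorrow"]
labels = [1, 0, 1, 0]  # 1 = spam

def style_features(texts):
    # Simple style cues: message length, digit ratio, punctuation ratio.
    rows = []
    for t in texts:
        n = max(len(t), 1)
        rows.append([len(t),
                     sum(c.isdigit() for c in t) / n,
                     sum((not c.isalnum()) and (not c.isspace()) for c in t) / n])
    return csr_matrix(np.array(rows))

vectorizer = CountVectorizer()
X_words = vectorizer.fit_transform(messages)     # content-word occurrences
X = hstack([X_words, style_features(messages)])  # plus style information

classifier = LogisticRegression().fit(X, labels)
test = ["call now to win a free prize"]
X_test = hstack([vectorizer.transform(test), style_features(test)])
print("spam probability:", classifier.predict_proba(X_test)[0, 1])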

Hypernetwork-based Natural Language Sentence Generation by Word Relation Pattern Learning (단어 간 관계 패턴 학습을 통한 하이퍼네트워크 기반 자연 언어 문장 생성)

  • Seok, Ho-Sik;Bootkrajang, Jakramate;Zhang, Byoung-Tak
    • Journal of KIISE: Software and Applications / v.37 no.3 / pp.205-213 / 2010
  • We introduce a natural language sentence generation (NLG) method based on learning word-association patterns. Existing NLG methods assume inherent grammar rules or use template-based methods. In contrast, the presented method learns word-association patterns using only the co-occurrence of words, without additional information such as tagging. We employ the hypernetwork method to analyze and represent the word-association patterns. As training proceeds, the model complexity increases. After each training phase is completed, natural language sentences are generated using the learned hyperedges, and the number of grammatically plausible sentences increases with each phase. By comparing the diversity of grammatical rules in the training corpora and in the generated sentences, we confirm that the proposed method has the potential to learn the grammatical properties of training corpora.
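The hypernetwork model itself is more elaborate, learning hyperedges of growing order, but its core idea, learning word-association patterns from raw co-occurrence and generating sentences from them, can be approximated with a bigram-level sketch over a hypothetical corpus.

```python
import random
from collections import defaultdict

# Raw sentences with no tags or grammar rules attached (hypothetical corpus).
corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "a cat chased the dog",
]

# Learn which words were observed to follow each word.
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)

def generate(start, max_len=8, seed=0):
    random.seed(seed)
    out = [start]
    while len(out) < max_len and follows[out[-1]]:
        out.append(random.choice(follows[out[-1]]))
    return " ".join(out)

print(generate("the"))
# A real hypernetwork would additionally learn higher-order hyperedges
# (longer word patterns) as training proceeds, increasing model complexity.
```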

A Topic Analysis of SW Education Textdata Using R (R을 활용한 SW교육 텍스트데이터 토픽분석)

  • Park, Sunju
    • Journal of The Korean Association of Information Education / v.19 no.4 / pp.517-524 / 2015
  • In this paper, to find out the direction of public interest in SW education, SW education news data were gathered and their contents analyzed. A topic analysis of SW education news was performed on data collected from July 23, 2013 to October 19, 2015. Analyzing the relationships among the top 20 most-mentioned words, collected by web crawling with R, showed that the 20 words are closely related: in the co-occurrence matrix graph centered on the word 'SW education', the node sizes of the 20 words balanced one another. Moreover, our analysis revealed that the data were mainly composed of topics about SW talent, SW support programs, the SW educational mandate, SW camps, the SW industry, and job creation. These results can be used in big data analyses to find out people's thoughts on and interests in SW education.
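The study used R; the core step, a document-level co-occurrence matrix over the most frequent words, looks roughly like the following Python sketch, with hypothetical snippets standing in for the crawled news data.

```python
from collections import Counter
from itertools import combinations

# Hypothetical snippets standing in for the crawled SW-education news articles.
docs = [
    "SW education talent program government support",
    "SW education camp job creation industry",
    "SW education mandate school curriculum talent",
]

tokenized = [doc.lower().split() for doc in docs]
word_counts = Counter(word for doc in tokenized for word in doc)
top_words = {word for word, _ in word_counts.most_common(20)}

# Document-level co-occurrence counts among the top words.
cooccurrence = Counter()
for doc in tokenized:
    present = sorted(set(doc) & top_words)
    cooccurrence.update(combinations(present, 2))

for (a, b), n in cooccurrence.most_common(5):
    print(f"{a} -- {b}: {n}")
```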

Text Network Analysis of Newspaper Articles on Life-sustaining Treatments (연명의료 관련 신문 기사의 텍스트네트워크분석)

  • Park, Eun-Jun;Ahn, Dae Woong;Park, Chan Sook
    • Research in Community and Public Health Nursing / v.29 no.2 / pp.244-256 / 2018
  • Purpose: This study attempts to understand the discourses on life-sustaining treatments in general daily and healthcare newspapers. Methods: A text-network analysis was conducted using the NetMiner program. First, 572 articles from 11 daily newspapers and 258 articles from 8 healthcare newspapers, published from August 2013 to October 2016, were collected. Second, keywords (semantic morphemes) were extracted from the articles and rearranged by removing stop-words, refining similar words, excluding non-relevant words, and defining meaningful phrases. Finally, co-occurrence matrices of the keywords with a frequency of 30 or higher were developed, and statistical measures (indices of degree and betweenness centrality, ego-networks, and clustering) were obtained. Results: In the general daily and healthcare newspapers, the top eight core keywords were common: "patients," "death," "LST (life-sustaining treatments)," "hospice palliative care," "hospitals," "family," "opinion," and "withdrawal." There were also common subtopics shared by the general daily and healthcare newspapers: withdrawal of LST, hospice palliative care, the National Bioethics Review Committee, and self-determination and proxy decisions of patients and family. Additionally, the general daily newspapers covered diverse social interests and events such as well-dying, euthanasia, and the death of the farmer Baek Nam-ki, whereas the healthcare newspapers discussed problems of the relevant laws and the insufficient infrastructure and low reimbursement for hospice-palliative care. Conclusion: The discourse that withdrawal of futile LST should be allowed according to the patient's will was consistent across the newspapers. Given that newspaper articles influence the knowledge and attitudes of the public, RNs are recommended to participate actively in public communication on LST.
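NetMiner is a commercial tool and is not reproduced here; the same degree and betweenness centrality measures can be computed with networkx, as in this sketch over hypothetical per-article keyword sets.

```python
import networkx as nx
from itertools import combinations

# Hypothetical keyword sets per article; the study used morphemes with a
# frequency of 30 or higher extracted from 830 newspaper articles.
articles = [
    {"patients", "death", "LST", "family"},
    {"LST", "withdrawal", "opinion", "patients"},
    {"hospice palliative care", "patients", "hospitals"},
]

G = nx.Graph()
for keywords in articles:
    for a, b in combinations(sorted(keywords), 2):
        weight = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=weight + 1)  # accumulate co-occurrence counts

print("degree centrality:", nx.degree_centrality(G))
print("betweenness centrality:", nx.betweenness_centrality(G))
```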

Comparative analysis on design key-word of the four major international fashion collections - focus on 2018 fashion collection - (4대 해외 패션 컬렉션의 디자인 key-word 비교분석 - 2018년 패션 컬렉션을 중심으로 -)

  • Kim, Sae-Bom;Lee, Eun-Suk
    • Journal of the Korea Fashion and Costume Design Association / v.21 no.3 / pp.109-119 / 2019
  • The purpose of this study is to examine fashion trends and the direction of the four major fashion collections by analyzing the design key-words of the four major international fashion collections in 2018. The data for this study were collected by extracting key-words from Marie Claire Korea in 2018, for a total of 2,144 items. The data were analyzed by text mining and word-cloud visualization using the R program, and a co-occurrence network analysis was conducted. The results of this study are as follows. First, the key-words of fashion collection designs in 2018 were fringe and ruffle details, silk and denim fabrics, vivid colors, stripe and check patterns, the pants suit item, and the oversized silhouette, focusing on romanticism and sport. Second, the seasonal characteristics of the fashion collections were pastel colors in S/S and primary and vivid colors in F/W; details were embroidery and cutouts in S/S, and patchwork and fringe in F/W. Third, the design trends of the four major fashion collections were as follows: the Paris collection presented stripes, check patterns, embroidery, lace, tailoring, draping, romanticism, and glamour; in the Milan collection, checks, prints, denim, and minidresses reflected sport and romanticism; the London collection included fringe, ruffles, floral patterns, and romanticism; and the New York collection included vivid colors, neon colors, pastel colors, oversized silhouettes, bodysuits, and long dresses.
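The per-collection key-word frequencies that underlie word clouds like these reduce to grouped counting; a small Python sketch (the study used R and Marie Claire Korea data; the records here are hypothetical):

```python
from collections import Counter

# Hypothetical (collection, key-word) records extracted from 2018 reviews.
records = [
    ("Paris", "stripe"), ("Paris", "embroidery"), ("Paris", "romanticism"),
    ("Milan", "check"), ("Milan", "denim"), ("London", "fringe"),
    ("New York", "vivid"), ("New York", "oversized"), ("Paris", "stripe"),
]

counts_by_collection = {}
for collection, word in records:
    counts_by_collection.setdefault(collection, Counter())[word] += 1

for collection, counts in counts_by_collection.items():
    print(collection, counts.most_common(3))
```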

Text Mining Driven Content Analysis of Social Perception on Schizophrenia Before and After the Revision of the Terminology (조현병과 정신분열병에 대한 뉴스 프레임 분석을 통해 본 사회적 인식의 변화)

  • Kim, Hyunji;Park, Seojeong;Song, Chaemin;Song, Min
    • Journal of the Korean Society for Library and Information Science / v.53 no.4 / pp.285-307 / 2019
  • In 2011, the Korean Medical Association revised the Korean name for schizophrenia to remove the social stigma attached to patients. Although about nine years have passed since the revision of the terminology, no studies have quantitatively analyzed how much social awareness has changed. This study therefore investigates the changes in social awareness of schizophrenia brought about by the revision of the disease name by analyzing Naver news articles related to the disease. For text analysis, LDA topic modeling, TF-IDF, word co-occurrence, and sentiment analysis techniques were used. The results show that social awareness of the disease was more negative after the revision of the terminology, and that of the two terms used after the revision, the former term was associated with more negative awareness. In other words, the revision of the disease name did not resolve the stigma.
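A hedged sketch of two of the listed techniques, TF-IDF term weighting per period and a toy lexicon-based negativity ratio; the article snippets and the lexicon are hypothetical stand-ins for the Korean Naver news data and sentiment resources.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy article sets before and after the 2011 terminology revision; the real
# study analyzed Naver news articles in Korean.
before = ["old term crime fear dangerous", "old term hospital violent"]
after = ["new term treatment recovery", "new term crime incident fear"]

def top_terms(docs, k=5):
    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(docs)
    weights = X.sum(axis=0).A1  # summed TF-IDF weight per term
    terms = vectorizer.get_feature_names_out()
    return sorted(zip(terms, weights), key=lambda pair: -pair[1])[:k]

# Toy lexicon-based sentiment: share of negative words per period.
NEGATIVE = {"crime", "fear", "dangerous", "violent", "incident"}

def negative_ratio(docs):
    words = [word for doc in docs for word in doc.split()]
    return sum(word in NEGATIVE for word in words) / len(words)

print("before:", top_terms(before), "negative ratio:", round(negative_ratio(before), 2))
print("after: ", top_terms(after), "negative ratio:", round(negative_ratio(after), 2))
```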

Improved Multidimensional Scaling Techniques Considering Cluster Analysis: Cluster-oriented Scaling (클러스터링을 고려한 다차원척도법의 개선: 군집 지향 척도법)

  • Lee, Jae-Yun
    • Journal of the Korean Society for Information Management / v.29 no.2 / pp.45-70 / 2012
  • Many methods and algorithms have been proposed for multidimensional scaling (MDS) to map the relationships between data objects into a low-dimensional space. However, traditional techniques such as PROXSCAL or ALSCAL are not effective at visualizing the proximities between objects and the cluster structure of large data sets with more than 50 objects. The CLUSCAL (CLUster-oriented SCALing) technique introduced in this paper differs from them especially in that it uses the cluster structure of the input data set. The CLUSCAL procedure was tested and evaluated on two data sets: 50 authors' co-citation data and 85 words' co-occurrence data. The results suggest that the CLUSCAL method is useful, especially for identifying clusters on MDS maps.
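CLUSCAL's own algorithm is not reproduced here; the sketch below only illustrates the generic pipeline it improves on, clustering the objects and then mapping them with standard metric MDS so that the cluster labels can be checked against the map. The data are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import MDS

# Toy data: 10 objects in 5 dimensions forming two groups, standing in for
# author co-citation or word co-occurrence profiles.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (5, 5)), rng.normal(4, 1, (5, 5))])

# Step 1: identify the cluster structure of the input data.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Step 2: map the objects into two dimensions with metric MDS.
coords = MDS(n_components=2, random_state=0).fit_transform(X)

for i, (xy, cluster) in enumerate(zip(coords, labels)):
    print(f"object {i}: cluster {cluster}, position ({xy[0]:.2f}, {xy[1]:.2f})")
# CLUSCAL goes further by feeding the cluster structure into the scaling
# itself, so that clusters remain identifiable on the resulting map.
```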

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.25 no.4 / pp.105-122 / 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on the performance of sentence classification: higher-dimensional data require many computations and can eventually cause high computational cost and overfitting in the model. The dimension reduction process is therefore necessary to improve the performance of the model. Diverse methods have been proposed, from merely lessening the noise of the data, such as misspellings or informal text, to including semantic and syntactic information. Moreover, the expression and selection of text features affect the performance of the classifier in sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data in the observation space. Existing methods utilize various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, which learn low-dimensional vector space representations of words that capture semantic and syntactic information, are also utilized. To improve performance, recent studies have suggested methods in which the word dictionary is modified according to the positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once the feature selection algorithm identifies unimportant words, we assume that words similar to them also have no impact on sentence classification. This study proposes two ways to achieve more accurate classification: conducting selective word elimination under specific regulations and constructing word embeddings based on Word2Vec. To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and form word embeddings. Second, we additionally select words that are similar to the low-information-gain words and build word embeddings. Finally, the filtered text and word embeddings are applied to the deep learning models: a Convolutional Neural Network and an Attention-Based Bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets and classifies each dataset using the deep learning models. Reviews that received more than five helpful votes, with a ratio of helpful votes over 70%, were classified as helpful reviews. Because Yelp only shows the number of helpful votes, we extracted, by random sampling, 100,000 reviews that received more than five helpful votes from among 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters from the text data, was applied to each dataset. To evaluate the proposed methods, we compared their performance against Word2Vec and GloVe word embeddings that used all the words, and showed that one of the proposed methods outperforms the embeddings built with all the words: by removing unimportant words, we can obtain better performance. However, removing too many words lowered the performance.
For future research, diverse preprocessing approaches and an in-depth analysis of word co-occurrence for measuring similarity values among words should be considered. Also, we applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo can be combined with the proposed elimination methods, and the possible combinations between word embedding methods and elimination methods can be explored.
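A minimal sketch of the elimination idea under stated assumptions: information gain approximated by mutual information, an arbitrary 0.1 threshold and 0.5 similarity cutoff, and gensim Word2Vec neighbors used to expand the removal set. The data and all thresholds are illustrative, not the paper's settings.

```python
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import mutual_info_classif

# Toy labeled reviews; the real datasets are Amazon, IMDB, and Yelp reviews
# labeled by helpfulness votes.
texts = ["great helpful detailed review", "bad vague short text",
         "helpful detailed examples", "short unhelpful text"]
labels = [1, 0, 1, 0]

# Information gain of each word with respect to the label, approximated here
# by mutual information.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
gains = mutual_info_classif(X, labels, discrete_features=True, random_state=0)
vocabulary = vectorizer.get_feature_names_out()
low_gain = {word for word, gain in zip(vocabulary, gains) if gain < 0.1}

# Train Word2Vec, then also drop words similar to the low-gain ones.
sentences = [text.split() for text in texts]
model = Word2Vec(sentences, vector_size=20, min_count=1, seed=0)
to_drop = set(low_gain)
for word in low_gain:
    if word in model.wv:
        to_drop.update(w for w, sim in model.wv.most_similar(word, topn=2) if sim > 0.5)

filtered = [[w for w in s if w not in to_drop] for s in sentences]
print("dropped words:", sorted(to_drop))
print("filtered sentences:", filtered)
```

The filtered sentences and the retained embedding vectors would then be fed to the downstream classifiers (CNN or attention-based BiLSTM in the paper).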