• Title/Summary/Keyword: Text frequency analysis


Research trends over 10 years (2010-2021) in infant and toddler rearing behavior by family caregivers in South Korea: text network and topic modeling

  • In-Hye Song;Kyung-Ah Kang
    • Child Health Nursing Research
    • /
    • v.29 no.3
    • /
    • pp.182-194
    • /
    • 2023
  • Purpose: This study analyzed research trends in infant and toddler rearing behavior among family caregivers over a 10-year period (2010-2021). Methods: Text network analysis and topic modeling were employed on data collected from relevant papers, following the extraction and refinement of semantic morphemes. A semantic-centered network was constructed by extracting words from 2,613 English-language abstracts. Data analysis was performed using NetMiner 4.5.0. Results: Frequency analysis, degree centrality, and eigenvector centrality all revealed the terms "scale," "program," and "education" among the top 10 keywords associated with infant and toddler rearing behaviors among family caregivers. The keywords extracted from the analysis were divided into two clusters through cohesion analysis. Additionally, they were classified into two topic groups using topic modeling: "program and evaluation" (64.37%) and "caregivers' role and competency in child development" (35.63%). Conclusion: The roles and competencies of family caregivers are essential for the development of infants and toddlers. Intervention programs and evaluations are necessary to improve rearing behaviors. Future research should determine the role of nurses in supporting family caregivers. Additionally, it should facilitate the development of nursing strategies and intervention programs to promote positive rearing practices.
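
This kind of keyword-network analysis can be reproduced without NetMiner; a minimal sketch in Python using networkx is shown below. The abstract tokens are placeholder data, and treating each abstract as one co-occurrence window is an assumption, since the abstract does not detail the windowing used.

    # Minimal sketch: degree and eigenvector centrality on a keyword
    # co-occurrence network (networkx substituted for NetMiner 4.5.0).
    from itertools import combinations
    import networkx as nx

    abstracts = [  # refined morphemes per abstract (placeholder data)
        ["scale", "program", "education", "caregiver"],
        ["program", "education", "development", "caregiver"],
    ]

    G = nx.Graph()
    for words in abstracts:
        for w1, w2 in combinations(sorted(set(words)), 2):
            weight = G.get_edge_data(w1, w2, {"weight": 0})["weight"]
            G.add_edge(w1, w2, weight=weight + 1)

    degree = nx.degree_centrality(G)
    eigen = nx.eigenvector_centrality(G, weight="weight")
    print(sorted(eigen, key=eigen.get, reverse=True)[:10])  # top keywords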

Semantic Network Analysis of Online News and Social Media Text Related to Comprehensive Nursing Care Service (간호간병통합서비스 관련 온라인 기사 및 소셜미디어 빅데이터의 의미연결망 분석)

  • Kim, Minji;Choi, Mona;Youm, Yoosik
    • Journal of Korean Academy of Nursing
    • /
    • v.47 no.6
    • /
    • pp.806-816
    • /
    • 2017
  • Purpose: As comprehensive nursing care service has gradually expanded, it has become necessary to explore the various opinions about it. The purpose of this study is to explore the large amount of text data regarding comprehensive nursing care service extracted from online news and social media by applying a semantic network analysis. Methods: The web pages of the Korean Nurses Association (KNA) News, major daily newspapers, and Twitter were crawled by searching the keyword 'comprehensive nursing care service' using Python. A morphological analysis was performed using KoNLPy. Nodes on a 'comprehensive nursing care service' cluster were selected, and frequency, edge weight, and degree centrality were calculated and visualized with Gephi for the semantic network. Results: A total of 536 news pages and 464 tweets were analyzed. In the KNA News and major daily newspapers, 'nursing workforce' and 'nursing service' ranked high in frequency, edge weight, and degree centrality. On Twitter, the most frequent nodes were 'National Health Insurance Service' and 'comprehensive nursing care service hospital.' The nodes with the highest edge weight were 'national health insurance,' 'wards without caregiver presence,' and 'caregiving costs.' 'National Health Insurance Service' was highest in degree centrality. Conclusion: This study provides an example of how to use unstructured big data for a nursing issue through semantic network analysis, exploring diverse perspectives surrounding the nursing community across various media sources. Applying semantic network analysis to online big data to gather information on various nursing issues would help in exploring opinions for formulating and implementing nursing policies.
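
The morphological-analysis step described above can be sketched in a few lines using KoNLPy's Okt tagger; the abstract says only that KoNLPy was used, so the choice of tagger and the sample text are assumptions (KoNLPy requires Java and `pip install konlpy`).

    # Minimal sketch: noun extraction and frequency counting with KoNLPy,
    # producing candidate nodes for a semantic network.
    from collections import Counter
    from konlpy.tag import Okt

    texts = ["간호간병통합서비스 관련 기사 본문 ..."]  # crawled pages (placeholder)

    okt = Okt()
    nouns = [n for t in texts for n in okt.nouns(t) if len(n) > 1]
    print(Counter(nouns).most_common(20))  # candidate network nodes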

Text Mining of Successful Casebook of Agricultural Settlement in Graduates of Korea National College of Agriculture and Fisheries - Frequency Analysis and Word Cloud of Key Words - (한국농수산대학 졸업생 영농정착 성공 사례집의 Text Mining - 주요단어의 빈도 분석 및 word cloud -)

  • Joo, J.S.;Kim, J.S.;Park, S.Y.;Song, C.Y.
    • Journal of Practical Agriculture & Fisheries Research
    • /
    • v.20 no.2
    • /
    • pp.57-72
    • /
    • 2018
  • In order to extract meaningful information from the casebook of successful farming settlement by young farmers published by KNCAF, we examined the key words with text mining and created word clouds for visualization. First, in the text-mining results for the entire sample, 'CEO', 'corporate executive', 'think', 'self', 'start', 'mind', and 'effort' appeared with high frequency among the top 50 key words. This reflects the graduates' capacity to think and judge for themselves and to push projects forward, that is, their managerial ability, and it expresses how they pursued their goals without giving up their dreams. The high frequency of words such as 'father' and 'parent' stems from the high proportion of cases involving parents' cooperation and farm succession. Likewise, 'KNCAF', 'university', 'graduation', and 'study' reflect the graduates' strong awareness of education, while 'organic farming' and 'eco-friendly' reflect their interest in eco-friendly agriculture. In addition, words related to the 6th industry, such as 'sales' and 'experience', represent their efforts to revitalize farming and fishing villages. Meanwhile, 'internet', 'blog', 'online', 'SNS', 'ICT', 'composite', and 'smart' were not included in the top 50; nevertheless, the fact that these words were extracted at all shows that young farmers are increasingly interested in scientific, high-tech agriculture and fisheries. Next, when the top 50 key words were grouped by crop, 'facilities' was extracted as a main word in livestock, vegetable, and aquatic crops, and 'equipment' and 'machine' in food crops. 'Eco-friendly' and 'organic' appeared in vegetable and food crops, and 'organic' in fruit crops. 'Worm', an eco-friendly farming method, appeared in food crops, and 'certification', which indicates excellent agricultural and marine products, appeared only in fishery crops. 'Production', which is related to the 6th industry, appeared in all crops; 'processing' and 'distribution' appeared in fruit crops; and 'experience' appeared in vegetable, food, and fruit crops. To visualize the words extracted by text mining, we created word clouds for the entire sample and for each crop sample. As a result, we could grasp the meaning of the unstructured text of the casebook at a glance through character size.
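
The frequency-plus-word-cloud pipeline described above can be sketched with the Python wordcloud package; the token list and font file below are placeholders (Korean text requires a Korean-capable font), not the study's actual data.

    # Minimal sketch: word cloud from key-word frequencies.
    from collections import Counter
    from wordcloud import WordCloud

    tokens = ["CEO", "effort", "start", "mind", "father", "KNCAF", "effort"]
    freq = Counter(tokens)  # frequency analysis of mined key words

    wc = WordCloud(width=800, height=600, background_color="white",
                   font_path="NanumGothic.ttf")  # hypothetical font file
    wc.generate_from_frequencies(freq)
    wc.to_file("wordcloud.png")  # character size reflects frequency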

Mass Media and Social Media Agenda Analysis Using Text Mining : focused on '5-day Rotation Mask Distribution System' (텍스트 마이닝을 활용한 매스 미디어와 소셜 미디어 의제 분석 : '마스크 5부제'를 중심으로)

  • Lee, Sae-Mi;Ryu, Seung-Eui;Ahn, Soonjae
    • The Journal of the Korea Contents Association
    • /
    • v.20 no.6
    • /
    • pp.460-469
    • /
    • 2020
  • This study analyzes online news articles and Naver Cafe posts on the '5-day Rotation Mask Distribution System', which emerged as an issue during the COVID-19 outbreak, to identify the mass media and social media agendas containing media and public reactions, and to determine how the two differ. For the analysis, we collected 2,096 full-text articles from Naver and 1,840 posts from Naver Cafe, and conducted word frequency analysis, word cloud visualization, and LDA topic modeling after data preprocessing and refinement. The analysis showed that social media carried everyday topics such as 'family members' purchases', 'the postponement of school opening', 'mask usage', and 'mask purchase', reflecting the characteristics of personal media. Social media was found to serve as a channel for exchanging personal opinions, emotions, and information rather than for delivering information. By applying the research method used in this study, social issues can be publicized through the analysis of various media and used as a reference in the process by which a policy agenda develops into a government agenda.
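
The LDA step of this kind of agenda analysis can be sketched as follows; the abstract does not name its LDA implementation, so scikit-learn is substituted here, and the documents and topic count are illustrative assumptions.

    # Minimal sketch: LDA topic modeling over preprocessed posts.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = ["mask purchase pharmacy queue",        # placeholder posts
            "school opening postponed again",
            "family member mask purchase allowed"]

    vec = CountVectorizer()
    X = vec.fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

    terms = vec.get_feature_names_out()
    for k, comp in enumerate(lda.components_):
        print(f"topic {k}:", [terms[i] for i in comp.argsort()[::-1][:4]])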

A Study on Data Cleansing Techniques for Word Cloud Analysis of Text Data (텍스트 데이터 워드클라우드 분석을 위한 데이터 정제기법에 관한 연구)

  • Lee, Won-Jo
    • The Journal of the Convergence on Culture Technology
    • /
    • v.7 no.4
    • /
    • pp.745-750
    • /
    • 2021
  • In big data visualization analysis of unstructured text data, the raw data is usually very large, and analysis techniques cannot be applied without cleansing it first. Therefore, from the collected raw data, unnecessary data is removed in a first, heuristic cleansing step, and stopwords are removed in a second, machine cleansing step. The vocabulary frequencies are then calculated and visualized using the word cloud technique, key issues are extracted and turned into information, and the results are analyzed. In this study, we propose a new stopword cleansing technique that uses an external stopword set (DB) in a Python word cloud, and we identify the problems and effectiveness of this technique through a practical case analysis. Based on the verification results, we present the practical utility of word cloud analysis that applies the proposed cleansing technique.
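
A minimal sketch of the two-stage cleansing described above is given below, with the external stopword set read from a file; the file name, regular expressions, and sample text are assumptions for illustration.

    # Minimal sketch: heuristic cleansing, then stopword cleansing
    # against an external stopword set (DB), then frequency counting.
    import re
    from collections import Counter

    raw = "Visit https://example.com!! 빅데이터 분석은 텍스트 정제가 필요하다"

    text = re.sub(r"https?://\S+", " ", raw)   # stage 1: heuristic cleansing
    text = re.sub(r"[^\w\s]", " ", text)

    with open("stopwords.txt", encoding="utf-8") as f:  # hypothetical file
        stopwords = {line.strip() for line in f}

    tokens = [t for t in text.split() if t not in stopwords]  # stage 2
    print(Counter(tokens).most_common(10))  # input to the word cloud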

A Study on the Perception of Metaverse Fashion Using Big Data Analysis

  • Hosun Lim
    • Fashion & Textile Research Journal
    • /
    • v.25 no.1
    • /
    • pp.72-81
    • /
    • 2023
  • As changes in social and economic paradigms accelerate and non-contact interaction has become the new normal due to the COVID-19 pandemic, metaverse services that build communities through online activity and virtual reality are spreading rapidly. This study analyzes the perception and trends of metaverse fashion using big data. TEXTOM was used to extract metaverse- and fashion-related words from Naver and Google and to analyze their frequency and importance. Additionally, structural equivalence analysis based on the derived main words was conducted to identify the perception and trends of metaverse fashion. The following results were obtained: First, term frequency (TF) analysis revealed that the most frequently appearing words were "metaverse," "fashion," "virtual," "brand," "platform," "digital," "world," "Zepeto," "company," and "game." In the TF-inverse document frequency (TF-IDF) analysis, "virtual" was the most important, followed by "brand," "platform," "Zepeto," "digital," "world," "industry," "game," "fashion show," and "industry." "Metaverse" and "fashion" were found to have high TF but low TF-IDF. Further, words such as "virtual," "brand," "platform," "Zepeto," and "digital" ranked higher by TF-IDF than by TF, indicating their high importance in the text. Second, convergence of iterated correlations (CONCOR) analysis using UCINET revealed four clusters, classified as "virtual world," "metaverse distribution platform," "fashion contents technology investment," and "metaverse fashion week." Fashion brands are hosting virtual fashion shows and stores on metaverse platforms where the virtual and real worlds coexist, and investment in developing metaverse-related technologies is under way.
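
The reported contrast between TF and TF-IDF ("metaverse" and "fashion" frequent everywhere, hence low TF-IDF) can be reproduced with a small sketch; TEXTOM was the study's tool, so scikit-learn and the toy documents below are substitutions.

    # Minimal sketch: words occurring in every document get minimal IDF,
    # so they rank high by TF but low by TF-IDF.
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = ["metaverse fashion virtual brand",
            "metaverse fashion platform Zepeto",
            "metaverse fashion digital world"]

    tfidf = TfidfVectorizer().fit(docs)
    for idf, term in sorted(zip(tfidf.idf_, tfidf.get_feature_names_out())):
        print(f"{term:10s} idf={idf:.3f}")  # 'metaverse'/'fashion' lowest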

Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.141-166
    • /
    • 2019
  • Recently, channels like social media and SNS create enormous amounts of data, and the portion represented as unstructured text has grown geometrically. Because it is difficult to read all of this text, it is important to access the data rapidly and grasp its key points. Driven by this need for efficient understanding, many studies on text summarization for handling enormous amounts of text data have been proposed. In particular, many summarization methods using machine learning and artificial intelligence algorithms, known as "automatic summarization," have recently been proposed to generate summaries objectively and effectively. However, almost all text summarization methods proposed to date construct the summary around the most frequent contents of the original documents. Such summaries struggle to contain low-weight subjects that are mentioned less often in the original text. If a summary includes only the major subjects, bias occurs and information is lost, making it hard to ascertain all the subjects the documents cover. To avoid this bias, one can summarize with a balance between the topics so that every subject can be ascertained, but an unbalanced distribution across subjects still remains. To retain the balance of subjects in the summary, it is necessary to consider the proportion of each subject in the original documents and also to allocate portions to the subjects equally, so that even sentences on minor subjects are sufficiently included. In this study, we propose a "subject-balanced" text summarization method that secures balance across all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summaries, we use two summary evaluation criteria, "completeness" and "succinctness." Completeness means that the summary should fully include the contents of the original documents, and succinctness means that the summary should contain minimal internal duplication. The proposed method has three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate the degree to which each term is related to each topic. From the derived weights, highly related terms for each topic can be identified, and the subjects of the documents can be found from topics composed of terms with similar meanings. A few terms that represent each subject well are then selected; we call these "seed terms." However, these terms are too few to explain each subject adequately, so sufficient terms similar to the seed terms are needed for a well-constructed subject dictionary. Word2Vec is used for this word expansion: after Word2Vec modeling, word vectors are created, and the similarity between all terms can be derived using cosine similarity. The higher the cosine similarity between two terms, the stronger their relationship is taken to be. Terms with high similarity to the seed terms of each subject are selected, and after filtering these expanded terms, the subject dictionary is finally constructed. The next phase allocates a subject to every sentence in the original documents. To grasp the contents of all sentences, a frequency analysis is first conducted with the specific terms composing the subject dictionaries. The TF-IDF weight of each subject is then calculated, which indicates how much each sentence explains each subject. However, TF-IDF weights can grow without bound, so the TF-IDF weights of the subjects in each sentence are normalized to values between 0 and 1. Each sentence is then assigned the subject with the maximum TF-IDF weight, which finally yields a sentence group for each subject. The last phase is summary generation. Sen2Vec is used to measure the similarity between subject sentences, and a similarity matrix is formed. By repeatedly selecting sentences, it is possible to generate a summary that fully includes the contents of the original documents while minimizing internal duplication. To evaluate the proposed method, 50,000 TripAdvisor reviews were used to construct the subject dictionaries and 23,087 reviews were used to generate summaries. A comparison between the summary from the proposed method and a frequency-based summary verified that the proposed method's summary better retains the balance of all the subjects the documents originally contain.
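
The dictionary-expansion step, where seed terms are broadened via Word2Vec cosine similarity, can be sketched as follows; the toy review corpus, seed terms, and gensim hyperparameters are assumptions, not the paper's settings.

    # Minimal sketch: expanding per-subject seed terms with Word2Vec.
    from gensim.models import Word2Vec

    sentences = [["room", "clean", "bed", "quiet"],     # tokenized reviews
                 ["staff", "friendly", "helpful"],      # (toy data)
                 ["room", "bed", "comfortable"]]

    model = Word2Vec(sentences, vector_size=50, min_count=1, seed=0)

    seeds = {"room": ["room", "bed"], "service": ["staff"]}
    subject_dict = {
        subj: terms + [w for w, _ in model.wv.most_similar(terms, topn=2)]
        for subj, terms in seeds.items()
    }
    print(subject_dict)  # seed terms plus their nearest neighbours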

Step-by-step Approach for Effective Korean Unknown Word Recognition (한국어 미등록어 인식을 위한 단계별 접근방법)

  • Park, So-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2009.05a
    • /
    • pp.369-372
    • /
    • 2009
  • Recently, newspapers as well as web documents include many newly coined words, such as "mid" (meaning "American drama," since "mi" means "America" in Korean and "d" refers to the "d" of drama) and "anseup" (meaning "pathetic," since "an" and "seup" literally mean eyeballs and moist, respectively). However, these words degrade the performance of Korean text analysis systems. In order to recognize such unknown words automatically, this paper proposes a step-by-step approach consisting of an unknown noun recognition phase based on full-text analysis, an unknown verb recognition phase based on web document frequency, and an unknown noun recognition phase based on web document frequency. The approach includes the full-text-analysis phase to accurately recognize unknown words that occur repeatedly in a document, and the two web-document-frequency phases to broadly recognize unknown words that occur only once in a document. In addition, the model separates unknown noun recognition from unknown verb recognition in order to recognize various kinds of unknown words. Experimental results show that the proposed approach improves precision by 1.01% and recall by 8.50% compared with a previous approach.
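
The full-text-analysis phase can be illustrated with a toy sketch: strings the dictionary cannot analyze are collected, recurring ones are accepted as unknown-word candidates, and singletons are deferred to the web-frequency phases; the dictionary and tokens below are placeholder assumptions.

    # Toy sketch of phase 1: recurring out-of-dictionary strings become
    # unknown-word candidates; one-off strings go to the web-frequency
    # phases (not implemented here).
    from collections import Counter

    known = {"드라마", "미국", "보다"}                 # toy dictionary
    tokens = ["미드", "보다", "미드", "안습"]          # tokenized document

    counts = Counter(t for t in tokens if t not in known)
    recurring = [t for t, c in counts.items() if c >= 2]   # phase 1
    singletons = [t for t, c in counts.items() if c == 1]  # phases 2-3
    print(recurring, singletons)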


A Comparative Study of a New Approach to Keyword Analysis: Focusing on NBC (키워드 분석에 대한 최신 접근법 비교 연구: 성경 코퍼스를 중심으로)

  • Ha, Myoungho
    • Journal of Digital Convergence
    • /
    • v.19 no.7
    • /
    • pp.33-39
    • /
    • 2021
  • This paper analyzes the lexical properties of keyword lists extracted from the NLT Old Testament Corpus (NOTC), the NLT New Testament Corpus (NNTC), and The NLT Bible Corpus (NBC), and shows that text dispersion keyness is more effective than corpus frequency keyness. For this purpose, NOTC, comprising around 570,000 running words, and NNTC, comprising about 200,000, were compiled after downloading the files from the NLT website of Bible Hub. Scott's (2020) WordSmith 8.0 was used to extract keyword lists by comparing a target corpus with a reference corpus. The results demonstrate that text dispersion keyness captures the lexical properties of keyword lists better than corpus frequency keyness, and that the former is the superior measure for generating optimal keyword lists that fully meet content generalizability and content distinctiveness.
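
The two measures being compared can be sketched with the standard log-likelihood keyness statistic: corpus frequency keyness feeds raw token counts into it, while text dispersion keyness feeds the number of texts containing the word. The use of log-likelihood and all the counts below are illustrative assumptions, not figures or settings from the paper.

    # Minimal sketch: log-likelihood keyness under the two count schemes.
    import math

    def log_likelihood(a, b, c, d):
        """a, b: word counts in target/reference; c, d: their totals."""
        e1 = c * (a + b) / (c + d)
        e2 = d * (a + b) / (c + d)
        ll = (a * math.log(a / e1) if a else 0) + \
             (b * math.log(b / e2) if b else 0)
        return 2 * ll

    # corpus frequency keyness: raw counts over running words
    print(log_likelihood(120, 40, 200_000, 570_000))
    # text dispersion keyness: counts of texts containing the word
    print(log_likelihood(25, 10, 260, 929))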

A Performance Analysis Based on Hadoop Application's Characteristics in Cloud Computing (클라우드 컴퓨팅에서 Hadoop 애플리케이션 특성에 따른 성능 분석)

  • Keum, Tae-Hoon;Lee, Won-Joo;Jeon, Chang-Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.5
    • /
    • pp.49-56
    • /
    • 2010
  • In this paper, we implement a Hadoop-based cluster for cloud computing and evaluate its performance according to application characteristics by executing the RandomTextWriter, WordCount, and Pi applications. RandomTextWriter creates a given amount of random words and stores them in the HDFS (Hadoop Distributed File System). WordCount reads an input file and determines word frequencies per block unit. The Pi application estimates the value of π using the Monte Carlo method. In the experiments, we investigate the effect of the data block size and the number of replications on the execution time of the applications. The results confirm that the execution time of RandomTextWriter was proportional to the number of replications, whereas the execution times of WordCount and Pi were not affected by the number of replications. Moreover, the execution time of WordCount was optimal when the block size was 64~256MB. These results show that the performance of a cloud computing system can be enhanced by using a scheduling scheme that considers application characteristics.
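
WordCount is the canonical MapReduce example; a Hadoop Streaming version in Python is sketched below to illustrate what the benchmarked job computes. The paper presumably ran the stock Hadoop examples, so this streaming script and the invocation in the comment are assumptions.

    # Minimal sketch: WordCount as a Hadoop Streaming job. Run roughly as:
    #   hadoop jar hadoop-streaming.jar -input in -output out \
    #       -mapper "python wc.py map" -reducer "python wc.py reduce"
    import sys

    def mapper():
        for line in sys.stdin:
            for word in line.split():
                print(f"{word}\t1")          # emit (word, 1)

    def reducer():                           # input arrives sorted by key
        current, count = None, 0
        for line in sys.stdin:
            word, n = line.rsplit("\t", 1)
            if word != current and current is not None:
                print(f"{current}\t{count}")
                count = 0
            current = word
            count += int(n)
        if current is not None:
            print(f"{current}\t{count}")

    if __name__ == "__main__":
        mapper() if sys.argv[1] == "map" else reducer()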