• Title/Summary/Keyword: Text data


The Patterns of Students' Conceptual Changes on Force by Age (나이에 따른 학생들의 힘에 관한 개념 변화 특성)

  • Kim, Yeoun-Soo;Kwon, Jae-Sool
    • Journal of The Korean Association For Science Education
    • /
    • v.20 no.2
    • /
    • pp.221-233
    • /
    • 2000
  • Many investigators have reported difficulties in changing high school students' misconceptions about mechanics. As one possible solution, some researchers have suggested teaching mechanics at a younger age, when conceptual change may still be possible, because students become less willing to change their ideas as they get older. The purpose of this study was to compare the patterns of students' conceptual changes about force by age, to find out whether older students were less ready to change their conceptions than younger students. Individual interviews were carried out with 35 middle school students (average age 13) and 50 students (average age 17) at a high school near the middle school. Students who held the misconception that "motion implies force" (the impetus conception) were asked to read a student-centered refutational text presenting anomalous data. In the immediate and delayed posttests, the students' responses were analyzed to find the patterns of conceptual change by age. First, most students held the impetus conception. Some of the students aged 13 understood force in terms related to everyday experience, while the students aged 17 understood force in scientific terms. Second, there was no evidence that conceptual change is more difficult for the students aged 17 than for those aged 13. Third, the students aged 13 showed diverse responses to the refutational text (plain acceptance, critical acceptance, plain rejection, critical rejection), while the students aged 17 showed only critical acceptance or critical rejection. A month later, the students who had shown plain acceptance had regressed to unscientific conceptions, while those who had shown critical acceptance maintained scientific conceptions. In short, we found no evidence that conceptual change is more difficult for older students.
These results call for deeper investigation into the nature of the loss of plasticity in comparison with other important variables.


Derivation of Green Infrastructure Planning Factors for Reducing Particulate Matter - Using Text Mining - (미세먼지 저감을 위한 그린인프라 계획요소 도출 - 텍스트 마이닝을 활용하여 -)

  • Seok, Youngsun;Song, Kihwan;Han, Hyojoo;Lee, Junga
    • Journal of the Korean Institute of Landscape Architecture
    • /
    • v.49 no.5
    • /
    • pp.79-96
    • /
    • 2021
  • Green infrastructure planning represents a landscape planning measure to reduce particulate matter. This study aimed to derive factors that can be used in planning green infrastructure for particulate matter reduction, using text mining techniques. A range of analyses were carried out focusing on keywords such as 'particulate matter reduction plan' and 'green infrastructure planning elements': Term Frequency-Inverse Document Frequency (TF-IDF) analysis, centrality analysis, related-word analysis, and topic modeling. These analyses were carried out via text mining over a collection of related prior research, policy reports, and laws. First, the TF-IDF results were used to classify the major keywords relating to particulate matter and green infrastructure into three groups: (1) environmental issues (e.g., particulate matter, environment, carbon, and atmosphere), (2) target spaces (e.g., urban areas, parks, and local green space), and (3) application methods (e.g., analysis, planning, evaluation, development, ecological aspects, policy management, technology, and resilience). Second, the centrality analysis results were found to be similar to those of TF-IDF; it was confirmed that the central connectors among the major keywords were 'Green New Deal' and 'vacant land'. The related-word analysis verified that planning green infrastructure for particulate matter reduction requires planning forests and ventilation corridors; additionally, moisture must be considered for microclimate control. It was also confirmed that utilizing vacant space, establishing mixed forests, introducing particulate matter reduction technology, and understanding the relevant systems may be important for effective green infrastructure planning. Topic analysis was used to classify the planning elements of green infrastructure by ecological, technological, and social function.
The planning elements of ecological function were classified into morphological aspects (e.g., urban forest, green space, and wall greening) and functional aspects (e.g., climate control, carbon storage and absorption, provision of wildlife habitats, and biodiversity). The planning elements of technological function were classified into themes including the disaster-prevention functions of green infrastructure, buffer effects, stormwater management, water purification, and energy reduction. The planning elements of social function were classified into themes such as community function, improving user health, and scenery improvement. These results suggest that green infrastructure planning for particulate matter reduction requires approaches related to key concepts such as resilience and sustainability. In particular, green infrastructure planning elements need to be applied in order to reduce exposure to particulate matter.
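The TF-IDF weighting behind the keyword classification above can be sketched in a few lines of standard Python. The toy corpus below is purely illustrative (it is not the study's actual document set), and the formula is the common raw-count TF with a log IDF:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights for a list of tokenized documents.

    TF is the raw term frequency within a document; IDF is
    log(N / df), where df is the number of documents containing
    the term. Returns one {term: weight} dict per document.
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # count each term once per document
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return weights

# Invented mini-corpus standing in for documents about particulate
# matter and green infrastructure
docs = [
    "particulate matter urban park green".split(),
    "green infrastructure urban forest planning".split(),
    "particulate matter reduction forest planning".split(),
]
w = tf_idf(docs)
# 'urban' appears in 2 of 3 documents, so its weight in doc 0 is log(3/2)
```

Terms that occur in every document get weight zero, which is exactly why TF-IDF surfaces distinctive keywords rather than ubiquitous ones.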

Analysis of Literatures Related to Crop Growth and Yield of Onion and Garlic Using Text-mining Approaches for Develop Productivity Prediction Models (양파·마늘 생산성 예측 모델 개발을 위한 텍스트마이닝 기법 활용 생육 및 수량 관련 문헌 분석)

  • Kim, Jin-Hee;Kim, Dae-Jun;Seo, Bo-Hun;Kim, Kwang Soo
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.23 no.4
    • /
    • pp.374-390
    • /
    • 2021
  • Growth and yield of field vegetable crops are affected by climate conditions, which causes relatively large fluctuations in crop production and consumer prices across years. A yield prediction system for these crops would support decision-making on policies to manage supply and demand. The objectives of this study were to compile the literature related to onion and garlic and to perform text-mining analysis, which would shed light on the development of crop models for these major field vegetable crops in Korea. Literature on crop growth and yield was collected from the databases operated by the Research Information Sharing Service, the National Science & Technology Information Service, and SCOPUS. Keywords were chosen to retrieve research outcomes related to crop growth and yield of onion and garlic. The collected literature was analyzed using text mining approaches including word clouds and semantic networks. It was found that the number of publications for these field vegetable crops was considerably smaller than for rice. Still, specific patterns across previous research outcomes were identified using the text mining methods. For example, climate change and remote sensing were major topics of interest for growth and yield of onion and garlic. The impact of temperature and irrigation on crop growth was also assessed in previous studies. It was also found that yield of onion and garlic is affected by both environmental and crop management conditions, including sowing time, variety, seed treatment method, irrigation interval, fertilization amount, and fertilizer composition. Among meteorological conditions, temperature, precipitation, solar radiation, and humidity were found to be the major factors in the literature. These findings indicate that crop models need to take into account both environmental conditions and crop management practices for reliable prediction of crop yield.
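The semantic networks mentioned above are typically built from keyword co-occurrence counts. A minimal stdlib sketch, using invented token lists rather than the study's actual literature data:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_network(sentences, min_count=1):
    """Build an undirected keyword co-occurrence network.

    Each sentence is a token list; every unordered pair of distinct
    terms appearing in the same sentence adds one co-occurrence.
    Returns {(a, b): count} with a < b, keeping pairs seen at
    least min_count times.
    """
    edges = Counter()
    for tokens in sentences:
        for a, b in combinations(sorted(set(tokens)), 2):
            edges[(a, b)] += 1
    return {pair: c for pair, c in edges.items() if c >= min_count}

# Invented example sentences about onion/garlic growth factors
sentences = [
    "onion yield temperature irrigation".split(),
    "garlic yield temperature fertilization".split(),
    "onion garlic yield climate".split(),
]
net = cooccurrence_network(sentences)
# ('temperature', 'yield') co-occurs in two sentences
```

Edges with high counts become the thick links of the semantic network; thresholding with `min_count` prunes incidental pairs.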

Exploring the Trend of Korean Creative Dance by Analyzing Research Topics : Application of Text Mining (연구주제 분석을 통한 한국창작무용 경향 탐색 : 텍스트 마이닝의 적용)

  • Yoo, Ji-Young;Kim, Woo-Kyung
    • Journal of Korea Entertainment Industry Association
    • /
    • v.14 no.6
    • /
    • pp.53-60
    • /
    • 2020
  • This study is based on the assumption that trends in phenomena and trends in research are contextually consistent. Its purpose is therefore to explore trends in dance through a topic analysis of Korean creative dance research using text mining. To this end, 1,291 words from the titles of 616 journal articles collected from a paper-search website were analyzed. The collection, refinement, and analysis of the data were all performed in R 3.6.0. According to the study, first, keywords representing their eras were frequently used before the 2000s, but Korean creative dance research was also characterized in terms of education and physical training. Second, the frequency of keywords related to dance troupes' performances was high after the 2000s, but it was confirmed that Choi Seung-hee still held an important position in the study of Korean creative dance. Third, an analysis of the overall research subjects showed that research on 'the art of Choi Seung-hee in the modern era' accounted for the highest proportion. Fourth, the hot topics rising as of 2000 were 'the performance activities of the National Dance Company' and 'the choreographic expression and utilization of traditional dance'. Since the recent trend of the National Dance Company's performances advocates 'modernization based on tradition', it was confirmed that Korean creative dance since the 2000s has focused on the use of traditional dance motifs. Fifth, the cold topic falling as of 2000 was research on 'dance expressions by age'. It was judged that research interest decreased due to the tendency to mix various dance styles after the establishment of Korean creative dance as a genre.
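Hot and cold topics of the kind described above are usually identified from the sign of a trend line fitted to each topic's weight over time. A minimal least-squares sketch (the yearly weights below are invented, not the study's data):

```python
def topic_trend(yearly_weights):
    """Classify a topic as 'hot' (rising) or 'cold' (falling) from the
    slope of a least-squares line fitted to its yearly topic weights."""
    n = len(yearly_weights)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(yearly_weights) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, yearly_weights)) \
            / sum((x - mx) ** 2 for x in xs)
    return "hot" if slope > 0 else "cold"

# A topic whose share of the literature grows year over year is 'hot'
print(topic_trend([0.10, 0.18, 0.27]))
```

In practice the weights come from a topic model (e.g., per-year average topic proportions), but the hot/cold decision itself is just this slope test.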

A study on the classification of research topics based on COVID-19 academic research using Topic modeling (토픽모델링을 활용한 COVID-19 학술 연구 기반 연구 주제 분류에 관한 연구)

  • Yoo, So-yeon;Lim, Gyoo-gun
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.155-174
    • /
    • 2022
  • From January 2020 to October 2021, more than 500,000 academic studies related to COVID-19 (the respiratory disease caused by the SARS-CoV-2 coronavirus) were published. The rapid increase in the number of papers related to COVID-19 places time and technical constraints on healthcare professionals and policy makers who need to find important research quickly. Therefore, in this study, we propose a method of extracting useful information from the text of an extensive literature using the LDA and Word2vec algorithms. Papers related to the keywords of interest were extracted from the COVID-19 literature, and detailed topics were identified. The data came from the CORD-19 data set on Kaggle, a free academic resource prepared by major research groups and the White House to respond to the COVID-19 pandemic and updated weekly. The research method has two main parts. First, 41,062 articles were obtained through data filtering and pre-processing of the abstracts of 47,110 academic papers with full text. The number of COVID-19 publications by year was analyzed through exploratory data analysis in Python, and the top 10 most actively publishing journals were identified. The LDA and Word2vec algorithms were used to derive research topics related to COVID-19, and after analyzing related words, similarity was measured. Second, papers containing 'vaccine' and 'treatment' were extracted from among the topics derived from all papers: a total of 4,555 papers related to 'vaccine' and 5,971 papers related to 'treatment'. For each of these collections, detailed topics were analyzed using the LDA and Word2vec algorithms, and clustering with PCA dimension reduction was applied to visualize groups of papers with similar themes using the t-SNE algorithm. A noteworthy result is that topics which did not emerge when topic modeling was applied to all COVID-19 papers did emerge when topic modeling was applied to each research subset separately. For example, topic modeling of the 'vaccine' papers extracted a new topic, Topic 05, 'neutralizing antibodies'. A neutralizing antibody protects cells from infection when a virus enters the body and is said to play an important role in the production of therapeutic agents and in vaccine development. Similarly, topic extraction from the 'treatment' papers discovered a new topic, Topic 05, 'cytokine'. A cytokine storm occurs when the body's immune cells attack normal cells instead of defending against an attack. Hidden topics that could not be found across the entire corpus were thus uncovered by filtering papers by keyword and performing topic modeling on each subset. In this study, we proposed a method of extracting topics from a large body of literature using the LDA algorithm and extracting similar words using the skip-gram variant of Word2vec, which predicts context words from a center word. Combining the LDA and Word2vec models aims at better performance by relating documents to LDA topics and to Word2vec word similarities. In addition, a clustering method using PCA dimension reduction and the t-SNE technique was presented for intuitively grouping documents with similar themes into a structured organization. In a situation where researchers' efforts to overcome COVID-19 cannot keep up with the rapid publication of related papers, this approach should save healthcare professionals and policy makers precious time and effort and help them gain new insights rapidly. It is also expected to serve as basic data for researchers exploring new research directions.
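The skip-gram side of Word2vec trains on (center, context) pairs drawn from a sliding window. A minimal stdlib sketch of how those pairs are generated (the sentence is an invented example; real Word2vec then learns embeddings by predicting the context word from the center word over millions of such pairs):

```python
def skipgram_pairs(tokens, window=2):
    """Generate the (center, context) training pairs used by
    Word2vec's skip-gram model: each word predicts the words
    within `window` positions of it."""
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

# Invented sentence echoing the 'neutralizing antibodies' topic
tokens = "vaccine induces neutralizing antibodies".split()
pairs = skipgram_pairs(tokens, window=1)
# 4 tokens with window 1 yield 6 pairs, e.g. ('vaccine', 'induces')
```

Larger windows capture broader topical similarity, smaller windows more syntactic similarity, which is why window size is a key hyperparameter when measuring related words.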

A Study on Differences of Contents and Tones of Arguments among Newspapers Using Text Mining Analysis (텍스트 마이닝을 활용한 신문사에 따른 내용 및 논조 차이점 분석)

    • Kam, Miah;Song, Min
      • Journal of Intelligence and Information Systems
      • /
      • v.18 no.3
      • /
      • pp.53-77
      • /
      • 2012
    • This study analyses the differences in contents and tones of argument among three major Korean newspapers: the Kyunghyang Shinmun, the HanKyoreh, and the Dong-A Ilbo. It is commonly accepted that newspapers in Korea explicitly deliver their own tone of argument when they cover sensitive issues and topics. It could be problematic if readers read the news without being aware of a paper's tone of argument, because contents and tone can easily affect readers. It is therefore desirable to have a tool that can inform readers of the tone of argument a newspaper takes. This study presents the results of clustering and classification techniques as part of a text mining analysis. We focus on six main sections, Culture, Politics, International, Editorial-opinion, Eco-business, and National issues, and attempt to identify differences and similarities among the newspapers. The basic unit of analysis is a paragraph of a news article. This study uses a keyword-network analysis tool and visualizes relationships among keywords to make the differences easier to see. Newspaper articles were gathered from KINDS, the Korean integrated news database system, which preserves articles of the Kyunghyang Shinmun, the HanKyoreh, and the Dong-A Ilbo and is open to the public. About 3,030 articles from 2008 to 2012 were used. The International, National issues, and Politics sections were gathered around specific issues: the International section with the keyword 'Nuclear weapon of North Korea', the National issues section with the keyword '4-major-river', and the Politics section with the keyword 'Tonghap-Jinbo Dang'. All articles from April 2012 to May 2012 in the Eco-business, Culture, and Editorial-opinion sections were also collected.
All of the collected data were edited into paragraphs. We removed stop words using the Lucene Korean module. We calculated keyword co-occurrence counts from the paired co-occurrence list of keywords in each paragraph and built a co-occurrence matrix from the list. Once the co-occurrence matrix was built, we used the cosine coefficient matrix as input for PFNet (Pathfinder Network). To analyze the three newspapers and find the significant keywords in each, we examined the 10 highest-frequency keywords and the keyword networks of the 20 highest-frequency keywords, to closely examine their relationships and draw a detailed network map. We used NodeXL software to visualize the PFNets. After drawing all the networks, we compared the results with the classification results. Classification was first performed to identify how each newspaper's tone of argument differs from the others'. To analyze tones of argument, all paragraphs were divided into two tones, positive and negative. To classify the tones of all collected paragraphs and articles, a supervised learning technique was used: the naïve Bayes classifier provided in the MALLET package classified all paragraphs in the articles. After classification, precision, recall, and F-value were used to evaluate the results. Based on the results of this study, three sections, Culture, Eco-business, and Politics, showed differences in contents and tones of argument among the three newspapers. In addition, for National issues, tones of argument on the 4-major-rivers project differed from one another. The three newspapers appear to have their own specific tones of argument in those sections. The keyword networks also showed different shapes from one another for the same period in the same section.
This means that the frequently appearing keywords differ across papers and that their contents are composed of different keywords. The positive-negative classification also showed the possibility of classifying newspapers' tones of argument relative to one another. These results indicate that the approach in this study is a promising basis for a new tool to identify the different tones of argument of newspapers.
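The precision, recall, and F-value used to evaluate the tone classifier above can be computed as follows. The gold and predicted labels here are invented, not the study's data:

```python
def precision_recall_f(gold, pred, positive="positive"):
    """Precision, recall, and F1 for one class, as used to evaluate
    a positive/negative tone classifier."""
    tp = sum(1 for g, p in zip(gold, pred) if g == p == positive)
    fp = sum(1 for g, p in zip(gold, pred) if g != positive and p == positive)
    fn = sum(1 for g, p in zip(gold, pred) if g == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = ["positive", "positive", "negative", "negative"]
pred = ["positive", "negative", "negative", "positive"]
p, r, f = precision_recall_f(gold, pred)
# tp=1, fp=1, fn=1, so precision = recall = F1 = 0.5
```

F-value (the harmonic mean of precision and recall) is the usual single-number summary when the two error types matter equally.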

Twitter Issue Tracking System by Topic Modeling Techniques (토픽 모델링을 이용한 트위터 이슈 트래킹 시스템)

    • Bae, Jung-Hwan;Han, Nam-Gi;Song, Min
      • Journal of Intelligence and Information Systems
      • /
      • v.20 no.2
      • /
      • pp.109-122
      • /
      • 2014
    • People nowadays create a tremendous amount of data on Social Network Services (SNS). In particular, the incorporation of SNS into mobile devices has resulted in massive amounts of data generation, greatly influencing society. This is an unmatched phenomenon in history, and we now live in the Age of Big Data. SNS data qualifies as Big Data in that it satisfies the conditions of volume (the amount of data), velocity (data input and output speed), and variety (the variety of data types). If one can discover the trend of an issue in SNS Big Data, that information can be used as an important new source of value, because it covers the whole of society. In this study, a Twitter Issue Tracking System (TITS) is designed and built to meet the need for analyzing SNS Big Data. TITS extracts issues from Twitter texts and visualizes them on the web. The proposed system provides four functions: (1) provide the topic keyword set corresponding to the daily ranking; (2) visualize the daily time-series graph of a topic over a month; (3) convey the importance of a topic through a treemap based on a score system and frequency; (4) visualize the daily time-series graph of keywords via keyword search. The present study analyzes the Big Data generated by SNS in real time. SNS Big Data analysis requires various natural language processing techniques, including stop-word removal and noun extraction, for processing various unrefined forms of unstructured data. In addition, such analysis requires the latest big data technology to process a large amount of real-time data rapidly, such as the Hadoop distributed system or NoSQL, an alternative to relational databases. We built TITS on Hadoop to optimize the processing of big data, because Hadoop is designed to scale from single-node computing to thousands of machines.
Furthermore, we use MongoDB, a NoSQL database. MongoDB is an open-source, document-oriented database that provides high performance, high availability, and automatic scaling. Unlike existing relational databases, MongoDB has no schemas or tables; its most important goals are data accessibility and data-processing performance. In the Age of Big Data, visualization is attractive to the Big Data community because it helps analysts examine data easily and clearly. TITS therefore uses the d3.js library as its visualization tool. This library is designed for creating Data-Driven Documents that bind the document object model (DOM) to data; interaction with the data is easy, which is useful for managing a real-time data stream with smooth animation. In addition, TITS uses Bootstrap, with its pre-configured style sheets and JavaScript plug-ins, to build the web system. The TITS Graphical User Interface (GUI) is designed using these libraries and is capable of detecting issues on Twitter in an easy and intuitive manner. The proposed work demonstrates the quality of our issue detection techniques by matching detected issues with corresponding online news articles. The contributions of the present study are threefold. First, we suggest an alternative approach to real-time big data analysis, which has become an extremely important issue. Second, we apply a topic modeling technique used in various research areas, including Library and Information Science (LIS), and confirm its utility for storytelling and time-series analysis. Third, we develop a web-based system and make it available for the real-time discovery of topics. The present study conducted experiments with nearly 150 million tweets in Korea during March 2013.

UX Methodology Study by Data Analysis: Focusing on deriving persona through customer segment classification (데이터 분석을 통한 UX 방법론 연구 고객 세그먼트 분류를 통한 페르소나 도출을 중심으로)

    • Lee, Seul-Yi;Park, Do-Hyung
      • Journal of Intelligence and Information Systems
      • /
      • v.27 no.1
      • /
      • pp.151-176
      • /
      • 2021
    • As the information technology industry develops, various kinds of data are created, and processing them for use in industry is now essential. Analyzing and utilizing the digital data collected online and offline is a necessary process for providing an appropriate customer experience. To create new businesses, products, and services, it is essential to use customer data collected in various ways to deeply understand potential customers' needs and to analyze behavior patterns in order to capture hidden signals of desire. However, research using data analysis and research using UX methodology, which should be conducted in parallel for effective service development, are conducted separately, and examples of their combined use in industry are lacking. In this work, we construct a single process by applying data analysis methods together with UX methodologies. This study is important in that it applies methodologies actively used in practice and is therefore highly likely to be adopted. We conducted a survey to identify and cluster the associations between factors in order to establish customer classifications and target customers. The research method is as follows. First, we conduct factor and regression analyses to determine the associations between factors in a happiness survey. Respondents are grouped according to the survey results, identifying the relationships among 34 questions covering psychological stability, family life, relational satisfaction, health, economic satisfaction, work satisfaction, daily life satisfaction, and residential environment satisfaction. Second, we classify clusters based on the factors affecting happiness and extract the optimal number of clusters. Based on the results, we cross-analyzed the characteristics of each cluster. Third, for service definition, analysis was conducted by correlating with keywords related to happiness.
We leveraged keyword analysis from the Thumb Trend service to derive ideas based on the interest in and associations of each keyword. We also collected approximately 11,000 news articles based on the top three keywords most highly related to happiness, derived issues between keywords through text mining analysis in SAS, and utilized them in defining services after ideas were conceived. Fourth, based on the characteristics identified through data analysis, we selected segmentation and targeting appropriate for service discovery. To this end, the factor characteristics were grouped into four groups, profiles were drawn up, and the main target customers were selected. Fifth, based on the characteristics of the main target customers, interviewees were selected and in-depth interviews were conducted to discover the causes of happiness, the causes of unhappiness, and needs for services. Sixth, we derived customer behavior patterns based on the segment results and the detailed interviews, and specified the objectives associated with their characteristics. Seventh, a typical persona based on qualitative surveys and a persona based on data were produced, and the characteristics, pros, and cons of each were analyzed by comparing the two. Existing market segmentation classifies customers based on purchasing factors, while UX methodology measures users' behavior variables to establish criteria and redefine user classification. Applying these segment classification methods to the process of producing user classifications and personas in UX methodology can yield more accurate customer classification schemes. The significance of this study is twofold. First, the idea of using data to create a variety of services was linked to the UX methodology used to plan IT services, a currently hot topic. Second, we further enhance user classification by applying segment analysis methods that are not yet widely used in UX methodologies.
To provide a consistent experience in creating a single service, from the large scale down to the small, it is necessary to define customers with common goals. To this end, personas must be derived and various stakeholders persuaded. Under these circumstances, designing a consistent experience from beginning to end through fast and concrete user descriptions would be a very effective way to produce a successful service.
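The cluster-then-profile step described above can be illustrated with a minimal k-means in standard Python. The study's actual clustering method and survey data differ; the points below are invented two-dimensional satisfaction scores, and the fixed initial centers keep the example deterministic:

```python
def kmeans(points, centers, iters=10):
    """A minimal k-means: assign each point to its nearest center,
    then move each center to the mean of its assigned points.
    `points` and `centers` are lists of (x, y) tuples."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            idx = min(range(len(centers)),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[idx].append(p)
        centers = [
            tuple(sum(c) / len(c) for c in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Two invented respondent groups: low vs high satisfaction scores
points = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
centers, clusters = kmeans(points, centers=[(0, 0), (10, 10)])
# Each cluster's members are then profiled into a candidate persona
```

In practice the "optimal number of clusters" mentioned in the abstract is chosen by criteria such as the elbow method or silhouette scores before profiling each cluster into a persona.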

Utilizing Unlabeled Documents in Automatic Classification with Inter-document Similarities (문헌간 유사도를 이용한 자동분류에서 미분류 문헌의 활용에 관한 연구)

    • Kim, Pan-Jun;Lee, Jae-Yun
      • Journal of the Korean Society for information Management
      • /
      • v.24 no.1 s.63
      • /
      • pp.251-271
      • /
      • 2007
    • This paper studies the problem of classifying documents with labeled and unlabeled training data, especially with regard to using document similarity features. The problem of using unlabeled data is practically important because in many information systems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. A general semi-supervised learning algorithm has two steps. First, it trains a classifier using the available labeled documents and classifies the unlabeled documents. Then, it trains a new classifier using all the training documents, whether labeled manually or automatically. We suggest two types of semi-supervised learning algorithm with regard to document similarity features. One is one-step semi-supervised learning, which uses unlabeled documents only to generate document similarity features. The other is two-step semi-supervised learning, which uses unlabeled documents both as learning examples and for similarity features. Experimental results obtained using support vector machines and a naive Bayes classifier show that with a small labeled set and a large unlabeled set we can achieve better performance than supervised learning that uses labeled data only. Considering the efficiency of a classifier system, the one-step semi-supervised learning algorithm suggested in this study could be a good solution for improving classification performance with unlabeled documents.
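The two-step procedure described above (train on labeled data, auto-label the unlabeled data, retrain on the union) can be sketched with a nearest-centroid classifier standing in for the paper's SVM and naive Bayes classifiers. The points and class labels below are invented:

```python
def centroid(points):
    """Mean point of a list of equal-length tuples."""
    return tuple(sum(c) / len(c) for c in zip(*points))

def nearest(p, cents):
    """Class whose centroid is closest to p (squared Euclidean)."""
    return min(cents, key=lambda k: sum((a - b) ** 2 for a, b in zip(p, cents[k])))

def self_train(labeled, unlabeled):
    """Two-step semi-supervised learning sketch: fit centroids on the
    labeled points, auto-label the unlabeled points, then refit on
    the union of manually and automatically labeled data."""
    cents = {c: centroid(ps) for c, ps in labeled.items()}
    auto = {}
    for p in unlabeled:
        auto.setdefault(nearest(p, cents), []).append(p)
    merged = {c: labeled[c] + auto.get(c, []) for c in labeled}
    return {c: centroid(ps) for c, ps in merged.items()}

labeled = {"A": [(0.0, 0.0)], "B": [(10.0, 10.0)]}
unlabeled = [(1.0, 1.0), (9.0, 9.0)]
cents = self_train(labeled, unlabeled)
# Class A's centroid moves from (0, 0) toward the auto-labeled point
```

The risk the paper's one-step variant avoids is visible here: if an unlabeled point is auto-labeled wrongly, retraining propagates that error into the final model.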

Development of the Artwork using Music Visualization based on Sentiment Analysis of Lyrics (가사 텍스트의 감성분석에 기반 한 음악 시각화 콘텐츠 개발)

    • Kim, Hye-Ran
      • The Journal of the Korea Contents Association
      • /
      • v.20 no.10
      • /
      • pp.89-99
      • /
      • 2020
    • In this study, we produced moving-image works through sentiment analysis of music. First, the Google natural language API was used for sentiment analysis of the lyrics; the results were then applied to image visualization rules. In prior engineering research, text-based sentiment analysis has been conducted to understand users' emotions and attitudes by analyzing comments and reviews on social media. In this study, such data was used as material for the creation of artworks so that it could serve aesthetic expression. From the machine's point of view, emotions are reduced to numbers, so normalization and standardization have their limits. We tried to overcome these limitations by linking the results of sentiment analysis of lyric data with rules governing the formative elements of visual art. This study aims to transform traditional art forms such as literature, music, painting, and dance into a new form of art based on the machine's viewpoint, reflecting the current era in which artificial intelligence even attempts to create artworks, the advanced mental products of human beings. It is also expected to be expanded into an educational platform that facilitates creative activity, psychological analysis, and communication for people with developmental disabilities who have difficulty expressing emotions.
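The link from sentiment scores to formative visual elements can be sketched as a simple score-to-color rule. The [-1, 1] score range matches what sentiment APIs such as Google's return, but the mapping itself is an invented illustration, not the work's actual visualization rules:

```python
def sentiment_to_color(score):
    """Map a sentiment score in [-1, 1] to an RGB triple:
    negative scores shade toward blue, positive toward red.
    (Illustrative rule only, not the artwork's real mapping.)"""
    t = (score + 1) / 2          # normalize to [0, 1]
    r = round(255 * t)
    b = round(255 * (1 - t))
    return (r, 0, b)

# A strongly positive lyric line renders as pure red
print(sentiment_to_color(1.0))
```

Real visualization rules would drive more than hue (e.g., motion, form, rhythm), but any such rule reduces to a function from the numeric sentiment output to formal parameters, as here.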


    (34141) Korea Institute of Science and Technology Information, 245, Daehak-ro, Yuseong-gu, Daejeon
    Copyright (C) KISTI. All Rights Reserved.