• Title/Summary/Keyword: Text Collection


A Study on Establishing a Market Entry Strategy for the Satellite Industry Using Future Signal Detection Techniques (미래신호 탐지 기법을 활용한 위성산업 시장의 진입 전략 수립 연구)

  • Sehyoung Kim;Jaehyeong Park;Hansol Lee;Juyoung Kang
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.249-265
    • /
    • 2023
  • Recently, the satellite industry has been shifting toward the privately led 'New Space' paradigm, a departure from the traditional government-led model. The space industry, widely regarded as a next-generation growth engine, still receives relatively little attention in Korea compared to the global market. The purpose of this study is therefore to explore future signals that can inform the market entry strategies of private companies in the domestic satellite industry. To this end, the study draws on future signal theory and the Keyword Portfolio Map method to analyze keyword potential in patent document data based on keyword growth rate and keyword occurrence frequency. In addition, news data was collected to categorize future signals as first symptoms or early information, which serves as an interpretive indicator of how the keywords reveal their actual potential outside of patent documents. The study describes the data collection and analysis process for exploring future signals and, through keyword map visualizations, traces how each keyword in the collected documents evolves from a weak signal into a strong signal. The research process contributes methodologically and expands the scope of existing research on future signals, and the results can inform new industry planning and research directions in the satellite industry.
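
  The abstract above keys keyword potential to two measurable quantities, occurrence frequency and growth rate. Below is a minimal sketch of that portfolio idea, not the authors' implementation: hypothetical per-year keyword counts are placed into quadrants by the medians of average frequency and average year-over-year growth, echoing the weak/strong-signal vocabulary of future signal theory.

    # Hypothetical per-year keyword counts, standing in for counts extracted
    # from patent documents; keyword names are illustrative only.
    from statistics import median

    counts = {
        "cubesat":        [12, 18, 27, 41],
        "ground station": [30, 31, 29, 33],
        "sar imaging":    [5, 7, 11, 19],
    }

    def avg_growth(series):
        # mean year-over-year growth rate of a count series
        return sum((b - a) / a for a, b in zip(series, series[1:])) / (len(series) - 1)

    freq = {k: sum(v) / len(v) for k, v in counts.items()}
    growth = {k: avg_growth(v) for k, v in counts.items()}
    f_med, g_med = median(freq.values()), median(growth.values())

    for k in counts:
        quadrant = ("strong signal" if freq[k] >= f_med and growth[k] >= g_med
                    else "weak signal" if growth[k] >= g_med
                    else "latent signal" if freq[k] < f_med
                    else "well-known signal")
        print(f"{k:15s} freq={freq[k]:5.1f} growth={growth[k]:+.2f} -> {quadrant}")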

Detecting Weak Signals for Carbon Neutrality Technology using Text Mining of Web News (탄소중립 기술의 미래신호 탐색연구: 국내 뉴스 기사 텍스트데이터를 중심으로)

  • Jisong Jeong;Seungkook Roh
    • Journal of Industrial Convergence
    • /
    • v.21 no.5
    • /
    • pp.1-13
    • /
    • 2023
  • Carbon neutrality is the concept of reducing greenhouse gases emitted by human activities and bringing net emissions to zero by removing what remains; it is also called "Net-Zero" or "carbon zero". Korea has declared a "2050 Carbon Neutrality" policy to cope with the climate change crisis, and various carbon reduction legislative processes are underway. Since carbon neutrality requires changes in industrial technology, it is important to prepare a system for carbon zero. This paper aims to understand the status and trends of global carbon neutrality technology. The Korean web platform "www.naver.com" was selected as the data collection scope, and Korean online articles related to carbon neutrality were collected. Carbon neutrality technology trends were analyzed using future signal methodology and the Word2Vec algorithm, a neural-network-based word embedding technique. The results showed that technological advancement is required in the steel and petrochemical sectors, which are carbon-intensive industries, while investment and technological advancement in the electric vehicle sector are on the rise. Government support for carbon neutrality and the creation of a global technology infrastructure appear necessary; cultivating human resources is urgent, and the need to prepare support policies for carbon neutrality was confirmed.
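
  As a concrete illustration of the Word2Vec step mentioned above, here is a minimal sketch using the gensim library; the tokenized sentences are hypothetical stand-ins for morphologically analyzed Korean news articles, and the hyperparameters are not the study's.

    from gensim.models import Word2Vec

    # Hypothetical tokenized news sentences about carbon neutrality.
    sentences = [
        ["carbon", "neutrality", "steel", "hydrogen", "reduction"],
        ["electric", "vehicle", "battery", "investment", "carbon"],
        ["petrochemical", "emission", "carbon", "capture", "technology"],
    ]

    # Train a skip-gram embedding model and query neighbors of a seed term.
    model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1, epochs=50)
    print(model.wv.most_similar("carbon", topn=5))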

Media exposure analysis of official sponsors and general companies of mega sport event (메가 스포츠이벤트의 공식스폰서와 일반기업의 미디어 노출 분석)

  • Kim, Joo-Hak;Cho, Sun-Mi
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology
    • /
    • v.8 no.4
    • /
    • pp.171-181
    • /
    • 2018
  • As the proportion of sports events in the sports industry grows, the official sponsorship market for sports events is also growing. But because official sponsorships are limited and expensive, some companies approach sporting events through ambush marketing. This study analyzes the differences in media exposure between official sponsors and general companies at mega sport events. To accomplish this, we collected and analyzed text articles from the period of the 2016 Rio Olympics, the year before the Olympics, and the year after. Web crawling was performed using Python to collect the articles, and morphological and frequency analysis was performed using the KoNLP and TM packages of the statistical program R. In addition, the opinions of a group of related experts were gathered to classify the companies and organizations appearing in the media as the Organizing Committees for the Olympic Games (OCOGs), official sponsors, or general companies. The analysis found 5,220 mentions related to the OCOGs, 7,845 related to official sponsors, and 7,028 related to general companies. There was little difference in exposure frequency between official sponsors and general companies, which implies that ambush marketing is recognized as a strategic marketing technique. The International Olympic Committee (IOC) should recognize this social phenomenon and establish reasonable standards for the marketing activities of official sponsors and general companies. This study can serve as a basis for fair sponsorship and marketing activities at sports events.
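
  The study crawled articles with Python and ran the frequency analysis in R (KoNLP, TM). The sketch below is a rough Python-only analogue of that pipeline, assuming the requests and BeautifulSoup libraries; the URL is a placeholder, and the whitespace tokenization is a crude stand-in for Korean morphological analysis.

    from collections import Counter
    import requests
    from bs4 import BeautifulSoup

    def fetch_article_text(url):
        # download a page and strip its markup down to plain text
        html = requests.get(url, timeout=10).text
        return BeautifulSoup(html, "html.parser").get_text(separator=" ")

    urls = ["https://example.com/"]  # placeholder; the study crawled Olympic news coverage
    tokens = [w for u in urls for w in fetch_article_text(u).split() if len(w) > 1]
    print(Counter(tokens).most_common(20))  # exposure frequency per term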

The Effects of Mental Health Nursing Simulation Practice Using Standardized Patients on Learning Outcomes -Learning Motivation, Learning Self-Efficacy, Learning Satisfaction, Transfer Motivation- (표준화 환자를 활용한 정신간호 시뮬레이션 실습 교육 효과 -학습동기, 학습자기효능감, 학습만족도, 전이동기-)

  • Kim Namsuk;Song Ji-Hyeun
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.4
    • /
    • pp.259-268
    • /
    • 2023
  • The purpose of this study was to verify the effectiveness of mental health nursing simulation practice training using standardized patients for nursing students. This is a single-group pre/post design study; for data collection, a structured questionnaire was given to 95 nursing students from a university located in J. The collected data were analyzed using the SPSS/WIN 27.0 program. The simulation practice training program using standardized patients improved the subjects' learning motivation (t=-2.011, p=.046), learning self-efficacy (t=-2.225, p=.027), learning satisfaction (t=-3.428, p=.001), and transfer motivation (t=-2.628, p=.009). In addition, text mining of the self-assessment responses showed that words associated with the simulation education included 'situation', 'experience', 'acting', 'communication', 'scenario', and 'mental health nursing clinical practice', while words associated with satisfaction included 'actual', 'help', 'response', 'understanding', and 'variety'. The program implemented an environment similar to actual practice, and the simulation training applying various cases was found to be effective for the practical education of nursing students; it should therefore be actively utilized to improve students' ability to adapt to the field.
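
  The effects reported above come from a single-group pre/post comparison, i.e. paired t-tests. Below is a minimal sketch with SciPy on hypothetical scores (the study's actual data were questionnaires from 95 students).

    from scipy import stats

    pre = [3.1, 3.4, 2.9, 3.8, 3.2]   # hypothetical pre-education scores
    post = [3.6, 3.9, 3.1, 4.2, 3.5]  # hypothetical post-education scores

    # Paired t-test; with pre listed first, improvement yields a negative t,
    # matching the t=-2.011 style of reporting in the abstract.
    t, p = stats.ttest_rel(pre, post)
    print(f"t={t:.3f}, p={p:.3f}")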

A Study on Tourism Behavior in the New normal Era Using Big Data (빅데이터를 활용한 뉴노멀(New normal)시대의 관광행태 변화에 관한 연구)

  • Kyoung-mi Yoo;Jong-cheon Kang;Youn-hee Choi
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.3
    • /
    • pp.167-181
    • /
    • 2023
  • This study used TEXTOM, a text-mining and social network analysis program, to analyze changes in tourism behavior after the travel restrictions imposed following the outbreak of COVID-19 were eased. Data on the keywords 'domestic travel' and 'overseas travel' were collected from blogs, cafes, and news provided by Naver, Google, and Daum. The collection period was set from April to December 2022, after social distancing was lifted, and the years 2019 and 2020 were each compared and analyzed against 2022. A total of 80 key words were extracted through text mining, centrality analysis was performed using NetDraw, and finally the correlated keywords were clustered into four groups through CONCOR analysis. The results show that tourism behavior in 2022 reflects a recovery toward pre-COVID-19 levels, a segmentation of travel around each person's preferred theme, and a tendency to prioritize each country's COVID-19 mitigation policy when selecting a tourist destination. The study is expected to provide basic data for developing tourism marketing strategies and tourism products for the tourism ecosystem newly emerging after COVID-19.
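
  The study's network step (TEXTOM for collection, NetDraw for centrality) could look roughly like the following Python stand-in using networkx: build a keyword co-occurrence graph, then rank nodes by degree centrality. The tokenized posts are hypothetical.

    from itertools import combinations
    import networkx as nx

    docs = [
        ["jeju", "hotel", "quarantine"],
        ["jeju", "camping", "family"],
        ["overseas", "flight", "quarantine"],
    ]  # hypothetical tokenized posts about domestic/overseas travel

    # Co-occurrence graph: an edge's weight counts how many documents
    # mention both keywords.
    G = nx.Graph()
    for doc in docs:
        for a, b in combinations(set(doc), 2):
            w = G.get_edge_data(a, b, {}).get("weight", 0)
            G.add_edge(a, b, weight=w + 1)

    print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1])[:5])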

Comparative Analysis of Low Fertility Response Policies (Focusing on Unstructured Data on Parental Leave and Child Allowance) (저출산 대응 정책 비교분석 (육아휴직과 아동수당의 비정형 데이터 중심으로))

  • Eun-Young Keum;Do-Hee Kim
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.5
    • /
    • pp.769-778
    • /
    • 2023
  • This study compared and analyzed parental leave and child allowance, two major policies among the responses to the current serious low fertility rate problem, using unstructured data, and on that basis sought future directions and implications for related policies. The collection keywords were "low fertility + parental leave" and "low fertility + child allowance", and the data were analyzed in the following order: text frequency analysis, centrality analysis, network visualization, and CONCOR analysis. The analysis showed, first, that parental leave is a realistic and practical response to low fertility, as the data revealed more diverse and systematic discussions than for child allowance. Second, for child allowance, there was a high level of information and interest in cash grant benefit systems, including the child allowance itself, but no other distinctive features or active discussion. As improvement plans, both policies should build on the existing system: parental leave requires improvements to the working environment and its blind spots in order to expand the system, while for child allowance a change in the form of payment that moves away from the uniform, one-size-fits-all scheme should be sought, along with an expansion of the eligible age range.
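
  For the CONCOR step named above, here is a rough sketch of the underlying idea (convergence of iterated correlations): correlating the rows of a co-occurrence matrix repeatedly drives its entries toward +/-1, and the resulting sign pattern splits keywords into structurally similar blocks; CONCOR then recurses on each block, which is how analyses like this one reach four clusters. The matrix and labels are hypothetical.

    import numpy as np

    labels = ["leave", "workplace", "allowance", "cash"]  # hypothetical keywords
    M = np.array([[0, 1, 4, 4],
                  [1, 0, 4, 3],
                  [4, 4, 0, 1],
                  [4, 3, 1, 0]], dtype=float)  # hypothetical co-occurrence counts

    C = np.corrcoef(M)
    for _ in range(50):      # iterate row correlations until (near) convergence
        C = np.corrcoef(C)

    # The sign structure of the first row now partitions the keywords.
    for name, same_block in zip(labels, C[0] > 0):
        print(name, "-> block", int(same_block))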

Automatic scoring of mathematics descriptive assessment using random forest algorithm (랜덤 포레스트 알고리즘을 활용한 수학 서술형 자동 채점)

  • Inyong Choi;Hwa Kyung Kim;In Woo Chung;Min Ho Song
    • The Mathematical Education
    • /
    • v.63 no.2
    • /
    • pp.165-186
    • /
    • 2024
  • Despite growing attention to artificial-intelligence-based automated scoring as a way to support the introduction of descriptive items in school environments and large-scale assessments, foundational research in mathematics is noticeably scarce compared to other subjects. This study developed automated scoring models for two descriptive items in first-year middle school mathematics using the Random Forest algorithm, evaluated their performance, and explored ways to enhance it. The accuracy of the final models for the two items was 0.95 to 1.00 and 0.73 to 0.89, respectively, which is relatively high compared to automated scoring models in other subjects. We found that strategically selecting the number of evaluation categories in light of the amount of data is crucial for the effective development and performance of automated scoring models. Text preprocessing by mathematics education experts proved effective in improving both the performance and the interpretability of the models, and selecting a vectorization method matched to the characteristics of the items and data was identified as another way to enhance performance. We also confirmed that oversampling is a useful way to supplement performance when practical limitations hinder balanced data collection. To enhance educational utility, further research is needed on how to use the feature importances derived from the Random Forest models to generate information useful for teaching and learning, such as feedback. This study is significant as foundational research on automatic scoring of descriptive mathematics items, and various follow-up studies through close collaboration between AI experts and mathematics education experts are needed.
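
  A minimal sketch of the kind of pipeline the abstract describes, assuming scikit-learn and imbalanced-learn: TF-IDF vectorization, random oversampling of minority score levels, a Random Forest classifier, and the feature importances mentioned as a source of feedback. The answers and rubric levels are hypothetical English stand-ins for the study's Korean responses.

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.feature_extraction.text import TfidfVectorizer
    from imblearn.over_sampling import RandomOverSampler

    answers = [
        "2x = 6 so x = 3", "divide both sides by 2 to get x = 3",
        "x is 6", "the answer is 6 minus 2",
    ]
    scores = [1, 1, 0, 0]  # hypothetical rubric levels

    vec = TfidfVectorizer(ngram_range=(1, 2))
    X = vec.fit_transform(answers)
    X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X, scores)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)

    print(model.predict(X))                # predicted scores for the answers
    print(model.feature_importances_[:5])  # importances, usable for feedback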

An Analysis of Trends in Natural Language Processing Research in the Field of Science Education (과학교육 분야 자연어 처리 기법의 연구동향 분석)

  • Cheolhong Jeon;Suna Ryu
    • Journal of The Korean Association For Science Education
    • /
    • v.44 no.1
    • /
    • pp.39-55
    • /
    • 2024
  • This study examined research trends related to Natural Language Processing (NLP) in science education by analyzing 37 domestic and international documents that applied NLP techniques in the field from 2011 to September 2023. In particular, it systematically analyzed the main application areas of NLP in science education, the role of teachers when NLP is used, and differences between domestic and international research. The results are as follows. First, NLP techniques are used chiefly in formative assessment, automatic scoring, literature review and classification, and pattern extraction. In formative assessment, NLP allows real-time analysis of students' learning processes and comprehension, reducing teachers' workload and providing accurate, effective feedback to students. In automatic scoring, it contributes to rapid and precise evaluation of students' responses. In literature review and classification, it helps analyze the topics and trends of research related to science education and student reports, and helps set future research directions. In pattern extraction, it enables effective analysis of commonalities and patterns in students' thoughts and responses, as sketched below. Second, the introduction of NLP in science education has expanded the role of teachers from transmitters of knowledge to facilitators who support and guide students' learning, requiring teachers to continuously develop their expertise. Third, since domestic NLP research is concentrated on literature review and classification, an environment conducive to collecting text data is needed to diversify NLP research in Korea. Based on these results, the study discusses ways to utilize NLP techniques in science education.
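
  For the pattern-extraction use referenced above, a minimal sketch: cluster student responses with TF-IDF and k-means so that responses expressing a common conception land together. The responses and the choice of two clusters are hypothetical.

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    responses = [
        "the ice melts because heat flows in",
        "heat transfer melts the ice",
        "the water evaporates into gas",
        "liquid water becomes vapor when heated",
    ]  # hypothetical student answers

    X = TfidfVectorizer().fit_transform(responses)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(km.labels_)  # responses sharing a pattern get the same cluster id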

Mapping Categories of Heterogeneous Sources Using Text Analytics (텍스트 분석을 통한 이종 매체 카테고리 다중 매핑 방법론)

  • Kim, Dasom;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.4
    • /
    • pp.193-215
    • /
    • 2016
  • In recent years, the proliferation of diverse social networking services has led users to use many mediums simultaneously, depending on their individual purposes and tastes. When collecting information on particular themes, they typically draw on various mediums such as social networking services, Internet news, and blogs. In terms of management, however, documents circulated through different mediums are placed in different categories according to each source's policy and standards, hindering any attempt to conduct research on a specific category across different kinds of sources. For example, documents on "applying for foreign travel" may be classified under "Information Technology", "Travel", or "Life and Culture" depending on the particular standard of each source. Likewise, with different definitions and levels of specificity, similar categories can be named and structured differently from source to source. To overcome these limitations, this study proposes a method for mapping categories between sources across various mediums while keeping each medium's existing category system intact. Specifically, by re-classifying individual documents from the viewpoint of diverse sources and storing the results as extra attributes, the study proposes a logical layer through which users can search for a specific document across multiple heterogeneous sources with different category names as if they belonged to the same source. Experiments on 6,000 news articles collected from two Internet news portals compared accuracy across sources, between supervised and semi-supervised learning, and between homogeneous and heterogeneous learning data. It is particularly interesting that in some categories the classification accuracy of semi-supervised learning using heterogeneous learning data proved higher than that of supervised and semi-supervised learning using homogeneous learning data. This study is significant in the following ways. First, it proposes a logical plan for integrating and managing heterogeneous mediums with different classification systems while keeping the existing physical classification systems as they are. The results show very different classification accuracies depending on the heterogeneity of the learning data, which is expected to spur further studies that enhance the methodology's performance through analysis of per-category characteristics. In addition, with growing demand for searching, collecting, and analyzing documents from diverse mediums, Internet search is no longer restricted to one medium; yet because each medium has a different category structure and naming, searching a specific category across heterogeneous mediums is in practice very difficult. The proposed methodology is also significant for letting users query all documents under the category standards of a selected site while maintaining each site's characteristics and structure. The methodology needs to be complemented in the following respects. First, since only an indirect comparison and evaluation of its performance was made, future studies need to test its accuracy more directly: after re-classifying documents of the target source on the basis of the existing source's category system, the accuracy of the classification should be verified through evaluation by actual users. The classification accuracy also needs to be increased by making the methodology more sophisticated. Furthermore, understanding the characteristics of the categories in which heterogeneous semi-supervised learning showed higher accuracy than supervised learning may help in obtaining heterogeneous documents from diverse mediums and in devising ways to enhance document classification accuracy.
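
  A minimal sketch of the semi-supervised re-classification idea using scikit-learn's SelfTrainingClassifier, not the paper's exact setup: articles labeled under one source's categories train a model that also ingests unlabeled articles from another source (marked -1), yielding category assignments in the first source's scheme. Texts and categories are hypothetical.

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.semi_supervised import SelfTrainingClassifier

    texts = [
        "new smartphone app for travel visas",   # source A, category 0 ("IT")
        "best beaches for a summer holiday",     # source A, category 1 ("Travel")
        "booking flights with a mobile wallet",  # source B, unlabeled
    ]
    labels = np.array([0, 1, -1])  # -1 marks the heterogeneous unlabeled documents

    X = TfidfVectorizer().fit_transform(texts)
    clf = SelfTrainingClassifier(LogisticRegression(), threshold=0.5)
    clf.fit(X, labels)
    print(clf.transduction_)  # labels for every document, source B included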

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s this research has focused on cognitive problems related to human intelligence, such as learning and problem solving, and thanks to recent interest in the technology and work on various algorithms, the field has achieved more technological progress than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable, processable knowledge constructed from complex, informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. Increasingly, the purpose of a knowledge base is to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data; such knowledge bases support intelligent processing in various AI applications, such as the question-answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires much expert effort. Much recent research and technology in knowledge-based artificial intelligence uses DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains information extracted from Wikipedia such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, user-created summaries of an article's unifying aspects. This knowledge is created through mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework; because the knowledge is generated from semi-structured infobox data created by users, DBpedia can expect high reliability in terms of knowledge accuracy. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of the method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. The model consists of three steps: classifying documents into ontology classes, classifying the sentences appropriate for extracting triples, and selecting values and transforming them into RDF triple structure. Wikipedia infoboxes are defined by infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify an input document according to infobox categories, which correspond to ontology classes; we then classify the appropriate sentences according to the attributes belonging to that classification; finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump by adding BIO tags to sentences, training about 200 classes and about 2,500 relations for knowledge extraction. We also conducted comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process. Through the proposed process, structured knowledge can be utilized by extracting knowledge according to the ontology schema from text documents, and the methodology can significantly reduce the effort experts spend constructing instances according to the ontology schema.
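
  For the sentence-level extraction step, a minimal sketch of BIO-tagged sequence labeling with a CRF via the sklearn-crfsuite package (the paper compares CRF with Bi-LSTM-CRF); the sentences, features, and tag names below are hypothetical.

    import sklearn_crfsuite

    def feats(tokens, i):
        # simple per-token features: surface form, digit flag, previous token
        return {"word": tokens[i], "is_digit": tokens[i].isdigit(),
                "prev": tokens[i - 1] if i else "<s>"}

    sents = [["Seoul", "was", "founded", "in", "18", "BC"],
             ["The", "city", "population", "is", "9736027"]]
    tags = [["B-city", "O", "O", "O", "B-date", "I-date"],
            ["O", "O", "O", "O", "B-population"]]

    X = [[feats(s, i) for i in range(len(s))] for s in sents]
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
    crf.fit(X, tags)
    print(crf.predict(X))  # contiguous B-/I- spans become the values of RDF triples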