• Title/Summary/Keyword: News-crawling

Search Results: 36

An Analysis on Anti-Drone Technology Trends of Domestic Companies Using News Crawling on the Web (뉴스 기사의 크롤링을 통한 국내 기업의 안티 드론에 사용되는 기술 현황 분석)

  • Kim, Kyuseok
    • Journal of Advanced Navigation Technology
    • /
    • v.24 no.6
    • /
    • pp.458-464
    • /
    • 2020
  • Drones are spreading for purposes such as construction, logistics, scientific research, recording, and toys. However, because some drones are used for crime or terrorism, anti-drone technologies that neutralize hostile drones are also being widely researched and developed. Anti-drone technologies can be divided into detection, identification, and neutralization. Drone neutralization methods are divided into soft-kill methods, which block detected drones using techniques such as jamming, and hard-kill methods, which physically destroy them. In this paper, domestic news articles related to anti-drone technology were gathered from Google and Naver. Analyzing these articles, eight related technologies using RF, GNSS, radar, and so on were identified. The general features and usage status of those technologies are described, and the anti-drone technologies of each company and agency were gathered and analyzed.

Big Data Analytics Applied to the Construction Site Accident Factor Analysis

  • KIM, Joon-soo;Lee, Ji-su;KIM, Byung-soo
    • International conference on construction engineering and project management
    • /
    • 2015.10a
    • /
    • pp.678-679
    • /
    • 2015
  • Recently, safety accidents at construction sites are increasing. Accordingly, this study developed a 'Big-Data Analysis Modeling' approach that collects Internet news articles from the last 10 years and identifies the causes of accidents occurring in each season. A web crawling model that can collect 98% of the desired information from the Internet was developed using the 'XML', 'tm', and 'RCurl' packages of R, a statistical analysis program, together with a data mining model that extracts useful information by applying principal component analysis to the word-frequency results of text mining. Through the web crawling model, 7,384 out of 7,534 Internet news articles posted over the past 10 years regarding 'safety accidents at construction sites' were collected, and the seasonal characteristics of safety accidents were identified. The results showed that accidents caused by abnormal temperatures and localized heavy rain occurred frequently in spring and winter, while accidents caused by violations of safety regulations and the breakdown of structures occurred frequently in spring and fall. In addition, accidents arising from collisions with heavy equipment occurred constantly in every season. The results obtained from the 'Big-Data Analysis Modeling' correspond with prior studies. Thus, the study is reliable and applicable not only to construction sites but to industry overall.
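The seasonal frequency analysis described above can be sketched in a few lines. The study used R ('XML', 'tm', 'RCurl'); the following is a dependency-free Python stand-in for the counting step only, with an illustrative mini-corpus and cause keywords taken from the abstract (the dates and article texts are invented for the example).

```python
from collections import Counter

# Hypothetical mini-corpus standing in for the 7,384 crawled articles;
# the cause keywords mirror those named in the abstract.
ARTICLES = [
    ("2014-01-12", "localized heavy rain caused a scaffolding collapse"),
    ("2014-04-03", "violation of safety regulations led to a fall"),
    ("2014-07-21", "heavy equipment collision injured two workers"),
    ("2014-10-05", "breakdown of structures during demolition work"),
]

CAUSES = ["heavy rain", "safety regulations", "heavy equipment",
          "breakdown of structures"]

def season(date: str) -> str:
    """Map an ISO date string to a season by month."""
    month = int(date.split("-")[1])
    return {12: "winter", 1: "winter", 2: "winter",
            3: "spring", 4: "spring", 5: "spring",
            6: "summer", 7: "summer", 8: "summer",
            9: "fall", 10: "fall", 11: "fall"}[month]

def cause_frequency_by_season(articles):
    """Count cause-keyword mentions per season (the study's frequency step)."""
    counts = {s: Counter() for s in ("spring", "summer", "fall", "winter")}
    for date, text in articles:
        for cause in CAUSES:
            if cause in text:
                counts[season(date)][cause] += 1
    return counts

freq = cause_frequency_by_season(ARTICLES)
```

On the real corpus, the per-season keyword counts would feed the principal component analysis the abstract mentions.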


News Abusing Inference Model Using Web Crawling (웹크롤링을 활용한 뉴스 어뷰징 추론 모델)

  • Chung, Kyoung-Rock;Park, Koo-Rack;Chung, Young-Suk;Nam, Ki-Bok
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2018.07a
    • /
    • pp.175-176
    • /
    • 2018
  • As more people consume news online and on mobile devices rather than through traditional newspapers or TV, competition among media outlets for greater exposure than other outlets on portal-site news pages has intensified, and news abusing has emerged as a serious social problem. This paper proposes a model for detecting news abusing, which wastes users' time and makes it difficult to find quality information among the many news articles produced and distributed online. The proposed model uses crawling techniques to retrieve the title and content of news articles and then judges whether an article is abusive through a similarity check based on artificial intelligence techniques, so that high-quality news information can be provided to users.


Understanding the Food Hygiene of Cruise through the Big Data Analytics using the Web Crawling and Text Mining

  • Shuting, Tao;Kang, Byongnam;Kim, Hak-Seon
    • Culinary science and hospitality research
    • /
    • v.24 no.2
    • /
    • pp.34-43
    • /
    • 2018
  • The objective of this study was to acquire a general, text-based awareness and recognition of cruise food hygiene through big data analytics. For this purpose, data were collected by searching the keywords "food hygiene, cruise" on web pages and news on Google from October 1st, 2015 to October 1st, 2017 (two years). Data collection was processed with SCTM, a data collecting and processing program, and eventually 899 KB of data, approximately 20,000 words, were collected. For the data analysis, UCINET 6.0 with the visualization tool NetDraw was utilized. The analysis showed that words such as "jobs" and "news" had high frequency, while the centrality results (Freeman's degree centrality and eigenvector centrality) and proximity produced rankings distinct from the frequency results. Meanwhile, the CONCOR analysis produced four clusters: a "food hygiene group", a "person group", a "location related group", and a "brand group". This big-data diagnosis of food hygiene in the cruise industry is expected to provide instrumental implications both for academic research and for empirical application.
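The co-occurrence network and degree centrality analysis mentioned above can be sketched without UCINET. The following minimal Python sketch builds pairwise co-occurrence counts and Freeman-style degree centrality from tokenized documents; the three tiny documents are invented for illustration, not the study's corpus.

```python
from collections import Counter
from itertools import combinations

# Illustrative tokenized documents; the study's actual corpus was
# ~20,000 words collected via SCTM and analyzed in UCINET/NetDraw.
DOCS = [
    ["cruise", "food", "hygiene", "news"],
    ["cruise", "jobs", "news"],
    ["food", "hygiene", "inspection"],
]

def cooccurrence(docs):
    """Count how often each unordered word pair appears in the same document."""
    pairs = Counter()
    for doc in docs:
        for a, b in combinations(sorted(set(doc)), 2):
            pairs[(a, b)] += 1
    return pairs

def degree_centrality(pairs):
    """Freeman-style degree: number of distinct neighbors of each word."""
    neighbors = {}
    for a, b in pairs:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    return {w: len(ns) for w, ns in neighbors.items()}

net = cooccurrence(DOCS)
deg = degree_centrality(net)
```

Eigenvector centrality and CONCOR clustering, also used in the study, operate on the same co-occurrence matrix.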

Method of Related Document Recommendation with Similarity and Weight of Keyword (키워드의 유사도와 가중치를 적용한 연관 문서 추천 방법)

  • Lim, Myung Jin;Kim, Jae Hyun;Shin, Ju Hyun
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.11
    • /
    • pp.1313-1323
    • /
    • 2019
  • With the development of the Internet and the increase of smartphones, services considering user convenience are increasing, so users can check the news in real time anytime and anywhere. However, online news is categorized by outlet and category and provides only a few related search terms, making it difficult to find news related to a given keyword. To solve this problem, we propose a method to recommend related documents more accurately by applying Doc2Vec similarity to specific keywords of news articles and weighting the title and contents of each article. We collect news articles from the Naver politics category by web crawling in a Java environment, preprocess them, extract topics using LDA modeling, and compute similarities using Doc2Vec. To supplement Doc2Vec, we apply TF-IDF to obtain TC (Title Contents) weights for the title and contents of news articles. We then combine the Doc2Vec similarity and the TC weight to generate a TC weight-similarity, and evaluate the similarity between words using the PMI technique to confirm keyword associations.
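The TF-IDF weighting and similarity blending described above can be sketched with the standard library. Doc2Vec itself requires a trained model (gensim in practice), so this sketch substitutes TF-IDF cosine similarity for the document similarity; the blending function and its `alpha` mixing factor are assumptions for illustration, not the paper's exact TC weight-similarity formula.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute sparse TF-IDF vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def tc_weighted_similarity(sim, title_w, content_w, alpha=0.5):
    """Blend a document-level similarity (Doc2Vec in the paper) with
    title/content weights; alpha is an assumed mixing factor."""
    return alpha * sim + (1 - alpha) * (title_w + content_w) / 2

vecs = tf_idf([["election", "policy"], ["election", "economy"]])
```

A shared term that appears in every document (here "election") gets zero IDF weight, which is why rare terms dominate the similarity.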

Pilot Experiment for Named Entity Recognition of Construction-related Organizations from Unstructured Text Data

  • Baek, Seungwon;Han, Seung H.;Jung, Wooyong;Kim, Yuri
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.847-854
    • /
    • 2022
  • The aim of this study is to develop a Named Entity Recognition (NER) model that automatically identifies construction-related organizations in news articles. This study collected news articles using a web crawling technique, and construction-related organizations were labeled within a total of 1,000 news articles. The Bidirectional Encoder Representations from Transformers (BERT) model was used to recognize clients, constructors, consultants, engineers, and others. In this pilot experiment, the best average F1 score for NER was 0.692. The results of this study are expected to contribute to the establishment of international business strategies by collecting timely information and analyzing it automatically.
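The F1 score reported above is the standard entity-level metric. As an illustration of how such a score is computed (not the study's code), here is a minimal sketch over (entity text, label) pairs; the organization names are hypothetical.

```python
def entity_f1(predicted, gold):
    """Entity-level F1 over (text, label) spans, the usual way NER output
    such as a BERT tagger's predictions is scored."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)  # exact span-and-label matches
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical predictions vs. gold labels for one article.
pred = {("Alpha E&C", "constructor"), ("Beta Agency", "client"),
        ("Gamma Ltd", "engineer")}
gold = {("Alpha E&C", "constructor"), ("Beta Agency", "client"),
        ("Delta Group", "consultant")}
score = entity_f1(pred, gold)
```

With two of three entities matched on both sides, precision and recall are both 2/3, giving F1 = 2/3.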


Text-Mining Analyses of News Articles on Schizophrenia (조현병 관련 주요 일간지 기사에 대한 텍스트 마이닝 분석)

  • Nam, Hee Jung;Ryu, Seunghyong
    • Korean Journal of Schizophrenia Research
    • /
    • v.23 no.2
    • /
    • pp.58-64
    • /
    • 2020
  • Objectives: In this study, we conducted an exploratory analysis of current media trends on schizophrenia using text-mining methods. Methods: First, web-crawling techniques extracted text data from 575 news articles in 10 major newspapers between 2018 and 2019, selected by searching "schizophrenia" in Naver News. We developed a document-term matrix (DTM) and a term-document matrix (TDM) through pre-processing techniques. Using the DTM and TDM, frequency analysis, co-occurrence network analysis, and topic model analysis were conducted. Results: Frequency analysis showed that keywords such as "police," "mental illness," "admission," "patient," "crime," "apartment," "lethal weapon," "treatment," "Jinju," and "residents" were frequently mentioned in news articles on schizophrenia. Within the article text, many of these keywords were highly correlated with the term "schizophrenia" and were also interconnected with each other in the co-occurrence network. The latent Dirichlet allocation model presented 10 topics comprising combinations of keywords, including "police-Jinju," "hospital-admission," "research-finding," "care-center," "schizophrenia-symptom," "society-issue," "family-mind," "woman-school," and "disabled-facilities." Conclusion: The results of the present study highlight that in recent years the media has been reporting violence in patients with schizophrenia, raising an important issue of hospitalization and community management of patients with schizophrenia.
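The document-term matrix (DTM) and term-document matrix (TDM) that the study builds before its analyses can be sketched directly: rows of the DTM are documents, columns are vocabulary terms, and the TDM is its transpose. The two toy documents below are illustrative, not the study's corpus.

```python
from collections import Counter

def document_term_matrix(docs):
    """Build a DTM: one row per document, one column per vocabulary term."""
    vocab = sorted({t for doc in docs for t in doc})
    index = {t: i for i, t in enumerate(vocab)}
    matrix = []
    for doc in docs:
        row = [0] * len(vocab)
        for term, count in Counter(doc).items():
            row[index[term]] = count
        matrix.append(row)
    return vocab, matrix

def transpose(matrix):
    """The TDM used in the study is simply the transpose of the DTM."""
    return [list(col) for col in zip(*matrix)]

vocab, dtm = document_term_matrix([["police", "crime", "police"],
                                   ["patient", "treatment"]])
tdm = transpose(dtm)
```

Frequency analysis reads column sums of the DTM; co-occurrence and LDA both start from this same matrix.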

Development of a Fake News Detection Model Using Text Mining and Deep Learning Algorithms (텍스트 마이닝과 딥러닝 알고리즘을 이용한 가짜 뉴스 탐지 모델 개발)

  • Dong-Hoon Lim;Gunwoo Kim;Keunho Choi
    • Information Systems Review
    • /
    • v.23 no.4
    • /
    • pp.127-146
    • /
    • 2021
  • Fake news spreads and is reproduced rapidly, regardless of its authenticity, owing to the characteristics of modern society, the so-called information age. Assuming that 1% of all news is fake, the economic cost is reported to be about 30 trillion Korean won. This shows that fake news is a very important social and economic issue. Therefore, this study aims to develop an automated detection model to quickly and accurately verify the authenticity of news. To this end, this study crawled news data whose authenticity had been verified and developed fake news prediction models using word embeddings (Word2Vec, FastText) and deep learning algorithms (LSTM, BiLSTM). Experimental results show that the prediction model using BiLSTM with Word2Vec achieved the best accuracy, 84%.

Analysis of Shipping and Logistics News Articles using Topic Modeling (토픽모델링을 활용한 해운물류 뉴스 분석)

  • Hee-Young Yoon;Il-Youp Kwak
    • Korea Trade Review
    • /
    • v.46 no.4
    • /
    • pp.61-76
    • /
    • 2021
  • This study focuses on three logistics-related news outlets (Logistics Newspaper, Korea Shipping Gadget, and Korea Shipping Newspaper) to present changes in logistics issues, centering on COVID-19, which has recently had the greatest impact worldwide. For data collection, two years of news articles from 2019 and 2020 (title, article content, date, article classification, and article URL) were collected through web crawling (using Python's BeautifulSoup and requests modules) on the homepages of the three representative logistics-related media companies. For data analysis, fundamental statistical analysis, Latent Dirichlet Allocation (LDA) topic modeling, and Scattertext were performed. The analysis results were as follows. First, among the three logistics-related news media, the Korea Shipping Newspaper carried out the most active media activities. Second, through topic modeling with LDA, eight logistics-related topics were identified, and the keywords and significant issues of each topic were presented. Third, the keywords were visually expressed through Scattertext. This is the first study to present changes in the logistics field focusing on articles from representative logistics-related media in 2019 and 2020. In particular, 2019 and 2020 can be divided into before and after the outbreak of COVID-19, which has had a great impact not only on the logistics field but also on our lives as a whole. Future work requires a multi-faceted approach, such as comparative studies of logistics issues between countries or implications based on long-term time-series articles.
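The crawling step above pairs `requests` (fetching) with BeautifulSoup (parsing). As a dependency-free sketch of the parsing half only, the following uses the standard library's `html.parser` on an inline HTML snippet; the CSS class `headline` and the sample page are assumptions for illustration, and in practice the HTML would come from `requests.get()` against each outlet's article list.

```python
from html.parser import HTMLParser

class HeadlineParser(HTMLParser):
    """Collect the text of <a class="headline"> links, a stand-in for the
    BeautifulSoup selectors a crawler would use on an article-list page."""
    def __init__(self):
        super().__init__()
        self.in_headline = False
        self.headlines = []

    def handle_starttag(self, tag, attrs):
        if tag == "a" and ("class", "headline") in attrs:
            self.in_headline = True

    def handle_data(self, data):
        if self.in_headline and data.strip():
            self.headlines.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_headline = False

# Illustrative HTML; a real crawler would fetch this over HTTP.
PAGE = ('<ul><li><a class="headline">Port congestion eases</a></li>'
        '<li><a class="headline">Freight rates climb</a></li></ul>')

parser = HeadlineParser()
parser.feed(PAGE)
```

The extracted titles (plus dates and URLs, scraped the same way) would then be tokenized and passed to LDA topic modeling as in the study.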

A Study for Conflict in Public Construction Projects Based on Online News (온라인 뉴스 기반 공공건설사업 갈등지수 산정에 관한 기초연구)

  • Baek, Seungwon;Han, Seung Heon;Yun, Sungmin;Lim, Jonglok;Nam, Jihyun
    • Proceedings of the Korean Institute of Building Construction Conference
    • /
    • 2021.05a
    • /
    • pp.277-278
    • /
    • 2021
  • Conflict in public construction projects has increased over the last decades. It not only entails enormous social and economic costs but also makes stakeholders suffer unnecessary expense and wasted time. This study defines a conflict index for public construction projects based on news data and calculates the index for representative past and current public construction projects in which conflict has deepened at the national level. The results indicate that the major conflict issues of the 2nd Jeju Airport project are the environment and its location, whereas those of the Gaduk New Airport project are safety, location, and necessity. This approach is expected to enable construction project managers to manage conflicts quantitatively by comparison with past cases.
