• Title/Abstract/Keyword: Web crawling

Search results: 177 items (processing time: 0.026 s)

Clustering Analysis of Films on Box Office Performance : Based on Web Crawling (영화 흥행과 관련된 영화별 특성에 대한 군집분석 : 웹 크롤링 활용)

  • Lee, Jai-Ill;Chun, Young-Ho;Ha, Chunghun
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • Vol. 39, No. 3
    • /
    • pp.90-99
    • /
    • 2016
  • Forecasting box office performance after a film's release is very important from the viewpoint of increasing profitability by reducing production and marketing costs. Analysis of psychological factors such as word-of-mouth and expert assessment is essential, but hard to perform due to the difficulties of data collection. Information technology such as web crawling and text mining can help overcome this situation. For effective text mining, categorization of objects is required. From this perspective, the objective of this study is to provide a framework for classifying films according to their characteristics. Data including psychological factors are collected from websites using web crawling. A clustering analysis is conducted to classify films, and a series of one-way ANOVA tests are conducted to statistically verify the differences in characteristics among groups. The cluster analysis based on reviews and revenues shows that the films can be categorized into four distinct groups, and the differences in characteristics are statistically significant. The first group has high box office sales, and the number of clicks on its reviews is higher than in the other groups. The second group is similar to the first, but its reviews are longer and its box office sales are poor. The third group's audiences prefer documentaries and animations, and its numbers of comments and interests are significantly lower than those of the other groups. The last group prefers the crime, thriller, and suspense genres. A correspondence analysis is also conducted to match the groups with intrinsic characteristics of films such as genre, movie rating, and nation.
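The pipeline the abstract describes (cluster films on crawled review features, then verify group differences with one-way ANOVA) can be sketched in plain Python. The 1-D k-means, the toy feature values, and the seeding strategy below are illustrative assumptions, not the paper's actual data or implementation.

```python
from statistics import mean

def kmeans_1d(points, k, iters=20):
    """Naive 1-D k-means: seed centroids with the first k points."""
    cents = list(points[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: abs(p - cents[c]))
            clusters[nearest].append(p)
        # re-estimate each centroid; keep the old one if a cluster emptied
        cents = [mean(c) if c else cents[i] for i, c in enumerate(clusters)]
    return clusters

def f_statistic(groups):
    """One-way ANOVA F statistic over a list of sample groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean(x for g in groups for x in g)
    ssb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)  # between-group
    ssw = sum((x - mean(g)) ** 2 for g in groups for x in g)    # within-group
    return (ssb / (k - 1)) / (ssw / (n - k))

# e.g. cluster films by review-click counts, then test group separation
clusters = kmeans_1d([1, 2, 3, 10, 11, 12], k=2)
```

A large F value for the resulting clusters would indicate, as in the study, that the between-group differences dominate the within-group variance.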

A Study on Minimizing Infection of Web-based Malware through Distributed & Dynamic Detection Method of Malicious Websites (악성코드 은닉사이트의 분산적, 동적 탐지를 통한 감염피해 최소화 방안 연구)

  • Shin, Hwa-Su;Moon, Jong-Sub
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • Vol. 21, No. 3
    • /
    • pp.89-100
    • /
    • 2011
  • As Internet usage through web browsers increases, web-based malware distributed through websites is becoming a more serious problem than ever. Centralized, crawling-based detection of malicious websites has the problem that detection cost grows geometrically as the crawl level goes deeper. In this paper, we propose a browser-based security tool that dynamically detects malicious web pages and supports safe browsing by stopping navigation to malicious URLs injected into those pages. By deploying this tool to many distributed web browser users, all of those users come to participate in malicious website detection and feedback. As a result, malicious websites at deeper link levels can be detected in a distributed and dynamic manner.
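As a rough illustration of the browser-side check described above (stop navigation when a page contains a URL pointing at a known-malicious host), the sketch below scans a page's link-bearing tags against a blocklist. The blocklist host and the sample HTML are hypothetical; a real tool would consult a distributed, continuously updated feed rather than a hard-coded set.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

BLOCKLIST = {"malicious.example.com"}  # hypothetical shared detection feed

class InjectedLinkScanner(HTMLParser):
    """Collects URLs from <a>, <script>, and <iframe> tags whose host is blocklisted."""
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        if tag not in ("a", "script", "iframe"):
            return
        for name, value in attrs:
            if name in ("href", "src") and value:
                if urlparse(value).hostname in BLOCKLIST:
                    self.flagged.append(value)

def scan(html):
    """Return the blocklisted URLs found in a page; non-empty means block navigation."""
    scanner = InjectedLinkScanner()
    scanner.feed(html)
    return scanner.flagged
```

In the proposed scheme, a hit would both stop the navigation and be reported back, so that every participating browser enriches the shared blocklist.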

The impact of inter-host links in crawling important pages early

  • Alam, Hijbul;Ha, Jong-Woo;Sim, Kyu-Sun;Lee, Sang-Keun
    • Proceedings of the Korean Information Science Society Conference
    • /
    • Proceedings of the 2010 Korea Computer Congress of the Korean Institute of Information Scientists and Engineers, Vol. 37, No. 1(C)
    • /
    • pp.118-121
    • /
    • 2010
  • The dynamic nature and exponential growth of the World Wide Web keep crawling important pages early a challenging problem. State-of-the-art crawl scheduling algorithms require huge running times to prioritize web pages during crawling. In this research, we propose crawl scheduling algorithms that are not only fast but also download important pages early. The algorithms assign high importance to pages with good linkage, such as inlinks from different domains or hosts. The proposed algorithms were evaluated on publicly available large datasets. The experimental results show that propagating more importance along inter-host links improves the effectiveness of crawl scheduling over current state-of-the-art algorithms.
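A minimal version of the scheduling idea (score an uncrawled URL higher when its inlinks come from a different host) might look like this. The weight value and the toy link graph are assumptions for illustration, not the paper's tuned parameters.

```python
import heapq
from urllib.parse import urlparse

def host(url):
    return urlparse(url).hostname

def schedule(link_graph, inter_host_weight=3.0):
    """Rank uncrawled URLs: an inlink from a different host counts more
    than an intra-host inlink. link_graph maps source URL -> target URLs."""
    scores = {}
    for src, targets in link_graph.items():
        for t in targets:
            w = inter_host_weight if host(src) != host(t) else 1.0
            scores[t] = scores.get(t, 0.0) + w
    # max-heap via negated scores: highest-priority URL pops first
    heap = [(-s, u) for u, s in scores.items()]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

Because the score is a single pass over the discovered links, it stays cheap compared with iterative importance computations such as full PageRank.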


Design of a Web-based Barter System using Data Crawling (Crawling을 이용한 웹기반의 물물교환 시스템설계)

  • Yoo, Hongseok;Kim, Ji-Won;Hwang, Jong-Wook;Park, Tae-Won;Lee, Jun-Hee
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • Proceedings of the 2021 64th Summer Conference of the Korean Society of Computer Information, Vol. 29, No. 2
    • /
    • pp.527-528
    • /
    • 2021
  • This paper proposes a web-based barter system that offers users convenience and addresses the shortcomings of existing trading systems. Most people sell second-hand or unneeded goods in order to dispose of items they no longer need and to buy items they do need. From the users' point of view, the process of obtaining a needed item takes a long time, and items that people simply throw away lead to waste and overconsumption. To solve these problems, we propose a barter system that lets users exchange unneeded goods with people who need them, reducing unnecessary consumption and providing the convenience of easily finding and exchanging needed products.


Asynchronous Web Crawling Algorithm (링크 분석을 통한 비동기 웹 페이지 크롤링 알고리즘)

  • Won, Dong-Hyun;Park, Hyuk-Gyu;Kang, Yun-Jeong;Lee, Min-Hye
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022 Fall Conference of the Korean Institute of Information and Communication Sciences
    • /
    • pp.364-366
    • /
    • 2022
  • The web uses asynchronous techniques to present together various pieces of information with different processing speeds. The asynchronous approach has the advantage of responding to other events before a task is completed, but a typical crawler, which collects only the information present at the time of its visit, has difficulty collecting information that is provided asynchronously. In addition, asynchronous web pages often keep the same web address even when their content changes, which makes them difficult to crawl. In this paper, we propose a web crawling algorithm that accounts for asynchronous page transitions by analyzing the links within a page. With the proposed algorithm, we were able to collect the TTA terminology dictionary, which provides its information asynchronously.
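One way to realize this idea is to treat each asynchronous view as a distinct crawl state, so that a page whose URL never changes can still yield several documents. The sketch below is a generic breadth-first traversal over (url, action) states with injectable fetch/parse callbacks; it illustrates the concept rather than reproducing the paper's algorithm.

```python
from collections import deque

def crawl(start, get_links, get_async_actions, fetch_state):
    """BFS over (url, action) states so asynchronous views are visited too.
    get_links(content) -> new URLs; get_async_actions(content) -> in-page
    actions (tabs, 'more' buttons, ...); fetch_state(url, action) -> content."""
    seen, frontier, pages = set(), deque([(start, None)]), []
    while frontier:
        url, action = frontier.popleft()
        if (url, action) in seen:
            continue
        seen.add((url, action))
        content = fetch_state(url, action)
        pages.append(((url, action), content))
        for nxt in get_links(content):          # ordinary hyperlinks
            frontier.append((nxt, None))
        for act in get_async_actions(content):  # same URL, new async state
            frontier.append((url, act))
    return pages
```

Deduplicating on the (url, action) pair, rather than on the URL alone, is what lets the crawler revisit a single address for each of its asynchronous states.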


Crawling Algorithm Design for Deep Web Document Collection (심층 웹 문서 수집을 위한 크롤링 알고리즘 설계)

  • Won, Dong-Hyun;Kang, Yun-Jeong;Park, Hyuk-Gyu
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022 Fall Conference of the Korean Institute of Information and Communication Sciences
    • /
    • pp.367-369
    • /
    • 2022
  • With the development of web technology, the web provides customized information that meets users' needs. Information is returned according to input forms and user queries, and web services whose information is difficult to reach with a search engine are called the deep web. The deep web contains more information than the surface web, but it is hard to collect with general crawling, which gathers only the information present at the time of the visit. Deep web sites deliver server-side information to users by running scripting languages such as JavaScript in the browser. In this paper, we propose an algorithm that can explore dynamically changing websites and collect their information by analyzing scripts. For the experiments, we analyzed the scripts of the bulletin board of the Korea Centers for Disease Control and Prevention.
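A small illustration of the script-analysis step: collect inline <script> bodies and pull candidate endpoint paths out of them with a regular expression, so they can be fetched even though no anchor tag points at them. The regex and the sample HTML are simplifying assumptions; real deep-web pages need far more robust analysis.

```python
import re
from html.parser import HTMLParser

class ScriptExtractor(HTMLParser):
    """Collects inline <script> bodies so their URLs can be analyzed."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.scripts = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False

    def handle_data(self, data):
        if self.in_script:
            self.scripts.append(data)

# quoted absolute paths inside script code, e.g. fetch("/board/list?page=2")
URL_RE = re.compile(r"""["'](/[\w./?=&-]+)["']""")

def hidden_endpoints(html):
    parser = ScriptExtractor()
    parser.feed(html)
    found = []
    for body in parser.scripts:
        found.extend(URL_RE.findall(body))
    return found
```

Each extracted path becomes a new crawl target that an anchor-only crawler would never discover.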


Big Data Analytics Applied to the Construction Site Accident Factor Analysis

  • KIM, Joon-soo;Lee, Ji-su;KIM, Byung-soo
    • International conference on construction engineering and project management
    • /
    • The 6th International Conference on Construction Engineering and Project Management
    • /
    • pp.678-679
    • /
    • 2015
  • Recently, safety accidents at construction sites have been increasing. Accordingly, this study develops a 'Big-Data Analysis Modeling' approach that collects Internet news articles from the last 10 years and identifies the causes of accidents occurring in each season. To this end, a web crawling model that can collect 98% of the desired information from the Internet was built using the 'XML', 'tm', and 'RCurl' libraries of the statistical analysis program R, together with a data mining model that extracts useful information by applying principal component analysis to the word-frequency results of text mining. Through the web crawling model, 7,384 of the 7,534 Internet news articles posted over the past 10 years on 'safety accidents at construction sites' were collected, and the seasonal characteristics of safety accidents were identified. The results show that accidents caused by abnormal temperatures and localized heavy rain occur frequently in spring and winter, while accidents caused by violations of safety regulations and the collapse of structures occur frequently in spring and fall. In addition, accidents involving collisions with heavy equipment occur constantly in every season. The results obtained from the 'Big-Data Analysis Modeling' correspond with prior studies. Thus, the approach is reliable and applicable not only to construction sites but to industry in general.
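The seasonal aggregation step (bucket crawled articles by season, then count terms) can be sketched in Python, although the paper itself uses R's XML, tm, and RCurl packages; the sample articles below are invented for illustration.

```python
from collections import Counter

SEASONS = {12: "winter", 1: "winter", 2: "winter",
           3: "spring", 4: "spring", 5: "spring",
           6: "summer", 7: "summer", 8: "summer",
           9: "fall", 10: "fall", 11: "fall"}

def seasonal_term_freq(articles):
    """articles: list of (publication_month, text) pairs.
    Returns a per-season Counter of word frequencies, the input that a
    principal component analysis of accident factors would start from."""
    freq = {}
    for month, text in articles:
        season = SEASONS[month]
        freq.setdefault(season, Counter()).update(text.lower().split())
    return freq
```

Recurring high-frequency terms within one season (e.g. weather words in winter) are exactly the kind of signal the study's seasonal factor analysis surfaces.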


A Method of Selective Crawling for Web Document Using URL Pattern (URL 패턴을 이용한 웹문서의 선택적 자동수집 방안)

  • Jeong, Jun-Yeong;Jang, Mun-Su
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • Proceedings of the 2007 Fall Conference of the Korean Institute of Intelligent Systems
    • /
    • pp.41-44
    • /
    • 2007
  • To build instances of domain-specific ontologies quickly and easily, it is efficient to use structured documents. However, because general web documents exist in diverse formats across all domains, it is not easy to automatically collect the structured documents of interest. In this paper, we propose a method that defines the URL patterns of a website in an XML-based script and intelligently collects only the web documents that are needed. The proposed collection method can be applied very quickly and efficiently to sites that provide information in a structured form. Using the proposed method, we collected more than 50,000 web documents.
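The XML-defined URL-pattern idea can be sketched as follows; the script format, site, and patterns here are hypothetical stand-ins for whatever schema the paper actually defines.

```python
import fnmatch
import xml.etree.ElementTree as ET

# hypothetical XML pattern script for one target site
SCRIPT = """<site name="example">
  <pattern>http://example.com/movie/detail?id=*</pattern>
  <pattern>http://example.com/people/*</pattern>
</site>"""

def load_patterns(xml_text):
    """Parse the declared URL patterns out of the XML script."""
    root = ET.fromstring(xml_text)
    return [p.text for p in root.findall("pattern")]

def select(urls, patterns):
    """Keep only discovered URLs matching at least one declared pattern."""
    return [u for u in urls if any(fnmatch.fnmatch(u, p) for p in patterns)]
```

Because the crawler only follows URLs matching the script, it skips the unstructured bulk of a site and fetches just the pages whose layout the ontology extractor understands.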


Web crawler Improvement and Dynamic process Design and Implementation for Effective Data Collection (효과적인 데이터 수집을 위한 웹 크롤러 개선 및 동적 프로세스 설계 및 구현)

  • Wang, Tae-su;Song, JaeBaek;Son, Dayeon;Kim, Minyoung;Choi, Donggyu;Jang, Jongwook
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • Vol. 26, No. 11
    • /
    • pp.1729-1740
    • /
    • 2022
  • Recently, a great deal of data has been generated as information has diversified and its utilization has grown; accordingly, the importance of big data analysis for collecting, storing, processing, and predicting data has increased, and the ability to collect only the necessary information is required. More than half of the web consists of text, and much of its data is generated through the organic interaction of users. Crawling is a representative technique for collecting text data, but many crawlers are developed without consideration for web servers or administrators because they focus only on obtaining data. In this paper, we design and implement an improved dynamic web crawler that fetches data efficiently, after examining the problems that can occur during the crawling process and the precautions to be considered. The crawler, which remedies the problems of the existing crawler, was designed as a multi-process system and reduced the crawling time by a factor of four on average.
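A common way to obtain a multi-process speedup of the kind reported above is to offload the CPU-bound parsing stage to a process pool. The sketch below shows that shape with a trivial title extractor; both the extractor and the pool layout are illustrative stand-ins, not the paper's actual pipeline.

```python
from multiprocessing import Pool
import re

TITLE_RE = re.compile(r"<title>(.*?)</title>", re.S)

def parse_page(html):
    """CPU-bound parsing step, suitable for offloading to worker processes."""
    m = TITLE_RE.search(html)
    return m.group(1).strip() if m else ""

def parse_all(pages, workers=4):
    """Distribute fetched pages across a pool of parser processes."""
    with Pool(workers) as pool:
        return pool.map(parse_page, pages)
```

Fetching remains I/O-bound and can stay in the main process (respecting per-server delays, which is the "consideration for web servers" the paper raises), while parsing scales with the number of worker processes.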

An Exploratory Study on the Semantic Network Analysis of Food Tourism through the Big Data (빅데이터를 활용한 음식관광관련 의미연결망 분석의 탐색적 적용)

  • Kim, Hak-Seon
    • Culinary science and hospitality research
    • /
    • Vol. 23, No. 4
    • /
    • pp.22-32
    • /
    • 2017
  • The purpose of this study was to explore awareness of food tourism using big data analysis. To this end, the study collected data containing the keyword 'food tourism' from Google web search, Google News, and Google Scholar over one year, from January 1 to December 31, 2016. Data were collected using SCTM (Smart Crawling & Text Mining), a data collection and processing program. From those data, degree centrality and eigenvector centrality were analyzed using the NetDraw package along with UCINET 6. The results show that the web visibility of 'core service' and 'social marketing' was high. Web visibility was also high for destination-related words such as rural, place, Ireland, and heritage, and for socioeconomic words such as economy, region, public, policy, and industry. Convergence of iterated correlations revealed four clusters, named 'core service', 'social marketing', 'destinations', and 'social environment'. This diagnosis of food tourism under a changing international business environment, based on web information, is expected to provide useful baseline data for establishing food tourism marketing strategies.
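The two centrality measures named in the abstract can be computed on a small keyword co-occurrence graph as follows. The toy graph, and the use of power iteration on A + I to avoid oscillation on bipartite graphs, are illustrative choices, not details of UCINET's implementation.

```python
def degree_centrality(adj):
    """adj: {node: set(neighbors)} undirected co-occurrence graph.
    Degree centrality normalized by the maximum possible degree n - 1."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def eigenvector_centrality(adj, iters=100):
    """Eigenvector centrality by power iteration, max-normalized each step."""
    x = {v: 1.0 for v in adj}
    for _ in range(iters):
        # iterate on A + I: the self term damps oscillation on bipartite graphs
        nxt = {v: x[v] + sum(x[u] for u in adj[v]) for v in adj}
        norm = max(nxt.values())
        x = {v: s / norm for v, s in nxt.items()}
    return x
```

On a star-shaped co-occurrence graph, both measures rank the hub keyword highest, which is the sense in which high-centrality terms such as 'core service' dominate the semantic network.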