• Title/Summary/Keyword: Web crawling

An Automated Topic Specific Web Crawler Calculating Degree of Relevance (연관도를 계산하는 자동화된 주제 기반 웹 수집기)

  • Seo Hae-Sung; Choi Young-Soo; Choi Kyung-Hee; Jung Gi-Hyun; Noh Sang-Uk
    • Journal of Internet Computing and Services / v.7 no.3 / pp.155-167 / 2006
  • It is desirable for users surfing the Internet to find Web pages as closely related to their interests as possible. Toward this end, this paper presents a topic-specific Web crawler that computes a degree of relevance, collects a cluster of pages for a given topic, and refines the preliminary set of related Web pages using term frequency/document frequency, entropy, and compiled rules. In the experiments, we tested our topic-specific crawler in terms of classification accuracy, crawling efficiency, and crawling consistency. First, the classification accuracy obtained with the rule set compiled by CN2 was the best among those of the CN2, C4.5, and back-propagation learning algorithms. Second, we measured the crawling efficiency to determine the best threshold value for the degree of relevance. Third, the consistency of the crawler was measured as the number of resulting URLs that overlap when crawling begins from different starting URLs. The experimental results imply that the topic-specific crawler is fairly consistent regardless of the randomly chosen starting URLs.
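
As a rough illustration of the idea above (not the paper's exact formulation), the sketch below scores a page's degree of relevance to a topic from term frequency, document frequency, and term entropy over the pages collected so far; the function name and the weighting are assumptions made for illustration only.

```python
# Hedged sketch: relevance of a page to a set of topic terms, combining
# term frequency (TF), document frequency (DF), and term entropy over an
# already-crawled corpus. Not the paper's exact formula.
import math
from collections import Counter

def degree_of_relevance(page_tokens, topic_terms, corpus):
    """corpus: list of token lists for pages collected so far."""
    n_docs = len(corpus)
    tf = Counter(page_tokens)
    score = 0.0
    for term in topic_terms:
        if tf[term] == 0:
            continue
        # document frequency: how many collected pages contain the term
        df = sum(1 for doc in corpus if term in doc)
        # entropy of the term's distribution across the corpus
        probs = [doc.count(term) / max(len(doc), 1) for doc in corpus]
        total = sum(probs) or 1.0
        entropy = -sum((p / total) * math.log(p / total, 2)
                       for p in probs if p > 0)
        # weight terms that are frequent on the page but selective in the corpus
        score += tf[term] * math.log((n_docs + 1) / (df + 1)) / (1.0 + entropy)
    return score
```

A crawler built this way would follow a link only if the linked page's score exceeds a tuned relevance threshold, which the paper determines experimentally.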

Implementation of a Parallel Web Crawler for the Odysseus Large-Scale Search Engine (오디세우스 대용량 검색 엔진을 위한 병렬 웹 크롤러의 구현)

  • Shin, Eun-Jeong; Kim, Yi-Reun; Heo, Jun-Seok; Whang, Kyu-Young
    • Journal of KIISE: Computing Practices and Letters / v.14 no.6 / pp.567-581 / 2008
  • As the size of the Web grows explosively, search engines are becoming increasingly important as the primary means of retrieving information from the Internet. A search engine periodically downloads Web pages and stores them in a database to provide users with up-to-date search results. The Web crawler is the program that downloads and stores Web pages for this purpose. A large-scale search engine uses a parallel Web crawler to collect Web pages while maximizing the download rate. However, neither the architecture nor an experimental analysis of parallel Web crawlers has been fully discussed in the literature. In this paper, we propose an architecture for a parallel Web crawler and discuss its implementation issues in detail. The proposed crawler is based on a coordinator/agent model that uses multiple machines to download Web pages in parallel: multiple agent machines collect Web pages, and a single coordinator machine manages them. The crawler consists of three components: a crawling module for collecting Web pages, a converting module for transforming the pages into a database-friendly format, and a ranking module for rating pages by their relative importance. We explain each component and its implementation in detail. Finally, we conduct extensive experiments to analyze the effectiveness of the parallel Web crawler. The results show the merit of our architecture: the proposed crawler scales with both the number of Web pages to crawl and the number of machines used.
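
The coordinator/agent model described above can be sketched roughly as follows; the class and method names are hypothetical, and a real system would send the batches to agent machines over the network rather than returning them locally.

```python
# Hedged sketch of the coordinator/agent idea (not the Odysseus code):
# the coordinator assigns each host to one agent machine so that agents
# download disjoint partitions of the Web in parallel.
import hashlib
from urllib.parse import urlparse

class Coordinator:
    def __init__(self, agent_addresses):
        self.agents = agent_addresses          # e.g. ["agent0", "agent1"]

    def agent_for(self, url):
        # hash the host so all pages of one site go to the same agent
        host = urlparse(url).netloc
        h = int(hashlib.md5(host.encode()).hexdigest(), 16)
        return self.agents[h % len(self.agents)]

    def dispatch(self, urls):
        """Group seed and discovered URLs by the agent responsible for them."""
        batches = {a: [] for a in self.agents}
        for url in urls:
            batches[self.agent_for(url)].append(url)
        return batches

coord = Coordinator(["agent0", "agent1", "agent2"])
print(coord.dispatch(["http://a.example/x", "http://b.example/y"]))
```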

Efficient Internet Information Extraction Using Hyperlink Structure and Fitness of Hypertext Document (웹의 연결구조와 웹문서의 적합도를 이용한 효율적인 인터넷 정보추출)

  • Hwang Insoo
    • Journal of Information Technology Applications and Management / v.11 no.4 / pp.49-60 / 2004
  • While the World Wide Web offers an incredibly rich base of information, organized as hypertext, it does not provide a uniform and efficient way to retrieve specific information. An efficient Web crawler is therefore needed to gather useful information in an acceptable amount of time. In this paper, we studied the order in which a Web crawler should visit URLs so as to obtain the more important Web pages quickly, and we developed an Internet agent for efficient Web crawling that uses the hyperlink structure and the fitness of hypertext documents. Experiments on a Web site show that the proposed agent outperforms Web crawlers based on the BackLink and PageRank algorithms.
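
A best-first crawl ordering of the kind described above can be sketched with a priority queue; the fetch, link-extraction, and fitness functions below are placeholders supplied by the caller, not the paper's actual scoring.

```python
# Hedged sketch: visit URLs in order of a fitness score so that pages
# judged more important are downloaded first.
import heapq

def crawl(seed_urls, fetch, extract_links, fitness, limit=100):
    """fetch(url) -> page text, extract_links(page) -> iterable of URLs,
    fitness(url, page) -> numeric score; all supplied by the caller."""
    # heap key is the negated score, so the highest-fitness URL pops first
    frontier = [(0.0, url) for url in seed_urls]
    heapq.heapify(frontier)
    seen, collected = set(seed_urls), []
    while frontier and len(collected) < limit:
        _, url = heapq.heappop(frontier)
        page = fetch(url)
        collected.append((url, page))
        for link in extract_links(page):
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-fitness(link, page), link))
    return collected
```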

A Study on the Crawling and Classification Strategy for Local Website (로컬 웹사이트의 탐색전략과 웹사이트 유형분석에 관한 연구)

  • Hwang In-Soo
    • Journal of Information Technology Applications and Management / v.13 no.2 / pp.55-65 / 2006
  • Since the World Wide Web (WWW) has become a major channel for information delivery, information overload has also become a serious problem for Internet users, and effective information searching is critical to the success of Internet services. We present an integrated search engine for finding relevant Web pages within a specific Internet domain; it supports local search over the target Web sites. The spider obtains all of the pages from the Web sites by following links and operates autonomously without any human supervision. We developed a state transition diagram to control navigation and to analyze the link structure of each Web site. We implemented the integrated local search engine, and a user evaluation shows that it achieves higher satisfaction as well as higher precision.
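
Navigation control with a state transition diagram might look roughly like the sketch below; the states and link types are invented for illustration and are not taken from the paper.

```python
# Hedged sketch: a transition table that tells the spider what to do next
# based on its current state and the kind of link it just classified.
TRANSITIONS = {
    ("START",   "index_page"): "LISTING",
    ("LISTING", "list_link"):  "LISTING",
    ("LISTING", "item_link"):  "CONTENT",
    ("CONTENT", "item_link"):  "CONTENT",
    ("CONTENT", "external"):   "STOP",
}

def navigate(state, link_type):
    """Return the next crawler state; unknown transitions stop the spider."""
    return TRANSITIONS.get((state, link_type), "STOP")

state = "START"
for link_type in ["index_page", "list_link", "item_link", "external"]:
    state = navigate(state, link_type)
    print(link_type, "->", state)
```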

Web Crawling and PageRank Calculation for Community-Limited Search (커뮤니티 제한 검색을 위한 웹 크롤링 및 PageRank 계산)

  • Kim Gye-Jeong; Kim Min-Soo; Kim Yi-Reun; Whang Kyu-Young
    • Proceedings of the Korean Information Science Society Conference / 2005.07b / pp.1-3 / 2005
  • Recently, many techniques for improving search quality have been studied in the field of Web search; representative examples are restricted search, focused crawling, and Web clustering. However, restricted search cannot limit the search scope to semantically related sites, focused crawling incurs long query processing times because clustering is performed at query time, and Web clustering has a large overhead for clustering a huge number of Web pages. In this paper, we solve these problems by proposing community-limited search, which restricts the search scope to a specific community, together with a cluster crawler for finding such communities. We also propose a method that computes PageRank in two phases using communities: in the first phase, PageRank is computed locally within each community; in the second phase, global PageRank is computed based on the local results. Compared with the method proposed by Wang, the proposed method reduces the error of the PageRank approximation to about 59%.
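
A much-simplified sketch of the two-phase idea (not the paper's exact algorithm) is shown below: PageRank is first computed locally inside each community, and community-level scores computed on the contracted graph are then used to weight the local values into a global approximation.

```python
# Hedged sketch of two-phase PageRank over communities.
def pagerank(graph, d=0.85, iters=50):
    """graph: {node: [outlinks]}; plain power iteration, no optimizations."""
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - d) / len(nodes) for n in nodes}
        for n, outs in graph.items():
            if not outs:
                continue
            share = d * rank[n] / len(outs)
            for m in outs:
                if m in new:              # ignore links leaving the graph
                    new[m] += share
        rank = new
    return rank

def two_phase_pagerank(graph, communities):
    """communities: {community_id: set_of_nodes}, nodes drawn from graph."""
    # phase 1: local PageRank inside each community's subgraph
    local = {}
    for cid, members in communities.items():
        sub = {n: [m for m in graph.get(n, []) if m in members] for n in members}
        local[cid] = pagerank(sub)
    # phase 2: PageRank on the contracted community-level graph
    node2c = {n: cid for cid, ms in communities.items() for n in ms}
    cgraph = {cid: [] for cid in communities}
    for n, outs in graph.items():
        if n not in node2c:
            continue
        cn = node2c[n]
        cgraph[cn].extend(node2c[m] for m in outs
                          if m in node2c and node2c[m] != cn)
    cweight = pagerank(cgraph)
    # combine: local importance scaled by the community's global weight
    return {n: local[cid][n] * cweight[cid]
            for cid, members in communities.items() for n in members}
```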

Word Frequency-Based Big Data Analysis for the Annals of the Joseon Dynasty (조선왕조실록 분석을 위한 단어 빈도수 기반 빅 데이터 분석)

  • Bong, Young-Il; Lee, Choong-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.707-709 / 2022
  • The Annals of the Joseon Dynasty is a chronicle that records the history of the Joseon Dynasty over 472 years, from King Taejo to King Cheoljong. The Annals, National Treasure No. 151, are an important documentary heritage, but they are difficult to analyze because of their vast content. Therefore, rather than analyzing the entire text of the Annals, it is necessary to extract and analyze the important words. In this paper, we propose a method that extracts words from the body text of the Annals of the Joseon Dynasty through Web crawling and analyzes the translated texts based on data sorted by word frequency. In this study, only the portion covering King Sejong was extracted, and the importance of words was analyzed according to their frequency.
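
The frequency-analysis step can be sketched as below, assuming the translated body text is reachable at some URLs; the URLs and the token filter are placeholders, and Korean text would normally be tokenized with a morphological analyzer rather than split on whitespace.

```python
# Hedged sketch: crawl translated body text and rank words by frequency.
import requests
from bs4 import BeautifulSoup
from collections import Counter

def fetch_body_text(url):
    html = requests.get(url, timeout=10).text
    # the real site's HTML structure would determine the actual selector
    return BeautifulSoup(html, "html.parser").get_text(separator=" ")

def word_frequencies(urls):
    counts = Counter()
    for url in urls:
        for word in fetch_body_text(url).split():
            if len(word) > 1:              # drop single-character tokens
                counts[word] += 1
    return counts.most_common()

# example (placeholder URLs, not the actual Annals pages):
# print(word_frequencies(["http://example.org/sejong/1"])[:20])
```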

Early Detection Assistance System for Rare Diseases based on Patient's Symptom Information (환자 증상정보 기반 희귀질환 조기 발견 보조시스템)

  • Jae-Min Choi; Sun-Yong Kim
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.18 no.2 / pp.373-378 / 2023
  • Atypical symptoms and a lack of diagnostic records make it difficult even for medical specialists to detect rare diseases. As a result, it takes a great deal of time and money to reach an accurate diagnosis after the onset of symptoms, which places a serious physical, mental, and economic burden on patients. In this paper, we propose and implement an early detection assistance system for rare diseases based on Web crawling and text mining; it suggests the names of suspected rare diseases so that medical staff can more easily recall them and make a final diagnosis.
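
One simple way such a system could rank candidate diseases is by the overlap between the patient's symptom terms and symptom descriptions mined from the web; the sketch below uses toy data and is not the paper's actual matching method.

```python
# Hedged sketch: rank diseases by how many patient symptom terms appear
# in the crawled symptom description of each disease.
def rank_diseases(patient_symptoms, disease_symptom_texts):
    """disease_symptom_texts: {disease_name: crawled symptom description}."""
    query = set(s.lower() for s in patient_symptoms)
    scores = {}
    for disease, text in disease_symptom_texts.items():
        terms = set(text.lower().split())
        scores[disease] = len(query & terms) / max(len(query), 1)
    # candidate disease names, best match first
    return sorted(scores, key=scores.get, reverse=True)

corpus = {"Disease A": "fever joint pain rash fatigue",
          "Disease B": "muscle weakness double vision fatigue"}
print(rank_diseases(["fatigue", "rash", "fever"], corpus))
```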

Conflict Analysis in Construction Project with Unstructured Data: A Case Study of Jeju Naval Base Project in South Korea

  • Baek, Seungwon; Han, Seung Heon; Lee, Changjun; Jang, Woosik; Ock, Jong Ho
    • International Conference on Construction Engineering and Project Management / 2017.10a / pp.291-296 / 2017
  • Infrastructure development as a national project suffers from social conflict, which is one of the main risks to be managed. Social conflicts have a negative impact not only on social integration but also on the national economy, since resolving them requires enormous social costs. Against this backdrop, this study analyzes social conflict using articles published by online news media, based on Web crawling and natural language processing (NLP) techniques. As an illustrative case, the Jeju Naval Base (JNB) project, one of the most representative conflict cases in South Korea, is analyzed. A total of 21,788 articles are collected, and representative keywords are identified for each year. Additionally, a comparative analysis is conducted between the extracted keywords and the actual events that occurred during the project, and the authors explain those events based on the words extracted year by year. This study contributes to analyzing social conflict and extracting meaningful information from unstructured data.
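
Extracting representative keywords per year from crawled articles can be sketched as below; the tokenization and stopword handling are assumptions, since the abstract does not detail the NLP pipeline.

```python
# Hedged sketch: group crawled articles by year and extract the most
# frequent keywords for each year.
from collections import Counter, defaultdict

def yearly_keywords(articles, stopwords=(), top_k=10):
    """articles: iterable of (year, tokens) pairs, where tokens come from
    an NLP tool such as a morphological analyzer."""
    per_year = defaultdict(Counter)
    for year, tokens in articles:
        per_year[year].update(t for t in tokens if t not in stopwords)
    return {year: [w for w, _ in counts.most_common(top_k)]
            for year, counts in per_year.items()}

sample = [(2012, ["naval", "base", "protest", "construction"]),
          (2012, ["protest", "environment", "construction"]),
          (2015, ["completion", "naval", "base"])]
print(yearly_keywords(sample, stopwords={"base"}, top_k=3))
```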

Designing Cost Effective Open Source System for Bigdata Analysis (빅데이터 분석을 위한 비용효과적 오픈 소스 시스템 설계)

  • Lee, Jong-Hwa; Lee, Hyun-Kyu
    • Knowledge Management Research / v.19 no.1 / pp.119-132 / 2018
  • Many advanced products and services are emerging in the market thanks to data-based technologies such as the Internet of Things (IoT), big data, and AI. However, building a data processing system for an IoT network environment is not simple to configure and is heavily constrained by the high cost of a high-performance server environment. In this paper, we therefore design a low-cost, practical development environment for a big data analysis computing platform using open source. Specifically, we implement a big data processing system using the Raspberry Pi, an ultra-small PC, and open source APIs. Building this system includes constructing a portable server, setting up a web server for web mining, developing Python classes for crawling, and developing R libraries for NLP and visualization. Through this research, we develop a web environment that can control real-time collection and analysis of web media from a mobile environment, and we present it as a curriculum for non-IT specialists.
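
A minimal crawling component of the kind a Raspberry Pi node could run is sketched below; the class is hypothetical and not the paper's code, but it illustrates keeping memory use low by streaming one page at a time.

```python
# Hedged sketch: a lightweight crawler suitable for a resource-constrained device.
import requests
from bs4 import BeautifulSoup

class LiteCrawler:
    def __init__(self, timeout=10):
        self.timeout = timeout

    def fetch_text(self, url):
        resp = requests.get(url, timeout=self.timeout)
        resp.raise_for_status()
        return BeautifulSoup(resp.text, "html.parser").get_text(separator=" ")

    def crawl(self, urls):
        # generator: yields (url, text) without holding all pages in memory
        for url in urls:
            try:
                yield url, self.fetch_text(url)
            except requests.RequestException:
                continue   # skip unreachable pages on a constrained device
```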

Design and Implementation of Typing Practice Application for Learning Using Web Contents (웹 콘텐츠를 활용한 학습용 타자 연습 어플리케이션의 설계와 구현)

  • Kim, Chaewon; Hwang, Soyoung
    • Journal of Korea Multimedia Society / v.24 no.12 / pp.1663-1672 / 2021
  • Various typing practice applications exist, and research on learning applications that support typing practice has also been reported. These services usually rely on their own built-in text, whereas learners collect a wide range of content through web services and use it extensively for learning. This paper therefore proposes a learning application that increases the learning effect by collecting web content and applying it to typing practice. The proposed application is implemented with Tkinter, Python's GUI module; the BeautifulSoup module is used to extract information from the web, and the NLTK module (for English text preprocessing) and the KoNLPy module (for Korean language processing) are used to process the extracted data. The operation of the proposed functions is verified through the implementation and experimental results.
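
The extraction step could be sketched as follows, assuming an English page at a placeholder URL; BeautifulSoup pulls the text and NLTK splits it into sentences for the typing screen (KoNLPy would play the analogous role for Korean pages).

```python
# Hedged sketch: turn a web page into sentences suitable for typing practice.
import requests
import nltk
from bs4 import BeautifulSoup

# nltk.download("punkt")   # one-time download of the sentence tokenizer model

def typing_sentences(url, min_words=4):
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ")
    sentences = nltk.sent_tokenize(text)
    # keep only sentences long enough to be useful typing material
    return [s.strip() for s in sentences if len(s.split()) >= min_words]

# example (placeholder URL):
# for s in typing_sentences("https://example.org/article")[:5]:
#     print(s)
```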