• Title/Summary/Keyword: Web documents

Search Results: 831

Numerical Formula and Verification of Web Robot for Collection Speedup of Web Documents

  • Kim Weon;Kim Young-Ki;Chin Yong-Ok
    • Journal of Internet Computing and Services
    • /
    • v.5 no.6
    • /
    • pp.1-10
    • /
    • 2004
  • A web robot is software that tracks and collects web documents on the Internet. The performance scalability of recent web robots has reached its limit as the number of web documents has increased sharply with the rapid growth of the Internet. Accordingly, research on performance scalability in searching and collecting web documents is strongly demanded. This thesis presents the design of a Multi-Agent-based web robot for speeding up document collection, in contrast to a sequentially executing web robot based on the existing Fork-Join method, together with an analysis of its performance scalability. For collection speedup, the Multi-Agent-based web robot handles inactive ('dead-link') URLs, caused by overloaded web documents or temporary network or web-server disturbances, in an independent process after dividing them among the agents. Each agent consists of four components: Loader, Extractor, Active URL Scanner, and Inactive URL Scanner. The thesis models the Multi-Agent-based web robot on Amdahl's Law, introduces a numerical formula for collection speedup, and verifies the performance improvement by comparing values predicted by the formula with experimental data. Moreover, a Dynamic URL Partition algorithm is introduced and implemented to minimize the workload of web servers by maximizing the interval at which each target web server is visited.
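The abstract models collection speedup on Amdahl's Law. As a rough sketch (function and parameter names are hypothetical, not from the paper), the theoretical speedup for n agents can be computed as:

```python
def amdahl_speedup(parallel_fraction: float, n_agents: int) -> float:
    """Theoretical speedup by Amdahl's Law: only the parallelizable
    fraction of the crawl (e.g. fetching split across agents) scales
    with the number of agents; the serial part does not."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_agents)

# If 90% of collection time parallelizes across agents:
# amdahl_speedup(0.9, 4) ≈ 3.08, far below the ideal 4x
```

This is why the paper's measured speedup is expected to fall short of linear scaling as agents are added.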


Design and Implementation of a Web-Based Instruction-Learning System Using a Web Agent (웹 에이전트를 이용한 웹기반 교수-학습 시스템의 설계 및 개발)

  • Kim, Kap-Su;Lee, Keon-Min
    • Journal of The Korean Association of Information Education
    • /
    • v.5 no.1
    • /
    • pp.69-78
    • /
    • 2001
  • Recently, the trend in computer-based learning has been moving from CAI environments to WBI environments. Most web documents for WBI learning are collected with the aid of a search engine, and instructors use those documents as learning materials after evaluating the suitability of the retrieved web documents. However, this method has the following problems. First, the web documents selected by the instructor must be searched for repeatedly. Second, a separate instruction-design step is needed in order to present the web documents to learners. Third, it is very difficult to analyze the relevance between the web documents and test results. In this work, we suggest WAILS (Web Agent Instruction-Learning System), which retrieves web documents for WBI learning and guides the learning course for learners. WAILS collects web documents for WBI learning with the aid of a web agent; instructors can then evaluate them and present them to learners using the instruction-learning generating machine. Instructors retrieve web documents and design the instruction-learning at the same time, which can facilitate WBI learning.


Intelligent Web Crawler for Supporting Big Data Analysis Services (빅데이터 분석 서비스 지원을 위한 지능형 웹 크롤러)

  • Seo, Dongmin;Jung, Hanmin
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.12
    • /
    • pp.575-584
    • /
    • 2013
  • Data types used for big-data analysis vary widely: news, blogs, SNS, papers, patents, sensor data, and so on. In particular, the use of web documents, which offer reliable data in real time, is increasing gradually, and web crawlers that collect web documents automatically have grown in importance because big data is used in many different fields and web data grow exponentially every year. However, existing web crawlers cannot collect all the web documents in a web site, because they collect documents only through the URLs found in documents already collected from some web sites. Also, existing crawlers may re-collect documents already gathered by other crawlers, because information about the documents collected by each crawler is not efficiently shared among crawlers. Therefore, this paper proposes a distributed web crawler. To resolve the problems of existing web crawlers, the proposed crawler collects web documents through each web site's RSS feeds and the Google search API. The crawler also provides fast crawling performance through a client-server model based on RMI and NIO that minimizes network traffic. Furthermore, the crawler extracts core content from a web document by a keyword-similarity comparison on the tags included in the document. Finally, to verify the superiority of our web crawler, we compare it with existing web crawlers in various experiments.

Automatic Classification of Web documents According to their Styles (스타일에 따른 웹 문서의 자동 분류)

  • Lee, Kong-Joo;Lim, Chul-Su;Kim, Jae-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.11B no.5
    • /
    • pp.555-562
    • /
    • 2004
  • A genre or style is a view of documents different from subject or topic, and style is also a criterion for classifying documents. There have been several studies on detecting the style of textual documents, but only a few have dealt with web documents. In this paper we suggest sets of features for detecting the styles of web documents. Web documents differ from textual documents in that they contain URLs and HTML tags within the pages. We introduce features specific to web documents, extracted from URLs and HTML tags. Experimental results enable us to evaluate their characteristics and performance.
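The abstract names URLs and HTML tags as the web-specific feature sources. A minimal sketch of such feature extraction, assuming simple features like URL path depth and per-tag counts (the concrete feature set in the paper is not given in the abstract):

```python
from collections import Counter
from html.parser import HTMLParser
from urllib.parse import urlparse

class TagCounter(HTMLParser):
    """Count HTML start tags, one of the web-specific feature sources."""
    def __init__(self):
        super().__init__()
        self.tags = Counter()
    def handle_starttag(self, tag, attrs):
        self.tags[tag] += 1

def web_style_features(url: str, html: str) -> dict:
    """Features unavailable in plain text: URL path depth plus tag counts."""
    parser = TagCounter()
    parser.feed(html)
    depth = len([part for part in urlparse(url).path.split("/") if part])
    return {"url_depth": depth,
            **{f"tag_{t}": n for t, n in parser.tags.items()}}

# web_style_features("http://a.com/b/c.html", "<p><a href='x'>link</a></p>")
# -> {'url_depth': 2, 'tag_p': 1, 'tag_a': 1}
```

Such a feature dictionary would then feed any standard classifier alongside ordinary text features.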

Document Classification Model Using Web Documents for Balancing Training Corpus Size per Category

  • Park, So-Young;Chang, Juno;Kihl, Taesuk
    • Journal of information and communication convergence engineering
    • /
    • v.11 no.4
    • /
    • pp.268-273
    • /
    • 2013
  • In this paper, we propose a document classification model using Web documents as a part of the training corpus in order to resolve the imbalance of the training corpus size per category. For the purpose of retrieving the Web documents closely related to each category, the proposed document classification model calculates the matching score between word features and each category, and generates a Web search query by combining the higher-ranked word features and the category title. Then, the proposed document classification model sends each combined query to the open application programming interface of the Web search engine, and receives the snippet results retrieved from the Web search engine. Finally, the proposed document classification model adds these snippet results as Web documents to the training corpus. Experimental results show that the method that considers the balance of the training corpus size per category exhibits better performance in some categories with small training sets.
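The query-generation step described above, combining the category title with its highest-scoring word features, can be sketched as follows (the scoring inputs and `top_k` cutoff are hypothetical; the paper's matching-score formula is not given in the abstract):

```python
def build_web_query(category: str, word_scores: dict[str, float],
                    top_k: int = 3) -> str:
    """Combine the category title with its top-scoring word features
    into a Web search query used to fetch extra training snippets
    for under-represented categories."""
    top_words = sorted(word_scores, key=word_scores.get, reverse=True)[:top_k]
    return " ".join([category, *top_words])

# build_web_query("sports", {"goal": 0.9, "match": 0.7, "tax": 0.1}, top_k=2)
# -> "sports goal match"
```

The resulting string would be sent to a search engine's open API, and the returned snippets appended to that category's training corpus.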

Web Document Clustering based on Graph using Hyperlinks (하이퍼링크를 이용한 그래프 기반의 웹 문서 클러스터링)

  • Lee, Joon;Kang, Jin-Beom;Choi, Joong-Min
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2009.02a
    • /
    • pp.590-595
    • /
    • 2009
  • Given the exponential growth of web documents on the Internet, it is important to improve the performance of clustering methods for web documents. Web document clustering can offer accurate information and fast information retrieval by grouping web documents through their semantic relationships. A clustering method based on a mesh graph provides high recall by computing pairwise document similarity, but it requires high computation cost. This paper proposes a clustering method that uses hyperlinks, a structural feature of web documents, in order to maintain effectiveness while reducing computation cost.
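The abstract does not detail the algorithm, but the cost argument, grouping by hyperlink structure instead of all-pairs similarity, can be illustrated with a simplified sketch that treats hyperlinks as undirected edges and clusters pages into connected components:

```python
from collections import defaultdict

def hyperlink_clusters(links: list[tuple[str, str]]) -> list[set[str]]:
    """Cluster pages by hyperlink structure: build an undirected graph
    from (source, target) links and return its connected components,
    avoiding the O(n^2) pairwise similarity of a mesh-graph approach."""
    graph = defaultdict(set)
    for src, dst in links:
        graph[src].add(dst)
        graph[dst].add(src)
    seen, clusters = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, component = [node], set()
        while stack:          # iterative DFS over the link graph
            current = stack.pop()
            if current in seen:
                continue
            seen.add(current)
            component.add(current)
            stack.extend(graph[current] - seen)
        clusters.append(component)
    return clusters

# hyperlink_clusters([("a", "b"), ("b", "c"), ("x", "y")])
# -> one cluster {a, b, c} and one cluster {x, y}
```

This runs in time linear in the number of links, which is the kind of saving the paper claims over similarity-based mesh-graph clustering.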


An Improved Approach to Ranking Web Documents

  • Gupta, Pooja;Singh, Sandeep K.;Yadav, Divakar;Sharma, A.K.
    • Journal of Information Processing Systems
    • /
    • v.9 no.2
    • /
    • pp.217-236
    • /
    • 2013
  • Ranking thousands of web documents so that the most relevant are returned in response to a user query is a challenging task. For this purpose, search engines apply different ranking mechanisms to the apparently related resultant web documents to decide the order in which documents should be displayed. Existing ranking mechanisms decide the order of a web page based on the number and popularity of the links pointing to and emerging from it. Sometimes search engines place less relevant documents in the top positions in response to a user query, so there is a strong need to improve the ranking strategy. In this paper, a novel ranking mechanism is proposed that ranks web documents by considering both the HTML structure of a page and the contextual senses of the keywords present within it and in its back-links. The approach has been tested on data sets of URLs and their back-links across different topics. The experimental results show that the overall search results in response to user queries are improved. The ordering of links obtained is compared with the ordering produced by the PageRank score; the results show that the proposed mechanism puts more contextually related web pages in the top positions than the PageRank score does.
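One ingredient of the proposed ranking, weighting keyword occurrences by where they sit in the page's HTML structure, can be sketched as below (the tag weights are illustrative assumptions, not values from the paper; the contextual-sense and back-link components are omitted):

```python
# Hypothetical weights: a query keyword in the title or a heading is
# stronger evidence of relevance than one in body text.
TAG_WEIGHTS = {"title": 3.0, "h1": 2.0, "a": 1.5, "p": 1.0}

def structural_score(keyword_hits: dict[str, int]) -> float:
    """Score a page by which HTML elements its query-keyword hits fall in;
    unknown tags get a small default weight."""
    return sum(TAG_WEIGHTS.get(tag, 0.5) * count
               for tag, count in keyword_hits.items())

# A keyword appearing once in <title> and twice in <p> text:
# structural_score({"title": 1, "p": 2}) -> 5.0
```

In the full mechanism, a score like this would be combined with keyword-sense matching against the query and with evidence from back-link anchor text.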

A Method of Efficient Web Crawling Using URL Pattern Scripts (URL 패턴 스크립트를 이용한 효율적인 웹문서 수집 방안)

  • Chang, Moon-Soo;Jung, June-Young
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.6
    • /
    • pp.849-854
    • /
    • 2007
  • It is difficult to collect only target documents from the innumerable documents on the Web. One solution to this problem is to select target documents from web sites that serve many documents in the target domain. In this paper, we propose an intelligent crawling method that collects the needed documents based on a URL pattern script defined in XML. The proposed crawling method applies efficiently to sites that serve structured, database-backed information. Using our crawling method, we collected 50 thousand web documents.
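A minimal sketch of the XML-defined URL pattern script idea (the script format and pattern contents here are invented for illustration; the paper's actual schema is not given in the abstract):

```python
import re
import xml.etree.ElementTree as ET

# Hypothetical pattern script: each <pattern> is a regular expression
# describing the URL shape of target documents on a structured site.
PATTERN_SCRIPT = """<patterns>
  <pattern>http://example\\.com/board/view\\?id=\\d+</pattern>
  <pattern>http://example\\.com/docs/.+\\.html</pattern>
</patterns>"""

def target_urls(script_xml: str, urls: list[str]) -> list[str]:
    """Keep only URLs that fully match a pattern in the XML script,
    so the crawler fetches just the structured target pages."""
    patterns = [re.compile(p.text)
                for p in ET.fromstring(script_xml).iter("pattern")]
    return [u for u in urls if any(p.fullmatch(u) for p in patterns)]

# target_urls(PATTERN_SCRIPT,
#             ["http://example.com/board/view?id=42",
#              "http://example.com/login"])
# -> ["http://example.com/board/view?id=42"]
```

Because database-backed board sites expose their documents through a small number of URL templates, a short script like this can cover a whole site.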

Detecting Harmful Web Documents Based on Web Document Analyses (웹 문서 분석에 근거한 유해 웹 문서 검출)

  • Kim, Kwang-Hyun;Choi, Joung-Mi;Lee, Joon-Ho
    • The KIPS Transactions:PartD
    • /
    • v.12D no.5 s.101
    • /
    • pp.683-688
    • /
    • 2005
  • The huge number of web documents published on the Internet provides users not only helpful information but also harmful information such as pornography. In this paper we propose a method to detect harmful web documents effectively. We first analyze harmful web documents and extract factors that determine whether a given web document is harmful. Detailed criteria are also described for assigning a harmfulness score to each factor. The harmfulness score of a web document is then computed by adding the harmfulness scores of all factors. If the harmfulness score of a web document is greater than a given threshold, the document is detected as harmful. This study is expected to contribute to protecting users from harmful web documents on the Internet.
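The scoring scheme described, per-factor scores summed and compared against a threshold, reduces to a few lines (the factor names, weights, and threshold below are hypothetical examples, not the paper's criteria):

```python
def is_harmful(factor_scores: dict[str, float],
               threshold: float = 10.0) -> bool:
    """Sum the harmfulness scores assigned to each extracted factor
    and flag the document when the total exceeds the threshold."""
    return sum(factor_scores.values()) > threshold

# Hypothetical factors for one document:
# is_harmful({"adult_keywords": 6.0, "image_ratio": 3.0,
#             "link_targets": 2.5}) -> True  (total 11.5 > 10.0)
```

The interesting work in the paper lies in choosing the factors and their detailed scoring criteria; the aggregation itself is this simple additive rule.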

Dynamic Generation of SMIL based Multimedia Documents on the Web (웹에서 SMIL 기반 멀티미디어 문서의 동적 생성)

  • 김경덕
    • Journal of Korea Multimedia Society
    • /
    • v.4 no.5
    • /
    • pp.439-445
    • /
    • 2001
  • In this paper, we suggest a method for dynamically generating SMIL documents on the web according to user profiles. The generated multimedia documents are based on SMIL (Synchronized Multimedia Integration Language), recommended by the W3C. The method automatically generates XSLT documents according to user profiles, and SMIL documents are produced in real time by combining these XSLT documents with previously prepared XML documents. Most conventional web documents are based on HTML, which makes it difficult to support the reusability of documents and the relations among multimedia objects. The suggested method, however, is based on XML, so it supports document reusability and efficiently produces various SMIL-based multimedia documents. Applications of the suggested method include electronic commerce, tele-lecture, web-based document editing, etc.
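The transform step, turning a prepared XML document into a SMIL presentation, can be mimicked in a small stdlib sketch (the paper drives this with generated XSLT; the `playlist`/`clip` input format here is an invented stand-in, and no real XSLT engine is used):

```python
import xml.etree.ElementTree as ET

def generate_smil(playlist_xml: str) -> str:
    """Build a SMIL document from a prepared XML playlist: each <clip>
    becomes a <video> element inside a sequential <seq> timeline."""
    smil = ET.Element("smil")
    seq = ET.SubElement(ET.SubElement(smil, "body"), "seq")
    for clip in ET.fromstring(playlist_xml).iter("clip"):
        ET.SubElement(seq, "video", src=clip.get("src"), dur=clip.get("dur"))
    return ET.tostring(smil, encoding="unicode")

playlist = '<playlist><clip src="a.mp4" dur="5s"/></playlist>'
# generate_smil(playlist) yields a <smil> document whose <seq>
# contains one <video src="a.mp4" dur="5s"> element
```

Keeping the media list in XML and regenerating the presentation per user profile is what gives the method its reusability over hand-written HTML.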
