• Title/Summary/Keyword: Crawling system


An Implementation and Performance Evaluation of Fast Web Crawler with Python

  • Kim, Cheong Ghil
    • Journal of the Semiconductor & Display Technology / v.18 no.3 / pp.140-143 / 2019
  • The Internet has expanded constantly and greatly, so that we now have a vast number of web pages subject to dynamic change. In particular, the rapid development of wireless communication technology and the wide spread of various smart devices allow information to be created and changed quickly, anywhere and at any time. In this situation web crawling, also known as web scraping, an organized and automated computer process for systematically navigating web pages and automatically searching and indexing information, is inevitably used broadly in many fields today. This paper implements a prototype web crawler with Python and improves its execution speed using threads on a multicore CPU (a thread-based sketch follows below). The implementation was verified by crawling reference web sites, and the performance improvement was confirmed by measuring execution speed under different thread configurations on a multicore CPU.
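The paper's code is not given here; the following is a minimal sketch of the pattern the abstract describes, assuming worker threads that share a URL queue. All names (worker, crawl, the seed URL) are illustrative assumptions, not the paper's implementation.

```python
import queue
import threading
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkParser(HTMLParser):
    """Collects href values from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def worker(frontier, seen, lock, max_pages):
    # each thread pulls a URL, fetches it, and enqueues newly found links
    while True:
        try:
            url = frontier.get(timeout=2)
        except queue.Empty:
            return  # frontier drained: let the thread exit
        try:
            page = urllib.request.urlopen(url, timeout=5).read()
            parser = LinkParser()
            parser.feed(page.decode("utf-8", "ignore"))
            with lock:
                for link in parser.links:
                    absolute = urljoin(url, link)
                    if absolute not in seen and len(seen) < max_pages:
                        seen.add(absolute)
                        frontier.put(absolute)
        except Exception:
            pass  # skip unreachable or malformed pages
        finally:
            frontier.task_done()

def crawl(seed, num_threads=4, max_pages=100):
    frontier = queue.Queue()
    frontier.put(seed)
    seen, lock = {seed}, threading.Lock()
    threads = [threading.Thread(target=worker, daemon=True,
                                args=(frontier, seen, lock, max_pages))
               for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return seen

if __name__ == "__main__":
    print(len(crawl("https://example.com", num_threads=4)), "URLs discovered")
```

Timing crawl() under different num_threads values reproduces the kind of thread-configuration comparison the abstract reports.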

Design of a Web-based Barter System using Data Crawling (Crawling을 이용한 웹기반의 물물교환 시스템설계)

  • Yoo, Hongseok;Kim, Ji-Won;Hwang, Jong-Wook;Park, Tae-Won;Lee, Jun-Hee
    • Proceedings of the Korean Society of Computer Information Conference / 2021.07a / pp.527-528 / 2021
  • This paper proposes a web-based barter system that offers users convenience and remedies the shortcomings of existing barter systems. Most people who sell secondhand or unneeded goods do so in order to dispose of items they no longer need and to buy the items they do need. From these users' point of view, the process of obtaining a needed item takes a long time, and items that are simply thrown away are wasted and drive overconsumption. To solve these problems, we propose a barter system that lets users exchange unneeded items with people who need them, reducing unnecessary consumption and providing the convenience of easily finding and exchanging needed products.

A Study on the Necessity for the Standardization of Information Classification System about Construction Products

  • Hong, Simhee;Yu, Jung-ho
    • International conference on construction engineering and project management / 2017.10a / pp.121-123 / 2017
  • The widespread dissemination of the green building certification system has led to the ongoing development of information management technologies aimed at effectively utilizing construction product information. Among them, data crawling technology makes it possible to collect data conveniently and to manage large volumes of construction product information in Korea and overseas. However, without a standardized classification system it is difficult to utilize the information efficiently, and problems arise such as additional work to classify information and information-sharing errors. Therefore, this study presents the necessity of standardizing the information classification system through expert interviews and a comparison of construction product classification systems in Korea and overseas. The study is expected to support the effective management of construction product information and the standardization of information sharing across various construction certifications.

Design and Implementation of a Real-time Web Crawling Distributed Monitoring System (실시간 웹 크롤링 분산 모니터링 시스템 설계 및 구현)

  • Kim, Yeong-A;Kim, Gea-Hee;Kim, Hyun-Ju;Kim, Chang-Geun
    • Journal of Convergence for Information Technology / v.9 no.1 / pp.45-53 / 2019
  • In this rapidly changing information era we face problems caused by the excessive information served by websites: we find little of it useful and much of it useless, and we spend a lot of time selecting the information we need. Many websites, including search engines, use web crawling to keep their data updated. Web crawling is usually used to generate copies of all the pages of visited sites, which search engines then index for faster searching. For collecting wholesale and order information that changes in real time, keyword-oriented web data collection is not adequate, and no alternative for the selective real-time collection of web information has been suggested. In this paper, we propose a method of collecting information from restricted web sites using a real-time web crawling distributed monitoring system (R-WCMS), estimating collection time through detailed analysis of the data, and storing the results in a parallel system (a small parallel-collection sketch follows below). Experimental results show that applying the proposed model to web site information retrieval reduces retrieval time by 15-17%.
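As a rough sketch of the parallel-collection idea (not the R-WCMS implementation), the snippet below fetches a fixed set of monitored pages concurrently and records per-page collection time; the target URLs and worker count are assumptions.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# placeholder URLs standing in for the monitored wholesale/order pages
TARGETS = [
    "https://example.com/wholesale",
    "https://example.com/orders",
]

def collect(url):
    """Fetch one page and report its size and collection time."""
    start = time.monotonic()
    try:
        size = len(urllib.request.urlopen(url, timeout=5).read())
    except Exception:
        size = 0  # record the attempt even when the fetch fails
    return url, size, time.monotonic() - start

with ThreadPoolExecutor(max_workers=4) as pool:
    for url, size, elapsed in pool.map(collect, TARGETS):
        print(f"{url}: {size} bytes in {elapsed:.2f}s")
```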

Clustering Analysis of Films on Box Office Performance : Based on Web Crawling (영화 흥행과 관련된 영화별 특성에 대한 군집분석 : 웹 크롤링 활용)

  • Lee, Jai-Ill;Chun, Young-Ho;Ha, Chunghun
    • Journal of Korean Society of Industrial and Systems Engineering / v.39 no.3 / pp.90-99 / 2016
  • Forecasting box office performance after a film's release is very important for increasing profitability by reducing production and marketing costs. Analysis of psychological factors such as word-of-mouth and expert assessment is essential but hard to perform because of the difficulty of data collection. Information technologies such as web crawling and text mining can help overcome this. For effective text mining, categorization of the objects is required; accordingly, the objective of this study is to provide a framework for classifying films according to their characteristics. Data including psychological factors were collected from web sites by web crawling. A clustering analysis was conducted to classify the films, and a series of one-way ANOVA tests statistically verified the differences in characteristics among the groups (see the clustering sketch below). The cluster analysis based on reviews and revenues shows that the films fall into four distinct groups whose differences in characteristics are statistically significant. The first group has high box office sales, and the number of clicks on its reviews is higher than in the other groups. The second group is similar to the first, but its reviews are longer and its box office sales are poor. The third group's audiences prefer documentaries and animations, and its numbers of comments and expressions of interest are significantly lower than those of the other groups. The last group prefers the crime, thriller, and suspense genres. A correspondence analysis was also conducted to match the groups with intrinsic characteristics of the films such as genre, movie rating, and nation.
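A minimal sketch of that pipeline, assuming synthetic stand-ins for the crawled review features: k-means forms the film groups and a one-way ANOVA tests between-group revenue differences.

```python
import numpy as np
from scipy.stats import f_oneway
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# columns: review clicks, review length, box-office revenue (synthetic)
films = rng.random((200, 3))

# group the films into four clusters, as in the study
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(films)

# one-way ANOVA on revenue (column 2) across the four clusters
groups = [films[labels == k, 2] for k in range(4)]
f_stat, p_value = f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```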

Learning Effects of Flipped Learning based on Learning Analytics in SW Coding Education (SW 코딩교육에서의 학습분석기반 플립러닝의 학습효과)

  • Pi, Su-Young
    • Journal of Digital Convergence / v.18 no.11 / pp.19-29 / 2020
  • This study examines the effectiveness of a flipped learning teaching method that uses learning analytics to enable effective programming learning for non-major students. After designing a flipped learning programming class model based on the ADDIE model, learning-related data from the school's lecture support system were collected by crawling. Providing the crawled data through a dashboard the instructor can easily understand lets the instructor design classes more efficiently and offer individually tailored learning. Analysis of the learning-related data collected over one semester showed that department, academic year, attendance, assignment submission, and preliminary/review attendance affected academic achievement (a minimal analysis sketch follows below). In the survey, students responded that the instructor's individualized feedback based on learning analysis was very helpful for self-directed learning. This is expected to give instructors a foundation for enhancing their teaching activities. In future work, the contents of social network services related to learners' learning will be crawled to analyze learners' learning situations.
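A small sketch of the analytics step, assuming crawled LMS records with the activity fields the abstract names; it only correlates each field with achievement, not the paper's full dashboard.

```python
import pandas as pd

# hypothetical records crawled from the lecture support system
records = pd.DataFrame({
    "attendance":     [14, 12, 15, 9, 13],
    "assignments":    [8, 6, 9, 4, 7],
    "preview_review": [10, 7, 11, 3, 8],
    "achievement":    [88, 72, 93, 55, 80],
})

# correlation of each activity measure with academic achievement
print(records.corr()["achievement"].drop("achievement"))
```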

A Study on Big Data Processing Technology Based on Open Source for Expansion of LIMS (실험실정보관리시스템의 확장을 위한 오픈 소스 기반의 빅데이터 처리 기술에 관한 연구)

  • Kim, Soon-Gohn
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.14 no.2 / pp.161-167 / 2021
  • A Laboratory Information Management System (LIMS) is a centralized database for storing, processing, retrieving, and analyzing laboratory data; it refers to a computer system specially designed for laboratories performing inspection, analysis, and testing tasks. In particular, a LIMS is equipped with functions to support the operation of the laboratory and requires workflow management and data-tracking support. In this paper, we collect data from websites and various channels using crawling, one of the automated big data collection technologies, to support laboratory operation. Among the collected test methods and contents, those useful to the tester are recommended (a recommendation sketch follows below). In addition, we implement a complementary LIMS platform capable of verifying the collection channel by managing feedback.
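The recommendation step might look like the sketch below, which ranks collected test-method texts against a tester's query by TF-IDF cosine similarity; the documents and query are placeholders, not the paper's data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# placeholder test-method texts standing in for crawled content
documents = [
    "heavy metal analysis of water samples by ICP-MS",
    "microbial testing procedure for food products",
    "pH and conductivity measurement of wastewater",
]
query = ["water sample heavy metal test"]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)
scores = cosine_similarity(vectorizer.transform(query), doc_matrix)[0]

# print documents from most to least relevant
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(f"{rank}. ({scores[idx]:.2f}) {documents[idx]}")
```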

Twitter Crawling System

  • Ganiev, Saydiolim;Nasridinov, Aziz;Byun, Jeong-Yong
    • Journal of Multimedia Information System / v.2 no.3 / pp.287-294 / 2015
  • We are living in an epoch of information in which the Internet touches all aspects of our lives and provides plenty of services, each of which benefits people in different ways. Electronic mail (e-mail), the File Transfer Protocol (FTP), voice/video communication, and search engines are clear examples of Internet services. Among them, social network services (SNS) have continuously gained popularity over the past years. The most popular SNSs, such as Facebook, Weibo, and Twitter, generate millions of data items every minute. Twitter is an SNS that lets its users post short instant messages; its 100 million users posted 340 million tweets per day as of 2012 [1]. Such a large amount of data often contains much noisy data, that is, uninteresting and unclassifiable data. However, researchers can take advantage of this huge volume of information to analyze and extract meaningful and interesting features. Collecting SNS data such as tweets is handled by crawlers, and the Twitter crawler has recently emerged as a great tool for crawling Twitter data. In this project we develop a Twitter crawler system that enables us to extract Twitter data. We implemented the system in Java along with MySQL, using Twitter4J, a Java library for communicating with the Twitter API. The application first connects to the Twitter API, then retrieves tweets and stores them in the database (the pipeline is sketched below). We also develop crawling strategies to extract tweets efficiently in terms of time and volume.
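The paper implements this flow in Java with Twitter4J and MySQL; the sketch below mirrors the same connect, retrieve, and store pipeline in Python with SQLite, where fetch_tweets() is a hypothetical stand-in for the Twitter API call.

```python
import sqlite3

def fetch_tweets(keyword):
    # placeholder: a real crawler would query the Twitter API here
    return [
        {"id": 1, "user": "alice", "text": f"sample tweet about {keyword}"},
        {"id": 2, "user": "bob", "text": f"another note on {keyword}"},
    ]

conn = sqlite3.connect("tweets.db")
conn.execute("""CREATE TABLE IF NOT EXISTS tweets
                (id INTEGER PRIMARY KEY, user TEXT, text TEXT)""")

# retrieve tweets and store them, overwriting duplicates by primary key
for tweet in fetch_tweets("crawling"):
    conn.execute("INSERT OR REPLACE INTO tweets VALUES (?, ?, ?)",
                 (tweet["id"], tweet["user"], tweet["text"]))
conn.commit()
conn.close()
```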

Improving the Quality of Search Engines by Using Intelligent Agent Technology

  • Nauyen, Ha-Nam;Choi, Gyoo-Seok;Park, Jong-Jin;Chi, Sung-Do
    • Journal of the Korea Computer Industry Society / v.4 no.12 / pp.1093-1102 / 2003
  • The dynamic nature of the World Wide Web challenges search engines to find relevant and recent pages. Obtaining important pages quickly can be very useful when a crawler cannot visit the entire Web in a reasonable amount of time. In this paper we study the order in which a spider should visit URLs so as to obtain more "important" pages first, and we define and apply several metrics and a ranking formula for improving the crawling results (a small ordering sketch follows below). The comparison between our results and the breadth-first search (BFS) method shows the efficiency of our experimental system.
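To make the contrast concrete, the sketch below pops URLs from a breadth-first queue in discovery order and from a max-heap in importance order; score() is a hypothetical stand-in for the paper's ranking formula.

```python
import heapq
from collections import deque

def score(url, inlinks):
    # assumption: importance approximated by in-link count
    return inlinks.get(url, 0)

inlinks = {"b.html": 5, "c.html": 1, "d.html": 3}
discovered = ["b.html", "c.html", "d.html"]

# breadth-first order: exactly the discovery order
bfs = deque(discovered)
print("BFS order:   ", [bfs.popleft() for _ in range(len(discovered))])

# importance-first order: highest score first (negated for the min-heap)
heap = [(-score(u, inlinks), u) for u in discovered]
heapq.heapify(heap)
print("Ranked order:", [heapq.heappop(heap)[1] for _ in range(len(discovered))])
```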

Classify Layer Design for Navigation Control of Line-Crawling Robot: A Rough Neurocomputing Approach

  • Ahn, Taechon;Peters, James F.;Borkowski, Maciey
    • Institute of Control, Robotics and Systems Conference Proceedings / 2002.10a / pp.68.1-68 / 2002
  • This paper considers a rough neurocomputing approach to the design of the classify layer of a Brooks architecture for a robot control system. This paradigm for neurocomputing has its roots in rough set theory and works well in cases where there is uncertainty about the values of the measurements used to make decisions. In the case of the line-crawling robot (LCR) described in this paper, rough neurocomputing is used to classify sometimes-noisy signals from sensors (a toy illustration follows below). The LCR is a robot designed to crawl along high-voltage transmission lines, where noisy sensor signals are common because of the electromagnetic field surrounding the conductors. In rough neurocomputing, training a network of neurons...
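As a toy illustration of the rough-set idea behind such a classify layer (thresholds are assumptions, not taken from the paper), a normalized, noisy sensor reading falls into a certain region, a boundary region, or a negative region.

```python
def classify(reading, lower=0.7, upper=0.9):
    """Rough three-way classification of a normalized sensor signal."""
    if reading >= upper:
        return "certain"   # lower approximation: definitely the target
    if reading >= lower:
        return "boundary"  # between approximations: decision deferred
    return "negative"      # outside the upper approximation

for value in (0.95, 0.80, 0.40):
    print(value, "->", classify(value))
```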
