• Title/Summary/Keyword: collection time

A Study on the Meaning of Story Collection in Popular Song Archives (대중가요 아카이브에서 '이야기'컬렉션의 의미연구)

  • Jin, Hyun Joo;Yim, Jin Hee
    • Journal of Korean Society of Archives and Records Management / v.16 no.3 / pp.69-97 / 2016
  • For a long time, popular songs have been a source of happiness. They move with the times, and in doing so acquire a meaning beyond entertainment: they reflect the contemporary society. In the study of popular song archives, however, interest has been limited to information about the songs a singer sang, the types of songs, and the types of singers, so existing popular song archives could not capture the character of the society in which the songs were popular. This paper therefore searched for ways to archive not only the popular songs themselves but also the characteristics of the society of the time when these songs were famous, and proposed a 'story' collection as the method. These stories reflect the lives of the people and the character of society during the periods when the songs were popular; moreover, they reach beyond the songs themselves and allow us to view the society we belong to from multiple dimensions.

Crawling algorithm design and experiment for automatic deep web document collection (심층 웹 문서 자동 수집을 위한 크롤링 알고리즘 설계 및 실험)

  • Yun-Jeong, Kang;Min-Hye, Lee;Dong-Hyun, Won
    • Journal of the Korea Institute of Information and Communication Engineering / v.27 no.1 / pp.1-7 / 2023
  • Deep web collection means entering a query into a search form and collecting the response results. The deep web is estimated to hold about 450 to 550 times more information than the statically constructed surface web. A static web page does not show changed information until it is refreshed, whereas a dynamic web page updates the necessary information in real time without reloading; a crawler, however, has difficulty accessing such updated information. A way is therefore needed to collect information from the deep web automatically with a crawler. This paper proposes a method of utilizing client scripts as general links: an algorithm that can follow client scripts like regular URLs is designed and tested, as sketched below. The proposed algorithm focuses on collecting web information through menu navigation and script execution instead of the usual method of entering data into search forms.
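
A rough sketch of the idea of treating client scripts as ordinary links (an illustration only, not the paper's implementation; Selenium, the start URL, and the [onclick] selector are all assumptions): each script handler found on a menu page is executed as if it were a followed URL, and the resulting DOM is harvested.

```python
# Hypothetical sketch: follow client scripts as if they were links.
from selenium import webdriver
from selenium.webdriver.common.by import By

START = "https://example.com/menu"  # placeholder entry page

driver = webdriver.Chrome()
driver.get(START)

# Treat every element with a client-side handler as a "link".
handlers = [e.get_attribute("onclick")
            for e in driver.find_elements(By.CSS_SELECTOR, "[onclick]")]

collected = []
for js in handlers:
    driver.get(START)                     # reset to the menu page
    driver.execute_script(js)             # execute the script like a URL
    collected.append(driver.page_source)  # harvest the updated document

driver.quit()
print(f"collected {len(collected)} deep-web documents")
```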

Implementation of Efficient Distributed Crawler through Stepwise Crawling Node Allocation

  • Kim, Hyuntae;Byun, Junhyung;Na, Yoseph;Jung, Yuchul
    • Journal of Advanced Information Technology and Convergence / v.10 no.2 / pp.15-31 / 2020
  • Various websites have been created with the increased use of the Internet, and the number of documents distributed through these websites has grown proportionally. However, it is not easy to collect newly updated documents rapidly. Web crawling methods have been used to continuously collect and manage new documents, but existing single-node crawling systems demonstrate limited performance, and crawlers applying distribution methods face the problem of managing crawling nodes effectively. This study proposes an efficient distributed crawler based on stepwise crawling node allocation, which identifies websites' properties and establishes crawling policies based on those properties to collect a large number of documents from multiple websites. The proposed crawler can calculate the number of documents included in a website, compare data collection time and the amount of data collected for different numbers of nodes allocated to a specific website by repeatedly visiting it, and automatically allocate the optimal number of nodes to each website for crawling (a sketch of this idea follows). In an experiment applying the proposed and single-node methods to 12 different websites, the proposed crawler's data collection time decreased significantly compared with that of a single-node crawler, because the proposed crawler applied data collection policies tailored to each website. Its work rate was also confirmed to increase.
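
A minimal sketch of the stepwise-allocation idea (not the authors' code; crawl_with_nodes, the node limit, and the 10% gain threshold are assumptions): keep adding nodes to a site as long as the measured throughput still improves appreciably.

```python
# Hypothetical stepwise node allocation. crawl_with_nodes(site, n) is an
# assumed function that crawls `site` with `n` nodes and returns
# (documents_collected, elapsed_seconds) for a trial visit.
def allocate_nodes(site, crawl_with_nodes, max_nodes=16, min_gain=1.10):
    best_n, best_rate = 1, 0.0
    for n in range(1, max_nodes + 1):
        docs, elapsed = crawl_with_nodes(site, n)  # repeated trial visit
        rate = docs / elapsed                      # documents per second
        # Stop adding nodes once the throughput gain falls below 10%.
        if best_rate and rate < best_rate * min_gain:
            break
        best_n, best_rate = n, rate
    return best_n

# Per-site policy: allocation = {s: allocate_nodes(s, crawl_with_nodes)
#                                for s in sites}
```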

A Method of Calculating Baseline Productivity by Reflecting Construction Project Data Characteristics (건설 프로젝트 데이터 특성을 반영한 기준생산성 산정 방법)

  • Kim Eunseo;Kim Junyoung;Joo Seonu;Ahn Changbum;Park Moonseo
    • Korean Journal of Construction Engineering and Management / v.24 no.3 / pp.3-11 / 2023
  • This research examines the need for a quantitative and objective method of calculating baseline productivity in the construction industry, which is known for high volatility in performance and productivity. The baseline productivity calculation methods in the existing literature rely heavily on subjective criteria, which limits their effectiveness. In addition, data collection methods such as the "Five-minute Rating" are costly and time-consuming, making it difficult to collect detailed data at construction sites. To address these issues, this study proposes an objective baseline calculation method based on unimpacted-productivity BP, a work check sheet for systematically recording detailed data, and a data collection and utilization process that minimizes cost and time requirements. The paper also suggests using the unimpacted-productivity BP together with comparative analysis to address the objectivity and reliability issues of existing baseline productivity calculation methods.
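
As a hedged illustration of an unimpacted-productivity baseline (the paper's exact formulation is not reproduced here; the record layout and the impacted flag are assumptions motivated by the idea of a work check sheet), the baseline can be taken from work days unaffected by disruptions:

```python
# Hypothetical baseline productivity from unimpacted days only.
# Each record is one work day: (quantity_installed, work_hours, impacted),
# where `impacted` marks days hit by weather, rework, interference, etc.
def baseline_productivity(daily_records):
    unimpacted = [(q, h) for q, h, impacted in daily_records if not impacted]
    if not unimpacted:
        raise ValueError("no unimpacted days to form a baseline")
    total_qty = sum(q for q, _ in unimpacted)
    total_hrs = sum(h for _, h in unimpacted)
    return total_qty / total_hrs  # e.g., units installed per work hour

days = [(120, 8, False), (60, 8, True), (130, 8, False), (40, 8, True)]
print(baseline_productivity(days))  # 15.625 units/hour from unimpacted days
```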

AUTOMATIC DATA COLLECTION TO IMPROVE READY-MIXED CONCRETE DELIVERY PERFORMANCE

  • Pan Hao;Sangwon Han
    • International conference on construction engineering and project management / 2011.02a / pp.187-194 / 2011
  • Optimizing truck dispatching intervals is imperative in the ready-mixed concrete (RMC) delivery process. Intervals shorter than optimal may cause idle trucks to queue at the construction site, resulting in a long delivery cycle time. On the other hand, intervals longer than optimal can trigger work discontinuity due to a lack of available trucks where required. The RMC delivery process should therefore be systematically scheduled to minimize waiting trucks while guaranteeing work continuity. However, finding optimal intervals is challenging, particularly in urban areas, due to variations in both traffic conditions and concrete placement rates at the site. Truck dispatching intervals are usually determined by the concrete plant managers' intuitive judgment, without sufficient and reliable information on traffic and site conditions, so the RMC delivery process often suffers inefficiency and/or work discontinuity. Automatic data collection (ADC) techniques (e.g., RFID or GPS) can be effective tools to assist plant managers in finding optimal dispatching intervals, thereby enhancing delivery performance. However, quantitative evidence of the extent of performance improvement has rarely been reported to date, and this is a central reason for the industry's general reluctance to embrace these techniques despite their potential benefits. To address this issue, this research reports on the development of a discrete event simulation model and its application to a large-scale building project in Abu Dhabi. The simulation results indicate that ADC techniques can reduce truck idle time at the site by 57% and enhance pouring continuity in the RMC delivery process.
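
A minimal discrete-event sketch of the dispatching trade-off (not the paper's model; simpy, the interval, travel and pour times are all assumptions) shows how truck idle time at the site can be measured for a given dispatch interval:

```python
# Hypothetical discrete-event model of RMC delivery. Trucks are dispatched
# at a fixed interval, travel to site, and queue for a single pump; time
# spent waiting for the pump measures over-dispatching.
import random
import simpy

DISPATCH_INTERVAL = 12  # minutes between trucks (the decision variable)
TRAVEL_MEAN = 30        # mean one-way travel time, minutes
POUR_TIME = 10          # minutes to discharge one truck
N_TRUCKS = 20

idle_minutes = []

def truck(env, pump):
    yield env.timeout(random.expovariate(1 / TRAVEL_MEAN))  # travel to site
    arrival = env.now
    with pump.request() as req:
        yield req                           # wait for the pump (idle time)
        idle_minutes.append(env.now - arrival)
        yield env.timeout(POUR_TIME)        # discharge concrete

def plant(env, pump):
    for _ in range(N_TRUCKS):
        env.process(truck(env, pump))
        yield env.timeout(DISPATCH_INTERVAL)  # dispatching interval

env = simpy.Environment()
pump = simpy.Resource(env, capacity=1)
env.process(plant(env, pump))
env.run()
print(f"mean idle time at site: {sum(idle_minutes)/len(idle_minutes):.1f} min")
```

Sweeping DISPATCH_INTERVAL in such a model exposes the trade-off the abstract describes: short intervals inflate idle time, long intervals starve the pump.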

Effect of Garbage Collection in the ZG-machine (ZG-machine에서 기억 장소 재활용 체계의 영향)

  • Woo, Gyun;Han, Tai-Sook
    • Journal of KIISE: Software and Applications / v.27 no.7 / pp.759-768 / 2000
  • The ZG-machine is a space-efficient G-machine that exploits a simple encoding method, called tag-forwarding, to compress the heap structure of graphs. Experiments on the ZG-machine without garbage collection show that it saves 30% of heap space with a run-time overhead of no more than 6% compared with the G-machine. This paper presents the results of further experiments on the ZG-machine with the garbage collector. The heap-residency of the ZG-machine decreases by 34% on average, although the run-time increases by 34% compared with the G-machine; this high run-time overhead is incurred by the garbage collector. However, when the heap size is 7 times the heap-residency, the run-time overhead of the ZG-machine is no more than 12% compared with the G-machine. Given its reduced heap-residency, the ZG-machine may be useful in memory-restricted environments such as embedded systems, and with the development of a more efficient garbage collector the run-time is expected to decrease significantly.

The Time for Collecting of Cryptomeria japonica Seeds

  • Son, Seog-Gu;Kim, Hyo-Jeong;Kim, Chan-Soo;Kang, Young-Je;Kim, Chang-Soo;Byun, Kwang-Ok
    • Korean Journal of Plant Resources / v.22 no.6 / pp.535-539 / 2009
  • The time of seed collection is regarded as one of the major concerns in obtaining sound seeds. The physical and germination characteristics of Cryptomeria japonica D. Don (Taxodiaceae) seeds were analyzed to determine the optimum harvesting time in Korea. Cones were picked every 10 days from the 30th of July to the 30th of October in both 2005 and 2006, and seeds were collected from the picked cones. Seed size and weight did not differ significantly between the two years. The 1,000-seed weight was 3.3 g for cones picked on the 18th of August and 5.3 g for cones picked on the 30th of September. Seed size increased as the collection time moved from the 18th of August to the 30th of September: from 19.3 mm to 21.3 mm in length and from 15.8 mm to 18.5 mm in width. The average germination rate was 18.3% in 2005 and 19.6% in 2006. The highest germination rate in 2005 was 34.3%, from seeds collected on the 30th of September; in 2006, the highest rate was 31.7%, for seeds collected on the same date. After the end of September, the germination rate decreased in both years. The results imply that the best cone-picking time for Korean C. japonica seeds is around the end of September.

Target tracking accuracy and performance bound

  • 윤동훈;엄석원;윤동욱;고한석
    • Proceedings of the IEEK Conference / 1998.06a / pp.635-638 / 1998
  • This paper proposes a simple method to measure a system's performance in target tracking problems. Employing the Cramer-Rao lower bound (CRLB) on tracking accuracy, an algorithm for predicting system performance under various scenarios is developed. The input data are a collection of measurements over time from sensors, embedded in Gaussian noise. The target of interest is assumed not to maneuver over the processing time interval, while the own-ship observing platform may maneuver in an arbitrary fashion. The proposed approach is demonstrated and discussed through simulation results.
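
As a toy illustration of how the CRLB bounds tracking accuracy (this is not the paper's scenario; the linear measurement model, noise variance, and sampling times below are assumptions), the Fisher information accumulated over the measurements bounds the covariance of any unbiased estimate of a non-maneuvering target's initial position and velocity:

```python
# Hypothetical CRLB for a constant-velocity target observed via position
# measurements z_k = x0 + v*t_k + w_k, with w_k ~ N(0, r).
import numpy as np

r = 25.0                           # measurement noise variance (m^2)
times = np.arange(0.0, 10.0, 1.0)  # measurement instants (s)

# Fisher information for the parameter vector [x0, v]:
# J = sum_k H_k^T r^-1 H_k, with measurement Jacobian H_k = [1, t_k].
J = np.zeros((2, 2))
for t in times:
    H = np.array([[1.0, t]])
    J += H.T @ H / r

crlb = np.linalg.inv(J)            # lower bound on estimator covariance
print("position variance bound:", crlb[0, 0])
print("velocity variance bound:", crlb[1, 1])
```

Evaluating such a bound over candidate sensor/ownship scenarios, before fielding any estimator, is the kind of performance prediction the abstract describes.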
