• Title/Summary/Keyword: Download

Search Results: 446

Collection and Analysis of Electricity Consumption Data in POSTECH Campus (포스텍 캠퍼스의 전력 사용 데이터 수집 및 분석)

  • Ryu, Do-Hyeon;Kim, Kwang-Jae;Ko, YoungMyoung;Kim, Young-Jin;Song, Minseok
    • Journal of Korean Society for Quality Management, v.50 no.3, pp.617-634, 2022
  • Purpose: This paper introduces the Pohang University of Science and Technology (POSTECH) advanced metering infrastructure (AMI) and Open Innovation Big Data Center (OIBC) platform, together with analysis results of the electricity consumption data collected via the AMI on the POSTECH campus. Methods: We installed 248 sensors in seven buildings at POSTECH for the AMI and collected electricity consumption data from the buildings. To identify the amounts and trends of electricity consumption in the seven buildings, data collected from March to June 2019 were analyzed. In addition, this study compared the amounts and trends of electricity consumption of the seven buildings before and after the COVID-19 outbreak, using data collected from March to June in 2019 and 2020. Results: Users can monitor, visualize, and download the electricity consumption data collected via the AMI on the OIBC platform. The analysis shows that the seven buildings consume different amounts of electricity and follow different consumption trends. In addition, consumption in most buildings was significantly reduced after the COVID-19 outbreak. Conclusion: The POSTECH AMI and OIBC platform can serve as a good reference for other universities preparing their own microgrids. The analysis results provide evidence that POSTECH needs to establish customized electricity-reduction strategies for each building. Such results would be useful for energy-efficient operation and for preparing for unusual energy consumption caused by unexpected situations like the COVID-19 pandemic.
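
For readers who download these data from the OIBC platform, a minimal pandas sketch of the pre/post-outbreak comparison described above might look as follows. The file name and the building/timestamp/kwh column names are assumptions, not the platform's actual export format.

```python
import pandas as pd

# Hypothetical export from the OIBC platform: one row per meter reading,
# with columns building, timestamp, kwh (assumed names, not the real schema).
df = pd.read_csv("oibc_consumption.csv", parse_dates=["timestamp"])

# Keep the March-June window used in the study, then split by year.
df = df[df["timestamp"].dt.month.between(3, 6)]
by_year = (
    df.assign(year=df["timestamp"].dt.year)
      .groupby(["building", "year"])["kwh"]
      .sum()
      .unstack("year")
)

# Percentage change per building between the 2019 and 2020 periods.
by_year["change_pct"] = (by_year[2020] - by_year[2019]) / by_year[2019] * 100
print(by_year.sort_values("change_pct"))
```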

Implementation of a Software Streaming System Using Pagefault Interrupt Routine Hooking (페이지폴트 인터럽트 루틴 후킹을 이용한 소프트웨어 스트리밍 시스템 구현)

  • Kim, Han-Gook;Lee, Chang-Jo
    • Journal of Korea Society of Industrial Information Systems, v.14 no.2, pp.8-15, 2009
  • The need for ASP (Application Service Provider) solutions has evolved from the increasing costs of specialized software, which have far exceeded the price range of small to medium sized businesses. Many technologies make ASP possible, and software streaming service is one of them. Software streaming is a method for overlapping the transmission and execution of stream-enabled software. The stream-enabled software is able to run on a device even while its transmission/streaming is still in progress; thus, a user does not have to wait for the software's download to complete before starting to execute it. In this paper, we propose a new software streaming system implemented by hooking the page-fault interrupt routine. Because the system manages applications efficiently, the entire software package does not have to be installed. In addition, hardware resources are saved because only the basic binaries are loaded, without occupying the device's storage space.
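
Python cannot hook an operating system's page-fault interrupt routine, but the demand-fetch idea the paper builds on can be illustrated at the application level. The toy sketch below treats the executable as fixed-size pages and fetches a page from a stand-in fetch_page() function only on first access, so execution can begin before the full download completes; everything here is an invented analogue, not the paper's implementation.

```python
PAGE_SIZE = 4096

def fetch_page(index: int) -> bytes:
    # Assumption: in a real system this would stream page `index`
    # of the executable from the ASP server over the network.
    return bytes(PAGE_SIZE)

class StreamedImage:
    """Application-level analogue of page-fault-driven software streaming."""
    def __init__(self, total_pages: int):
        self.total_pages = total_pages
        self.pages = {}  # resident pages, filled on demand

    def read(self, offset: int, length: int) -> bytes:
        first = offset // PAGE_SIZE
        last = (offset + length - 1) // PAGE_SIZE
        out = bytearray()
        for idx in range(first, last + 1):
            if idx not in self.pages:   # "page fault": page not resident yet
                self.pages[idx] = fetch_page(idx)
            out += self.pages[idx]
        skip = offset - first * PAGE_SIZE
        return bytes(out[skip:skip + length])

image = StreamedImage(total_pages=256)
image.read(0, 10_000)  # the first pages are fetched lazily, on first touch
print(f"resident pages: {len(image.pages)} / {image.total_pages}")
```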

Analysis of Behavior Patterns from Human and Web Crawler Events Log on ScienceON (ScienceON 웹 로그에 대한 인간 및 웹 크롤러 행위 패턴 분석)

  • Poositaporn, Athiruj;Jung, Hanmin;Park, Jung Hoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2022.05a, pp.6-8, 2022
  • Web log analysis is one of the essential procedures for service improvement. ScienceON is a representative information service that provides various S&T literature and information, and we analyze its logs for continuous improvement. This study aims to analyze ScienceON web logs recorded in May 2020 and May 2021, dividing them into human and web crawler traffic and performing an in-depth analysis. First, only web logs corresponding to S (search), V (detail view), and D (download) types were extracted and normalized to 658,407 and 8,727,042 records for the two periods, respectively. Second, using the Python 'user_agents' library, the logs were classified into humans and web crawlers. Third, the session size was set to 60 seconds, and each session was analyzed. We found that web crawlers, unlike humans, show relatively long average behavior patterns per session, consisting mainly of V (detail view) events. In future work, the service will be improved to quickly detect and respond to web crawlers and to adapt to the behavioral patterns of human users.
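
A minimal sketch of the classification and sessionization steps, assuming log records reduced to (timestamp, user-agent string, action) tuples; the real ScienceON log format is not given in the abstract, and the gap-based reading of the 60-second session size is an interpretation.

```python
from user_agents import parse  # pip install user-agents

# Assumed record shape: (timestamp_seconds, user_agent_string, action),
# with action in {"S", "V", "D"} as in the abstract.
logs = [
    (0.0,  "Mozilla/5.0 (Windows NT 10.0; Win64; x64)", "S"),
    (12.0, "Mozilla/5.0 (Windows NT 10.0; Win64; x64)", "V"),
    (30.0, "Googlebot/2.1 (+http://www.google.com/bot.html)", "V"),
]

def is_crawler(ua_string: str) -> bool:
    # The user_agents library exposes a boolean flag for known bots/crawlers.
    return parse(ua_string).is_bot

def sessionize(events, gap=60.0):
    """Split one actor's time-ordered events into sessions: a new session
    starts whenever the gap to the previous event exceeds `gap` seconds."""
    sessions, current, prev_t = [], [], None
    for t, action in events:
        if prev_t is not None and t - prev_t > gap:
            sessions.append(current)
            current = []
        current.append(action)
        prev_t = t
    if current:
        sessions.append(current)
    return sessions

human_events = [(t, a) for t, ua, a in logs if not is_crawler(ua)]
print(sessionize(human_events))  # e.g. [['S', 'V']]
```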


Implementation of Augmented Reality Website Using Three.js and React (Three JS와 React를 이용한 증강현실 웹사이트 구현)

  • Kim, Seon-hwa;Moon, Sang-Ho;Lee, Sung-jin
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference, 2022.10a, pp.529-531, 2022
  • According to the 2021 Mobile Augmented Reality Market Report by the Korea Innovation Foundation, the augmented reality market has been growing recently, driven by the growth of the mobile augmented reality market along with the development of smartphones. To provide augmented reality services to mobile users, native apps traditionally have to be created for each device, which brings problems such as multi-platform maintenance costs and low accessibility caused by the app download step. Recently, web-based augmented reality systems built on the WebXR Device API have been emerging, but such systems are judged to be still at the research stage. In this paper, a responsive multi-platform environment was built using the WebXR Device API, Three.js, and React, and a function providing augmented reality services to mobile and web users was implemented. The experiments confirmed that augmented reality works correctly in a responsive web environment, and we expect augmented reality services to operate on the web in the future.


Python Package Production for Agricultural Researcher to Use Meteorological Data (농업연구자의 기상자료 활용을 위한 파이썬 패키지 제작)

  • Hyeon Ji Yang;Joo Hyun Park;Mun-Il Ahn;Min Gu Kang;Yong Kyu Han;Eun Woo Park
    • Korean Journal of Agricultural and Forest Meteorology, v.25 no.2, pp.99-107, 2023
  • Recently, abnormal weather events and crop damage have occurred frequently, likely due to climate change, and the importance of meteorological data in agricultural research is increasing. Researchers can download weather observation data from the websites provided by the KMA (Korea Meteorological Administration) and the RDA (Rural Development Administration). However, when a large amount of meteorological data is needed, many separate queries are required. It is inefficient for each researcher to store and manage the data needed for research on an independent local computer to avoid this work. In addition, even after all the data have been downloaded, further work is required to find and open the relevant files. In this study, data collected by the KMA and RDA were uploaded to GitHub, a remote storage service, and a package was created that allows easy access to the weather data using Python. Through this, we propose a method to increase the accessibility and usability of meteorological data for agricultural researchers by allowing anyone to retrieve the data without an additional authentication process.
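
The package itself is not named in the abstract, but its core idea, reading weather data hosted on GitHub over plain HTTPS with no authentication, can be sketched as below; the repository, path scheme, and column names are hypothetical placeholders, not the package's actual layout.

```python
import pandas as pd

# Data hosted in a public GitHub repository can be read directly from the
# raw-content URL; no token or login is needed. <org>/<repo> and the
# station/year path scheme below are placeholders.
BASE = "https://raw.githubusercontent.com/<org>/<repo>/main/data"

def load_weather(station: str, year: int) -> pd.DataFrame:
    """Fetch one station-year of observations as a DataFrame
    (assumes a CSV with a 'date' column; the real schema may differ)."""
    url = f"{BASE}/{station}/{year}.csv"
    return pd.read_csv(url, parse_dates=["date"])

# Usage (with a real repository substituted in):
# df = load_weather("suwon", 2022)
# print(df.head())
```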

The US National Ecological Observatory Network and the Global Biodiversity Framework: national research infrastructure with a global reach

  • Katherine M. Thibault;Christine M. Laney;Kelsey M. Yule;Nico M. Franz;Paula M. Mabee
    • Journal of Ecology and Environment, v.47 no.4, pp.219-227, 2023
  • The US National Science Foundation's National Ecological Observatory Network (NEON) is a continental-scale program intended to provide open data, samples, and infrastructure to understand changing ecosystems for a period of 30 years. NEON collects co-located measurements of drivers of environmental change and biological responses, using standardized methods at 81 field sites to systematically sample variability and trends to enable inferences at regional to continental scales. Alongside key atmospheric and environmental variables, NEON measures the biodiversity of many taxa, including microbes, plants, and animals, and collects samples from these organisms for long-term archiving and research use. Here we review the composition and use of NEON resources to date, both as a whole and specifically for biodiversity, as an exemplar of the potential of national research infrastructure to contribute to globally relevant outcomes. Since initiating full operations in 2019, NEON has produced, on average, 1.4 M records and over 32 TB of data per year across more than 180 data products, with 85 products that include taxonomic or other organismal information relevant to biodiversity science. NEON has also collected and curated more than 503,000 samples and specimens spanning all taxonomic domains of life, with up to 100,000 more to be added annually. Various metrics of use, including web portal visitation, data download and sample use requests, and scientific publications, reveal substantial interest from the global community in NEON. More than 47,000 unique IP addresses from around the world visit NEON's web portals each month, requesting on average 1.8 TB of data, and over 200 researchers have engaged in sample use requests from the NEON Biorepository. Through its many global partnerships, particularly with the Global Biodiversity Information Facility, NEON resources have been used in more than 900 scientific publications to date, with many using biodiversity data and samples. These outcomes demonstrate that the data and samples provided by NEON, situated in a broader network of national research infrastructures, are critical to scientists, conservation practitioners, and policy makers. They enable effective approaches to meeting global targets, such as those captured in the Kunming-Montreal Global Biodiversity Framework.

MPEG-D USAC: Unified Speech and Audio Coding Technology (MPEG-D USAC: 통합 음성 오디오 부호화 기술)

  • Lee, Tae-Jin;Kang, Kyeong-Ok;Kim, Whan-Woo
    • The Journal of the Acoustical Society of Korea, v.28 no.7, pp.589-598, 2009
  • As mobile devices become multi-functional and converge into a single platform, there is a strong need for a codec that can provide consistent quality for both speech and music content. MPEG-D USAC standardization activities started at the 82nd MPEG meeting with a CfP, and WD3 was approved at the 88th MPEG meeting. MPEG-D USAC is a converged technology of AMR-WB+ and HE-AAC V2. Specifically, USAC utilizes three core codecs (AAC, ACELP, and TCX) for low-frequency regions, SBR for high-frequency regions, and the MPEG Surround tool for stereo information. USAC can provide consistent sound quality for both speech and music content and can be applied to various applications such as multimedia downloads to mobile devices, digital radio, mobile TV, and audio books.
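
As a rough illustration of the architecture just described, the toy dispatcher below routes each frame to a core codec based on a crude speech/music heuristic; the zero-crossing-rate classifier is purely illustrative and far simpler than USAC's actual mode decision.

```python
import numpy as np

def classify(frame: np.ndarray) -> str:
    # Speech tends to have a higher zero-crossing rate than tonal music;
    # this heuristic is only a stand-in for USAC's real signal classifier.
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2
    return "speech" if zcr > 0.1 else "music"

def encode_frame(frame: np.ndarray) -> dict:
    kind = classify(frame)
    core = "AAC" if kind == "music" else "ACELP/TCX"
    return {
        "core": core,                # AAC, ACELP, or TCX for the low band
        "high_band": "SBR",          # spectral band replication for the high band
        "stereo": "MPEG Surround",   # parametric stereo/multichannel tool
    }

frame = np.random.default_rng(0).standard_normal(1024)
print(encode_frame(frame))
```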

Design and Implementation of Clipcast Service via Terrestrial DMB (지상파 DMB를 이용한 클립캐스트 서비스 설계 및 구현)

  • Cho, Suk-Hyun;Seo, Jong-Soo
    • Journal of Broadcast Engineering, v.16 no.1, pp.23-32, 2011
  • This paper outlines the system design and the implementation process of a clipcast service that can send clips of video, mp3, text, images, etc. to terrestrial DMB terminals. To provide a clipcast service in terrestrial DMB, a separate data channel needs to be allocated, which requires changes in the existing bandwidth allocation. Clipcast contents can be sent after midnight, at around 3 to 4 AM, when terrestrial DMB viewership is low. If the video service bit rate is lowered to 352 Kbps and the TPEG service band is fully used, then a 320 Kbps bit rate can be allocated to clipcast. To enable the clipcast service, the terminals' DMB program must be executed, and this can be done through SMS and EPG. The clipcast service applies the MOT protocol to transmit multimedia objects and transmits each file twice in carousel format for stable transmission. Therefore, 72 Mbytes of data can be transmitted in one hour, which corresponds to about 20 minutes of full-motion video service at a 500 Kbps data rate. When running a clip transmitted through the terrestrial DMB data channel, information on the length of each clip is received through communication with the CMS (Content Management Server), and then error-free files are displayed. The clips can be provided to users as previews of the complete VOD contents. To use the complete content, the user accesses the URL allocated for that specific content and downloads it by completing a billing process. This paper suggests the design and implementation of a terrestrial DMB system to provide a clipcast service, which enables file download services such as those provided in MediaFLO, DVB-H, and other mobile broadcasting systems. Unlike the other mobile broadcasting systems, the proposed system applies the more reliable SMS method to activate DMB terminals for a highly stable clipcast service. This allows hybrid activation of terminals for clipcast services, i.e., by both SMS and EPG.
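
The abstract's bit-budget arithmetic can be checked directly, assuming Kbps means 1,000 bits per second:

```python
# Clipcast bit budget from the abstract: 320 Kbps for one hour, every file
# sent twice in carousel form, yields ~72 MB of unique payload, which at
# 500 Kbps corresponds to roughly 20 minutes of full-motion video.
bitrate_bps = 320_000        # clipcast channel after bandwidth reallocation
seconds = 3600               # one-hour overnight transmission window
carousel_repeats = 2         # each file transmitted twice for robustness

raw_bytes = bitrate_bps * seconds / 8
unique_bytes = raw_bytes / carousel_repeats
print(f"unique payload: {unique_bytes / 1e6:.0f} MB")   # 72 MB

video_bps = 500_000          # full-motion video service rate
minutes = unique_bytes * 8 / video_bps / 60
print(f"video playtime: {minutes:.1f} min")             # ~19.2 min
```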

Design of Client-Server Model For Effective Processing and Utilization of Bigdata (빅데이터의 효과적인 처리 및 활용을 위한 클라이언트-서버 모델 설계)

  • Park, Dae Seo;Kim, Hwa Jong
    • Journal of Intelligence and Information Systems, v.22 no.4, pp.109-122, 2016
  • Recently, big data analysis has developed into a field of interest to individuals and non-experts as well as companies and professionals. Accordingly, it is utilized for marketing and social problem solving by analyzing openly available or directly collected data. In Korea, various companies and individuals are attempting big data analysis, but it is difficult from the initial stage due to limited big data disclosure and collection difficulties. Nowadays, system improvements for big data activation and big data disclosure services are being carried out in Korea and abroad, mainly as services for opening public data such as the domestic Government 3.0 portal (data.go.kr). In addition to the efforts made by the government, services that share data held by corporations or individuals are running, but it is difficult to find useful data because of the lack of shared data. Moreover, big traffic problems can occur because the entire dataset must be downloaded and examined just to grasp the attributes of and basic information about the shared data. Therefore, a new system for big data processing and utilization is needed. First, big data pre-analysis technology is needed as a way to solve the big data sharing problem. Pre-analysis is a concept proposed in this paper to solve the problem of sharing big data; it means providing users with results generated by analyzing the data in advance. Through pre-analysis, the usability of big data can be improved by providing information that conveys the properties and characteristics of a dataset when a data user searches for it. In addition, by sharing the summary data or sample data generated through pre-analysis, the security problems that may occur when the original data are disclosed can be avoided, enabling big data sharing between the data provider and the data user. Second, it is necessary to quickly generate appropriate preprocessing results according to the level of disclosure or the network status of the raw data, and to provide the results to users through distributed big data processing using Spark. Third, to solve the big traffic problem, the system monitors network traffic in real time; when preprocessing the data requested by the user, it reduces the data to a size transferable on the current network before transmission, so that no big traffic occurs. In this paper, we present various data sizes according to the level of disclosure through pre-analysis. This method is expected to generate low traffic volumes compared with the conventional method of sharing only raw data across many systems. We describe how to solve the problems that occur when big data are released and used, and how to facilitate sharing and analysis. The client-server model uses Spark for fast analysis and processing of user requests, and consists of a Server Agent and a Client Agent deployed on the server and client sides, respectively. The Server Agent is the agent required by the data provider; it performs pre-analysis of big data to generate a Data Descriptor containing information on Sample Data, Summary Data, and Raw Data. In addition, it performs fast and efficient big data preprocessing through distributed big data processing and continuously monitors network traffic. The Client Agent is the agent placed on the data user's side. It searches big data through the Data Descriptor, the result of the pre-analysis, enabling quick discovery; the desired data can then be requested from the server and downloaded according to the level of disclosure. The Server Agent and the Client Agent are separated so that data published by the provider can be used by the user. In particular, we focus on big data sharing, distributed big data processing, and the big traffic problem, construct the detailed modules of the client-server model, and present the design method for each module. In a system designed on the basis of the proposed model, a user who acquires data analyzes it in the desired direction or preprocesses new data; by publishing the newly processed data through the Server Agent, the data user changes role and becomes a data provider. A data provider can likewise obtain useful statistical information from the Data Descriptor of the data it discloses and become a data user performing new analysis with the sample data. In this way, raw data are processed and the processed big data are utilized by users, naturally forming a shared environment. The roles of data provider and data user are not fixed, yielding an ideal shared service in which everyone can be both a provider and a user. The client-server model thus solves the problem of sharing big data, provides a free sharing environment with secure big data disclosure, and offers an ideal shared service in which big data are easy to find.
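
A minimal PySpark sketch of the Server Agent's pre-analysis step, deriving Summary Data and Sample Data and bundling them into a Data Descriptor; the file paths, the 1% sampling fraction, and the descriptor layout are assumptions, since the paper's concrete formats are not given here.

```python
from pyspark.sql import SparkSession

# Pre-analysis: from raw data, generate shareable Summary Data and Sample
# Data, plus a small Data Descriptor a data user can search without
# downloading the raw data itself.
spark = SparkSession.builder.appName("pre-analysis").getOrCreate()

raw = spark.read.csv("raw_data.csv", header=True, inferSchema=True)

summary = raw.describe()                     # count/mean/stddev/min/max per column
sample = raw.sample(fraction=0.01, seed=42)  # small, shareable sample

data_descriptor = {
    "schema": raw.schema.json(),
    "row_count": raw.count(),
    "summary_path": "descriptor/summary.parquet",
    "sample_path": "descriptor/sample.parquet",
}
summary.write.mode("overwrite").parquet(data_descriptor["summary_path"])
sample.write.mode("overwrite").parquet(data_descriptor["sample_path"])
```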

Design and Implementation of the Smart Virtual Machine for Smart Cross Platform (스마트 크로스 플랫폼을 위한 스마트 가상기계의 설계 및 구현)

  • Han, Seong-Min;Son, Yun-Sik;Lee, Yang-Sun
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.2
    • /
    • pp.190-197
    • /
    • 2013
  • Since domestic and foreign platform companies and mobile carriers adopt and use different kinds of smart platforms, developers must develop or convert content separately for each smart platform to offer a single piece of smart content to customers. Converting conventional smart contents to serve other smart platforms takes a long time and a lot of money. For this reason, more attention has been paid to the Smart Cross Platform, or Hybrid Platform, the core technology of OSMU (One Source Multi Use), in which a program, once coded, can be executed on any platform regardless of the development language. As a result, PhoneGap and the HTML5-based Sencha Touch have been introduced. In this paper, we developed the smart virtual machine, which is built into smart cross platform based smart devices and, unlike Android, iOS, and Windows Phone devices that are platform-dependent, downloads and executes applications independently of the platform. The smart virtual machine supports the C/C++ and Java languages, differentiating it from the JVM by Sun Microsystems, which supports only the Java language, and the .NET Framework by Microsoft, which supports only C, C++, and C#. Therefore, it provides content developers with an environment in which they have a wide range of options in choosing a language to develop smart contents.
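
To make the platform-independence idea concrete, the toy stack-based virtual machine below runs the same intermediate code wherever the interpreter runs; the four-instruction set is invented for illustration and is not the paper's actual intermediate language.

```python
# Toy stack-based VM: any device that runs this interpreter can execute the
# same downloaded program, regardless of the platform it was written for.
def run(program):
    stack = []
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "PRINT":
            print(stack.pop())

# The same "downloaded" intermediate code runs unchanged on any host.
program = [("PUSH", 6), ("PUSH", 7), ("MUL",), ("PRINT",)]
run(program)  # 42
```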