• Title/Summary/Keyword: file monitoring


A Study on Automatic Generation Method of Proxy Client Code to Quality Information Collection (품질 정보 수집을 위한 프록시 클라이언트 코드의 자동 생성 방안에 관한 연구)

  • Seo, Young-Jun;Han, Jung-Soo;Song, Young-Jae
    • Proceedings of the Korea Contents Association Conference / 2007.11a / pp.121-125 / 2007
  • This paper proposes an automatic generation method for proxy client code that automates the web service selection process through a monitoring agent. The technique supplies the service consumer with proxy client source code by extracting attribute values from specific elements of the WSDL document using template rules; that is, an XSLT script file provides the code frame of the dynamic invocation interface model. Automatic code generation is needed to resolve the starvation state of the selection architecture, because a request HTTP message must be created for every service in the search result. The generated proxy client code produces dummy messages for those services. The proposed client code generation method demonstrates the applicability of this approach in the automatic program generation domain.


Automatic Generation Method of Proxy Client Code to Autonomic Quality Information (자율적인 웹 서비스 품질 정보 수집을 위한 프록시 클라이언트 코드의 자동 생성 방안)

  • Seo, Young-Jun;Han, Jung-Soo;Song, Young-Jae
    • The Journal of the Korea Contents Association / v.8 no.1 / pp.228-235 / 2008
  • This paper proposes an automatic generation method for proxy client code that automates the web service selection process through a monitoring agent. The technique supplies the service consumer with proxy client source code by extracting attribute values from specific elements of the WSDL document using template rules; that is, an XSLT script file provides the code frame of the dynamic invocation interface model. Automatic code generation is needed to resolve the starvation state of the selection architecture, because a request HTTP message must be created for every service in the search result. The generated proxy client code produces dummy messages for those services. The proposed client code generation method demonstrates the applicability of this approach in the automatic program generation domain.
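
The template-driven generation described above can be illustrated with a small sketch: pull attribute values (the service name and endpoint address) out of a WSDL document and use them to fill a proxy-client code frame that sends a dummy request. This is only a rough approximation under assumed element names and an invented template; it is not the authors' XSLT rules or their dynamic invocation interface model.

```python
# Minimal sketch: extract attribute values from a WSDL document and fill a
# proxy-client code template. The WSDL snippet and the generated client are
# illustrative assumptions, not the paper's actual XSLT template rules.
import xml.etree.ElementTree as ET

WSDL = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
                       xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
                       name="QuoteService">
  <service name="QuoteService">
    <port name="QuotePort">
      <soap:address location="http://example.com/quote"/>
    </port>
  </service>
</definitions>"""

NS = {"wsdl": "http://schemas.xmlsoap.org/wsdl/",
      "soap": "http://schemas.xmlsoap.org/wsdl/soap/"}

CLIENT_TEMPLATE = '''import urllib.request

def call_{name}_dummy():
    """Send a dummy request to {url} and return the HTTP status."""
    req = urllib.request.Request("{url}", data=b"<dummy/>",
                                 headers={{"Content-Type": "text/xml"}})
    with urllib.request.urlopen(req) as resp:
        return resp.status
'''

def generate_proxy_client(wsdl_text: str) -> str:
    root = ET.fromstring(wsdl_text)
    service = root.find("wsdl:service", NS)
    address = service.find(".//soap:address", NS)
    # Fill the code frame with attribute values taken from the WSDL document.
    return CLIENT_TEMPLATE.format(name=service.get("name").lower(),
                                  url=address.get("location"))

if __name__ == "__main__":
    print(generate_proxy_client(WSDL))
```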

A Study on Standardization of Copyright Collective Management for Digital Contents (디지털콘덴츠 집중관리를 위한 표준화에 관한 연구)

  • 조윤희;황도열
    • Journal of the Korean Society for Information Management / v.20 no.1 / pp.301-320 / 2003
  • The rapidly increasing use of the Internet, the advancement of communication networks, the explosive growth of digital contents from personal home pages to professional information services, the emergence of file exchange services, and the development of hacking techniques are among the trends contributing to the spread of illegal reproduction and distribution of digital contents, threatening the exclusive copyrights of creative works that should be legally protected. Accordingly, there is an urgent need for a digital copyright management system that provides centralized management while acting as a bridge between copyright owners and users for smooth trading of rights to digital contents, reliable billing, security measures, and monitoring of illegal use. This study therefore examines the legal and institutional requirements for introducing a centralized management system that supports smooth distribution of digital contents, and surveys the current status of domestic and international centralized copyright management systems. It also provides basic materials for standardizing digital content copyright management information by examining the essential elements of centralized digital content management, such as unique identification systems, the standardization of data elements, and digital rights management (DRM).

Real-time Classification of Internet Application Traffic using a Hierarchical Multi-class SVM

  • Yu, Jae-Hak;Lee, Han-Sung;Im, Young-Hee;Kim, Myung-Sup;Park, Dai-Hee
    • KSII Transactions on Internet and Information Systems (TIIS) / v.4 no.5 / pp.859-876 / 2010
  • In this paper, we propose a hierarchical application traffic classification system as an alternative means to overcome the limitations of the port number and payload based methodologies, which are traditionally considered traffic classification methods. The proposed system is a new classification model that hierarchically combines a binary classifier SVM and Support Vector Data Descriptions (SVDDs). The proposed system selects an optimal attribute subset from the bi-directional traffic flows generated by our traffic analysis system (KU-MON) that enables real-time collection and analysis of campus traffic. The system is composed of three layers: The first layer is a binary classifier SVM that performs rapid classification between P2P and non-P2P traffic. The second layer classifies P2P traffic into file-sharing, messenger and TV, based on three SVDDs. The third layer performs specialized classification of all individual application traffic types. Since the proposed system enables both coarse- and fine-grained classification, it can guarantee efficient resource management, such as a stable network environment, seamless bandwidth guarantee and appropriate QoS. Moreover, even when a new application emerges, it can be easily adapted for incremental updating and scaling. Only additional training for the new part of the application traffic is needed instead of retraining the entire system. The performance of the proposed system is validated via experiments which confirm that its recall and precision measures are satisfactory.
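
The layered design can be sketched on synthetic data: a binary SVM splits P2P from non-P2P traffic, and one-class models then route P2P flows to a sub-type. Here scikit-learn's OneClassSVM stands in for the SVDDs, and the two-dimensional "flow features" are invented for illustration; the paper's KU-MON attributes and third layer are not reproduced.

```python
# Minimal sketch of a hierarchical classifier on synthetic flow features:
# layer 1 separates P2P from non-P2P with a binary SVM; layer 2 routes P2P
# flows through one-class models (OneClassSVM used here as a rough stand-in
# for SVDD). The synthetic features are invented for illustration only.
import numpy as np
from sklearn.svm import SVC, OneClassSVM

rng = np.random.default_rng(0)

# Synthetic 2-D "flow features": non-P2P near the origin, P2P groups offset.
non_p2p  = rng.normal(loc=0.0, scale=0.5, size=(100, 2))
p2p_file = rng.normal(loc=(3.0, 3.0), scale=0.5, size=(100, 2))
p2p_tv   = rng.normal(loc=(3.0, -3.0), scale=0.5, size=(100, 2))

# Layer 1: binary SVM, P2P (1) vs non-P2P (0).
X1 = np.vstack([non_p2p, p2p_file, p2p_tv])
y1 = np.array([0] * 100 + [1] * 200)
layer1 = SVC(kernel="rbf", gamma="scale").fit(X1, y1)

# Layer 2: one one-class model per P2P sub-type.
layer2 = {
    "file-sharing": OneClassSVM(kernel="rbf", nu=0.1).fit(p2p_file),
    "tv":           OneClassSVM(kernel="rbf", nu=0.1).fit(p2p_tv),
}

def classify(flow: np.ndarray) -> str:
    """Route a single flow through the hierarchy."""
    if layer1.predict(flow.reshape(1, -1))[0] == 0:
        return "non-P2P"
    # Among the P2P sub-type models, pick the one with the highest score.
    scores = {name: m.decision_function(flow.reshape(1, -1))[0]
              for name, m in layer2.items()}
    return "P2P/" + max(scores, key=scores.get)

print(classify(np.array([0.1, -0.2])))   # expected: non-P2P
print(classify(np.array([3.1, 2.8])))    # expected: P2P/file-sharing
```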

Archival Description and Access in Digital Age that Focuses on the Practices of The National Archives' (디지털 시대의 기록물 기술과 접근 - The National Archives 사례를 중심으로 -)

  • Park, Zi-young
    • Journal of Korean Society of Archives and Records Management / v.17 no.4 / pp.87-107 / 2017
  • With the creation and transfer of born-digital records, file unit-based description practices have changed fundamentally. This study analyzes the archival description practices of The National Archives (TNA) for maintaining intellectual control in a digital records management environment and supporting user access to records. TNA has created archival descriptions based on ISAD(G), but it changed its descriptive cataloging guidelines to describe born-digital records. Because ISAD(G) cannot adequately accommodate born-digital records, the next-generation descriptive standard, Records in Contexts (RiC), is being developed by the ICA. Alongside these international efforts, we need to build an archival description system that fits our own environment, especially since TNA's online cataloging system has changed since 2000 and ISAD(G) has been modified in the process. This study also proposes continuous monitoring of digital archival description, an integrated approach to analog records and digital content, and strengthened experimentation and cooperation toward an uncertain digital future.

Implementation of Scenario-based AI Voice Chatbot System for Museum Guidance (박물관 안내를 위한 시나리오 기반의 AI 음성 챗봇 시스템 구현)

  • Sun-Woo Jung;Eun-Sung Choi;Seon-Gyu An;Young-Jin Kang;Seok-Chan Jeong
    • The Journal of Bigdata / v.7 no.2 / pp.91-102 / 2022
  • As artificial intelligence develops, AI chatbot systems are being actively deployed. For example, public institutions are expanding the use of chatbots to work assistance and professional knowledge services for civil complaints and administration, and private companies use chatbots for interactive customer response services. In this study, we propose a scenario-based AI voice chatbot system to reduce museum operating costs and provide an interactive guidance service to visitors. The implemented voice chatbot system consists of a watcher object that detects the user's voice by monitoring a specific directory in real time, and an event handler object that, whenever a voice file is created, sequentially runs model inference and outputs the AI's response voice. It also includes a duplication-prevention mechanism based on a thread and a deque, so that GPU operations are not duplicated during inference in a single-GPU environment.
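
The watcher/handler split and deque-based duplication guard can be sketched with the standard library alone: one thread polls a drop directory for new voice files and queues each path once, while a worker thread pops paths one at a time so inference never runs twice concurrently. The directory name, file extension, and placeholder inference function are assumptions, not the authors' implementation.

```python
# Minimal sketch of the described watcher/handler split using only the
# standard library: one thread polls a directory for new voice files and
# queues them; a worker thread pops them one at a time so "inference"
# (a placeholder function here) never runs twice for the same file.
import threading
import time
from collections import deque
from pathlib import Path

WATCH_DIR = Path("incoming_voice")     # hypothetical drop directory
queue = deque()                        # files waiting for inference
seen = set()                           # prevents duplicate queueing
lock = threading.Lock()

def run_inference(path: Path) -> None:
    # Placeholder for the STT -> chatbot -> TTS pipeline.
    print(f"responding to {path.name}")

def watcher(stop: threading.Event) -> None:
    """Poll the directory and enqueue newly created .wav files exactly once."""
    while not stop.is_set():
        for path in WATCH_DIR.glob("*.wav"):
            with lock:
                if path not in seen:
                    seen.add(path)
                    queue.append(path)
        time.sleep(0.5)

def handler(stop: threading.Event) -> None:
    """Run inference sequentially so a single GPU is never double-booked."""
    while not stop.is_set():
        with lock:
            path = queue.popleft() if queue else None
        if path is not None:
            run_inference(path)
        else:
            time.sleep(0.2)

if __name__ == "__main__":
    WATCH_DIR.mkdir(exist_ok=True)
    stop = threading.Event()
    threads = [threading.Thread(target=watcher, args=(stop,), daemon=True),
               threading.Thread(target=handler, args=(stop,), daemon=True)]
    for t in threads:
        t.start()
    time.sleep(5)       # run briefly for demonstration
    stop.set()
```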

A Study on Data Requirements and Quality Verification for Legal Deposit and Acquisition Tasks of Domestic Electronic Publications (국내 유통 전자출판물의 납본 및 수집을 위한 데이터 요구사항 및 품질 검증 연구)

  • Gyuhwan Kim;Soojung Kim;Daekeun Jeong
    • Journal of the Korean BIBLIA Society for Library and Information Science / v.35 no.1 / pp.127-148 / 2024
  • This study proposes considerations for attributes and their standardization strategies in the process of collecting data on electronic publications from domestic distributors for the National Library of Korea. Based on a survey and a focus group interview (FGI) with the staff responsible for legal deposit and acquisition at the National Library of Korea, the research identified a total of 21 essential and optional attributes. Additional attributes were found to be necessary during data quality verification, leading to the specification of essential and optional attributes for various types of materials, including eBooks, audiobooks, webtoons, and web novels. The standardization of attribute values, which is essential for improving the identifiability and management efficiency of electronic publications, included adherence to ISO 8601 for dates and times, clear designation of limited-range attribute values such as file format and adult content, and detailed description of title-related information. The study also highlighted the need to establish standardized metadata requirements and continuous data quality management and monitoring systems.
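
The kind of attribute-value checks described (ISO 8601 dates, limited-range fields such as file format, required titles) can be sketched as a small validation routine. The field names and allowed values below are hypothetical and do not reflect the study's actual 21 attributes.

```python
# Minimal sketch of attribute-value validation: ISO 8601 dates, a
# limited-range file-format field, and a required title. Field names and
# allowed values are hypothetical, not the study's actual attribute set.
from datetime import date

ALLOWED_FORMATS = {"EPUB", "PDF", "MP3"}   # illustrative limited-range values

def validate_record(record: dict) -> list[str]:
    """Return a list of quality problems found in one metadata record."""
    problems = []
    if not record.get("title"):
        problems.append("missing title")
    try:
        date.fromisoformat(record.get("publication_date", ""))
    except ValueError:
        problems.append("publication_date is not ISO 8601 (YYYY-MM-DD)")
    if record.get("file_format") not in ALLOWED_FORMATS:
        problems.append("file_format outside the allowed value set")
    return problems

sample = {"title": "예시 전자책", "publication_date": "2024-3-1", "file_format": "EPUB"}
print(validate_record(sample))   # flags the non-ISO-8601 date
```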

Realization of a Web-based Distribution System for the Monitoring of Business Press Releases and News Gathering Robots (기업 보도자료 모니터링을 위한 웹기반 배포시스템 및 기사 수집로봇 구현)

  • Shin, Myeong-Sook;Oh, Jung-Jin;Lee, Joon
    • Journal of the Korea Society of Computer and Information / v.18 no.12 / pp.103-111 / 2013
  • A wide variety of Korean news stories are now important online content, and their importance to the press continues to grow. Businesses provide diverse news to the public as press releases through newspapers or broadcast media. To turn such news into press-release material, enterprises visit reporters or deliver the information by e-mail, fax, or courier. However, these methods have problems with time, human resources, expense, and file damage; they also make it cumbersome for enterprises to check what has been released and for the press to contact enterprises repeatedly for interviews and material. Therefore, this study implements a distribution system that enterprises can use to distribute press materials and easily check what has been released, and through which the press can make interview requests in a simple way, together with a news gathering robot that collects news on the enterprises concerned from online articles and portal sites.
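
The news-gathering robot half of the system can be sketched as a simple crawler: fetch a listing page, collect article links, and keep those whose anchor text mentions a target enterprise. The URL and keyword are placeholders, and a real crawler would also need to respect robots.txt and rate limits, which this sketch omits.

```python
# Minimal sketch of the news-gathering idea: fetch a listing page, pull out
# article links, and keep those whose anchor text mentions a target company.
# The URL and company name are placeholders, not the implemented system.
import urllib.request
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect (href, anchor text) pairs from an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def gather_news(listing_url: str, company: str):
    """Return links whose anchor text mentions the company of interest."""
    with urllib.request.urlopen(listing_url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = LinkCollector()
    parser.feed(html)
    return [(href, text) for href, text in parser.links if company in text]

if __name__ == "__main__":
    for href, text in gather_news("https://example.com/news", "ExampleCorp"):
        print(text, "->", href)
```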

A Study on Intelligent Self-Recovery Technologies for Cyber Assets to Actively Respond to Cyberattacks (사이버 공격에 능동대응하기 위한 사이버 자산의 지능형 자가복구기술 연구)

  • Se-ho Choi;Hang-sup Lim;Jung-young Choi;Oh-jin Kwon;Dong-kyoo Shin
    • Journal of Internet Computing and Services / v.24 no.6 / pp.137-144 / 2023
  • Cyberattack techniques are evolving to an unpredictable degree, and an attack is something that can happen at any time rather than someday. Infrastructure that is becoming hyper-connected and global through cloud computing and the Internet of Things is an environment in which cyberattacks can be more damaging than ever, and such attacks are ongoing. Even when damage occurs due to external events such as cyberattacks or natural disasters, intelligent self-recovery must evolve from a cyber-resilience perspective to minimize the downtime of cyber assets (OS, WEB, WAS, DB). In this paper, we propose an intelligent self-recovery technology that ensures sustainable cyber resilience when cyber assets fail to function properly after a cyberattack. The original state and update history of cyber assets are managed in real time using timeslot design and snapshot backup technology. In conjunction with a commercial file integrity monitoring program, the system must automatically detect damage and minimize the downtime of cyber assets by intelligently analyzing the correlation between backup data and damaged files so that it can self-recover to an optimal state. In future work, we plan to study a pilot system that applies the core functions of this self-recovery technology and an operating model that can learn and analyze self-recovery strategies appropriate for cyber assets in damaged states.
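
The integrity-check-and-restore step can be sketched as follows: hash the monitored files, compare them against a baseline taken from a snapshot, and copy any damaged or missing file back from the snapshot directory. The paths are hypothetical, and the paper's timeslot design and intelligent correlation analysis are not reproduced here.

```python
# Minimal sketch of file integrity monitoring with snapshot restore: hash the
# monitored files, compare against a recorded baseline, and copy damaged
# files back from a snapshot directory. Paths are hypothetical placeholders.
import hashlib
import shutil
from pathlib import Path

MONITORED = Path("webroot")      # hypothetical cyber-asset directory
SNAPSHOT = Path("snapshot")      # hypothetical known-good copy

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def take_baseline(root: Path) -> dict[str, str]:
    """Record a hash per file, keyed by its path relative to the root."""
    return {str(p.relative_to(root)): sha256(p)
            for p in root.rglob("*") if p.is_file()}

def self_recover(baseline: dict[str, str]) -> list[str]:
    """Restore any file whose current hash no longer matches the baseline."""
    restored = []
    for rel, expected in baseline.items():
        current = MONITORED / rel
        if not current.exists() or sha256(current) != expected:
            current.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(SNAPSHOT / rel, current)   # roll back from snapshot
            restored.append(rel)
    return restored

if __name__ == "__main__":
    baseline = take_baseline(SNAPSHOT)   # the snapshot is the trusted reference
    print("restored:", self_recover(baseline))
```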

Patent Application Research Analysis on Domestic Smart Factory Technology Through SNA (SNA를 통한 국내 스마트공장 기술에 관한 특허 출원 조사 분석)

  • Jae-Hyo Hwang;Ki-Jung Kim
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.19 no.1 / pp.267-274 / 2024
  • In this paper, we investigated the number of domestic smart factory patent applications, disclosures, and registrations by year, as well as the number of applications by applicant type. Among the patents studied, the IPC that appeared in the most patents was G05B 19/418. In addition, a social network analysis of the IPCs in smart factory patents showed that G05B 19/418 had the highest degree centrality. When the core IPC of a smart factory patent was G05B 19/418, the most frequently patented combination was with G05B 23/02, that is, technology combining "factory control" and "monitoring". When the core IPC was G06Q 50/04, the most frequently filed combination was with G06Q 50/10, that is, technology combining "manufacturing" and "services". This suggests that smart factory patent applications should take the connectivity between IPCs into account.
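
The co-occurrence counting behind such an SNA result can be sketched directly: build IPC-to-IPC edges from the set of IPCs on each patent and compute degree centrality. The tiny patent list below is invented for illustration; only the IPC codes named in the abstract are reused.

```python
# Minimal sketch of IPC co-occurrence analysis: build an edge per pair of
# IPCs appearing on the same patent, then compute degree centrality as the
# share of other IPCs each code is connected to. The patent list is invented.
from itertools import combinations
from collections import defaultdict

patents = [                       # hypothetical IPC sets per patent
    {"G05B 19/418", "G05B 23/02"},
    {"G05B 19/418", "G06Q 50/04"},
    {"G06Q 50/04", "G06Q 50/10"},
    {"G05B 19/418", "G05B 23/02", "G06Q 50/10"},
]

neighbors = defaultdict(set)
for ipcs in patents:
    for a, b in combinations(sorted(ipcs), 2):   # co-occurrence edges
        neighbors[a].add(b)
        neighbors[b].add(a)

n = len(neighbors)
# Degree centrality: fraction of the other nodes each IPC is connected to.
centrality = {ipc: len(adj) / (n - 1) for ipc, adj in neighbors.items()}

for ipc, c in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{ipc}: {c:.2f}")
```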