• Title/Summary/Keyword: Database Quality Management

The Development of Effective Database Model for Pipe Network Management Monitoring Program (상수도 관망시스템의 유지관리용 모니터링 프로그램을 위한 효율적 D/B 모델의 개발)

  • Kang, Byung-Mo; Lee, Hyun-Dong; Hong, In-Sik
    • Journal of Internet Computing and Services, v.9 no.4, pp.157-166, 2008
  • Interest in the management of underground facilities has been renewed in recent years, and several studies have addressed the management of underground waterworks pipes. In this paper, a smart tag is defined and applied to the ubiquitous environment that such management requires. GPS is an essential technology for the implementation of the proposed program, which combines GPS and RFID in a mixed business model: data on underground facilities are managed effectively with an RFID system, and the practical effectiveness of the entire system is raised through a GPS receiving module and network communication on a GIS. In conclusion, this paper proposes an application management system built on a location-mixed database. The proposed database and interface techniques are tested and evaluated through simulation. (A sketch of such a location-mixed record follows this entry.)
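
The paper does not publish its schema, so the following is a minimal, hypothetical sketch of a "location-mixed" record: an RFID smart-tag identifier for a buried pipe joined to the GPS fix taken when the tag was read, which is what a GIS layer would consume. Table and column names are invented for illustration.

```python
import sqlite3

# Hypothetical sketch (not the paper's actual schema): link an RFID smart tag
# on a buried pipe to the GPS fix recorded when the tag was read.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE pipe_facility (
        tag_id      TEXT PRIMARY KEY,  -- RFID smart-tag identifier
        pipe_type   TEXT,              -- e.g. 'water main'
        diameter_mm INTEGER,
        installed   TEXT               -- installation date, ISO 8601
    );
    CREATE TABLE tag_reading (
        reading_id  INTEGER PRIMARY KEY AUTOINCREMENT,
        tag_id      TEXT REFERENCES pipe_facility(tag_id),
        lat         REAL,              -- GPS latitude at the read point
        lon         REAL,              -- GPS longitude at the read point
        read_at     TEXT               -- timestamp of the RFID read
    );
""")
conn.execute("INSERT INTO pipe_facility VALUES ('TAG-0001', 'water main', 300, '2008-03-01')")
conn.execute("INSERT INTO tag_reading (tag_id, lat, lon, read_at) "
             "VALUES ('TAG-0001', 37.5665, 126.9780, '2008-06-15T10:30:00')")

# A GIS layer is fed by joining facility attributes to their locations.
for row in conn.execute("SELECT f.tag_id, f.pipe_type, r.lat, r.lon "
                        "FROM pipe_facility f JOIN tag_reading r USING (tag_id)"):
    print(row)
```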

Directions for Developing Database Schema of Records in Archives Management Systems (영구기록물관리를 위한 기록물 데이터베이스 스키마 개발 방향)

  • Yim, Jin-Hee; Lee, Dae-Wook; Kim, Eun-Sil; Kim, Ik-Han
    • The Korean Journal of Archival Studies, no.34, pp.57-105, 2012
  • The CAMS (Central Archives Management System) of the NAK (National Archives of Korea) is an important system that will receive and manage large amounts of electronic records annually beginning in 2015. From the point of view of database design, this paper analyzes the database schema of the CAMS and discusses directions for its overall improvement. First, the research analyzes the records and folders tables in the CAMS database, which are the core tables for electronic records management. The analysis shows that the quality of the records in the CAMS is difficult to trust, because the two core tables are not normalized at all and contain many columns whose roles are unknown. Second, the study suggests directions for normalizing the records and folders tables: redistributing columns into proper tables to reduce duplication; separating the columns describing the classification scheme into their own tables; separating the columns describing record types and sorts into their own tables; and separating metadata related to acquisition, takeover, and preservation into their own tables (a sketch of this direction follows this entry). Third, the paper suggests considerations for designing and managing the database schema in each phase of archival management. In the ingest phase, the system should be able to process large volumes of records as annual batch jobs on time. In the preservation phase, the system should keep management histories in the CAMS as audit trails, including reclassification, revaluation, and preservation activities related to the records. In the access phase, the descriptive metadata sets for access should be selected and confirmed in various ways. Finally, the research presents a prototype conceptual database schema for the CAMS that fulfills the metadata standards for records.
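
A minimal sketch of the normalization direction listed above, assuming invented table and column names (the actual CAMS schema is not reproduced here): classification scheme, record type, and preservation metadata each move into their own table and are referenced by key, and the event table doubles as the audit trail the paper calls for.

```python
import sqlite3

# Illustrative normalization sketch, not the actual CAMS schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE classification_scheme (
        scheme_id INTEGER PRIMARY KEY,
        code      TEXT,                -- classification code
        title     TEXT
    );
    CREATE TABLE record_type (
        type_id   INTEGER PRIMARY KEY,
        name      TEXT                 -- e.g. 'document', 'audiovisual'
    );
    CREATE TABLE record (
        record_id INTEGER PRIMARY KEY,
        folder_id INTEGER,
        scheme_id INTEGER REFERENCES classification_scheme(scheme_id),
        type_id   INTEGER REFERENCES record_type(type_id),
        title     TEXT
    );
    -- Acquisition, takeover, and preservation metadata kept as an event
    -- history, which also serves as an audit trail.
    CREATE TABLE preservation_event (
        event_id    INTEGER PRIMARY KEY,
        record_id   INTEGER REFERENCES record(record_id),
        event_type  TEXT,              -- 'ingest', 'reclassification', ...
        occurred_at TEXT
    );
""")
```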

Methodological Issues in Internet Survey and Development of Personalized Internet Survey System Using Data Mining Techniques (인터넷 설문조사의 방법론적인 문제점과 데이터마이닝 기법을 활용한 개인화된 인터넷설문조사 시스템의 구축)

  • 김광용; 김기수
    • Journal of Korean Society for Quality Management, v.32 no.2, pp.93-108, 2004
  • The purpose of this research is to summarize the methodological issues in internet surveys and to propose a personalized internet survey system that uses a data mining technique, both to enhance survey quality and to exploit the interactive multimedia features of internet surveys. The data mining technique used in this paper is Case-Based Reasoning (CBR), which adapts the survey to the individual design preferences that affect survey quality (a sketch of the retrieval step follows this entry). To achieve the research purpose, two surveys, a pre-survey and a post-survey, were performed. The pre-survey was used to build the CBR database of individual indexes affecting survey quality, and the post-survey was used to measure the performance of the personalized internet survey system. The results show that the survey quality of the personalized web survey system is better than that of a generalized web survey system.
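
The retrieval step of Case-Based Reasoning can be sketched as a nearest-neighbor lookup over stored cases. The features and cases below are invented; the paper's actual case attributes and similarity measure are not reproduced here.

```python
import math

# Invented case base: (respondent features) -> preferred survey design.
case_base = [
    ((23, 30), "multimedia-rich"),   # (age, internet hours per week)
    ((51, 4),  "plain-text"),
    ((35, 15), "mixed"),
]

def retrieve(query, cases):
    """Reuse the design of the stored case nearest to the new respondent."""
    _, design = min(cases, key=lambda c: math.dist(c[0], query))
    return design

print(retrieve((28, 25), case_base))  # -> 'multimedia-rich'
```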

Application of Realtime Monitoring of Oceanic Conditions in the Coastal Water for Environmental Management

  • Choi, Yang-Ho; Ro, Young-Jae
    • Journal of the Korean Society of Oceanography, v.39 no.2, pp.148-154, 2004
  • This study describes a realtime monitoring system for water quality conditions in coastal waters. Issues in data quality control and quality analysis are examined, along with examples of erroneous data. Three cases from the database produced by the realtime monitoring system are presented and analyzed: 1) hypoxic conditions, 2) over-saturated D.O., and 3) short-term variability of temperature and D.O. Using the realtime database, D.O. prediction and warning models are developed based on an autoregressive stochastic process (a sketch follows this entry). The model is very simple, yet powerful and useful, with its ability to send warning messages to users at various levels, from governmental administrative staff to local fishermen, and to give them some allowance to cope with the situation.
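
A minimal sketch of an autoregressive warning model in the spirit of the paper: fit AR(1) to recent dissolved oxygen (D.O.) readings, forecast the next value, and warn when the forecast falls below a hypoxia threshold. The readings and the 3 mg/L threshold are illustrative assumptions, not the paper's data.

```python
import numpy as np

do_series = np.array([6.1, 5.8, 5.4, 4.9, 4.3, 3.8])  # mg/L, hourly (invented)

# Least-squares AR(1) fit: x[t] = a * x[t-1] + b
a, b = np.polyfit(do_series[:-1], do_series[1:], 1)
forecast = a * do_series[-1] + b

HYPOXIA_THRESHOLD = 3.0  # mg/L, an assumed warning criterion
if forecast < HYPOXIA_THRESHOLD:
    print(f"WARNING: predicted D.O. {forecast:.2f} mg/L is below threshold")
else:
    print(f"Predicted D.O.: {forecast:.2f} mg/L")
```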

Development of Climate & Environment Data System for Big Data from Climate Model Simulations (대용량 기후모델자료를 위한 통합관리시스템 구축)

  • Lee, Jae-Hee; Sung, Hyun Min; Won, Sangho; Lee, Johan; Byu, Young-Hwa
    • Atmosphere, v.29 no.1, pp.75-86, 2019
  • In this paper, we introduce a novel Climate & Environment Database System (CEDS). The CEDS was developed by the National Institute of Meteorological Sciences (NIMS) to provide easy, efficient user interfaces and storage management for climate model data, thereby improving work efficiency. When data files are uploaded, the CEDS offers an option to run the international standard data conversion (CMORization) and quality assurance (QA) processes automatically for the submission of CMIP6 variable data. This option increases system performance, removes user mistakes, and raises reliability, since it eliminates manual operation of the CMORization and QA processes. Uploaded raw files are saved in NAS storage, and a Cassandra database stores the metadata used for efficient data access and storage management. The metadata is generated automatically when a file is uploaded, or from user input. With the metadata, the CEDS supports effective storage management by categorizing data files, which allows easy, fast data access with a higher level of data reliability even when a novice searches with simple search words. Moreover, the CEDS supports parallel and distributed computing to increase overall system performance and balance load, giving a high level of availability: multiple users can work at the same time with fast system response. Additionally, it deduplicates redundant data and reduces storage space (a sketch of the deduplication idea follows this entry).
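
The deduplication idea can be sketched with content hashing: compute a digest of each uploaded file and skip storage when the digest is already known. The real CEDS stores files on NAS and metadata in Cassandra; the in-memory dict below is only a stand-in, and all names are invented.

```python
import hashlib
from pathlib import Path

metadata_store = {}  # sha256 digest -> metadata record (stand-in for Cassandra)

def upload(path: Path, variable: str, experiment: str) -> None:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest in metadata_store:
        # Redundant content: keep one copy, saving storage space.
        print(f"duplicate of {metadata_store[digest]['name']}, skipping store")
        return
    metadata_store[digest] = {
        "name": path.name,
        "variable": variable,       # e.g. a CMIP6 variable id
        "experiment": experiment,   # used later for categorized search
    }
    # Real system: copy the file to NAS and insert a row into Cassandra.
```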

The Allocation of Inspection Efforts Using a Knowledge Based System

  • Kang, Kyong-sik; Stylianides, Christodoulos; La, Seung-houn
    • Journal of Korean Society for Quality Management, v.18 no.2, pp.18-24, 1990
  • The location of inspection stations is a significant component of production systems. In this paper, a prototype expert system is designed to decide the optimal location of inspection stations. The production system is defined as a single channel of n serial operation stations, and a potential inspection station can be located after any of them. Nonconforming units are generated from a compound binomial distribution with known parameters at any given operation station. Traditionally, dynamic programming, zero-one integer programming, or nonlinear programming techniques are used to solve this problem, but their computation time becomes prohibitively large when the number of potential inspection stations is fifteen or more. An expert system has the potential to overcome this by using a rule-based approach to determine a near-optimal location of inspection stations. The prototype expert system is divided into a static database, a dynamic database, and a knowledge base. Based on the defined production systems, sophisticated rules are generated by a simulator as part of the knowledge base. A generate-and-test inference mechanism searches the solution space by applying appropriate symbolic and quantitative rules to the input data (a brute-force sketch of generate-and-test follows this entry). The goal of the system is to determine the location of inspection stations that minimizes total cost.
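
For small n, the generate-and-test idea can be shown by brute force: enumerate every candidate set of inspection stations, score each with a cost model, and keep the cheapest. The cost model below is a deliberate simplification (additive defect rates, perfect inspection) with invented numbers; the paper's expert system instead prunes this search with rules derived from a simulator.

```python
from itertools import combinations

N_OPS = 4
p_defect = [0.02, 0.05, 0.01, 0.04]  # defect rate added at each operation (invented)
INSPECT_COST = 1.0                   # per-unit cost of inspecting at a station
ESCAPE_COST = 50.0                   # per-unit cost of a defect escaping

def total_cost(inspect_after: set) -> float:
    cost, defect_rate = 0.0, 0.0
    for i in range(N_OPS):
        defect_rate += p_defect[i]
        if i in inspect_after:
            cost += INSPECT_COST     # inspect every unit at this station
            defect_rate = 0.0        # assume inspection catches all defects
    return cost + defect_rate * ESCAPE_COST

# Generate: all subsets of potential inspection locations. Test: score each.
candidates = [set(c) for r in range(N_OPS + 1)
              for c in combinations(range(N_OPS), r)]
best = min(candidates, key=total_cost)
print(f"inspect after operations {sorted(best)}: cost {total_cost(best):.2f}")
```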

A Study on the Scheme of Information System Audit for Institute of Knowledge Information (지식정보 관리기관을 위한 정보시스템 감리 추진방안에 관한 연구)

  • Lee, Sang-Jun; Ra, Jong-Hei; Go, Hyung-Dae; Shin, Ki-Jung
    • Journal of Information Technology Services, v.5 no.3, pp.121-135, 2006
  • With the growth and maturation of the IT industry, the necessity of auditing the development, maintenance, and management of high-quality information systems is gradually increasing. The necessity of an internal auditing scheme that can comprehensively verify and evaluate the effectiveness of projects according to the characteristics of the organization conducting an informatization business has also been raised. Government institutes, including the Korea Institute of Science and Technology Information (KISTI), which collectively manages nationwide science and technology information, have no guiding principles or internal organization for this, even though the informatization businesses they perform are growing bigger and more complicated. In this paper, we propose a scheme for an audit framework, check list, and guideline for auditing the construction and operation of knowledge information, and we present concrete ways to adapt these to institutes that manage science and technology knowledge information. The audit framework consists of the point of time of the audit, the audit domain, and the audit criteria. Three points of time are defined: pre-audit, in-progress audit, and post-audit. The audit domain includes 16 detailed domains, and for the database implementation business in particular we set 11 check items and 40 detailed investigation items (the framework's shape is sketched after this entry). We expect that organizations using these results will raise the management level of their science and technology implementation business and offer higher-quality information services.
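
The framework's shape can be sketched as a configuration structure: three audit phases crossed with audit domains, each domain carrying its check items. Only the counts and phase names come from the abstract; the domain and item names below are invented placeholders.

```python
# Illustrative shape only; domain and item names are invented.
audit_framework = {
    "phases": ["pre-audit", "in-progress audit", "post-audit"],
    "domains": {
        "database implementation": {        # 1 of 16 detailed domains
            "check_items": [
                "data model review",         # 1 of 11 check items
                "data quality verification",
            ],
            # the paper further defines 40 detailed investigation items
        },
        # ... 15 more domains in the paper
    },
}
```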

Application of Systems Engineering in SURION R&D Project (수리온 체계개발에서의 시스템엔지니어링 적용사례)

  • Kim, Jin Hoon; Rhee, Dong Wook
    • Journal of the Korean Society of Systems Engineering, v.10 no.1, pp.81-86, 2014
  • This paper describes the application of systems engineering, focused on requirement definition and management, in the SURION system development project. To perform the development process effectively, processes based on systems engineering were applied: requirements definition, design review, configuration management, quality assurance, and data management. In these processes, the stakeholders' requirements were transformed into specifications and verified through design reviews, prototype inspections, and integration tests. All engineering data, especially verification plans and results, were recorded and traced to each requirement of the system specification in the SURION database (a traceability sketch follows this entry). It could therefore be assured that all requirements of the specification were evaluated and verified successfully in the SURION project.
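
Requirement-to-verification traceability of the kind described can be sketched as a simple cross-check: every specification requirement must be linked to at least one passing verification record. The IDs and fields are invented for illustration.

```python
# Invented requirement IDs and verification records.
requirements = {"SYS-001": "max takeoff weight", "SYS-002": "cruise speed"}
verifications = [
    {"req_id": "SYS-001", "method": "integration test", "result": "pass"},
]

# Trace every requirement to a passing verification; report the gaps.
verified = {v["req_id"] for v in verifications if v["result"] == "pass"}
unverified = sorted(set(requirements) - verified)
print("requirements lacking verification:", unverified)  # ['SYS-002']
```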

Evaluating the Performance Quality of Open Source Database Management Systems (오픈소스 DBMS의 성능 품질 평가)

  • Min, Meekyung
    • Journal of Korean Society for Quality Management, v.45 no.4, pp.933-942, 2017
  • Purpose: The purpose of this paper is to evaluate the performance quality of open source DBMSs, where performance quality is defined as the processing time of Join queries. Query processing time is measured and compared across the most widely used open source DBMSs and a commercial DBMS. Methods: By varying the number of tuples of the two relations to be joined, the average processing time (seconds) of a Join query in each DBMS was obtained experimentally. ANOVA and the Tukey HSD test were used to compare the performance quality of the DBMSs (the method is sketched after this entry). Results: There was a significant difference between the performance qualities of the three DBMSs at all experimental levels, where the number of tuples was 100, 1,000, 2,000, 10,000, and 50,000. In the Tukey HSD test, the two open source DBMSs (MariaDB, MySQL) were classified in the same group only at the 100-tuple level, with the commercial DBMS (MS-SQL Server) in another group; at 1,000 tuples or more, all three DBMSs belonged to different groups. Conclusion: Within the open source group, MariaDB showed the better performance quality except at small numbers of tuples, so MariaDB can be an alternative to MySQL, which is currently the most widely used. Between the open source and commercial groups, MS-SQL Server always shows the best performance quality, but the smaller the number of tuples, the smaller the difference.
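
The statistical comparison can be sketched as follows, assuming fabricated timing samples (the real measurements are in the paper): a one-way ANOVA across the three DBMSs, followed by Tukey's HSD to see which pairs differ.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Fabricated join-query timings (seconds) at one tuple level.
mariadb = np.array([0.41, 0.39, 0.43, 0.40, 0.42])
mysql   = np.array([0.55, 0.53, 0.57, 0.54, 0.56])
mssql   = np.array([0.30, 0.31, 0.29, 0.32, 0.30])

f_stat, p_value = f_oneway(mariadb, mysql, mssql)
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")

times = np.concatenate([mariadb, mysql, mssql])
groups = ["MariaDB"] * 5 + ["MySQL"] * 5 + ["MS-SQL"] * 5
print(pairwise_tukeyhsd(times, groups))  # pairwise grouping, as in the paper
```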

Quality Evaluation and Management of a Shared Cataloging DB: the Case of KERIS UNICAT DB (공동목록 DB의 품질평가와 품질관리: KERIS의 종합목록 DB를 중심으로)

  • Lee, Jae-Whoan
    • Journal of the Korean Society for Library and Information Science, v.36 no.1, pp.61-90, 2002
  • This study evaluates the quality of the KERIS UNICAT DB and suggests both theoretical and practical methods for improving it. To that end, the study developed a quality evaluation model and verified the quality of the UNICAT DB in a comprehensive way, with emphasis on analyzing the factors that cause inferior and substandard bibliographic records in the UNICAT DB (one simple quality check is sketched after this entry). Management strategies and substantive guidelines to improve the quality of the UNICAT DB are also suggested.
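
One ingredient of such a quality evaluation can be sketched as a record-level completeness and format check; the paper's model is far richer, and the fields and rules below are invented.

```python
REQUIRED = ("title", "author", "publisher", "year", "isbn")

def quality_issues(record: dict) -> list:
    """Flag missing core fields and a malformed publication year."""
    issues = [f"missing {f}" for f in REQUIRED if not record.get(f)]
    year = record.get("year", "")
    if year and not (year.isdigit() and len(year) == 4):
        issues.append("malformed year")
    return issues

rec = {"title": "Example", "author": "", "publisher": "P", "year": "20O2"}
print(quality_issues(rec))  # ['missing author', 'missing isbn', 'malformed year']
```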