• Title/Summary/Keyword: Data standard


Is it suitable to Use Rainfall Runoff Model with Observed Data for Climate Change Impact Assessment? (관측자료로 추정한 강우유출모형을 기후변화 영향평가에 그대로 활용하여도 되는가?)

  • Poudel, Niroj; Kim, Young-Oh; Kim, Cho-Rong
    • Proceedings of the Korea Water Resources Association Conference / 2011.05a / pp.252-252 / 2011
  • Rainfall-runoff models are typically calibrated and validated against the same set of observations. Past climate change has affected present rainfall patterns and will continue to do so, so predicting rainfall-runoff more precisely requires considering climate change across past, present, and future periods. In this study, climate change is represented as changes in the mean and standard deviation of precipitation under different patterns. Because some river basins lack a sufficiently long record for analysis, synthetic precipitation data are generated from a suitable distribution fitted to the observations. The Kajiyama model is used to analyze runoff in the dry and wet periods separately, and precipitation is generated from a gamma distribution parameterized by its mean and standard deviation. Twenty hypothetical scenarios represent the climate change conditions: the mean precipitation is changed by -20%, -10%, 0%, +10%, and +20% while the standard deviation is held constant, and the standard deviation is changed by -20%, -10%, 0%, +10%, and +20% while the mean is held constant, for the wet and dry periods respectively. When the standard deviation varies, the mean NSE ratio fluctuates more in the wet period than in the dry period. When the mean precipitation varies with the standard deviation held constant, the mean NSE ratio sometimes fluctuates more in the wet period and sometimes in the dry period.
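The scenario-generation step above lends itself to a short illustration. The following is a minimal sketch (not the authors' code) of drawing synthetic precipitation from a gamma distribution whose mean and standard deviation are perturbed by ±10% and ±20%; the observed statistics and series length are hypothetical placeholders.

```python
import numpy as np

def gamma_params(mean, std):
    """Recover gamma shape (k) and scale (theta) from a target mean and std."""
    k = (mean / std) ** 2
    theta = std ** 2 / mean
    return k, theta

def generate_precip(mean, std, n, rng):
    """Draw n synthetic precipitation values from Gamma(k, theta)."""
    k, theta = gamma_params(mean, std)
    return rng.gamma(k, theta, size=n)

rng = np.random.default_rng(0)
obs_mean, obs_std = 8.0, 5.0            # hypothetical wet-period statistics (mm/day)
changes = [-0.2, -0.1, 0.0, 0.1, 0.2]

# Half of the scenarios perturb the mean with the std fixed,
# the other half perturb the std with the mean fixed.
scenarios = {f"mean{c:+.0%}": generate_precip(obs_mean * (1 + c), obs_std, 365, rng) for c in changes}
scenarios.update({f"std{c:+.0%}": generate_precip(obs_mean, obs_std * (1 + c), 365, rng) for c in changes})

for name, series in scenarios.items():
    print(f"{name:>9}: mean={series.mean():5.2f}  std={series.std():5.2f}")
```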


Study on the Conformance Testing of Data Exchange between Transport Information Center and Terminal Equipment (교통정보센터와 단말기간 데이터교환 기술기준 적합성 시험에 관한 연구)

  • Lee, Sang-Hyun; Kim, Gyeong-Seok
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.7 no.5 / pp.147-158 / 2008
  • Intelligent Transportation Systems (ITS) have been actively developed and deployed since the Transportation System Efficiency Promotion Act was enacted. However, because mutual connection among transportation information systems was not considered, transportation information services were never integrated. Accordingly, the Ministry of Land, Transport and Maritime Affairs established and announced a technical standard for ITS. This study investigates conformance testing of the transportation information and communication system interface standard for data exchange between the Transportation Information Center and terminal equipment. By analyzing the communication procedures specified in the standard, the test items were categorized into data request tests and data provision tests, and a detailed test scenario was created for each item. The test assessment was based on conformance of the data exchange procedures and accuracy of the data packet messages. Under the established technical standard, the number of test repetitions was set to 30 and the required success rate to 95%. The purpose of this study is to support integrated management of transportation information in Korea's ITS by developing conformance testing methods for the ITS technical standard.
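The acceptance rule described above (30 repetitions, 95% success rate) can be sketched as a simple tally. The harness below is a hypothetical illustration, not the study's test software; run_exchange_test is a placeholder for one data-request/data-provision exchange checked against the technical standard.

```python
import random

REQUIRED_RUNS = 30        # repetitions prescribed in the study
REQUIRED_RATE = 0.95      # minimum success rate

def run_exchange_test() -> bool:
    """Placeholder for one data-request/data-provision exchange.

    A real harness would send the request message to the center, parse the
    returned packet, and check both the procedure and the packet fields
    against the technical standard.
    """
    return random.random() < 0.97   # simulated outcome for illustration

successes = sum(run_exchange_test() for _ in range(REQUIRED_RUNS))
rate = successes / REQUIRED_RUNS
verdict = "PASS" if rate >= REQUIRED_RATE else "FAIL"
print(f"{successes}/{REQUIRED_RUNS} exchanges conformed ({rate:.0%}) -> {verdict}")
```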


Development of IFC Standard for Securing Interoperability of BIM Data for Port Facilities (항만 BIM 데이터의 상호운용성 확보를 위한 IFC 표준 개발)

  • Moon, Hyoun-Seok; Won, Ji-Sun; Shin, Jae-Young
    • Journal of KIBIM / v.10 no.1 / pp.9-22 / 2020
  • Recently, BIM has been extended to infrastructure such as roads and bridges, and demand for BIM standards for ports is increasing internationally. Because ports make less use of classification systems and drawing standards than other infrastructure, and because they are closed national security facilities, their environment for connecting and sharing data with external systems or users is limited. In addition, since data for port facilities has not been standardized, each system still requires its own independent database, and interoperability between these systems must be ensured because no shared environment exists for similar data. The purpose of this study is therefore to develop and verify IFC, the international BIM standard, so that it can respond to the BIM environment and be used in common across the design, construction, and maintenance of port facilities. To this end, a standard schema with port-specific EXPRESS notation was built following buildingSMART International's standard development methodology. First, domestic and international reference model standards were analyzed to derive components such as the spaces and facilities of ports. Based on this, the components of port facilities were derived through the codification, categorization, and normalization process developed by the research team and extended with the port BIM object classification system also developed by the team. The normalization results were verified by designers and industry associations. The IFC schema was then constructed with EXPRESS-G data modeling based on IFC 4x2 Candidate, the bridge candidate standard built on IFC4 (ISO 16739), and IFC 4x3 Draft, which is under development by buildingSMART International. The final schema was validated with a commercial validation tool, and to verify its structure, a caisson model was converted into a Part 21 file. In the future, this result will be used not only as a delivery standard for port BIM products but also as a linkage standard between systems and as a common data format for port BIM platforms when BIM is applied in the maintenance phase. In particular, it is expected to serve as a core standard for data exchange during port maintenance.
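To give a concrete sense of the Part 21 verification step, the sketch below reads an IFC (Part 21) file and summarizes its entity types using the open-source ifcopenshell library. This is an assumed illustration, not the authors' tooling; the file name is a hypothetical stand-in for the caisson model, and which port-specific entities appear depends on the schema version (IFC4x2/IFC4x3) actually used.

```python
# Minimal sketch: inspect a Part 21 (.ifc) file with ifcopenshell.
# "caisson_model.ifc" is a hypothetical placeholder file name.
from collections import Counter
import ifcopenshell

model = ifcopenshell.open("caisson_model.ifc")
print("Schema:", model.schema)                  # e.g. IFC4, IFC4X3

products = model.by_type("IfcProduct")          # all physical/spatial items
counts = Counter(p.is_a() for p in products)
for entity, n in counts.most_common(10):
    print(f"{entity:<30} {n}")
```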

Construction of an International Standard-Based Plant Data Repository Utilizing Web Services Technology (웹 서비스 기술을 활용한 국제 표준 기반의 플랜트 데이터 저장소의 구현)

  • Mun, Du-Hwan; Kim, Byung-Chul
    • IE interfaces / v.23 no.3 / pp.213-220 / 2010
  • As the market becomes increasingly globalized and competition among companies intensifies, various specialized organizations participate across the process plant lifecycle, including design, construction, operation and maintenance, and dismantlement, in order to ensure efficiency and raise competitiveness. In this regard, developing services or information systems for sharing process plant data among participating organizations is an important technical issue. ISO 15926 is an international standard for the integration of lifecycle data for process plants, including oil and gas facilities. ISO 15926 Part 7 specifies an implementation method called a facade that uses Web Services and ontology technologies for constructing plant data repositories and related services, with the aim of sharing lifecycle data of process plants. This paper discusses an ISO 15926-based prototype facade implemented for storing equipment data of nuclear power plants and serving the data to interested organizations.
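ISO 15926 facades are commonly exposed as queryable web endpoints; the sketch below posts a SPARQL query with the requests library to a hypothetical endpoint. The paper's prototype may instead use SOAP-style Web Services as specified in Part 7, so the endpoint URL, query, and result handling here are assumptions for illustration only.

```python
# Minimal sketch: query a hypothetical plant-data facade exposed as a SPARQL
# endpoint. The endpoint URL and query are placeholders, not the paper's API.
import requests

ENDPOINT = "http://example.org/facade/sparql"   # hypothetical facade endpoint

query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?item ?label WHERE { ?item rdfs:label ?label . } LIMIT 10
"""

resp = requests.post(
    ENDPOINT,
    data={"query": query},
    headers={"Accept": "application/sparql-results+json"},
    timeout=30,
)
resp.raise_for_status()
for binding in resp.json()["results"]["bindings"]:
    print(binding["item"]["value"], "-", binding["label"]["value"])
```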

A Method for Engineering Change Analysis by Using OLAP (OLAP를 이용한 설계변경 분석 방법에 관한 연구)

  • Do, Namchul
    • Korean Journal of Computational Design and Engineering / v.19 no.2 / pp.103-110 / 2014
  • Engineering changes are indispensable engineering and management activities for manufacturers to develop competitive products and to keep their product data consistent. Analysis of engineering changes provides core functionality for supporting decision making in engineering change management. This study develops a method for analyzing engineering changes based on On-Line Analytical Processing (OLAP), a proven database analysis technology that has been applied to various business areas. The approach automates data processing for engineering change analysis from product databases that follow an international standard for product data management (PDM), and enables analysts to examine various aspects of engineering changes through OLAP operations. The study consists of modeling a standard PDM database and a multidimensional data model for engineering change analysis, implementing the standard and multidimensional models with PDM and data cube systems, and applying the implemented data cube to core functions of engineering change management: the evaluation and propagation of engineering changes.
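The data-cube style of analysis can be illustrated with a small pivot over hypothetical engineering change records. The column names below (product_line, change_reason, quarter, cost) are assumptions for illustration, not the paper's standard PDM schema.

```python
# A toy OLAP-style cube over hypothetical engineering change (EC) records.
import pandas as pd

ec = pd.DataFrame({
    "product_line":  ["A", "A", "B", "B", "A", "B"],
    "change_reason": ["design error", "cost", "design error", "cost", "cost", "design error"],
    "quarter":       ["Q1", "Q1", "Q1", "Q2", "Q2", "Q2"],
    "cost":          [1200, 300, 800, 450, 250, 950],
})

# Roll-up: total EC cost and count by product line and reason
# (slicing by quarter would be the corresponding drill-down).
cube = pd.pivot_table(
    ec,
    values="cost",
    index="product_line",
    columns="change_reason",
    aggfunc=["sum", "count"],
    fill_value=0,
)
print(cube)
```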

A Study on a Rational Method of Calculating Standard Time for Face Milling (면삭밀링의 합리적인 표준시간 계산방법에 관한 연구)

  • 박규생; 김준안; 김선태; 김병현; 정성련
    • Proceedings of the Korean Society of Precision Engineering Conference / 1992.04a / pp.250-255 / 1992
  • This paper discusses how to develop standard times for face milling, focusing on the use of expert knowledge and raw shop data for automated generation of standard times. Establishing a standard time is a means of realizing process planning, and process planning is the process that expresses a design as manufacturing operations. In the past, process planning relied solely on the experience of experts, but many efforts are now being made to automate it. This paper addresses the standard time for face milling only, excluding generation of the process sequence. To establish a standard time, rules must be formulated and the relevant industrial data collected, so that standard times for die machining can be calculated more easily and more accurately, using rules applicable to every part of the die.
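The kind of arithmetic such rules encode can be sketched with the standard textbook relations for face milling (spindle speed from cutting speed, table feed from feed per tooth, cutting time from travel and passes). The function below is a hedged illustration, not the rules developed in the paper; the approach/overrun and stepover allowances are assumptions.

```python
import math

def face_milling_time(length_mm, width_mm, cutter_dia_mm, teeth,
                      cutting_speed_m_min, feed_per_tooth_mm):
    """Estimate machining time (min) for face milling one surface.

    Standard textbook relations, not the paper's specific rules:
      spindle speed  n  = 1000 * Vc / (pi * D)   [rev/min]
      table feed     vf = fz * z * n             [mm/min]
      travel per pass   = length + approach/overrun (~ cutter diameter)
    """
    n = 1000 * cutting_speed_m_min / (math.pi * cutter_dia_mm)
    vf = feed_per_tooth_mm * teeth * n
    travel = length_mm + cutter_dia_mm            # assumed approach + overrun
    stepover = 0.75 * cutter_dia_mm               # assumed radial engagement
    passes = math.ceil(width_mm / stepover)
    return passes * travel / vf

# Example: 400 x 150 mm face, 125 mm cutter, 8 teeth, Vc = 180 m/min, fz = 0.15 mm
t = face_milling_time(400, 150, 125, 8, 180, 0.15)
print(f"cutting time: {t:.2f} min")
```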

Constructing a Standard Clinical Big Database for Kidney Cancer and Development of Machine Learning Based Treatment Decision Support Systems (신장암 표준임상빅데이터 구축 및 머신러닝 기반 치료결정지원시스템 개발)

  • Song, Won Hoon; Park, Meeyoung
    • Journal of the Korean Society of Industry Convergence / v.25 no.6_2 / pp.1083-1090 / 2022
  • Since renal cell carcinoma (RCC) requires different examination and treatment methods depending on clinical stage and histopathological characteristics, accurate and efficient treatment decisions are needed in the clinical field. However, collecting and processing RCC medical data is difficult and complex, so no AI-based clinical decision support system for RCC treatment currently exists worldwide. In this study, we propose a clinical decision support system that helps clinicians decide on a precision treatment for each patient. An RCC standard big database is built by collecting structured and unstructured data from the standard common data model and the electronic medical information system. Based on this database, various machine learning classification algorithms are applied to support better clinical decision making.
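The classification step can be sketched with scikit-learn, assuming a feature table extracted from the standard database. The features and treatment labels below are synthetic placeholders, not the RCC registry data.

```python
# Minimal sketch of training one candidate classifier for treatment decisions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 8))          # placeholder features (e.g. stage, tumor size, labs)
y = rng.integers(0, 3, size=500)       # placeholder labels (e.g. 0=surveillance, 1=partial, 2=radical)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```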

Data Conversion using SEDRIS STF (SEDRIS STF를 이용한 데이터 변환)

  • Lee, Kwang-Hyung
    • The Journal of Korean Association of Computer Education / v.7 no.5 / pp.101-110 / 2004
  • The multimedia community needs an environmental data representation and interchange mechanism that not only satisfies the requirements of today's systems but can also be extended to meet future data-sharing needs. This mechanism must allow for standard representation of, and access to, the data. The SEDRIS STF (SEDRIS Transmittal Format) provides environmental data users and producers with a clearly defined interchange specification. In this paper, a data converter is developed that translates a commercial data format (3DS MAX) into the SEDRIS standard interchange format and vice versa without losing the semantics of the information content.


A Case Study on Recordkeeping Metadata Standard Applying Multiple Entities (다중 개체 모형을 적용한 기록관리 메타데이터 표준 사례분석)

  • Lee, Ju-Yeon
    • Journal of Korean Society of Archives and Records Management / v.10 no.2 / pp.193-214 / 2010
  • The multiple-entity data model, which uses metadata to associate two or more entities, has been applied to recordkeeping metadata standards in recent years. This paper describes and analyzes recordkeeping metadata standards that apply multiple entities, including ISO 23081 and the recordkeeping metadata standards of Australia, New Zealand, New South Wales, Queensland, and South Australia, focusing on their scope, the number of entities, the categories within each entity, and the metadata elements. It also shows examples of the relationship entity, which is the key to the multiple-entity model. Based on this analysis, the paper suggests points to consider when a recordkeeping metadata standard applying multiple entities is revised.
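A multiple-entity model with an explicit relationship entity can be illustrated with a few toy classes. The entity and attribute names below are simplified assumptions in the spirit of ISO 23081, not any standard's actual element set.

```python
# Toy multiple-entity metadata model: the Relationship entity links a Record
# and an Agent and carries its own metadata, rather than embedding the link
# as a flat attribute of the record.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Record:
    identifier: str
    title: str

@dataclass
class Agent:
    identifier: str
    name: str

@dataclass
class Relationship:
    """Links two entities and carries its own metadata (type, date)."""
    source_id: str
    target_id: str
    relation_type: str                      # e.g. "created-by", "controlled-by"
    established: date = field(default_factory=date.today)

rec = Record("R-001", "Annual budget report")
agent = Agent("A-17", "Finance Division")
rel = Relationship(rec.identifier, agent.identifier, "created-by")
print(f"{rel.source_id} --{rel.relation_type}--> {rel.target_id} ({rel.established})")
```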

A Study on the Quality Improvement of College Scholastic Ability Test Scoring System (대학수학능력시험 점수산정시스템의 품질 제고를 위한 연구)

  • Park, Youngsun
    • Journal of Korean Society for Quality Management / v.50 no.2 / pp.199-220 / 2022
  • Purpose: The purpose of this study is to analyze the score data released by the Korea Institute for Curriculum and Evaluation, identify problems with the current scoring system of the College Scholastic Ability Test (CSAT), and suggest measures to solve them. Methods: We calculated descriptive statistics of the standard scores from the frequency distribution table of the standard scores and characterized the standard scores by graphing their distribution. We also developed an index to evaluate whether each stanine level is assigned stably and calculated the index for each area/subject using data on the number of examinees at each level. Results: We found that the conversion from raw scores to integer standard scores differs depending on the standard deviation of the raw scores, and identified the problem that raw-score information is not fairly reflected in the percentile and stanine level when two different raw scores are converted to the same standard score. This problem can be solved by calculating the standard score to one decimal place. Conclusion: As ways to improve the quality of current CSAT scores, this study suggests reporting standard scores and percentiles with decimal precision, specifying concrete regulations for calculating standard scores and stanine levels, and expanding the range of scores that are released.
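The rounding problem identified above can be reproduced with a short sketch, assuming the commonly used CSAT transformation standard score = 20 × (raw − mean)/sd + 100 rounded to an integer; the mean and standard deviation below are hypothetical, not actual exam statistics. When the raw-score standard deviation exceeds 20, distinct raw scores collapse onto the same integer standard score.

```python
from collections import defaultdict

def standard_score(raw, mean, sd, scale=20, center=100):
    """Integer standard score under the common 20*z + 100 transformation."""
    return round(scale * (raw - mean) / sd + center)

mean, sd = 58.3, 22.1                 # hypothetical raw-score statistics (sd > 20)
buckets = defaultdict(list)
for raw in range(0, 101):
    buckets[standard_score(raw, mean, sd)].append(raw)

# Distinct raw scores sharing one integer standard score -- the unfairness the
# paper points out; reporting one decimal place would separate them.
collisions = {s: raws for s, raws in buckets.items() if len(raws) > 1}
for s, raws in sorted(collisions.items())[:5]:
    print(f"standard score {s}: raw scores {raws}")
```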