• Title/Summary/Keyword: Basis sub-model


A Study on Transfer Process Model for long-term preservation of Electronic Records (전자기록의 장기보존을 위한 이관절차모형에 관한 연구)

  • Cheon, kwon-ju
    • The Korean Journal of Archival Studies
    • /
    • no.16
    • /
    • pp.39-96
    • /
    • 2007
  • Traditionally, transfer meant physically delivering records such as paper documents, videos, and photographs to Archives or Records centers on the basis of transfer guidelines. With the automation of the records management environment and the spread of new records creation and management applications, however, records are now created and managed electronically. For this reason the existing transfer system, in which filed records are moved to Archives or Records centers in paper boxes, needs to change. Given the need for a new transfer paradigm, it is desirable and proper that the revised Records Act includes provisions on electronic records management and transfer. Nevertheless, the electronic transfer provisions are too conceptual to apply directly to records management practice, so detailed methods and processes must be developed. In this context, this paper suggests an electronic records transfer process model based on international standards and foreign countries' cases. Transferring records is one of the records management activities that keep valuable records usable in the future, so both the producer and the archive must transfer the records themselves, together with their context information, to a long-term preservation repository according to the transfer guidelines. In the long run, transfer means that records are moved to the archive through a formal transfer process with proper records protection steps. To accomplish these purposes, I analyzed the OAIS Reference Model and the Producer-Archive Interface Methodology Abstract Standard (CCSDS Blue Book) produced by the CCSDS (Consultative Committee for Space Data Systems). As the words 'Reference Model' and 'Standard' suggest, however, these standards are not directly applicable to business practice. To solve this problem, I also analyzed foreign countries' transfer cases.
Through the analysis of theory and cases, I suggest an Electronic Records Transfer Process Model consisting of five sub-processes, 'Ingest prepare → Ingest → Validation → Preservation → Archival storage', each of which has its own transfer elements. Finally, to confirm the new process model's feasibility, I classified Korean transfers into two types, one from a Public Records center to a Public Archive and the other from a Civil Records center to a Public or Civil Archive, and applied the new Transfer Model to both types of transfer cases.
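
The five-stage model above can be pictured as a simple pipeline. The sketch below is only illustrative: the stage names follow the abstract, but the record fields and per-stage checks are hypothetical assumptions, not the paper's actual transfer elements.

```python
# Toy sketch of the five-stage transfer pipeline from the abstract:
# Ingest prepare -> Ingest -> Validation -> Preservation -> Archival storage.
# Record structure and checks are illustrative assumptions only.

def ingest_prepare(record):
    # Producer packages the record content with its context metadata (a SIP).
    record["sip"] = {"content": record["content"], "context": record["context"]}
    return record

def ingest(record):
    # Archive receives the submission package.
    record["received"] = True
    return record

def validate(record):
    # Reject transfers that arrive without context information.
    record["valid"] = bool(record["sip"]["context"])
    return record

def preserve(record):
    # Normalize valid submissions into a preservation package (an AIP).
    if record["valid"]:
        record["aip"] = dict(record["sip"], format="preservation")
    return record

def archival_storage(record):
    # Only preserved packages reach long-term storage.
    record["stored"] = record.get("aip") is not None
    return record

def transfer(record):
    for stage in (ingest_prepare, ingest, validate, preserve, archival_storage):
        record = stage(record)
    return record
```

The pipeline makes the abstract's point concrete: a record that arrives without context information fails validation and never reaches archival storage.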

Study on Governing Equations for Modeling Electrolytic Reduction Cell (전해환원 셀 모델링을 위한 지배 방정식 연구)

  • Kim, Ki-Sub;Park, Byung Heung
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.12 no.3
    • /
    • pp.245-251
    • /
    • 2014
  • Pyroprocessing for treating spent nuclear fuel has been developed based on electrochemical principles. Process simulation is an important method for process development and experimental data analysis, and it is also a necessary approach for pyroprocessing. To date, process simulation of pyroprocessing has focused on electrorefining, and there have been few investigations of electrolytic reduction. Electrolytic reduction, unlike electrorefining, involves the specific features of gas evolution and a porous electrode, so different equations must be considered when developing a model of the process. This study summarizes the concepts and equations required for electrolytic reduction model development from the thermodynamic, mass transport, and reaction kinetics theories needed to analyze an electrochemical cell. The electrolytic reduction cell was divided into sections, the equations for each section were listed, and the boundary conditions connecting the sections were indicated. These equations are expected to serve as a basis for developing a simulation model in the future and to be applied in determining parameters associated with experimental data.
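
One standard building block of the reaction-kinetics theory the abstract cites is the Butler-Volmer rate expression relating current density to electrode overpotential. A minimal numerical sketch follows; all parameter values are arbitrary illustrative choices, not data or correlations from the paper.

```python
import math

# Butler-Volmer electrode kinetics: net current density as the difference of
# anodic and cathodic exponential terms in the overpotential. Parameter
# values (j0, transfer coefficients, temperature) are illustrative only.

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol K)

def butler_volmer(eta, j0=1.0, alpha_a=0.5, alpha_c=0.5, T=773.0):
    """Net current density j (A/m^2) for overpotential eta (V)."""
    f = F / (R * T)
    return j0 * (math.exp(alpha_a * f * eta) - math.exp(-alpha_c * f * eta))
```

At zero overpotential the anodic and cathodic terms cancel and the net current vanishes; with symmetric transfer coefficients the curve is antisymmetric about the equilibrium potential.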

Evaluation of the Thermal Margin in a KOFA-Loaded Core by a Multichannel Analysis Methodology (다수로해석 방법론에 의한 국산핵연료 노심 열적 여유도 평가)

  • D. H. Hwang;Y. J. Yoo;Park, J. R.;Kim, Y. J.
    • Nuclear Engineering and Technology
    • /
    • v.27 no.4
    • /
    • pp.518-531
    • /
    • 1995
  • A study has been performed to investigate the thermal margin increase obtained by replacing the single-channel analysis model with a multichannel analysis model. A new critical heat flux (CHF) correlation, applicable to a 17×17 Korean Fuel Assembly (KOFA)-loaded core, was developed on the basis of the local conditions predicted by the subchannel analysis code TORC. The hot subchannel analysis was carried out using a one-stage analysis methodology with a prescribed nodal layout of the core. The analysis showed that more than 5% of the thermal margin can be recovered by introducing the TORC/KRB-1 system (multichannel analysis model) instead of the PUMA/ERB-2 system (single-channel analysis model). The thermal margin increase was attributed not only to the local thermal-hydraulic conditions in the hot subchannel predicted by the code, but also to the characteristics of the CHF correlation.
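
Thermal margin in analyses of this kind is commonly book-kept through the departure-from-nucleate-boiling ratio (DNBR), the ratio of correlation-predicted CHF to the local heat flux. The generic sketch below shows that bookkeeping only; it does not reproduce the paper's KRB-1 correlation, and all numbers are illustrative.

```python
# Generic DNBR bookkeeping: DNBR = q''_CHF / q''_local, with the minimum
# over the channel governing the margin. Illustrative values only; the
# paper's KRB-1 CHF correlation is not reproduced here.

def dnbr(q_chf, q_local):
    """Ratio of predicted critical heat flux to local heat flux."""
    return q_chf / q_local

def min_dnbr(chf_profile, flux_profile):
    """Minimum DNBR along axially sampled positions of the hot channel."""
    return min(dnbr(c, q) for c, q in zip(chf_profile, flux_profile))

def margin_gain(dnbr_old, dnbr_new):
    """Relative thermal-margin change when switching methodologies."""
    return (dnbr_new - dnbr_old) / dnbr_old
```

A multichannel methodology that raises the minimum DNBR at the same operating conditions recovers margin in exactly this relative sense.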


Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.105-129
    • /
    • 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used the default event itself as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This resolved the data imbalance problem caused by the scarcity of default events, which had been pointed out as a limitation of the existing methodology, and captured the differences in default risk that exist even among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risk of unlisted companies without stock price information can also be derived appropriately. The approach can thus provide stable default risk assessment services to unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although corporate default risk prediction using machine learning has recently been studied actively, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for calculating default risk, given that an entity's default risk information is very widely used in the market and sensitivity to differences in default risk is high; strict standards are likewise required for the calculation methods.
The credit rating method stipulated by the Financial Services Commission in the Financial Investment Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduced individual models' bias by utilizing stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and various corporate information while retaining the advantages of machine learning-based default risk prediction models, which take less time to calculate. To produce the forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces and the sub-models were trained on the divided sets. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, the best-performing single model. Next, to check for statistically significant differences between the stacking ensemble model and each individual model, pairs of their forecasts were constructed. Because the Shapiro-Wilk test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two sets of forecasts in each pair differed significantly. The analysis showed that the forecasts of the stacking ensemble model differed significantly from those of the MLP and CNN models.
In addition, this study provides a methodology that allows existing credit rating agencies to apply machine learning-based default risk prediction, given that traditional credit rating models can also serve as sub-models in calculating the final default probability. The stacking ensemble techniques proposed here can also help design models that meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope this research will be used as a resource to increase practical adoption by overcoming the limitations of existing machine learning-based models.
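
The core of the stacking workflow described above — splitting the training data into folds and letting each sub-model's out-of-fold forecasts become features for a meta-model — can be sketched in a few lines. The "models" below are toy threshold rules standing in for the paper's Random Forest, MLP, and CNN sub-models; everything here is illustrative.

```python
# Out-of-fold prediction scheme for stacking, mirroring the abstract's
# seven-way split: each sub-model is trained on six parts and predicts the
# held-out seventh, so every training row gets a leak-free forecast. The
# meta-model would then be trained on these out-of-fold forecasts.

def kfold_indices(n, k=7):
    """Split indices 0..n-1 into k contiguous folds of near-equal size."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def oof_predictions(X, y, fit, predict, k=7):
    """Out-of-fold predictions for one sub-model (fit/predict are callables)."""
    preds = [0.0] * len(X)
    for fold in kfold_indices(len(X), k):
        held = set(fold)
        train = [i for i in range(len(X)) if i not in held]
        model = fit([X[i] for i in train], [y[i] for i in train])
        for i in fold:
            preds[i] = predict(model, X[i])
    return preds
```

With one such column of forecasts per sub-model, the meta-learner sees only predictions made on data each sub-model never trained on, which is what keeps the ensemble's bias reduction honest.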

Estimation of Atmospheric Deposition Velocities and Fluxes from Weather and Ambient Pollutant Concentration Conditions : Part I. Application of multi-layer dry deposition model to measurements at north central Florida site

  • Park, Jong-Kil;Eric R. Allen
    • Environmental Sciences Bulletin of The Korean Environmental Sciences Society
    • /
    • v.4 no.1
    • /
    • pp.31-42
    • /
    • 2000
  • The dry deposition velocities and fluxes of air pollutants such as SO2(g), O3(g), HNO3(g), sub-micron particulates, NO3-(s), and SO42-(s) were estimated according to local meteorological elements in the atmospheric boundary layer. The model used for these calculations was the multiple-layer resistance model developed by Hicks et al.1) The meteorological data were recorded on an hourly basis from July 1990 to June 1991 at the Austin Cary forest site near Gainesville, FL. Weekly integrated samples of ambient dry deposition species were collected at the site using triple-filter packs. For the study period, the annual average dry deposition velocities at this site were estimated as 0.87±0.07 cm/s for SO2(g), 0.65±0.11 cm/s for O3(g), 1.20±0.14 cm/s for HNO3(g), 0.0045±0.0006 cm/s for sub-micron particulates, and 0.089±0.014 cm/s for NO3-(s) and SO42-(s). The trends observed in the daily mean deposition velocities were largely seasonal, with larger deposition velocities in summer and smaller ones in winter. The monthly and weekly averaged deposition velocities did not differ greatly over the year, yet showed the same tendency of increased values in summer and decreased values in winter. The annual mean concentrations of the air pollutants obtained by the triple-filter pack every 7 days were 3.63±1.92 μg/m³ for SO42-, 2.00±1.22 μg/m³ for SO2, 1.30±0.59 μg/m³ for HNO3, and 0.704±0.419 μg/m³ for NO3-, respectively. The air pollutant with the largest deposition flux was SO2, followed by HNO3, SO42-(s), and NO3-(s) in order of magnitude. The sulfur dioxide and NO3- deposition fluxes were higher in winter than in summer, whereas the nitric acid and sulfate deposition fluxes were high during spring and summer.
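
The resistance analogy underlying multi-layer dry deposition models of this type treats transport as three resistances in series; the deposition velocity is their reciprocal sum and the flux is velocity times ambient concentration. The numbers in the sketch below are illustrative, not the paper's measurements.

```python
# Series-resistance form used by multi-layer dry deposition models:
# v_d = 1 / (Ra + Rb + Rc), where Ra is the aerodynamic resistance, Rb the
# quasi-laminar boundary-layer resistance, and Rc the canopy/surface
# resistance. Flux = v_d * concentration. Values below are illustrative.

def deposition_velocity(ra, rb, rc):
    """Deposition velocity v_d in m/s from resistances in s/m."""
    return 1.0 / (ra + rb + rc)

def deposition_flux(v_d, concentration):
    """Flux in ug/(m^2 s) from v_d in m/s and concentration in ug/m^3."""
    return v_d * concentration
```

The seasonal pattern in the abstract follows naturally from this form: stronger summer turbulence lowers Ra, raising v_d and hence the flux for a given concentration.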


Management of Knowledge Abstraction Hierarchy (지식 추상화 계층의 구축과 관리)

  • 허순영;문개현
    • Journal of the Korean Operations Research and Management Science Society
    • /
    • v.23 no.2
    • /
    • pp.131-156
    • /
    • 1998
  • Cooperative query answering is a research effort to develop a fault-tolerant and intelligent database system using a semantic knowledge base constructed from the underlying database. Such a knowledge base has two aspects of usage. One is supporting the cooperative query answering process, which provides both an exact answer and neighborhood information relevant to a query. The other is supporting ongoing maintenance of the knowledge base to accommodate changes in its content and in database usage purposes. Existing studies have mostly focused on the cooperative query answering process but paid little attention to dynamic knowledge base maintenance. This paper proposes a multi-level knowledge representation framework called the Knowledge Abstraction Hierarchy (KAH) that can not only support cooperative query answering but also permit dynamic knowledge maintenance. The KAH consists of two types of knowledge abstraction hierarchies. The value abstraction hierarchy is constructed from abstract values hierarchically derived from specific data values in the underlying database on the basis of generalization and specialization relationships. The domain abstraction hierarchy is built on the various domains of the data values and incorporates the classification relationship between super-domains and sub-domains. On the basis of the KAH, a knowledge abstraction database is constructed on the relational data model; it accommodates diverse knowledge maintenance needs and flexibly facilitates cooperative query answering. In terms of knowledge maintenance, database operations are discussed for cases where either the internal contents of a given KAH change or the structure of the KAH itself changes. In terms of cooperative query answering, database operations are discussed for both the generalization and specialization processes, as well as conceptual query handling.
A prototype system has been implemented at KAIST that demonstrates the usefulness of KAH in ordinary database application systems.
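
The generalization/specialization mechanics of a value abstraction hierarchy can be shown with a toy example. The hierarchy content below is invented for illustration and is not from the paper; the point is how moving up a level yields the "neighborhood information" of a cooperative answer.

```python
# Toy value-abstraction hierarchy in the spirit of the KAH: each specific
# value maps to its abstract parent. Queries can be generalized (move up),
# specialized (move down), or relaxed to siblings for cooperative answers.
# The geographic content here is an invented example.

PARENT = {
    "Seoul": "Korea", "Busan": "Korea", "Tokyo": "Japan",
    "Korea": "East Asia", "Japan": "East Asia",
}

def generalize(value):
    """One step up the hierarchy (None at the root)."""
    return PARENT.get(value)

def specialize(abstract_value):
    """All values one step below the given abstract value."""
    return [v for v, p in PARENT.items() if p == abstract_value]

def neighbors(value):
    """Sibling values under the same abstract value (cooperative answer)."""
    parent = generalize(value)
    return [v for v in specialize(parent) if v != value]
```

A query on "Seoul" that finds no exact match can thus be relaxed to its siblings under "Korea", which is the neighborhood information a cooperative query answering system returns alongside, or instead of, the exact answer.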


A Case Study on the Application of Systems Engineering to the Development of PHWR Core Management Support System (시스템엔지니어링 기법을 적용한 가압중수로 노심관리 지원시스템 개발 사례)

  • Yeom, Choong Sub;Kim, Jin Il;Song, Young Man
    • Journal of the Korean Society of Systems Engineering
    • /
    • v.9 no.1
    • /
    • pp.33-45
    • /
    • 2013
  • A systems engineering approach was applied to the development of an operator-support core management system, based on on-site operation experience and core management procedure documents, to enhance operability and safety in PHWR (Pressurized Heavy Water Reactor) operation. The description and definition of the system were given on the basis of investigating and analyzing the core management procedures. Fuel management, detector calibration, safety management, core power distribution monitoring, and integrated data management were defined as the main user requirements. From these, 11 upper-level functional requirements were extracted by considering on-site operation experience and the core management procedure documents. Detailed system requirements, produced by analyzing the upper-level functional requirements, were identified by interviewing the members responsible for the core management procedures and were written into an SRS (Software Requirement Specification) document using the IEEE 830 template. The system was designed on the basis of the SRS and of analysis in terms of nuclear engineering, and then tested by simulation using on-site data as an example. A model of core power monitoring related to core management was suggested, along with a standard process for core management. The extraction, analysis, and documentation of the requirements are presented as a systems engineering case study.

Construction of Information Packages for the Operational Efficiency of Dark Archives (다크 아카이브 운영 효율화를 위한 정보패키지 구축)

  • Park, Hyoeun;Lee, Seungmin
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.54 no.4
    • /
    • pp.261-281
    • /
    • 2020
  • The importance of long-term preservation of various types of electronic records through dark archives is gradually increasing. However, current dark archives do not have an information package structure optimized for the long-term preservation of electronic records. To address this problem, this research proposes four element categories, reorganizing the OAIS reference model information package around the core processes of dark archiving. The detailed descriptive items of each category consist of a total of 4 upper-level elements and 27 sub-elements based on the OAIS reference model, ISO 23081, the Records Management Metadata Standard, ISAD(G), ISAAR(CPF), ISDF, and ISDIAH. This structure can be used as a basis for constructing an information package optimized for dark archiving and is expected to support the long-term preservation of electronic records more efficiently.

A Study on the Design of Controller for Speed Control of the Induction Motor in the Train Propulsion System-2 (열차추진시스템에서 유도전동기의 속도제어를 위한 제어기 설계에 대한 연구-2)

  • Lee, Jung-Ho;Kim, Min-Seok;Lee, Jong-Woo
    • Journal of the Korean Society for Railway
    • /
    • v.13 no.2
    • /
    • pp.166-172
    • /
    • 2010
  • Currently, vector control is used for the speed control of trains because high-performance induction motors are installed in electric railroad systems. Control of the induction motor is possible through various methods thanks to developments in inverters and control theory. Rolling stock using induction motors can also brake the train by means of the AC motor. Therefore, models of the motor block and the induction motor are needed to accommodate these various methods. One control method for the induction motor is Variable Voltage Variable Frequency (VVVF), in which both torque and speed are controlled. The propulsion system model in an electric railroad has many sub-systems, so analyzing the speed control performance is very complex. In this paper, simulation models of the speed control characteristics are developed using Matlab/Simulink. On the basis of these models, the response to a disturbance input on the load is analyzed. Current, speed, and flux control models are also proposed for analyzing the speed control characteristics of the train propulsion system.
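
The open-loop core of VVVF control is the constant Volts-per-Hertz law: stator voltage is scaled with the frequency command to hold the flux roughly constant. The sketch below shows only that law; the ratings are illustrative assumptions, and the paper's slip compensation, vector control, and flux/current loops are omitted.

```python
# Minimal constant V/f (Volts-per-Hertz) command law, the open-loop core of
# VVVF induction-motor control. Ratings and the low-speed voltage boost are
# illustrative; the paper's full control loops are not modeled here.

def vf_command(f_cmd, v_rated=380.0, f_rated=60.0, v_boost=10.0):
    """Stator voltage command (V) for a given frequency command (Hz)."""
    if f_cmd >= f_rated:
        return v_rated  # above base speed: voltage capped (field weakening)
    # Linear V/f ramp with a small boost to cover stator resistance drop.
    return v_boost + (v_rated - v_boost) * f_cmd / f_rated

def synchronous_speed_rpm(f, poles=4):
    """Synchronous speed n_s = 120 f / p of the rotating field."""
    return 120.0 * f / poles
```

Holding V/f constant keeps the air-gap flux near its rated value below base speed, which is why torque capability is roughly flat in that region and falls off once the voltage ceiling is reached.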

Non-linear Maneuvering Target Tracking Method Using PIP (PIP 개념을 이용한 비선형 기동 표적 추적 기법)

  • Son, Hyun-Seung;Park, Jin-Bae;Joo, Young-Hoon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.1
    • /
    • pp.136-142
    • /
    • 2007
  • This paper proposes a new approach to nonlinear maneuvering target tracking. The proposed algorithm is a Kalman filter based on an adaptive interacting multiple model using the concept of the predicted impact point (PIP), employing a modified Kalman filter for the error between the measured position and the predicted impact point. The unknown target acceleration is treated as additional process noise in the target model, and each sub-model is characterized by the variance of the overall process noise obtained from its acceleration interval. To compensate for the degraded performance of the Kalman filter during nonlinear maneuvers, a selection algorithm is constructed that switches between the proposed method and the standard Kalman filter. To estimate the acceleration effectively during target maneuvering, a rapid increase in the noise scale is recognized as acceleration and used in the maneuvering target's equation of motion. A few examples are presented to demonstrate the feasibility of the suggested algorithm.
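
The adaptive-noise idea — treating a large innovation as evidence of a maneuver and inflating the process noise accordingly — can be illustrated with a scalar Kalman filter. The state layout, the 3-sigma threshold, and all gains below are simplified assumptions for illustration, not the paper's IMM/PIP formulation.

```python
# 1-D constant-velocity Kalman step with a crude adaptive process-noise
# rule: when the innovation exceeds ~3 sigma of its predicted spread, the
# process noise is inflated, loosely mirroring the idea of absorbing
# unknown target acceleration as extra process noise. Illustrative only;
# P tracks position variance alone in this simplified scalar sketch.

def kalman_step(x, v, P, z, dt=1.0, r=1.0, q_base=0.01, q_boost=1.0):
    # Predict position from the constant-velocity model.
    x_pred = x + v * dt
    P_pred = P + q_base
    # Innovation against the new measurement z.
    innov = z - x_pred
    S = P_pred + r
    if innov * innov > 9.0 * S:  # ~3-sigma test: treat as a maneuver
        P_pred += q_boost        # inflate process noise, trust z more
        S = P_pred + r
    # Standard update.
    K = P_pred / S
    x_new = x_pred + K * innov
    v_new = v + K * innov / dt
    P_new = (1.0 - K) * P_pred
    return x_new, v_new, P_new
```

On straight-line motion the filter behaves conventionally; when a measurement jumps far from the prediction, the inflated covariance raises the gain so the estimate snaps toward the maneuvering target instead of lagging behind it.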