• Title/Summary/Keyword: Model Repository

Search results: 299

Equilibrium Concentration of Radionuclides in Cement/Groundwater/Carbon Steel System

  • Keum, D.K.; Cho, W.J.; Hahn, P.S.
    • Nuclear Engineering and Technology, v.29 no.2, pp.127-137, 1997
  • Equilibrium concentrations of major elements in an underground repository with a capacity of 100,000 drums have been simulated using the geochemical computer code EQMOD. The simulations were carried out at pH 12 to 13.5 and at Eh values of 520 and -520 mV. The solubilities of magnesium and calcium decrease as pH increases. The solubility of iron increases with pH in the reducing environment (Eh = -520 mV), whereas in the oxidizing environment (Eh = 520 mV) iron exists almost entirely as the precipitate Fe(OH)3(s). Cobalt and nickel are predicted to be entirely dissolved in the liquid phase regardless of pH, since their solubility limits exceed their total concentrations. Cesium and strontium are likewise present entirely in the liquid phase because they sorb negligibly on cement and are highly soluble under disposal conditions, so their total concentrations determine their equilibrium concentrations. The adsorbed amounts of iodide and carbonate depend on the adsorption capacity and the adsorption equilibrium constant; in particular, calcite turns out to be the solubility-limiting phase for the carbonate system. To validate the model, equilibrium concentrations measured for a number of systems consisting of iron, cement, synthetic groundwater, and radionuclides were compared with the model predictions. For the nonadsorptive elements cesium, strontium, cobalt, nickel, and iron, the predicted and measured concentrations agree well, indicating that the assumptions and thermodynamic data used in this work are valid. Using the adsorption equilibrium constant as a free parameter, the experimental data for iodide and carbonate were fitted to the model; the model agrees well with the experimental iodide data.

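The abstract above repeatedly applies one simple rule: an element stays fully dissolved when its total inventory is below its solubility limit, and otherwise the dissolved concentration is capped at that limit. The sketch below illustrates only this bookkeeping step; it is not EQMOD, and the element names, inventories, and limits are illustrative placeholders.

```python
# Minimal sketch of the solubility-limit rule discussed in the abstract.
# Not EQMOD; element names, inventories, and limits are placeholders.

def partition(total_conc: float, solubility_limit: float) -> tuple[float, float]:
    """Split a total concentration (mol/L) into dissolved and precipitated parts."""
    dissolved = min(total_conc, solubility_limit)
    precipitated = total_conc - dissolved
    return dissolved, precipitated

# Hypothetical inventories and solubility limits (mol/L) at a given pH/Eh.
elements = {
    "Co": (1.0e-7, 5.0e-6),   # total below limit -> fully dissolved
    "Ni": (2.0e-7, 1.0e-5),
    "Fe": (1.0e-4, 1.0e-9),   # oxidizing case: Fe(OH)3(s) limits the solution
}

for name, (total, limit) in elements.items():
    dissolved, solid = partition(total, limit)
    print(f"{name}: dissolved = {dissolved:.2e} mol/L, precipitated = {solid:.2e} mol/L")
```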

Conceptual Geochemical Modelling of Long-term Hyperalkaline Groundwater and Rock Interaction (지구화학 모델을 이용한 장기간의 강알칼리성 지하수-암석의 반응 개념 모델링)

  • Choi, Byoung-Young; Yoo, Si-Won; Chang, Kwang-Soo; Kim, Geon-Young; Koh, Yong-Kwon; Choi, Jong-Won
    • Journal of Nuclear Fuel Cycle and Waste Technology (JNFCWT), v.5 no.4, pp.273-281, 2007
  • The formation of hyperalkaline groundwater from groundwater-cement interaction and its subsequent reaction with the bedrock of a nuclear waste repository were simulated by geochemical modeling. The groundwater-cement reaction yielded a water pH of 13.3, with Brucite, Katoite, Calcium Silicate Hydrate (CSH1.1), Ettringite, Hematite, and Portlandite as the precipitated minerals. Interaction between these minerals and groundwater sampled in the Gyeongju area raised the groundwater pH to 12.4. Interaction between this hyperalkaline groundwater and granite was then simulated with a kinetic model over 10^3 years. The final groundwater pH reached 11.2, and the pH evolution was controlled by the dissolution/precipitation of silicate and CSH minerals; the groundwater composition was likewise determined by the dissolution/precipitation of silicate, CSH, and oxide minerals. These results show that geochemical modeling of long-term hyperalkaline groundwater-rock interaction can contribute to the safety assessment of the engineered barrier by predicting geochemical conditions at the repository site.

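Kinetic water-rock simulations of the kind described above are typically driven by a rate law of the transition-state-theory form, rate = k*A*(1 - Q/K). The sketch below integrates such a rate law with forward Euler under assumed values of k, A, and K; it is not the authors' model, and the toy mineral and its activity-product proxy are hypothetical.

```python
# Generic kinetic dissolution sketch (transition-state-theory style rate law).
# NOT the authors' model; k, area, K_eq, and Q_of_moles are illustrative only.

def dissolve(moles0, k, area, K_eq, Q_of_moles, years, steps=10_000):
    """Integrate d(moles)/dt = -k * area * (1 - Q/K_eq) with forward Euler."""
    dt = years * 365.25 * 86400.0 / steps          # seconds per step
    moles = moles0
    for _ in range(steps):
        Q = Q_of_moles(moles)                      # toy ion-activity-product proxy
        rate = k * area * (1.0 - Q / K_eq)         # mol/s; slows as Q approaches K_eq
        moles = max(moles - rate * dt, 0.0)
    return moles

# Toy mineral: the activity product grows as more of the mineral has dissolved.
remaining = dissolve(moles0=10.0, k=1e-13, area=1.0e3, K_eq=1e-3,
                     Q_of_moles=lambda m: 1e-3 * (10.0 - m) / 10.0, years=1000)
print(f"moles remaining after 1000 years: {remaining:.2f}")
```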

A Prediction of Specific Heat Capacity for Compacted Bentonite Buffer (압축 벤토나이트 완충재의 비열 추정)

  • Yoon, Seok; Kim, Geon-Young; Baik, Min-Hoon
    • Journal of Nuclear Fuel Cycle and Waste Technology (JNFCWT), v.15 no.3, pp.199-206, 2017
  • A geological repository for the disposal of high-level radioactive waste is generally constructed in host rock at depths of 500~1,000 m below the ground surface. A geological repository system consists of a disposal canister packed with spent fuel, buffer material, backfill material, and the intact rock. The buffer is indispensable for ensuring the safety of high-level radioactive waste disposal: it restrains the release of radionuclides and protects the canister from the inflow of groundwater. Since the heat generated in a disposal canister is released to the surrounding buffer material, the thermal properties of the buffer are very important in determining the overall safety of the disposal system. Although there have been many studies on thermal conductivity, only a few studies have investigated the specific heat capacity of the bentonite buffer. This paper therefore presents a model for predicting the specific heat capacity of compacted Gyeongju bentonite buffer material, a Ca-bentonite produced in Korea. The specific heat capacity of the compacted bentonite buffer was measured with a dual probe method for various degrees of saturation and dry densities, and a regression model for predicting the specific heat capacity was proposed and fitted using 33 data sets obtained by the dual probe method.
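
The paper's regression equation is not reproduced in the abstract, so the sketch below only shows the general shape of such a fit: ordinary least squares for a specific heat capacity that depends on degree of saturation S and dry density rho_d. The linear form and the data points are assumptions for illustration, not the paper's model or measurements.

```python
# Hedged sketch of fitting a regression for specific heat capacity c_p as a
# function of degree of saturation S and dry density rho_d.  The linear form
# and the data rows below are placeholders, not the paper's 33 measurements.
import numpy as np

# Placeholder measurements: (S [-], rho_d [g/cm^3], c_p [J/(g.K)])
data = np.array([
    [0.2, 1.4, 1.05],
    [0.4, 1.4, 1.20],
    [0.6, 1.5, 1.32],
    [0.8, 1.5, 1.45],
    [1.0, 1.6, 1.55],
])
S, rho_d, cp = data[:, 0], data[:, 1], data[:, 2]

# Design matrix for c_p = a + b*S + c*rho_d
X = np.column_stack([np.ones_like(S), S, rho_d])
coef, *_ = np.linalg.lstsq(X, cp, rcond=None)
a, b, c = coef
print(f"c_p ~ {a:.3f} + {b:.3f}*S + {c:.3f}*rho_d")
print("prediction at S=0.5, rho_d=1.5:", a + b * 0.5 + c * 1.5)
```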

Review of Thermodynamic Sorption Model for Radionuclides on Bentonite Clay (벤토나이트와 방사성 핵종의 열역학적 수착 모델 연구)

  • Jeonghwan Hwang; Jung-Woo Kim; Weon Shik Han; Won Woo Yoon; Jiyong Lee; Seonggyu Choi
    • Economic and Environmental Geology, v.56 no.5, pp.515-532, 2023
  • Bentonite, which consists predominantly of expandable clay minerals, is considered a suitable buffer material for a high-level radioactive waste repository owing to its high swelling capacity and low permeability. Bentonite also has a large cation exchange capacity and specific surface area, and thus effectively retards the transport of leaked radionuclides to the surrounding environment. This study reviews thermodynamic sorption models for four radionuclides (U, Am, Se, and Eu) and eight bentonites, and analyzes the models and their optimized sorption parameters in light of the experimental conditions of the previous studies. The optimized sorption parameters show that the thermodynamic sorption models depend on experimental conditions such as the type and concentration of the radionuclide, ionic strength, major competing cations, temperature, solid-to-liquid ratio, carbonate species, and the mineralogical properties of the bentonite. These results imply that sorption models optimized under specific experimental conditions carry large uncertainty when applied to different environmental conditions.
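
The reviewed papers use full thermodynamic (ion-exchange and surface-complexation) sorption models, which are too involved to sketch here. The snippet below shows only the simpler bookkeeping that connects one of the cited experimental conditions, the solid-to-liquid ratio, to a sorbed fraction through a single distribution coefficient Kd; the Kd value and batch conditions are hypothetical.

```python
# Simple batch-sorption bookkeeping, shown only to illustrate how the
# solid-to-liquid ratio enters a sorption calculation.  The reviewed models are
# thermodynamic (ion exchange / surface complexation), not a single Kd.

def sorbed_fraction(kd_mL_per_g: float, solid_g: float, liquid_mL: float) -> float:
    """Fraction of a radionuclide held on the solid at equilibrium for a given Kd."""
    ratio = solid_g / liquid_mL                    # solid-to-liquid ratio (g/mL)
    return kd_mL_per_g * ratio / (1.0 + kd_mL_per_g * ratio)

# Hypothetical values: Kd = 500 mL/g, 1 g bentonite in 50 mL solution.
f = sorbed_fraction(500.0, 1.0, 50.0)
print(f"sorbed fraction: {f:.3f}")                 # ~0.909
```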

A Hybrid SVM Classifier for Imbalanced Data Sets (불균형 데이터 집합의 분류를 위한 하이브리드 SVM 모델)

  • Lee, Jae Sik; Kwon, Jong Gu
    • Journal of Intelligence and Information Systems, v.19 no.2, pp.125-140, 2013
  • A data set in which the records of one class far outnumber those of the other class is called an 'imbalanced data set'. Most classification techniques perform poorly on imbalanced data sets. When evaluating a classification technique, we need to measure not only 'accuracy' but also 'sensitivity' and 'specificity'. In a customer churn prediction problem, 'retention' records form the majority class and 'churn' records the minority class. Sensitivity measures the proportion of actual retentions correctly identified as such; specificity measures the proportion of churns correctly identified as such. The poor performance of classification techniques on imbalanced data sets is due to low specificity. Many previous studies on imbalanced data sets employed 'oversampling', in which members of the minority class are sampled more heavily than those of the majority class to produce a relatively balanced data set. When a classification model is built on this oversampled balanced data set, specificity improves but sensitivity decreases. In this research, we developed a hybrid model of a support vector machine (SVM), an artificial neural network (ANN), and a decision tree that improves specificity while maintaining sensitivity; we call it the 'hybrid SVM model'. Construction and prediction proceed as follows. A balanced data set is prepared by oversampling the original imbalanced data set. SVM_I and ANN_I models are trained on the imbalanced data set, and an SVM_B model is trained on the balanced data set; SVM_I is superior in sensitivity and SVM_B in specificity. For a record on which SVM_I and SVM_B make the same prediction, that prediction becomes the final solution. If they make different predictions, the final solution is determined by discrimination rules obtained from the ANN and a decision tree: for such records, a decision tree is built using the ANN_I output value as input and the actual retention or churn label as target. We obtained the following two discrimination rules: 'IF ANN_I output value < 0.285, THEN Final Solution = Retention' and 'IF ANN_I output value >= 0.285, THEN Final Solution = Churn.' The threshold 0.285 is the value optimized for the data used in this research; what we present is the structure of the hybrid SVM model, not a specific threshold value, so the threshold in the discrimination rules can be set to whatever value suits the data at hand. To evaluate the hybrid SVM model, we used the 'churn data set' from the UCI Machine Learning Repository, which consists of 85% retention customers and 15% churn customers. The accuracy of the hybrid SVM model is 91.08%, better than that of either SVM_I or SVM_B. The points worth noting are its sensitivity of 95.02% and specificity of 69.24%: the sensitivity of SVM_I is 94.65% and the specificity of SVM_B is 67.00%, so the hybrid SVM model improves the specificity of SVM_B while maintaining the sensitivity of SVM_I.
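
The construction described in this abstract maps fairly directly onto code. The sketch below follows that recipe with scikit-learn on synthetic data: SVM_I and ANN_I trained on the imbalanced set, SVM_B on an oversampled balanced set, agreement between the two SVMs taken as the final answer, and the ANN_I output thresholded otherwise. The decision-tree rule extraction is simplified to a fixed cut-off, and 0.285 is the paper's fitted value for its own data, not a general constant.

```python
# Sketch of the hybrid SVM scheme on synthetic data; not the paper's code or data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.utils import resample

# Imbalanced toy data: class 1 ("churn") is the minority class.
X, y = make_classification(n_samples=2000, weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Balanced training set: oversample the minority class up to the majority size.
n_maj = int((y_tr == 0).sum())
X_bal = np.vstack([X_tr[y_tr == 0],
                   resample(X_tr[y_tr == 1], n_samples=n_maj, random_state=0)])
y_bal = np.concatenate([np.zeros(n_maj, dtype=int), np.ones(n_maj, dtype=int)])

svm_i = SVC().fit(X_tr, y_tr)            # SVM_I: trained on imbalanced data
svm_b = SVC().fit(X_bal, y_bal)          # SVM_B: trained on balanced data
ann_i = MLPClassifier(max_iter=1000, random_state=0).fit(X_tr, y_tr)   # ANN_I

p_i, p_b = svm_i.predict(X_te), svm_b.predict(X_te)
ann_out = ann_i.predict_proba(X_te)[:, 1]          # ANN_I output value

THRESHOLD = 0.285   # the paper's fitted cut-off; data-dependent in general
final = np.where(p_i == p_b, p_i, (ann_out >= THRESHOLD).astype(int))
print("accuracy:", (final == y_te).mean())
```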

Modeling Study on Nuclide Transport in Ocean - an Ocean Compartment Model (해양에서의 핵종이동 모델링 - 해양구획 모델)

  • Lee, Youn-Myoung; Suh, Kyung-Suk; Han, Kyong-Won
    • Nuclear Engineering and Technology, v.23 no.4, pp.387-400, 1991
  • An ocean compartment model is developed that simulates nuclide transport by advection due to ocean circulation and by interaction with suspended sediments, with which concentration breakthrough curves of nuclides can be calculated as a function of time. By dividing the ocean into an arbitrary number of characteristic compartments and performing a nuclide mass balance over each compartment, the governing equations for the ocean concentrations are obtained and solved by numerical integration; the integration method is especially useful for general stiff systems. For the transfer coefficients describing advective transport between adjacent compartments by ocean circulation, the ocean turnover time is calculated with a two-dimensional numerical ocean model. To exemplify the compartment model, a reference-case calculation of the breakthrough curves of three nuclides in low-level radioactive waste (Tc-99, Cs-137, and Pu-238) released from a hypothetical repository under the seabed is carried out with five ocean compartments. Sensitivity analyses of selected parameters are also performed, which indicate that the ocean turnover time and the water volumes of the compartments have an important effect on the breakthrough curves.

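A compartment mass balance of the kind described above has the generic form dC_i/dt = sum_j k_ji*C_j - (sum_j k_ij + lambda)*C_i + S_i. The sketch below integrates such a system for a chain of five compartments with a stiff-capable solver; the transfer coefficients, the release term, and the treatment of compartment volumes (folded into the coefficients) are illustrative assumptions, not the paper's reference case.

```python
# Hedged sketch of an ocean compartment mass balance solved as an ODE system.
# Transfer coefficients and the source term are placeholders, not the paper's.
import numpy as np
from scipy.integrate import solve_ivp

n = 5                                     # five compartments, as in the reference case
k = np.zeros((n, n))                      # k[i, j]: transfer coefficient from i to j (1/yr)
for i in range(n - 1):                    # simple chain of adjacent compartments
    k[i, i + 1] = 0.5
    k[i + 1, i] = 0.1

lam = np.log(2) / 30.17                   # Cs-137 decay constant (1/yr)
source = np.zeros(n)
source[0] = 1.0                           # unit release rate into compartment 1

def rhs(t, c):
    inflow = k.T @ c                      # what each compartment receives
    outflow = k.sum(axis=1) * c           # what each compartment exports
    return inflow - outflow - lam * c + source

sol = solve_ivp(rhs, (0.0, 200.0), np.zeros(n), method="LSODA")
print("concentrations at t = 200 yr:", sol.y[:, -1])
```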

Development of Telecommunication Network Management Agents using Farmer Model on Distributed System (분산 시스템 상에서 Farmer Model을 이용한 통신망 관리 에이전트 개발)

  • Lee, Gwang-Hyeong; Park, Su-Hyeon
    • The Transactions of the Korea Information Processing Society, v.6 no.9, pp.2493-2503, 1999
  • The TMN, which has emerged to operate various communication networks in a unified and efficient way, is developed under heterogeneous platform environments with different hardware and operating systems. One of the main problems is that every agent of the TMN system must duplicate and maintain software and data blocks that perform identical functions, so multi-platform support cannot be achieved in TMN agent development. To overcome these problems, the Farming methodology, based on the Farmer model, has been suggested. With the Farming methodology, the software and data components duplicated and stored in each distributed object are converted into platform-independent componentware and saved in a platform-independent class repository (PICR), so that the components essential for execution can be loaded and used statically or dynamically from the PICR as described in the framework of each distributed object. The distributed TMN agent of a personal communication network is designed and developed using the Farmer model.

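As a loose illustrative analogue of the PICR idea, loading componentware on demand from a shared repository, the sketch below uses Python's importlib. The original work is not a Python system, and the package and attribute names here are hypothetical.

```python
# Illustrative analogue only: dynamic loading of a named component from a
# repository package.  "picr", "performance_monitor", and "Component" are
# hypothetical names, not part of the Farmer model implementation.
import importlib

def load_component(repository_pkg: str, component: str):
    """Dynamically import `component` from the repository package and return it."""
    module = importlib.import_module(f"{repository_pkg}.{component}")
    return getattr(module, "Component", module)   # fall back to the module itself

# Example usage (assumes a local package layout such as picr/performance_monitor.py):
# monitor_cls = load_component("picr", "performance_monitor")
# monitor = monitor_cls()
```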

Semantic Clustering Model for Analytical Classification of Documents in Cloud Environment (클라우드 환경에서 문서의 유형 분류를 위한 시맨틱 클러스터링 모델)

  • Kim, Young Soo; Lee, Byoung Yup
    • The Journal of the Korea Contents Association, v.17 no.11, pp.389-397, 2017
  • Recently, semantic web documents have been produced and added to repositories in cloud computing environments, which calls for an intelligent semantic agent for the analytical classification of documents and for information retrieval. Traditional information retrieval methods use keyword queries and deliver the document list returned by the search. Users carry a heavy workload in examining the contents because such methods provide little semantic similarity information. To solve these problems, we propose a semantic clustering model based on keyword frequency and concept matching, using Hadoop and NoSQL, to improve the accuracy of similarity-based classification. Implementing the proposed technique in a cloud computing environment makes it possible to classify and discover similar documents with improved classification accuracy. The proposed model is expected to be used in constructing semantic web retrieval systems that retrieve relevant documents more flexibly.
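
The sketch below shows only the keyword-frequency half of the idea on a single machine: TF-IDF vectors clustered with k-means. The concept-matching step and the Hadoop/NoSQL infrastructure described in the abstract are omitted, and the sample documents and cluster count are placeholders.

```python
# Single-machine sketch of keyword-frequency document clustering; the proposed
# model's concept matching, Hadoop, and NoSQL components are not shown here.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "radioactive waste repository safety assessment",
    "groundwater geochemistry of a disposal site",
    "support vector machines for churn prediction",
    "imbalanced data classification with neural networks",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(tfidf)
print(dict(zip(docs, labels)))          # cluster label assigned to each document
```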

A Study on a Model Collection Development Policy for Children and Young Adults Libraries - with a special reference to the National Library for Children and Young Adults, Korea - (어린이청소년도서관 장서개발정책 모형 연구 - 국립어린이청소년도서관을 중심으로 -)

  • Chang, Durk-Hyun; Lee, Yeon-Ok; Yoon, Hee-Yoon
    • Journal of Korean Library and Information Science Society, v.45 no.2, pp.179-203, 2014
  • The main objective of this study is to set a Collection Development Policy (CDP) for the National Library for Children and Young Adults (NLCY). Every library, in order to fulfill its mission, should maintain a written collection development policy. This paper therefore proposes a model comprising the basic principles of collection development for the NLCY, for the effective management of resources by type and subject. Major emphasis is placed on the nature of the library as a repository and service point for children and young adults, and as a hub for children-related research. As a result, a model collection development policy appropriate to the NLCY is proposed, drawing on cases from other countries to guide the principles for determining and analyzing the types and scale of collection acquisition.

Ontology Selection Ranking Model based on Semantic Similarity Approach (의미적 유사성에 기반한 온톨로지 선택 랭킹 모델)

  • Oh, Sun-Ju; Ahn, Joong-Ho; Park, Jin-Soo
    • The Journal of Society for e-Business Studies, v.14 no.2, pp.95-116, 2009
  • Ontologies provide support for integrating heterogeneous and distributed information, and more and more ontologies and tools have been developed in various domains. However, building ontologies requires much time and effort, so ontologies need to be shared and reused among users; in particular, finding the desired ontology in an ontology repository benefits users. Most previous studies on retrieving and ranking ontologies have focused on lexical-level support, in which case it is impossible to find an ontology that includes the concepts users want at the semantic level, and most ontology libraries and ontology search engines have not provided semantic matching capability. Retrieving the ontology that users want therefore requires a new ontology selection and ranking mechanism based on semantic similarity matching. We propose an ontology selection and ranking model consisting of selection criteria and metrics with enhanced semantic matching capability. The proposed model has two novel features compared with previous research models. First, it makes ontology selection and ranking practical and effective by enabling semantic matching of the taxonomy and of relational linkages between concepts. Second, it identifies which measures should be used to rank ontologies in a given context and what weight should be assigned to each selection measure.

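The abstract above describes ranking ontologies by weighted selection measures. The toy sketch below shows only the weighted-sum ranking step; the criteria names, weights, and per-ontology scores are made up, and the semantic-similarity scoring itself is not implemented here.

```python
# Toy weighted-sum ranking over per-criterion scores.  Criteria, weights, and
# scores are hypothetical; computing the semantic-match score is out of scope.
ontologies = {
    "onto_A": {"semantic_match": 0.9, "coverage": 0.6, "popularity": 0.4},
    "onto_B": {"semantic_match": 0.7, "coverage": 0.8, "popularity": 0.9},
}
weights = {"semantic_match": 0.6, "coverage": 0.3, "popularity": 0.1}

def score(criteria: dict) -> float:
    """Weighted sum of the per-criterion scores for one ontology."""
    return sum(weights[name] * value for name, value in criteria.items())

ranked = sorted(ontologies, key=lambda name: score(ontologies[name]), reverse=True)
print(ranked)                              # highest-scoring ontology first
```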