• Title/Summary/Keyword: Event-based evaluation methodology (사건기반 평가방법론)


A Validation Check of Simulation Model with the Model Transformation (모델변환에 의한 시뮬레이션 모델의 타당성 검사)

  • 정영식
    • Proceedings of the Korea Society for Simulation Conference / 1992.10a / pp.9-9 / 1992
  • Simulation is a scientific problem-solving approach that understands, analyzes, predicts, and evaluates the behavior of a real system in order to operate that system effectively and efficiently. The simulation process divides into a modeling phase, in which a valid model is built to accurately reflect the behavior of the real system, and an implementation phase, in which the commands intended by the model are written as a computer program. Simulation models vary according to perspectives such as time, state, random variables, and interaction rules; among them, the DEVS (Discrete EVent system Specification) model can analyze the state of a system according to events occurring discretely over continuous time and provides a solid theoretical foundation for formalizing modeling and simulation methodology. In addition, the DEVS model offers modular and hierarchical properties and a mathematical formalism grounded in set theory, enabling a systematic analysis of the real system and thus more realistic modeling. However, if an invalid DEVS model is built, the reliability of the simulation results falls, producing no benefit and only economic loss. Existing validation checks for DEVS models require much time and effort and demand expert, experience-based knowledge because of the iterative DEVS modeling process. Furthermore, under the experimental frame set by the model designer, it is difficult to check preservation of the commutative property for the state transition functions, the time advance function, and the output function that constitute a DEVS model. To solve these problems, this study proposes a simple and effective validation method that transforms a DEVS model into an SPN (Stochastic Petri Net) model and performs the check on the SPN model. First, the concept of the DEVS model and existing DEVS validation methods are reviewed and their problems explained in detail. The concept of the SPN model used in the validation and the viewpoint for transforming a DEVS model into a behaviorally equivalent SPN model are then re-examined. A transformation theory showing that a DEVS model can be represented as an SPN model with the same state representation under the same viewpoint is presented, and a model transformation procedure is derived from it. Based on the transformation theory and procedure, a new homogeneous function for the validity check is defined, and, together with the characteristics of the SPN model, a new validation method for DEVS models is proposed. (An illustrative DEVS sketch follows this entry.)

  • PDF
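
As background for the DEVS formalism referred to in the abstract above, the following is a minimal sketch of a DEVS atomic model in Python: a processor that holds one job for a fixed service time. The class, its state, and the tiny driver are illustrative assumptions for exposition only; they are not the transformation or validation procedure proposed in the paper.

```python
# Minimal sketch of a DEVS atomic model: a processor that holds one job
# for a fixed service time. Names and values are illustrative only.
INFINITY = float("inf")

class Processor:
    """DEVS atomic model M = (X, Y, S, delta_int, delta_ext, lambda, ta)."""

    def __init__(self, service_time=5.0):
        self.service_time = service_time
        self.phase = "idle"        # state S: "idle" or "busy", plus current job
        self.job = None

    def ta(self):
        """Time-advance function: how long to remain in the current state."""
        return self.service_time if self.phase == "busy" else INFINITY

    def delta_ext(self, elapsed, x):
        """External transition: a job arrives on the input port X."""
        if self.phase == "idle":
            self.phase, self.job = "busy", x

    def delta_int(self):
        """Internal transition: service finished, return to idle."""
        self.phase, self.job = "idle", None

    def output(self):
        """Output function lambda: emit the finished job just before delta_int."""
        return self.job


# A tiny event-driven run: one job arrives at t = 1.0.
if __name__ == "__main__":
    m = Processor()
    m.delta_ext(elapsed=1.0, x="job-1")   # external event at t = 1.0
    t_next = 1.0 + m.ta()                 # scheduled internal event time
    print(t_next, m.output())             # 6.0 job-1
    m.delta_int()
```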

Trends of Assessment Research in Science Education (과학 교육에서의 평가 연구 동향)

  • Chung, Sue-Im;Shin, Dong-Hee
    • Journal of The Korean Association For Science Education / v.36 no.4 / pp.563-579 / 2016
  • This study seeks educational implications by analyzing research papers on science assessment published in Korea over the most recent 30 years. The main purpose is to analyze trends in published papers on science assessment, including their purpose, methodology, and key words, with particular attention to the cognitive and affective domains. We selected 273 research articles and categorized them by research object, subject, methodology, and contents. To examine the factors affecting the research trend, we also contextualized the papers' themes in terms of changes in the national curriculum and assessment system during the period. The overall research trend reflects changes in the science curriculum and in assessment events such as the implementation of the college scholastic ability test and performance assessment. The distribution of research is uneven in several respects, with the cognitive domain receiving more attention than the affective one. Quantitative studies, which use standardized data obtained through national and international assessments of educational achievement in science, outnumber qualitative ones. Studies on the cognitive domain use a variety of written and performance-based tests, whereas most studies of the affective domain rely on written tests. Applied and evaluation research predominate over basic research, and most of the research methodology is based on statistics. Lastly, we found that key words and subjects have tended to become more subdivided and detailed, rather than general and comprehensive, over time. Such a trend will be helpful for elaborating and refining assessment tools, which have been regarded as a problem area.

Assessment factors for the Selection of Priority Soil Contaminants based on the Comparative Analysis of Chemical Ranking and Scoring Systems (국내.외 Chemical Ranking and Scoring 체계 비교분석을 통한 우선순위 토양오염물질 선정을 위한 평가인자 도출)

  • An, Youn-Joo;Jeong, Seung-Woo;Kim, Tae-Seung;Lee, Woo-Mi;Nam, Sun-Hwa;Baek, Yong-Wook
    • Journal of Soil and Groundwater Environment / v.13 no.6 / pp.62-71 / 2008
  • Soil quality standards (SQS) are necessary to protect human health and soil biota from exposure to soil pollutants. The current SQS in Korea cover only sixteen substances, and the list is scheduled to be expanded. A chemical ranking and scoring (CRS) system is very effective for screening priority chemicals for future SQS in terms of their toxicity and exposure potential. In this study, several CRS systems were extensively compared to propose the assessment factors required for screening soil pollutants. The CRS systems considered include CHEMS-1 (Chemical Hazard Evaluation for Management Strategies), SCRAM (Scoring and Ranking Assessment Model), EURAM (European Union Risk Ranking Method), ARET (Accelerated Reduction/Elimination of Toxics), CRSKorea, and other systems. Additional CRS assessment factors suitable for soil pollutants were suggested. We propose a soil adsorption factor as an appropriate CRS factor to account for chemical transport from soil to groundwater. Other factors, such as soil emission rate and accident cases involving soil pollutants, were also included. These results were used to screen priority chemicals in Korea as part of the project entitled 'Setting the Priority of Soil Contaminants'.
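
To illustrate the kind of chemical ranking and scoring computation discussed above, the sketch below combines toxicity, exposure, and a soil adsorption factor into a weighted composite score and ranks chemicals by it. The weights, factor names, and data are assumptions made for illustration; they are not the factors or values proposed in the paper.

```python
# Hypothetical CRS-style ranking: each chemical gets a composite score from
# toxicity, exposure, and a soil-specific factor (e.g., soil adsorption).
# Weights and data are invented for illustration only.
chemicals = {
    "chemical_A": {"toxicity": 8.0, "exposure": 5.0, "soil_adsorption": 3.0},
    "chemical_B": {"toxicity": 6.0, "exposure": 7.0, "soil_adsorption": 6.0},
    "chemical_C": {"toxicity": 9.0, "exposure": 2.0, "soil_adsorption": 8.0},
}
weights = {"toxicity": 0.5, "exposure": 0.3, "soil_adsorption": 0.2}

def crs_score(factors, weights):
    """Weighted sum of factor scores on a common 0-10 scale."""
    return sum(weights[k] * factors[k] for k in weights)

ranked = sorted(chemicals, key=lambda c: crs_score(chemicals[c], weights),
                reverse=True)
print(ranked)  # chemicals ordered by composite priority score
```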

Safety Techniques-Based Improvement of Task Execution Process Followed by Execution Maturity-Based Risk Management in Precedent Research Stage of Defense R&D Programs (국방 선행연구단계에서 안전분석 기법에 기반한 수행프로세스의 개선 및 수행성숙도 평가를 활용한 위험 관리)

  • Choi, Se Keun;Kim, Young-Min;Lee, Jae-Chon
    • Journal of the Korea Academia-Industrial cooperation Society / v.19 no.10 / pp.89-100 / 2018
  • The precedent study stage of a defense program is conducted to support the determination of an efficient acquisition method for the weapon system defined by the requirement. In this study, the FTA/FMEA techniques were used in a safety-analysis process to identify the elements to be addressed in the precedent study stage, and a methodology for deriving the key review elements through conceptualization and tailoring was suggested. To supplement the key elements derived from existing research, it is necessary to analyze the various events that may arise from those elements. To accomplish this, the HAZOP technique, used for safety analysis in other industrial fields, was applied to supplement the results of key element derivation. We analyzed and modeled the execution procedure by establishing input/output information and the associations with the key elements of the precedent study stage derived by linking the HAZOP/FTA/FMEA techniques. In addition, execution maturity was evaluated for the performance of the precedent study, and a risk-based response manual was generated based on the interworking information for key elements with low maturity. Based on the results of this study, it is possible to meet the performance, cost, and schedule targets of the project through application of the key elements, the procedures, and the risk-management response manual in the precedent study stage of a defense program.
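
As a rough illustration of how FMEA-style risk scoring could be combined with an execution maturity threshold, the sketch below computes a risk priority number (RPN) per review element and flags low-maturity elements for a response-manual entry. The element names, scores, and threshold are hypothetical and do not reproduce the paper's procedure.

```python
# Illustrative only: FMEA-style risk scoring of key review elements, with
# low execution maturity triggering inclusion in a risk response manual.
elements = [
    # (name, severity, occurrence, detection, execution_maturity 1-5)
    ("requirement traceability", 8, 4, 3, 2),
    ("cost estimation basis",    6, 3, 4, 4),
    ("interface definition",     7, 5, 2, 1),
]

def rpn(severity, occurrence, detection):
    """FMEA risk priority number = severity x occurrence x detection."""
    return severity * occurrence * detection

MATURITY_THRESHOLD = 3
for name, s, o, d, maturity in elements:
    if maturity < MATURITY_THRESHOLD:
        print(f"{name}: RPN={rpn(s, o, d)} -> include in risk response manual")
```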

A Study on the Reduction of Waiting Time and Moving Distance through Optimal Allocation of Service Space in a Health Examination Center (건강검진센터의 공간서비스 적정할당을 통한 대기시간 및 이동거리 단축에 관한 연구)

  • Kim, Suk-Tae;Oh, Sung-Jin
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.12 / pp.167-175 / 2019
  • Recently, health examination centers have been changing from auxiliary medical facilities to key and independent medical facilities. However, it is not easy to improve medical facilities, including health examination centers, due to the variable characteristics of the relationship between humans and space. Therefore, this study was done to develop a pedestrian-based discrete event simulation analysis program to examine the problems and develop methods for improvement. The program was developed to analyze five evaluation indices and the density of examinees. The problems were derived by analyzing the required time, capacity, and queue size for each examination through simulations. We reduced the examination time and moving distance, increased the capacity, and distributed the queues by adjusting the medical services and relocating the examination rooms. The results were then quantitatively verified by simulations.
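
For readers unfamiliar with discrete event simulation of queues, the sketch below models examinees arriving at a two-station examination room with the SimPy library and reports the mean waiting time. The arrival rate, service time, and capacity are assumptions; this is not the pedestrian-based program developed in the study.

```python
# Minimal discrete-event queueing sketch with SimPy: examinees arrive,
# wait for one of two exam stations, and are served. Parameters are made up.
import random
import simpy

waits = []

def examinee(env, room):
    arrive = env.now
    with room.request() as req:
        yield req                       # wait in the queue for a station
        waits.append(env.now - arrive)  # record waiting time
        yield env.timeout(random.expovariate(1 / 5.0))  # ~5 min examination

def arrivals(env, room):
    while True:
        yield env.timeout(random.expovariate(1 / 3.0))  # one arrival ~3 min
        env.process(examinee(env, room))

env = simpy.Environment()
room = simpy.Resource(env, capacity=2)   # two parallel examination stations
env.process(arrivals(env, room))
env.run(until=480)                       # one 8-hour day
print(f"mean wait: {sum(waits) / len(waits):.1f} min over {len(waits)} examinees")
```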

Machine learning-based corporate default risk prediction model verification and policy recommendation: Focusing on improvement through stacking ensemble model (머신러닝 기반 기업부도위험 예측모델 검증 및 정책적 제언: 스태킹 앙상블 모델을 통한 개선을 중심으로)

  • Eom, Haneul;Kim, Jaeseong;Choi, Sangok
    • Journal of Intelligence and Information Systems / v.26 no.2 / pp.105-129 / 2020
  • This study uses corporate data from 2012 to 2018, when K-IFRS was applied in earnest, to predict default risk. The data used in the analysis totaled 10,545 rows and 160 columns, including 38 from the statement of financial position, 26 from the statement of comprehensive income, 11 from the statement of cash flows, and 76 financial-ratio indices. Unlike most prior studies, which used default events as the basis for learning about default risk, this study calculated default risk from each company's market capitalization and stock price volatility based on the Merton model. This resolved the data-imbalance problem caused by the scarcity of default events, which has been pointed out as a limitation of the existing methodology, as well as the problem of reflecting the differences in default risk that exist among ordinary companies. Because learning was conducted using only corporate information available for unlisted companies, the default risk of unlisted companies without stock price information can be derived appropriately. This makes it possible to provide stable default-risk assessment services to unlisted companies, such as small and medium-sized companies and startups, whose default risk is difficult to determine with traditional credit rating models. Although predicting corporate default risk with machine learning has recently been studied actively, model bias issues exist because most studies make predictions based on a single model. A stable and reliable valuation methodology is required for calculating default risk, given that an entity's default-risk information is used very widely in the market and sensitivity to differences in default risk is high; strict standards are also required for the calculation methods. The credit rating method stipulated by the Financial Services Commission in the Financial Investment Business Regulations calls for the preparation of evaluation methods, including verification of their adequacy, in consideration of past statistical data and experience with credit ratings and of changes in future market conditions. This study reduced the bias of individual models by utilizing stacking ensemble techniques that synthesize various machine learning models. This makes it possible to capture complex nonlinear relationships between default risk and various corporate information and to maximize the advantages of machine learning-based default-risk prediction models, which take less time to calculate. To produce the sub-model forecasts used as input data for the stacking ensemble model, the training data were divided into seven pieces, and the sub-models were trained on the divided sets. To compare predictive power, Random Forest, MLP, and CNN models were trained on the full training data, and the predictive power of each model was verified on the test set. The analysis showed that the stacking ensemble model exceeded the predictive power of the Random Forest model, which performed best among the single models. Next, to check for statistically significant differences between the stacking ensemble model and each individual model, pairs of forecasts were constructed. Because the Shapiro-Wilk normality test showed that none of the pairs followed a normal distribution, the nonparametric Wilcoxon rank-sum test was used to check whether the two sets of forecasts constituting each pair differed significantly. The analysis showed that the forecasts of the stacking ensemble model differed significantly from those of the MLP and CNN models. In addition, this study provides a methodology that allows existing credit rating agencies to apply machine learning-based default-risk prediction, given that traditional credit rating models can also be incorporated as sub-models in calculating the final default probability. The stacking ensemble technique proposed here can also help designs meet the requirements of the Financial Investment Business Regulations through the combination of various sub-models. We hope that this research will be used as a resource to increase practical adoption by overcoming and improving the limitations of existing machine learning-based models.
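
A hedged sketch of a stacking ensemble along the lines described above is shown below, using scikit-learn's StackingClassifier with Random Forest and MLP sub-models whose out-of-fold (7-way) predictions feed a logistic regression meta-learner. The synthetic features, labels, and model choices are assumptions, not the authors' data or exact configuration.

```python
# Hedged sketch of a stacking ensemble for default-risk prediction.
# Synthetic data stands in for the financial-statement and ratio features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))            # stand-in for financial features
y = (rng.random(1000) < 0.1).astype(int)   # stand-in default-risk labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                              random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    cv=7,   # out-of-fold sub-model forecasts, loosely mirroring a 7-way split
)
stack.fit(X_tr, y_tr)
print("test accuracy:", stack.score(X_te, y_te))
```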

Bundled Discounting of Healthcare Services and Restraint of Competition (의료서비스의 결합판매와 경쟁제한성의 판단 - Cascade Health 사건을 중심으로 -)

  • Jeong, Jae Hun
    • The Korean Society of Law and Medicine / v.20 no.3 / pp.175-209 / 2019
  • Bundled discounting by dominant undertakings is problematic in terms of restraint of competition. Bundled discounts generally benefit not only buyers but also sellers: it usually costs a firm less to sell multiple products as a bundle, and bundled discounts provide some immediate consumer benefit in the form of lower prices. Therefore, competition authorities and courts should not be too quick to condemn bundled discounts and should apply a neutral and objective standard in bundled discounting cases. The Cascade Health v. PeaceHealth decision starts from this premise. The decision pointed out that a dominant undertaking can exclude rivals through bundled discounting without pricing its products below cost when rivals do not sell as many product lines, so bundled discounting may have an anticompetitive impact by excluding less diversified but more efficient producers. The decision did not adopt the LePage's standard, which does not require the court to consider whether the competitor was at least as efficient a producer as the bundled discounter. Instead, based on a cost-based approach, the decision held that the exclusionary element cannot be satisfied unless the discounts result in prices below an appropriate measure of the defendant's costs. Adopting a discount attribution standard, the decision held that the full amount of the discounts should be allocated to the competitive products. Because the seller can easily ascertain its own prices and production costs and calculate whether its discounting practices exclude competitors, the dominant undertaking's costs, not the competitor's costs, should be considered in applying the discount attribution standard. This case deals with bundled discounting of multiple healthcare services by a dominant undertaking in a healthcare market. Under the Korean healthcare system and public health insurance system, price competition exists primarily in non-medical care benefits, because public health insurance in Korea is combined with the compulsory medical care institution system. The cases that the Monopoly Regulation and Fair Trade Law deals with, such as cartels and abuse of monopoly power, also arise mainly in non-medical care benefits. A dominant undertaking's exclusionary bundled discounting in Korean healthcare markets may be practiced in contracts between the dominant undertaking and private insurance companies with regard to non-medical care benefits.
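
To make the discount attribution standard described above concrete, the following is a small worked example with hypothetical prices and costs: the full bundle discount is attributed to the competitive product, and the resulting price is compared with the defendant's cost of that product.

```python
# Worked example of the discount attribution standard. All numbers are
# hypothetical and chosen only to show the arithmetic of the test.
standalone_price_A = 100.0   # monopoly product line
standalone_price_B = 50.0    # competitive product line
bundle_price = 120.0         # price when A and B are bought together
cost_B = 35.0                # defendant's cost of the competitive product

bundle_discount = standalone_price_A + standalone_price_B - bundle_price  # 30.0
attributed_price_B = standalone_price_B - bundle_discount                 # 20.0

# Exclusionary element satisfied only if the attributed price falls below cost.
print(attributed_price_B < cost_B)   # True: 20.0 < 35.0 -> potentially exclusionary
```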

Development of GIS based Water Quality Simulation System for Han River and Kyeonggi Bay Area (한강과 경기만 지역 GIS 기반 통합수질모의 시스템 개발)

  • Lee, Chol-Young;Kim, Kye-Hyun
    • Journal of Korea Spatial Information System Society / v.10 no.4 / pp.77-88 / 2008
  • There have been growing demands to manage the water quality of the west coastal region due to large-scale urbanization along the coastal zone, the possible application of TMDL (Total Maximum Daily Load) to the Han river, and natural disasters such as the oil spill incident in Taean, Chungnam. However, no system has been developed for such purposes. Against this background, the demand for effective GIS-based water quality management has increased, in order to monitor the water quality environment and propose best management alternatives for the Han river and Kyeonggi bay. This study focused on the development of an integrated water quality management system for the Han river basin and its estuary area connected to Kyeonggi bay, to support integrated water quality management and planning. Integration was based on GIS, spatially linking water quality attributes with location information. A GIS DB was built to estimate the amount of generated and discharged water pollutants according to the TMDL technical guide, and it included the input data for two different water quality models--WASP7 for the Han river and EFDC for the coastal area--used to forecast water quality and to suggest BMPs (Best Management Practices). The BOD, TN, and TP results from WASP7 were used as input to run EFDC. Based on the study results, some critical areas with relatively higher pollutant loadings were identified, and it was also found that the locations discharging pollutant loadings to the river and seasonal factors affected water quality. The relationship between the water quality of the river and that of its estuary area was quantitatively verified. The results showed that the GIS-based integrated system can be used as a tool for estimating the status quo of water quality and proposing economically effective BMPs to mitigate water pollution. Further studies are needed to improve the system's capabilities, such as adding decision-making and cost-benefit analysis functions, and a concrete methodology for water quality management using the system needs to be developed. (An illustrative model-linkage sketch follows this entry.)

  • PDF
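
The WASP7-to-EFDC linkage described above is essentially a data-coupling step: BOD, TN, and TP results from one model run become boundary input for the other. The sketch below shows that idea with hypothetical file names and formats; the real WASP7 and EFDC input formats are not reproduced here.

```python
# Hypothetical data-coupling step: copy BOD/TN/TP time series exported from a
# WASP7 run into a boundary-condition text file for an EFDC run. The CSV and
# text formats here are assumptions, not the actual WASP7/EFDC formats.
import csv

def wasp7_to_efdc_boundary(wasp7_csv, efdc_boundary_txt):
    """Translate a (hypothetical) WASP7 export into a (hypothetical)
    EFDC boundary-condition file, column by column."""
    with open(wasp7_csv, newline="") as fin, open(efdc_boundary_txt, "w") as fout:
        reader = csv.DictReader(fin)          # expected columns: time, BOD, TN, TP
        fout.write("! time  BOD   TN    TP\n")
        for row in reader:
            fout.write(f"{row['time']:>6} {row['BOD']:>5} "
                       f"{row['TN']:>5} {row['TP']:>5}\n")

# Example call (file names are placeholders):
# wasp7_to_efdc_boundary("wasp7_han_river_output.csv", "efdc_kyeonggi_bc.inp")
```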

A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami;Kim, Jaeseok;Kim, Gi-Nam;Heo, Jong-Uk;On, Byung-Won;Kang, Mijung
    • Journal of Intelligence and Information Systems / v.19 no.3 / pp.1-23 / 2013
  • To discover significant social issues, such as unemployment, economic crisis, and social welfare, that urgently need to be solved in modern society, the existing approach has researchers collect opinions from professional experts and scholars through online or offline surveys. However, such a method is not always effective. Due to the expense involved, a large number of survey replies is seldom gathered, and in some cases it is hard to find professionals dealing with specific social issues, so the sample set is often small and may be biased. Furthermore, regarding a given social issue, several experts may reach totally different conclusions, because each expert has a subjective point of view and a different background. In such cases, it is considerably hard to figure out what the current social issues are and which of them are really important. To overcome the shortcomings of the current approach, in this paper we develop a prototype system that semi-automatically detects social issue keywords representing social issues and problems from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 to July 2012. Our proposed system consists of (1) collecting and extracting texts from the collected news articles, (2) identifying only the news articles related to social issues, (3) analyzing the lexical items of Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models; its goal is to best match paragraphs to each topic. Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA produces a set of topic clusters, and each topic cluster is then labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and a human annotator labels Topic1 "Unemployment Problem". In this example, it is non-trivial to understand what happened to the unemployment problem in our society; looking only at social keywords, we have no idea of the detailed events occurring in society. To tackle this, we develop a matching algorithm that computes the probability of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. For instance, given a set of text documents, we segment each document into paragraphs; meanwhile, using LDA, we extract a set of topics from the documents. Based on our matching process, each paragraph is assigned to the topic it best matches, so each topic ends up with several best-matched paragraphs. Furthermore, suppose there are a topic (e.g., Unemployment Problem) and a best-matched paragraph (e.g., "Up to 300 workers lost their jobs at XXX company in Seoul"). In this case, we can grasp detailed information behind the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. Therefore, through our matching process and keyword visualization, researchers will be able to detect social issues easily and quickly. Using this prototype system, we have detected various social issues appearing in our society and demonstrated the effectiveness of our proposed methods through experimental results. Note that you can also try our proof-of-concept system at http://dslab.snu.ac.kr/demo.html.
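
A simplified sketch of the LDA-plus-matching pipeline described above is given below, using scikit-learn: topics are fitted on documents, and each paragraph is assigned to the topic with the highest inferred probability. This is a stand-in for the paper's generative matching algorithm, and the tiny corpus is purely illustrative.

```python
# Simplified topic-modeling and paragraph-matching sketch. The corpus and
# paragraph texts are toy examples; the matching here is a plain argmax over
# the inferred topic distribution, not the paper's exact algorithm.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "unemployment layoff jobs company workers economy",
    "welfare pension elderly support policy budget",
    "unemployment benefits workers layoff factory",
]
paragraphs = [
    "Up to 300 workers lost their jobs in a layoff at a Seoul company.",
    "The city expanded pension and welfare support for the elderly.",
]

vec = CountVectorizer()
X = vec.fit_transform(documents)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

# Match each paragraph to the topic with the highest inferred probability.
para_topic_dist = lda.transform(vec.transform(paragraphs))
for text, dist in zip(paragraphs, para_topic_dist):
    print(dist.argmax(), round(float(dist.max()), 2), text[:40])
```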