• Title/Summary/Keyword: maintenance system (유지관리 시스템)


Improvement of turbid water prediction accuracy using sensor-based monitoring data in Imha Dam reservoir (센서 기반 모니터링 자료를 활용한 임하댐 저수지 탁수 예측 정확도 개선)

  • Kim, Jongmin; Lee, Sang Ung; Kwon, Siyoon; Chung, Se Woong; Kim, Young Do
    • Journal of Korea Water Resources Association / v.55 no.11 / pp.931-939 / 2022
  • In Korea, about two-thirds of annual precipitation is concentrated in the summer, so turbidity problems during the summer flood season vary from year to year, and concentrated rainfall from abnormal weather and extreme events is on the rise. Turbid inflows cause sudden increases in turbidity and create turbid-water problems in dam reservoirs. In particular, since Korea draws most of its annual water consumption from rivers and dam reservoirs, prolonged turbidity problems cause social and environmental damage to agriculture, industry, and aquatic ecosystems in downstream areas. To cope with turbidity prediction, research on turbid-water modeling is being actively conducted. Flow rate, water temperature, and suspended solids (SS) data are required to model turbid water. The national monitoring network measures turbidity by measuring SS in rivers and dam reservoirs, but its data resolution is low because facilities are insufficient, and unmeasured periods occur depending on each dam and on weather conditions. Turbidity is measured with sensors such as the Optical Backscatter Sensor (OBS) and YSI instruments, and SS is measured with equipment such as Laser In-Situ Scattering and Transmissometry (LISST); however, such high-end sensors are limited by equipment stability. Because unmeasured periods remain even after analysis of the acquired flow, water temperature, SS, and turbidity data, a relational expression is needed to calculate the SS used as input data. In this study, the AEM3D model used in the Korea Water Resources Corporation's SURIAN system was applied to improve turbidity prediction accuracy through a turbidity-SS relationship developed from measurement data near the dam outlet.
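A turbidity-SS relational expression of the kind described is typically a power-law rating curve fitted to paired sensor and laboratory measurements. A minimal sketch, assuming hypothetical paired data near the dam outlet (the paper's actual relation and coefficients are not given here):

```python
import numpy as np

# Hypothetical paired field measurements near the dam outlet:
# turbidity (NTU) from an optical sensor, SS (mg/L) from lab analysis.
turbidity = np.array([5.0, 12.0, 30.0, 55.0, 120.0, 260.0])
ss = np.array([4.1, 10.5, 27.0, 52.0, 118.0, 265.0])

# Fit the power-law rating curve SS = a * Turb^b by linear
# regression in log-log space: log(SS) = log(a) + b*log(Turb).
b, log_a = np.polyfit(np.log(turbidity), np.log(ss), 1)
a = np.exp(log_a)

def ss_from_turbidity(ntu):
    """Estimate SS (mg/L) from sensor turbidity (NTU)."""
    return a * ntu ** b

print(f"SS ~= {a:.3f} * Turb^{b:.3f}")
print(f"Estimated SS at 80 NTU: {ss_from_turbidity(80.0):.1f} mg/L")
```

The fitted expression can then fill SS gaps during unmeasured periods wherever sensor turbidity is available.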

Investigation on the Perception of Mandatory Clinical Practice in the Department of Radiology Following the Amendment of the Medical Technologists Act (의료기사 등에 관한 법률 개정으로 방사선(학)과 현장실습 의무화에 따른 인식 조사)

  • Jeong-Mu Lee; Yong-Ki Lee; Sung-Min Ahn
    • Journal of the Korean Society of Radiology / v.18 no.3 / pp.293-300 / 2024
  • On October 31, 2023, a revision of the Medical Technologists Act made completion of field training courses mandatory for obtaining a license as a radiologic technologist. We therefore surveyed the actual state of field training in medical institutions to publicize the revised Act and to propose measures to improve the effectiveness of field training. A survey of radiologic technologists working in medical institutions was conducted from March to April 2023. The questionnaire was distributed through a form on domestic portal site N, and 120 respondents completed it. Eighty-two respondents (68.3%) had experience teaching field-training students. 58% of respondents were aware that the amendment made field training mandatory for obtaining a radiologic technologist license. Regarding Article 9 of the Act, which prohibits unlicensed practice, 50% of respondents were aware that students enrolled in an educational course corresponding to the license they seek are permitted to practice as medical technologists during training. Asked what is currently taught during fieldwork, 6% of respondents said students perform radiation-generating activities in addition to observing, guiding patients, and positioning and moving patients. Asked about the future direction of education once fieldwork becomes mandatory for licensure, 77% said they would teach more than they currently do. Asked about the appropriate total length of fieldwork, 35% said 12 weeks (480 hours), 33% said 8 weeks (320 hours), and 27% said 16 weeks (640 hours). Current on-the-job training thus falls short under various regulations, and student satisfaction is low.
Since the revision makes field training mandatory for obtaining a radiologic technologist license, the educational conditions of field training must be improved. It is therefore necessary to comply with the Nuclear Safety Act and the Rules on the Safety Management of Diagnostic Radiation Generating Devices, introduce standardized training objectives and evaluation systems, designate training hospitals and radiologic technologists in charge of training, and introduce extended training periods and simulation exercises so that field training takes root.

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan; An, Sangjun; Kim, Mintae; Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • A data center is a physical facility for housing computer systems and related components, and an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failures. A failure in one element of the facility may affect not only that equipment but also other connected equipment and cause enormous damage. IT facilities in particular fail irregularly because of their interdependence, and the cause is hard to determine. Previous studies on failure prediction in data centers treated each server as a single, independent state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside servers (Outage A) and failures occurring outside servers (Outage B), focusing on analysis of complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since they can be prevented in the early stages of facility construction, various solutions are already being developed. In contrast, the cause of failures occurring inside servers is hard to determine, and adequate prevention has not yet been achieved. In particular, server failures do not occur singly: a failure may cause failures on other servers, or be triggered by something received from another server. In other words, while existing studies assumed servers that do not affect one another, this study assumes that failures propagate between servers.
To define complex failure situations in the data center, failure history data for each piece of equipment in the data center were used. Four major failures are considered: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. Failures for each device were sorted in chronological order, and failures on different devices within 5 minutes of each other were defined as simultaneous. After constructing sequences of devices that failed simultaneously, the 5 devices that most frequently failed together within those sequences were selected, and their simultaneous failures were confirmed through visualization. Because the server resource information collected for failure analysis is a time series with temporal flow, Long Short-Term Memory (LSTM), a deep-learning algorithm that predicts the next state from previous states, was used. In addition, unlike single-server approaches, a Hierarchical Attention Network structure was used to reflect the fact that the level of complex failure differs by server; this method improves prediction accuracy by giving more weight to servers with greater impact on the failure. The study began by defining the failure types and selecting the analysis targets. The first experiment treated the same collected data as a single-server state and as a multi-server state and compared the results. The second experiment improved prediction accuracy for complex failures by optimizing each server's threshold.
In the first experiment, which compared the single-server and multi-server assumptions, the single-server model predicted no failure for three of the five servers even though failures actually occurred, while the multi-server model correctly predicted failures on all five servers. This result supports the hypothesis that servers affect one another: prediction performance was superior under the multi-server assumption. In particular, applying the Hierarchical Attention Network, which assumes that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are hard to determine can be predicted from historical data, and it presents a model that predicts failures of servers in data centers. The results are expected to help prevent failures in advance.
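The server-weighting idea behind the Hierarchical Attention Network step can be sketched numerically. The code below is an illustrative stand-in, not the paper's model: it replaces the LSTM encoders with a mean over time and uses random stand-ins for the learned context and output vectors, keeping only the attention-weighted combination of server states:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy resource series for 5 servers, 20 time steps, 3 metrics each
# (e.g. CPU, memory, I/O) -- all values here are synthetic.
series = rng.random((5, 20, 3))

# Server-level encoding: summarize each server's recent window.
# (The paper uses LSTM encoders; a mean over time stands in here.)
server_states = series.mean(axis=1)          # shape (5, 3)

# Attention over servers: score each server state against a context
# vector, then weight servers by the softmax of the scores, so that
# servers with more influence on the failure contribute more.
context = rng.random(3)                      # stand-in for a learned vector
scores = server_states @ context             # shape (5,)
weights = softmax(scores)

# Data-center representation and a logistic failure score.
dc_state = weights @ server_states           # shape (3,)
w_out = rng.random(3)                        # stand-in output layer
p_failure = 1.0 / (1.0 + np.exp(-(dc_state @ w_out)))
print("attention weights:", np.round(weights, 3))
print("p(failure):", round(float(p_failure), 3))
```

In a trained model the context and output vectors are learned jointly with the encoders; the attention weights then express each server's contribution to the predicted complex failure.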

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong; Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility of stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) into the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock prices have become very complex and noisy, and recent studies have begun to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which is composed of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as test data. First, symmetric and asymmetric GARCH models were estimated by MLE. In forecasting KOSPI 200 return volatility, the MSE metric favored the asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution with fat tails and leptokurtosis.
Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 return volatility, although the polynomial kernel shows exceptionally low forecasting accuracy. We propose an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. Its entry rules are as follows: if tomorrow's forecast volatility will increase, buy volatility today; if it will decrease, sell volatility today; if the forecast direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because historical volatility values themselves cannot be traded, but the simulation results are still meaningful since the Korea Exchange introduced a tradable volatility futures contract in November 2014. In the test period, trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH. The profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH returns +150.2% and SVR-based symmetric S-GARCH +526.4%; MLE-based asymmetric E-GARCH returns -72% and SVR-based E-GARCH +245.6%; MLE-based asymmetric GJR-GARCH returns -98.7% and SVR-based GJR-GARCH +126.3%. The linear kernel shows higher trading returns than the radial kernel. The best SVR-based IVTS performance is +526.4%, versus +150.2% for MLE-based IVTS, and the SVR-based systems trade more frequently. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored for better performance. We also do not consider trading costs such as brokerage commissions and slippage.
IVTS trading performance is not fully realistic, since historical volatility values are used as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine-learning-based GARCH models can give better information to stock market investors.
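The conditional-variance recursion of the GARCH(1,1) model being estimated, together with the IVTS entry rule, can be sketched as follows (synthetic returns and illustrative parameter values, not the paper's MLE or SVR estimates):

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) model:
    sigma2[t] = omega + alpha * r[t-1]^2 + beta * sigma2[t-1]."""
    sigma2 = np.empty_like(returns)
    sigma2[0] = returns.var()                # initialize at sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t-1] ** 2 + beta * sigma2[t-1]
    return sigma2

rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.01, 300)               # synthetic daily returns
sigma2 = garch11_variance(r, omega=1e-6, alpha=0.1, beta=0.85)

# IVTS-style entry rule from the abstract: buy volatility today if
# tomorrow's forecast volatility rises, sell if it falls, else hold.
forecast = np.sqrt(sigma2)
signal = np.sign(np.diff(forecast))          # +1 buy vol, -1 sell vol, 0 hold
```

The SVR variant replaces the MLE step for (omega, alpha, beta) with a support vector regression fit; the recursion and trading rule are unchanged.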

The Market Segmentation of Coffee Shops and the Difference Analysis of Consumer Behavior: A Case based on Caffe Bene (커피전문점의 시장세분화와 소비자행동 차이 분석 : 카페베네 사례를 중심으로)

  • Yu, Jong-Pil; Yoon, Nam-Soo
    • Journal of Distribution Science / v.9 no.4 / pp.5-13 / 2011
  • This study analyzes the effectiveness of the domestic marketing strategies of the Korean coffee shop chain Caffe Bene, based on statistical analysis of choice attributes, market segmentation, demographic characteristics, and satisfaction differences. The results are summarized in four points. First, factor analysis extracted five choice attributes related to coffee shop selection behavior: price, atmosphere, comfort, taste, and location. Cluster analysis based on these five factors classified customers into three major groups: atmosphere-oriented, comfort-oriented, and taste-oriented. Second, discriminant analysis validated the clusters and yielded two discriminant functions: location and atmosphere. Third, cross-tabulation by demographic characteristics showed distinctive profiles for the three groups. The atmosphere-oriented group consisted mainly of women of all ages, especially in their early 20s, who typically discovered the store while walking down the street, through acquaintances, through outdoor advertising, or by introduction. The comfort-oriented group was mainly women in their early twenties, students or professionals, and proved highly loyal, recommending the shop to others more than the other groups. The taste-oriented group, unlike the others, consisted mainly of college graduates in their late 20s and showed low loyalty, with less recommendation activity. Fourth, one-way ANOVA of satisfaction differences showed that groups with high satisfaction on the five main factors also showed high menu satisfaction and high overall satisfaction.
These results show that segmented marketing strategies are necessary, because customers consider price, atmosphere, comfort, taste, and location when choosing a coffee shop, and the segmented groups differ demographically. For example, the atmosphere-oriented group is satisfied with the interior and comfort but dissatisfied with price, because most customers in this group are in their early 20s with limited financial capability; price-discount strategies tailored to individual situations through a CRM system are therefore critical. The comfort-oriented group shows high satisfaction with location and comfort; it contains many female customers in their early 20s, students, and self-employed people, and shows a strong word-of-mouth tendency, so providing a positive brand image to these customers is important. The taste-oriented group scores taste and location highly but word of mouth low; it is mainly composed of educated professionals in their late 20s, so menu differentiation, improving coffee quality, and price discrimination are critical to increasing its satisfaction. However, it is hard to generalize these results to other coffee shop brands, because the study examined only one domestic chain, Caffe Bene. Future studies that expand the scope of locations, brands, and occupations would provide more generalizable results, and research on customer satisfaction with menus, trust, loyalty, and switching costs would also be valuable.
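The cluster-analysis step can be illustrated with a minimal k-means over the five choice-attribute scores. The respondent data below are synthetic, and the paper's actual clustering procedure and software are not specified here:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical respondent scores on the five extracted choice
# attributes: price, atmosphere, comfort, taste, location.
scores = rng.random((90, 5))

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means, standing in for the cluster-analysis step."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each respondent to the nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its cluster.
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(scores, k=3)
print("cluster sizes:", np.bincount(labels, minlength=3))
```

With real survey data, the three resulting clusters would correspond to the atmosphere-, comfort-, and taste-oriented groups after inspecting each center's attribute profile.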


Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil; Ko, Eunjung; Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.141-166 / 2019
  • Recently, channels such as social media and SNS create enormous amounts of data, and the portion represented as unstructured text has increased geometrically. Because it is difficult to examine all text data, it is important to access the data rapidly and grasp the key points of the text. To meet this need for efficient understanding, many studies on text summarization for handling and using tremendous amounts of text data have been proposed. In particular, many recent methods use machine learning and artificial intelligence algorithms to generate summaries objectively and effectively, so-called automatic summarization. However, almost all text summarization methods proposed to date construct summaries focused on the frequency of contents in the original documents. Such summaries tend to omit low-weight subjects that are mentioned less often in the original text. If a summary includes only the major subjects, bias occurs and information is lost, so the full range of subjects in the documents cannot be ascertained. To avoid this bias, one can summarize with attention to the balance between the topics a document contains, but an unbalanced distribution across subjects still remains. To retain subject balance in a summary, it is necessary to consider the proportion of each subject in the original documents and to allocate portions to subjects evenly, so that even sentences on minor subjects are sufficiently included. In this study, we propose a subject-balanced text summarization method that secures balance between all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summaries we use two summary-evaluation concepts, completeness and succinctness.
Completeness means that the summary should cover the contents of the original documents fully, and succinctness means that the summary contains minimal internal duplication. The proposed method has three phases. The first phase constructs subject-term dictionaries. Topic modeling is used to calculate topic-term weights indicating how strongly each term relates to each topic. From these weights, highly related terms for each topic can be identified, and the subjects of the documents emerge from topics composed of terms with similar meanings. A few terms that represent each subject well, called seed terms, are then selected. Because seed terms alone are too few to describe a subject, sufficiently similar terms must be added to build a well-constructed subject dictionary. Word2Vec is used for this word expansion: after Word2Vec modeling produces word vectors, the similarity between terms is derived by cosine similarity, with higher cosine similarity indicating a stronger relationship. Terms with high similarity to each subject's seed terms are selected and, after filtering the expanded terms, the subject dictionary is constructed. The second phase allocates a subject to every sentence in the original documents. To grasp the contents of each sentence, frequency analysis is conducted with the terms in the subject dictionaries. TF-IDF weights for each subject are then calculated, indicating how much each sentence explains each subject. Because TF-IDF weights can grow without bound, the weights for each subject are normalized to values between 0 and 1.
Each sentence is then assigned the subject with the maximum TF-IDF weight, forming a sentence group for each subject. The last phase is summary generation. Sen2Vec is used to compute the similarity between subject sentences, forming a similarity matrix, and by repeatedly selecting sentences it is possible to generate a summary that covers the original documents fully while minimizing internal duplication. For evaluation, 50,000 TripAdvisor reviews were used to construct the subject dictionaries and 23,087 reviews to generate summaries. A comparison between the proposed method's summaries and frequency-based summaries verified that the proposed method better retains the balance of subjects that the documents originally have.
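The subject-allocation phase can be sketched as follows. This toy example uses raw dictionary-term counts as a stand-in for the paper's TF-IDF weighting, with hypothetical hotel-review subjects, dictionaries, and sentences:

```python
import numpy as np

# Toy subject dictionaries (expanded seed terms); real dictionaries
# come from topic modeling plus Word2Vec expansion.
dictionaries = {
    "room":    {"bed", "pillow", "clean", "quiet"},
    "food":    {"breakfast", "buffet", "coffee", "tasty"},
    "service": {"staff", "friendly", "desk", "helpful"},
}
sentences = [
    "the bed was clean and the room quiet",
    "breakfast buffet had tasty coffee",
    "friendly staff at the front desk",
    "clean room but the staff were helpful too",
]
subjects = list(dictionaries)

# Score each sentence against each subject by dictionary-term counts
# (standing in for TF-IDF), normalize each subject's scores to [0, 1],
# then assign each sentence the argmax subject.
counts = np.array([[sum(w in dictionaries[s] for w in sent.split())
                    for s in subjects] for sent in sentences], float)
norm = counts / counts.max(axis=0).clip(min=1)   # per-subject 0-1 scaling
assignment = [subjects[i] for i in norm.argmax(axis=1)]
print(assignment)
```

The per-subject sentence groups produced this way feed the final Sen2Vec-based selection step.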

The Advancement of Underwriting Skill by Selective Risk Acceptance (보험Risk 세분화를 통한 언더라이팅 기법 선진화 방안)

  • Lee, Chan-Hee
    • The Journal of the Korean Life Insurance Medical Association / v.24 / pp.49-78 / 2005
  • Ⅰ. Background and Purpose
o Korea's insurance market has entered maturity with a household participation rate of 86%, and distribution is moving from the traditional captive channel to multi-channels such as bancassurance, online-only insurers, and growing telemarketing (TM) sales.
o With the successive launch of advanced health products such as LTC (long-term care), CI (critical illness), and indemnity medical insurance, underwriting preparedness for insurance risk management is urgently needed.
o As products and marketing, areas closely tied to underwriting, change, it is essential to build advanced underwriting techniques that properly classify and evaluate risk.
o Ultimately, this study seeks measures that satisfy customers' diverse coverage needs and contribute to maximizing insurers' overall profit by strengthening the competitiveness of products, marketing, and underwriting.
Ⅱ. Risk Segmentation Cases in Advanced Insurance Markets
1. Premium differentiation by environmental risk
(1) Loadings for hazardous occupations
o In most advanced markets, including the US and Europe, premiums are differentiated by the insured's occupational risk at issue.
o Occupation classification and loading methods differ by benefit, with separate methods for ordinary death, accidental death, waiver of premium, and DI.
o Loadings are calculated either as a multiple of the standard risk rate or as a flat extra premium per unit of sum insured.
- For miners, accidental death cover applies 300% of the standard rate, and ordinary death cover carries a flat extra of $2.95 per $1,000 of sum insured.
(2) Loadings for hazardous avocations
o Owing to continued hobby-related accidents, avocations are also recognized as risk factors and premiums are differentiated.
o The loading is a flat amount per unit of sum insured (independent of the amount); for some new leisure sports lacking statistics, the underwriter sets the loading rate.
- For paragliding 26-50 times a year, a flat extra of $2 for accidental death and $8 for DI applies per $1,000 of sum insured.
o Apart from loadings, exclusions are applied to hazardous avocations: claims arising from the hazardous activity, including death, are excluded from all benefits.
(3) Loadings for residence in or travel to hazardous regions
o Temporary or permanent residence in a particular country is evaluated considering climate risk, local sanitation and medical standards, travel risk, and war and riot risk.
o By benefit (ordinary death, accidental death, etc.), a loading is applied or the risk is declined.
o The loading applies uniformly over the whole policy term.
- For Russia, ordinary death carries a flat extra of $2 per $1,000 of sum insured, and accidental death is declined.
(4) Premium differentiation for other risks
o Aviation risk is classified into three categories (commercial transport, private flying, military flying), with loadings set from the application, supplementary questionnaire, medical report, and flight history.
- For crop-dusting pilots, ordinary death carries a flat extra of $6 per $1,000 of sum insured, and accidental death is declined.
o In the US and Japan, traffic accident and violation records are used to discount premiums for accident-free drivers (a preferred-risk factor).
2. Premium differentiation by physical risk
(1) Substandard loadings
1) Acceptance up to a total risk index of 500 (excess risk index 400)
o Loadings are applied in 13 steps: 25-point increments up to 300 and 50-point increments above 300.
2) Combined lien (benefit reduction) and loading methods
o Because the loading falls by the amount of the benefit reduction, applicants can be offered a choice, which is useful for high-risk insureds.
3) Temporary loadings for histories of specific cancers
o Depending on the disease profile, a loading is charged for 1-5 years after issue, after which the standard premium applies.
4) Return-of-extra-premium option
o The extra premium is refunded if the policy remains in force and the insured survives a set period.
(2) Enhanced annuities for substandard lives
o In the UK, enhanced annuities paying increased benefits to substandard lives have been developed and sold.
o Benefits are increased over standard according to physical and environmental risk factors such as smoking, occupation, and medical history.
(3) Price segmentation for preferred lives
o In the US, standard lives are classified into as many as 8 classes based on 8-14 medical and non-medical risk factors, with differentiated discounts.
- Medical history, blood pressure, family history, smoking, BMI, cholesterol, driving record, hazardous avocations, residence, flight history, alcohol/drug use, etc.
o Discounts vary by company, class, and criteria (up to 75%); issue ages run from a minimum of 16-20 to a maximum of 65-75, and the minimum sum insured is $100,000 (the lowest amount requiring an HIV test).
o In Japan, lives are classified into 3-4 classes by 3-4 risk factors with preferred discounts.
o In Europe, non-smoker or preferred discounts apply only in some markets such as the UK.
Ⅲ. Status and Problems of the Korean Insurance Market
1. Coverage limits by environmental risk
(1) Coverage limits for hazardous occupations
o Based on the industry-wide standard occupational risk grades, each insurer sets its own coverage limits by grade, which entails inequity with non-hazardous occupations, limited coverage for high-risk occupations, and an unstable profit structure.
- Miners, at risk grade 1, are limited to a maximum death benefit of KRW 100 million and KRW 20,000 per day of hospitalization.
o In July 2002 the Financial Supervisory Service approved risk indices by grade as reference rates, but at 70% for non-hazardous and 200% for hazardous occupations they are difficult to apply in practice.
(2) Coverage limits for hazardous avocations
o Coverage limits apply the occupational risk grade of the corresponding occupation; detailed information such as licenses or club membership is not collected through supplementary questionnaires.
- Paragliding is assigned risk grade 2, limiting death coverage to a maximum of KRW 200 million.
(3) Coverage limits by residence and overseas travel
o Each insurer restricts coverage in regions with frequent accidents.
- Accident insurance is unavailable in parts of Gangwon and Chungcheong, and hospitalization benefits are limited to KRW 20,000 per day in parts of Jeonbuk and Taebaek.
o For overseas stays including travel, fixed eligibility requirements are operated; coverage limits are set, or accident-concentrated products are declined.
- For Russia, short-term stays are rated risk grade 1 with accident insurance unavailable, and long-term stays are declined.
2. Differentiated acceptance by physical risk
(1) Substandard acceptance methods
o Excess risk indices for increasing or constant risks are converted into benefit reductions applied to death insurance (up to 5 years), leaving serious risk exposure after year 5.
o Premium loading is used by some companies, mainly on the base policy, accepting up to a total risk index of 300 (8 steps).
- When the base policy is loaded, riders cannot be attached, and applicants with a cancer history are mostly declined.
o Exclusions are applied for 39 body parts and 5 diseases (on living benefits such as hospitalization and surgery).
(2) Non-smoker/preferred discounts
o Since first introduced in 1999, a single class based on 3-4 risk factors has been operated; life insurer S runs two classes, non-smoker preferred and non-smoker standard.
o Discounts vary by company and product, up to 22% of the gross premium; smoking status is checked with a urine-stick cotinine test.
o Preferred sales are 2-15% of new business, varying with company policy.
Ⅳ. Measures to Advance Underwriting Techniques
1. Premium differentiation by occupational risk: reorganize risk indices into 3 grades in line with the unification of life and non-life occupational risk grades, and apply differentiated rates against the non-hazardous baseline.
2. Exclusions for hazardous avocations: introduce exclusions for claims (including death) caused by the avocation.
3. Greatly expanded acceptance through advanced substandard techniques: hedge risk by expanding premium loading, raising the total risk index from 300 to 500 and minimizing declines.
4. Combined loading and benefit reduction: develop loading methods with reduction periods, giving customers a choice.
5. Temporary loadings: for specific cancers such as stomach and thyroid cancer, accept with a level extra premium during the high-risk early policy years.
6. Extended loading on riders, combined with exclusions: extend loadings to death-related riders such as term riders, and apply exclusions to living-benefit riders.
7. Expanded segmentation of standard lives: refine classes by adding risk-evaluation factors such as cholesterol and HDL.
Ⅴ. Expected Effects
1. Opening coverage to high-risk occupations, hazardous avocations, and substandard lives.
2. Improved equity among policyholders and responsiveness to diverse coverage needs.
3. Increased premium income and improved mortality margins through expanded sales and risk hedging.
4. Strengthened insurer fundamentals ahead of full-scale price competition.
5. Enhanced company image, reduced resistance to medical examinations, and prevention of portfolio deterioration.
Ⅵ. Conclusion
o Moving away from passive, uniform acceptance, insurers should adopt risk-evaluation tools that assess the insured from multiple angles and offer appropriate premiums and reasonable terms;
o alongside advanced acceptance techniques, they should pursue specialization of underwriting staff, better information acquisition, and system infrastructure;
o this will contribute not only to managing mortality margins but also to strengthening the competitiveness of Korean life underwriting and globalizing underwriters in preparation for market opening and a rapidly changing insurance environment.
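The two loading methods quoted for advanced markets (a multiple of the standard rate, and a flat extra per $1,000 of sum insured) reduce to simple arithmetic. A sketch using the miner example from the text; the $120 standard premium below is a hypothetical figure, not from the source:

```python
# Flat-extra and rate-multiple loading arithmetic as described for
# advanced markets (example values from the text; the standard
# premium is a hypothetical assumption).

def flat_extra_annual(sum_insured, extra_per_1000):
    """Extra annual premium charged as a flat amount per $1,000 of sum insured."""
    return sum_insured / 1000 * extra_per_1000

def rated_premium(standard_premium, rate_multiple):
    """Loaded premium as a multiple of the standard rate (e.g. 300% = 3.0)."""
    return standard_premium * rate_multiple

# Miner, $100,000 ordinary death cover at $2.95 per $1,000:
print(round(flat_extra_annual(100_000, 2.95), 2))   # extra premium in dollars

# Miner, accidental death cover at 300% of a (hypothetical) $120 standard premium:
print(rated_premium(120.0, 3.0))
```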


Geological Engineering Considerations for the Preservation of the Muryong Royal Tomb (무령왕릉보존에 있어서의 지질공학적 고찰)

  • 서만철; 최석원; 구민호
    • Proceedings of the KSEEG Conference / 2001.05b / pp.42-63 / 2001
  • A detailed survey of the Songsanri tomb site, including the Muryong royal tomb, was carried out from May 1, 1996 to April 30, 1997. A quantitative analysis was attempted to identify changes in the tombs since their excavation. The main subjects of the survey were the causes of infiltration of rainwater and groundwater into the tombs and the tomb site, monitoring of the movement and safety of the tomb structures, methods of removing the algae inside the tombs, and an air-control system to resolve the high humidity and dew inside the tombs. For these purposes, detailed surveys inside and outside the tombs using an electronic distance meter and a small airplane, monitoring of temperature and humidity, geophysical exploration including electrical resistivity, geomagnetic, gravity, and georadar methods, drilling, measurement of the physical and chemical properties of drill cores, and measurement of groundwater permeability were conducted. We found that the centers of the subsurface tombs and the centers of the soil mounds on the ground differ by 4.5 m and 5 m for the 5th and 7th tombs, respectively, which has caused unequal stress on the tomb structures. In the 7th tomb (the Muryong royal tomb), 435 of 6,025 bricks were broken in 1972, but 1,072 bricks were broken by 1996: the number of broken bricks rose to about 250% of the 1972 figure in just 24 years, and to about 290% in the 6th tomb. The 1996 condition reflects only 24 years, whereas the 1972 condition reflected about 1,450 years, so the state of brick breakage indicates a severe ongoing problem. The eastern wall of the Muryong royal tomb is moving inward at 2.95 mm/yr in the rainy season and 1.52 mm/yr in the dry season; the frontal wall shows the largest movement in the 7th tomb, at 2.05 mm/yr toward the passageway.
The 6th tomb shows the largest movement among the three tombs, at 7.44 mm/yr and 3.61 mm/yr toward the east, consistent with its high brick break rate. Georadar sections of the shallow soil layer reveal several faults in the topsoil layers of the 5th and 7th tombs. Rainwater flowed through these faults into the tombs and the nearby ground, and the high water content of the ground resulted in low resistivity and high humidity inside the tombs. The high humidity, together with high temperature and a moderate light source, created good conditions for algae growth; the 6th tomb is the most severely affected and the 7th tomb the second. Artificial changes to the tomb environment since excavation, infiltration of rainwater and groundwater into the tomb site, and a poor drainage system have left the tomb structures in a dangerous state. The main cause of the problems, including brick breakage, wall movement, and algae growth, is the infiltration of rainwater and groundwater into the tomb site, so protecting the site from high water content should be carried out first. The proposed waterproofing includes a cover system over the tomb site using geotextile, a clay layer, and a geomembrane, and a deep trench extending 2 m below the base of the 5th tomb on the north side of the site. Reducing and balancing the soil weight above the tombs is also needed for the safety of the structures. For the algae inside the tombs, we recommend spraying K101, developed in this study, on the wall surfaces and then exposing them to ultraviolet light for 24 hours. The air-control system should be changed to a constant-temperature, constant-humidity system for the 6th and 7th tombs; placing the system in the frontal room and circulating cold air inside the tombs appears much better for solving the dew problem. The preservation methods suggested above are intended to make the fewest changes to the tomb site while solving the most fundamental problems.
Repairs should be planned in order, and special care is needed for the safety of the tombs during repair work. Finally, a monitoring system measuring the tilting of the tomb walls, water content, groundwater level, temperature, and humidity is required to monitor and evaluate the repair work.
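The quoted increase in broken bricks can be checked directly from the figures in the abstract:

```python
# Brick-damage figures for the Muryong royal tomb (7th tomb):
# 435 of 6,025 bricks broken in 1972, 1,072 broken by 1996.
broken_1972, broken_1996, total = 435, 1072, 6025

rate_1972 = broken_1972 / total        # fraction of bricks broken in 1972
rate_1996 = broken_1996 / total        # fraction broken by 1996
ratio = broken_1996 / broken_1972      # 1996 count as a multiple of 1972

print(f"{rate_1972:.1%} -> {rate_1996:.1%}, ratio {ratio:.2f}x")
```

The ratio of about 2.46 matches the abstract's "about 250%" figure, i.e. the damage accumulated in 24 post-excavation years far outpaces the preceding ~1,450 years.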
