• Title/Summary/Keyword: Cost models

Search Results: 1,967 (processing time: 0.028 seconds)

Revisiting the trilemma of modern welfare states - Application of the fuzzy-set ideal type analysis - (복지국가 트릴레마 양상의 변화 - 퍼지셋 이상형 분석의 적용 -)

  • Shin, Dong-Myeon;Choi, Young Jun
    • Korea Social Policy Review (한국사회정책)
    • /
    • v.19 no.3
    • /
    • pp.119-147
    • /
    • 2012
  • This paper explores whether the trilemma of welfare states remains a valid account of recent welfare-state change. Based on a fuzzy-set ideal type analysis of data from seventeen OECD countries, it examines whether welfare states achieved three core policy objectives - income equality, employment growth, and fiscal discipline - in the service economy between 1981 and 2010. The evidence presented in this paper does not support the trilemma of the service economy, in which only two goals can be pursued successfully at one time at the cost of the remaining goal. The trilemma has held only for countries in the liberal welfare regime, where employment growth and fiscal discipline have been achieved at the cost of income equality. Conservative welfare-state regimes, however, have experienced deteriorating income equality and fiscal restraint since the mid-1980s and appear to have diverged into various models. In the countries of the social democratic welfare regime, the goals of equality and employment have been achieved simultaneously, together with fiscal discipline, since the early 2000s; these countries are classified as the 'perfect model' in this research. Southern European welfare states, including Greece and Italy, are classified as the 'crisis model' and have performed poorly on all three dimensions. On this evidence, the trilemma of welfare states in the service economy does not adequately explain either the policy goals of welfare states or the outcomes of redistributive politics in the service economy.
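The core computation of a fuzzy-set ideal type analysis can be sketched as follows: a case's membership in an ideal type is the minimum over its attribute memberships, with attributes the ideal type excludes negated as 1 - m. The membership scores and type profile below are purely illustrative, not the paper's data.

```python
# Fuzzy-set ideal type membership: minimum over attribute memberships,
# negating (1 - m) the attributes the ideal type excludes.
# All numbers here are hypothetical illustrations.

def ideal_type_membership(memberships, profile):
    """memberships: fuzzy scores in [0, 1];
    profile: True if the ideal type includes the attribute, False if not."""
    return min(m if present else 1.0 - m
               for m, present in zip(memberships, profile))

# hypothetical scores on (income equality, employment growth, fiscal discipline)
country = [0.8, 0.7, 0.3]
# ideal type: strong equality and employment, weak fiscal discipline
print(ideal_type_membership(country, (True, True, False)))
```

A case is then assigned to whichever ideal type yields its highest membership, which is how the 'perfect' and 'crisis' model classifications above arise.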

Building battery deterioration prediction model using real field data (머신러닝 기법을 이용한 납축전지 열화 예측 모델 개발)

  • Choi, Keunho;Kim, Gunwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.243-264
    • /
    • 2018
  • Although the worldwide battery market has recently spurred the development of lithium secondary batteries, lead-acid batteries (rechargeable batteries), which perform well and can be reused, are consumed across a wide range of industries. However, lead-acid batteries have a serious problem: deterioration progresses quickly once even one of the cells packed in a battery begins to degrade. To overcome this problem, previous research has attempted to identify the mechanism of battery deterioration in many ways. Most of it, however, has analyzed data obtained in a laboratory rather than data obtained in the real world. Using real data increases the feasibility and applicability of a study's findings. This study therefore aims to develop a model that predicts battery deterioration using real-world data. To this end, we collected data on changes in battery state by attaching sensors that monitor battery condition in real time to dozens of golf carts operated on a real golf course, obtaining 16,883 samples in total. We then developed a model that predicts a precursor phenomenon of battery deterioration by analyzing the sensor data with machine learning techniques.
As initial independent variables, we used 1) inbound time of a cart, 2) outbound time of a cart, 3) duration (from outbound time to charge time), 4) charge amount, 5) used amount, 6) charge efficiency, 7) lowest temperature of battery cells 1 to 6, 8) lowest voltage of battery cells 1 to 6, 9) highest voltage of battery cells 1 to 6, 10) voltage of battery cells 1 to 6 at the beginning of operation, 11) voltage of battery cells 1 to 6 at the end of charge, 12) used amount of battery cells 1 to 6 during operation, 13) used amount of the battery during operation (max-min), 14) duration of battery use, and 15) highest current during operation. Since the per-cell variables (lowest temperature, lowest voltage, highest voltage, voltage at the beginning of operation, voltage at the end of charge, and used amount during operation of cells 1 to 6) take similar values across cells, we conducted principal component analysis with varimax orthogonal rotation to mitigate multicollinearity. Based on the results, we created new variables by averaging the independent variables that clustered together and used them as the final independent variables in place of the originals, thereby reducing the dimensionality. We used decision tree, logistic regression, and Bayesian network algorithms to build prediction models, and also built models using bagging and boosting of each of these, as well as random forest. Experimental results show that the bagged decision tree model yields the best accuracy, 89.3923%. This study has the limitation that additional variables affecting battery deterioration, such as weather (temperature, humidity) and driving habits, were not considered; we plan to address them in future research.
However, the battery deterioration prediction model proposed in the present study is expected to enable effective and efficient management of batteries used in the field, and to dramatically reduce the costs incurred when battery deterioration goes undetected.
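The pipeline described in this abstract, collapsing correlated per-cell readings into averaged features and then training a bagged decision tree, can be sketched as below. The data is synthetic stand-in data (the study's golf-cart sensor records are not public), and the variable names are assumptions of this sketch.

```python
# Sketch: average correlated per-cell features, then fit a bagged decision
# tree, loosely following the modeling steps the abstract describes.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# six correlated "lowest cell voltage" readings, collapsed into one average
cell_voltages = rng.normal(2.0, 0.1, (n, 6)) + rng.normal(0, 0.02, (n, 1))
avg_voltage = cell_voltages.mean(axis=1, keepdims=True)
charge_eff = rng.uniform(0.7, 1.0, (n, 1))
X = np.hstack([avg_voltage, charge_eff])
y = (avg_voltage[:, 0] < 1.95).astype(int)  # hypothetical deterioration label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50,
                          random_state=0)
model.fit(X_tr, y_tr)
print(f"accuracy: {model.score(X_te, y_te):.3f}")
```

In the study itself the averaging is guided by PCA with varimax rotation rather than done by hand, and bagged decision trees were one of several model families compared.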

Rectal Temperature Maintenance Using a Heat Exchanger of Cardioplegic System in Cardiopulmonary Bypass Model for Rats (쥐 심폐바이패스 모델에서 심정지액 주입용 열교환기를 이용한 직장체온 유지)

  • Choi Se-Hoon;Kim Hwa-Ryong;Paik In-Hyuck;Moon Hyun-Jong;Kim Won-Gon
    • Journal of Chest Surgery
    • /
    • v.39 no.7 s.264
    • /
    • pp.505-510
    • /
    • 2006
  • Background: A small-animal cardiopulmonary bypass (CPB) model would be a valuable tool for investigating pathophysiological and therapeutic strategies on bypass. The main advantages of a small-animal model are reduced cost and time and the fact that it does not require a full-scale operating environment. However, rat CPB models have a number of technical limitations, among them effective maintenance and control of core temperature by a heat exchanger. The purpose of this study is to confirm the effectiveness of rectal temperature maintenance using a heat exchanger from a cardioplegia system in a cardiopulmonary bypass model for rats. Material and Method: The miniature circuit consisted of a reservoir, heat exchanger, membrane oxygenator, and roller pump, with a static priming volume of 40 cc. Ten male Sprague-Dawley rats (mean weight 530 g) were divided into two groups: the heat exchanger (HE) group underwent CPB with a HE from a cardioplegia system, and the control group underwent CPB with warm water circulating around the reservoir. Partial CPB was conducted at a flow rate of 40 mL/kg/min for 20 min after venous cannulation (via the internal jugular vein) and arterial cannulation (via the femoral artery). Rectal temperature was measured after anesthetic induction, after cannulation, and 5, 10, 15, and 20 min after CPB. Arterial blood gas with hematocrit was also analyzed 5 and 15 min after CPB. Result: Rectal temperature change differed between the two groups (p<0.01). Temperatures in the HE group were well maintained during CPB, whereas the control group developed progressive hypothermia. Rectal temperature 20 min after CPB was 36.16±0.32°C in the HE group and 34.22±0.36°C in the control group. Conclusion: We confirmed the effectiveness of rectal temperature maintenance using a heat exchanger from a cardioplegia system in a cardiopulmonary bypass model for rats. This model would be a valuable tool for further hypothermic CPB experiments in rats.

Forecasting Hourly Demand of City Gas in Korea (국내 도시가스의 시간대별 수요 예측)

  • Han, Jung-Hee;Lee, Geun-Cheol
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.17 no.2
    • /
    • pp.87-95
    • /
    • 2016
  • This study examined the characteristics of hourly city gas demand in Korea and proposed multiple regression models to obtain precise estimates of it. Forecasting hourly city gas demand accurately is essential for both safety and cost. If demand is underestimated, pipeline pressure must be increased sharply to meet it, which raises safety concerns; if overestimated, unnecessary inventory and operating costs are incurred. Data analysis showed that hourly city gas demand has very high autocorrelation and that the 24-hour demand pattern of a day follows the 24-hour pattern of the same day one week earlier; that is, there is a weekly cycle. In addition, temperature was found to affect the hourly demand level under certain conditions: the absolute value of the correlation coefficient between hourly demand and temperature is about 0.853 on average, while on specific days it ranges from 0.861 at worst to 0.965 at best. Based on this analysis, this paper proposes a multiple regression model incorporating the demand 24 hours earlier and the demand 168 hours earlier, and a second multiple regression model that adds temperature as an independent variable. To evaluate the proposed models, computational experiments were carried out using real domestic city gas demand data from 2009 to 2013. The first regression model achieved a forecasting accuracy of around 4.5% MAPE (Mean Absolute Percentage Error) over those five years, while the second exhibited 5.13% MAPE for the same period.
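The first regression model above, current demand regressed on the demand 24 and 168 hours earlier, can be sketched on a synthetic series with the daily and weekly cycles the abstract describes. The series and coefficients here are illustrative assumptions, not the paper's data.

```python
# Sketch of the lag-24 / lag-168 regression model on a synthetic hourly
# demand series with daily and weekly cycles plus noise.
import numpy as np

rng = np.random.default_rng(1)
hours = np.arange(24 * 7 * 20)  # 20 weeks of hourly observations
demand = (100 + 20 * np.sin(2 * np.pi * hours / 24)
          + 10 * np.sin(2 * np.pi * hours / 168)
          + rng.normal(0, 2, hours.size))

y = demand[168:]                       # current hourly demand
X = np.column_stack([np.ones(y.size),
                     demand[144:-24],  # demand 24 hours earlier
                     demand[:-168]])   # demand 168 hours earlier
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ beta
mape = np.mean(np.abs((y - pred) / y)) * 100
print(f"MAPE: {mape:.2f}%")
```

Because the lag-168 term carries the full weekly pattern, even this toy fit lands in the low single-digit MAPE range, consistent in spirit with the ~4.5% reported for the real data.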

Multi-task Learning Based Tropical Cyclone Intensity Monitoring and Forecasting through Fusion of Geostationary Satellite Data and Numerical Forecasting Model Output (정지궤도 기상위성 및 수치예보모델 융합을 통한 Multi-task Learning 기반 태풍 강도 실시간 추정 및 예측)

  • Lee, Juhyun;Yoo, Cheolhee;Im, Jungho;Shin, Yeji;Cho, Dongjin
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.5_3
    • /
    • pp.1037-1051
    • /
    • 2020
  • Accurate monitoring and forecasting of tropical cyclone (TC) intensity can effectively reduce the overall costs of disaster management. In this study, we proposed a multi-task learning (MTL) based deep learning model for real-time TC intensity estimation and forecasting with lead times of 6-12 hours, based on the fusion of geostationary satellite images and numerical forecast model output. A total of 142 TCs that developed in the Northwest Pacific from 2011 to 2016 were used. Communication, Ocean and Meteorological Satellite (COMS) Meteorological Imager (MI) data were used to extract typhoon images, and the Climate Forecast System version 2 (CFSv2) provided by the National Centers for Environmental Prediction (NCEP) was used to extract atmospheric and oceanic forecast data. Two schemes with different input variables were tested: scheme 1 used only satellite-based input data, while scheme 2 used both satellite images and numerical forecast model output. For real-time TC intensity estimation, both schemes exhibited similar performance. For TC intensity forecasting with lead times of 6 and 12 hours, scheme 2 improved the root mean squared error (RMSE) by 13% and 16%, respectively, compared to scheme 1. Relative root mean squared errors (rRMSE) were less than 30% for most intensity levels, and lower mean absolute error (MAE) and RMSE were found for the lower TC intensity levels. In tests on typhoon HALONG in 2014, scheme 1 tended to overestimate intensity by about 20 kts at the early development stage; scheme 2 reduced this error to an overestimation of about 5 kts. The MTL models also reduced computational cost to roughly one third of that of single-task models, suggesting the feasibility of rapid production of TC intensity forecasts.
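The computational saving of multi-task learning comes from one shared representation feeding several task heads (estimation plus each forecast lead time) instead of training a full network per task. A toy forward pass makes the arrangement concrete; the layer sizes and task names below are assumptions of this sketch, and the paper's actual model is a deep convolutional network.

```python
# Toy multi-task head arrangement: one shared trunk, three task outputs
# (current-intensity estimate, 6 h forecast, 12 h forecast).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 32))          # 4 samples of fused satellite/NWP features
W_shared = rng.normal(size=(32, 16))  # shared trunk, computed once per sample
heads = {name: rng.normal(size=(16, 1))
         for name in ("estimate", "forecast_6h", "forecast_12h")}

h = np.maximum(0, x @ W_shared)       # shared ReLU representation
outputs = {name: h @ W for name, W in heads.items()}
for name, out in outputs.items():
    print(name, out.shape)
```

Since the expensive trunk is evaluated once and only the small heads differ per task, serving three tasks costs little more than serving one, which is the effect behind the roughly threefold cost reduction reported above.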

Effects of Private Insurance on Medical Expenditure (민간의료보험 가입이 의료이용에 미치는 영향)

  • Yun, Hee Suk
    • KDI Journal of Economic Policy
    • /
    • v.30 no.2
    • /
    • pp.99-128
    • /
    • 2008
  • Nearly all Koreans are insured through National Health Insurance (NHI). While NHI coverage is nearly universal, it is not complete. Coverage is largely limited to a minimal level of hospital and physician expenses, and copayments are required in each case. As a result, Korea's public insurance system covers roughly 50% of overall individual health expenditures; the remaining 50% consists of copayments for basic services and spending on services that are either not covered or poorly covered by the public system. In response to these gaps in the public system, 64% of the Korean population holds supplemental private health insurance. The expansion of private health insurance raises a negative externality issue. Like public financing schemes in other countries, the Korean system imposes cost-sharing on patients as a strategy for controlling utilization. Because most private policies reimburse patients for their out-of-pocket payments, supplemental insurance is likely to negate the impact of this policy, raising both total and public-sector health spending. Most empirical analysis of supplemental health insurance to date has focused on the US Medigap programme, finding that those with supplements apparently consume more health care. Two explanations for this higher consumption suggest themselves. One is the moral hazard effect: by eliminating copayments and deductibles, supplements reduce the marginal price of care and induce additional consumption. The other is that supplements are purchased by those who anticipate high health expenditures - the adverse selection effect. The main issue addressed has been separating the moral hazard effect from the adverse selection effect. The general conclusion is that the evidence on adverse selection based on observable variables is mixed.
This article investigates the extent to which private supplementary insurance affects the use of health care services by public health insurance enrollees, using Korean administrative data and private-supplement data collected from all relevant private insurance companies. I applied a multivariate two-part model to analyze the effects of various types of supplements on the likelihood and level of public health insurance spending, and estimated the marginal effects of supplements. Separate models were estimated for inpatient and outpatient public insurance spending. The first part of the model estimated the likelihood of positive spending using probit regression, and the second part estimated the log of spending for those with positive spending. Detailed information on individuals' public health insurance from administrative data and on private insurance status from insurance companies made it possible to control for health status, the types of supplemental insurance held by these individuals, and other factors that explain spending variations across supplemental insurance categories when isolating the effects of supplemental insurance. Data from 2004 to 2006 were used. This study found that private insurance increased the probability of a physician visit by less than 1 percent and of a hospital admission by about 1 percent. However, supplemental insurance was not found to be associated with greater health care service utilization: two-part models of health care utilization and expenditures showed that those without supplemental insurance had higher inpatient and outpatient expenditures than those with supplements, even after controlling for observable differences.
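A two-part model of the kind described can be sketched as follows: part 1 models the probability of any spending, part 2 models log spending among spenders, and the two multiply into an expected-spending prediction. The data below is synthetic, and the first part uses a logit as a stand-in for the paper's probit; coefficient values and variable names are assumptions of this sketch.

```python
# Two-part model sketch: P(any spending) x E[spending | spending > 0],
# on synthetic data with a supplemental-insurance indicator.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
n = 5000
has_supplement = rng.integers(0, 2, n)
health = rng.normal(size=n)  # latent health status control
# synthetic data-generating process (illustrative coefficients)
p_spend = 1 / (1 + np.exp(-(-0.5 + 0.2 * has_supplement + 0.8 * health)))
spends = rng.random(n) < p_spend
log_spend = 3 + 0.1 * has_supplement + 0.5 * health + rng.normal(0, 0.3, n)

X = np.column_stack([has_supplement, health])
part1 = LogisticRegression().fit(X, spends)          # likelihood of spending
part2 = LinearRegression().fit(X[spends], log_spend[spends])  # log level
expected = part1.predict_proba(X)[:, 1] * np.exp(part2.predict(X))
print(f"mean expected spending: {expected.mean():.1f}")
```

Retransforming `exp(predicted log)` without a smearing correction understates the conditional mean, which is acceptable for a sketch but something a careful implementation of the paper's method would address.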


Analysis of Overseas LNG Bunkering Business Model (해외 LNG벙커링 비즈니스 모델 분석)

  • Kim, Ki-Dong;Park, So-Jin;Choi, Kyoung-Sik;Cho, Byung-Hak;Oh, Yong-Sam;Cho, Sang-Hoon;Cha, Keunng-Jong;Cho, Won-Jun;Seong, Hong-Gun
    • Journal of the Korean Institute of Gas
    • /
    • v.22 no.1
    • /
    • pp.37-44
    • /
    • 2018
  • As the International Maritime Organization tightens emission regulations for vessels, many countries and companies are pushing LNG fuel as a long-term solution to ships' emission problems. As a study of how LNG bunkering business is conducted around the world, this work analyzed business models in major countries, including Japan, China, Singapore, Europe, and the United States. The results are as follows. China first established a nation-centered LNG bunkering policy, and the state and the energy companies have since cooperated in carrying out LNG bunkering business for LNG-fueled ships. Some countries in Europe and the United States conduct LNG bunkering business mainly through private companies. To obtain LNG fuel cheaper than bunker C, private companies operate a business model of bunkering their own LNG-fueled ships, securing highly price-competitive LNG through partnerships with midstream operators such as LNG terminals and natural gas liquefaction plants. Although LNG bunkering business worldwide is centered on private companies rather than public corporations, it is expected to concentrate in large energy companies because of the initial cost required to build LNG bunkering infrastructure. Three LNG bunkering business models (the TOTE model, the Shell model, and the ENGIE model) are currently being developed. The way LNG bunkering business is implemented was found to differ by country, depending on the enterprise and national policy.

Selective Word Embedding for Sentence Classification by Considering Information Gain and Word Similarity (문장 분류를 위한 정보 이득 및 유사도에 따른 단어 제거와 선택적 단어 임베딩 방안)

  • Lee, Min Seok;Yang, Seok Woo;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.105-122
    • /
    • 2019
  • Dimensionality reduction is one of the methods for handling big data in text mining. For dimensionality reduction, we should consider the density of the data, which has a significant influence on sentence classification performance. Higher-dimensional data requires many computations, which can lead to high computational cost and overfitting in the model; a dimension reduction step is therefore necessary to improve model performance. Diverse methods have been proposed, from merely reducing noise such as misspellings or informal text to incorporating semantic and syntactic information. Moreover, the representation and selection of text features affect classifier performance for sentence classification, one of the fields of Natural Language Processing. The common goal of dimension reduction is to find a latent space that is representative of the raw data from the observation space. Existing methods use various algorithms for dimensionality reduction, such as feature extraction and feature selection. In addition to these algorithms, word embeddings, which learn low-dimensional vector space representations of words that capture semantic and syntactic information, are also used. To improve performance, recent studies have suggested modifying the word dictionary according to positive and negative scores of pre-defined words. The basic idea of this study is that similar words have similar vector representations: once a feature selection algorithm identifies unimportant words, we assume that words similar to them also have little impact on sentence classification. This study proposes two ways to achieve more accurate classification, conducting selective word elimination under specific rules and constructing word embeddings based on Word2Vec.
To select words of low importance from the text, we use the information gain algorithm to measure importance and cosine similarity to search for similar words. First, we eliminate words with comparatively low information gain values from the raw text and form word embeddings. Second, we additionally remove words similar to those with low information gain values and build word embeddings. Finally, the filtered text and word embeddings are fed to deep learning models: a Convolutional Neural Network and an attention-based bidirectional LSTM. This study uses customer reviews of Kindle products on Amazon.com, IMDB, and Yelp as datasets, classifying each with the deep learning models. Reviews that received more than five helpful votes and whose ratio of helpful votes exceeded 70% were classified as helpful reviews. Since Yelp shows only the number of helpful votes, we extracted 100,000 reviews with more than five helpful votes by random sampling from 750,000 reviews. Minimal preprocessing, such as removing numbers and special characters, was applied to each dataset. To evaluate the proposed methods, we compared their performance against Word2Vec and GloVe embeddings that used all the words, and showed that one of the proposed methods outperforms the all-words embeddings. Removing unimportant words improves performance, but removing too many words lowers it. Future research should consider diverse preprocessing approaches and in-depth analysis of word co-occurrence for measuring similarity between words. We also applied the proposed method only with Word2Vec; other embedding methods such as GloVe, fastText, and ELMo could be combined with the proposed elimination methods, making it possible to explore the combinations of embedding and elimination methods.
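The two-stage elimination can be sketched on a toy vocabulary: stage 1 drops words with low information gain, and stage 2 also drops words whose embeddings are cosine-similar to the dropped ones. The words, scores, vectors, and the 0.05/0.95 thresholds below are illustrative assumptions; the study computes all of these from its corpora.

```python
# Sketch of information-gain + cosine-similarity word elimination
# on a toy vocabulary with hand-picked scores and vectors.
import numpy as np

info_gain = {"excellent": 0.90, "plot": 0.40, "the": 0.01, "a": 0.30}
embeddings = {
    "excellent": np.array([1.0, 0.0, 0.0, 0.0]),
    "plot":      np.array([0.0, 1.0, 0.0, 0.0]),
    "the":       np.array([0.0, 0.0, 1.0, 0.0]),
    "a":         np.array([0.0, 0.0, 0.99, 0.1]),  # nearly parallel to "the"
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

low_ig = {w for w, ig in info_gain.items() if ig < 0.05}        # stage 1
similar = {w for w in embeddings if w not in low_ig
           and any(cosine(embeddings[w], embeddings[d]) > 0.95
                   for d in low_ig)}                             # stage 2
removed = low_ig | similar
print(sorted(removed))  # ['a', 'the']
```

Note that "a" survives the information-gain filter but is removed in stage 2 because its vector nearly parallels that of "the", which is exactly the extension this study proposes.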

The Business Model & Feasibility Analysis of the Han-Ok Residential Housing Block (한옥주거단지 사업모델구상 및 타당성 분석)

  • Choi, Sang-Hee;Song, Ki-Wook;Park, Sin-Won
    • Land and Housing Review
    • /
    • v.2 no.4
    • /
    • pp.453-461
    • /
    • 2011
  • This study derives a project model based on potential demand for Korean-style houses (Han-ok), focusing on the new-town detached housing sites that LH supplies, tests the validity of the derived model, and presents directions and supply methods for such projects. The existing high-end new-town Korean-style housing developments under consideration were found to have little business value due to poor choice of location and mismatch with demand, so six project types were established through changes in planned scale, mixed use, and subdivision of plots, based on survey results. The type with the highest business value among the project models was block-type multifamily housing, which can be interpreted as the increase in total construction area raising sales revenues through economies of scale. The feasibility of a mass housing model combining small Korean-style houses with amenities was found to be high, and under the same project conditions as the block-type multifamily housing, Korean-style tenement houses also showed high business value. The high-end housing models within block-type detached housing areas are typical projects promoted by the private sector, and their construction cost was the highest, at 910 million won per house. To enhance the business value of Korean-style housing development, collectivization is needed: appropriate choice of location, diversification of demand classes, optimization of house sizes, and combination of uses. To introduce Korean-style houses into detached housing sites, the existing planned plots need to be adjusted and divided, and strategies to meet new demand by supplying site types suited to Korean-style housing can be suggested.
Also, breaking away from the existing uniform residential development methods, development by supplying original land - undeveloped natural land provided with only basic infrastructure (main roads, water, and sewage) - can be considered. Since buildings of more than one or two stories are impracticable given the structure of the Korean-style roof and timber framework, original land in the form of hilly terrain appears most suitable for large-scale development projects.

Sorghum Panicle Detection using YOLOv5 based on RGB Image Acquired by UAV System (무인기로 취득한 RGB 영상과 YOLOv5를 이용한 수수 이삭 탐지)

  • Park, Min-Jun;Ryu, Chan-Seok;Kang, Ye-Seong;Song, Hye-Young;Baek, Hyun-Chan;Park, Ki-Su;Kim, Eun-Ri;Park, Jin-Ki;Jang, Si-Hyeong
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.24 no.4
    • /
    • pp.295-304
    • /
    • 2022
  • The purpose of this study is to detect sorghum panicles using YOLOv5 on RGB images acquired by an unmanned aerial vehicle (UAV) system. The high-resolution images acquired with the RGB camera mounted on the UAV on September 2, 2022 were split into 512×512 tiles for YOLOv5 analysis, and sorghum panicles were labeled as bounding boxes in the split images. 2,000 images of 512×512 size were divided at a ratio of 6:2:2 and used to train, validate, and test the YOLOv5 model, respectively. When training with YOLOv5s, which has the fewest parameters among the YOLOv5 models, sorghum panicles were detected with mAP@50 = 0.845; with YOLOv5m, which has more parameters, they were detected with mAP@50 = 0.844. Although the two models perform similarly, YOLOv5s trains faster (4 hours 35 minutes) than YOLOv5m (5 hours 15 minutes), so in terms of time cost, developing the YOLOv5s model was considered more efficient for detecting sorghum panicles. As an important step toward predicting sorghum yield, this work presents a technique for detecting sorghum panicles using high-resolution RGB images and the YOLOv5 model.
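The tiling step described above, splitting a high-resolution UAV mosaic into 512×512 patches, can be sketched as follows. The image dimensions are a synthetic stand-in, and bounding-box label handling (clipping boxes to tile borders) is omitted.

```python
# Sketch of the preprocessing step: split a high-resolution UAV image
# into non-overlapping 512x512 tiles for YOLOv5.
import numpy as np

def split_tiles(image, tile=512):
    """Split an H x W x C image into tile x tile patches, discarding any
    partial tiles at the right/bottom edges."""
    h, w = image.shape[:2]
    return [image[r:r + tile, c:c + tile]
            for r in range(0, h - tile + 1, tile)
            for c in range(0, w - tile + 1, tile)]

img = np.zeros((2048, 3072, 3), dtype=np.uint8)  # stand-in for an RGB mosaic
tiles = split_tiles(img)
print(len(tiles), tiles[0].shape)  # 24 (512, 512, 3)
```

Training would then point YOLOv5's `train.py` at the labeled tiles (e.g. `--img 512 --weights yolov5s.pt`); the dataset configuration file that step needs is outside this sketch.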