• Title/Abstract/Keyword: Cost approach


Evaluation of the Measurement Uncertainty from the Standard Operating Procedures (SOP) of the National Environmental Specimen Bank (국가환경시료은행 생태계 대표시료의 채취 및 분석 표준운영절차에 대한 단계별 측정불확도 평가 연구)

  • Lee, Jongchun;Lee, Jangho;Park, Jong-Hyouk;Lee, Eugene;Shim, Kyuyoung;Kim, Taekyu;Han, Areum;Kim, Myungjin
    • Journal of Environmental Impact Assessment
    • /
    • v.24 no.6
    • /
    • pp.607-618
    • /
    • 2015
  • Five years have passed since the first set of environmental samples was taken in 2011 to represent various ecosystems, so that future generations can trace back to the past environment. Those samples have been preserved cryogenically in the National Environmental Specimen Bank (NESB) at the National Institute of Environmental Research. Although a strict regulation (the SOP, standard operating procedure) governs the whole sampling procedure to ensure that each sample represents its sampling area, the procedure had not been put to the test for validation. This question needs to be answered to clear any doubts about the representativeness and quality of the samples. To address it and verify the sampling practice set out in the SOP, the steps leading to a measurement on a sample, from sampling in the field to chemical analysis in the lab, were broken down so that the uncertainty could be evaluated at each level. Of the eight species currently collected for cryogenic preservation in the NESB, pine tree samples from two different sites were selected for this study. Duplicate samples were taken from each site according to the sampling protocol, followed by duplicate analyses carried out on each discrete sample. The uncertainties were evaluated by robust ANOVA; two levels of uncertainty, one from the sampling practice and the other from the analytical process, were then combined to give the measurement uncertainty of a measured concentration of the measurand. As a result, it was confirmed that the sampling practice, not the analytical process, accounts for most of the measurement uncertainty. Based on this top-down approach to measurement uncertainty, the efficient way to ensure the representativeness of the sample was to increase the quantity of each discrete sample used to make a composite sample, rather than to increase the number of discrete samples across the site.
Furthermore, a cost-effective improvement in the confidence level of the measurement can be expected from efforts to lower the sampling uncertainty, not the analytical uncertainty. To test the representativeness of a composite sample of a sampling area, the variance across the site should be less than the variance from duplicate sampling. For that, the criterion $s^2_{geochem}$ (across-site variance) < $s^2_{samp}$ (variance at the sampling location) was proposed. In light of this criterion, the two representative samples for the two study areas passed the requirement. In contrast, whenever the variance among the sampling locations (i.e., across the site) is larger than the sampling variance, more sampling increments need to be added within the sampling area until the requirement for representativeness is achieved.
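The two-level evaluation described above can be sketched with a balanced duplicate design (two duplicate samples per location, two analyses per sample). The sketch below uses classical ANOVA on invented numbers rather than the robust ANOVA the paper applies, and all variable names are assumptions:

```python
import numpy as np

# Hypothetical duplicate design: at each location, two duplicate samples,
# each analyzed twice. Shape: (locations, 2 samples, 2 analyses).
data = np.array([
    [[10.1, 10.3], [11.0, 10.8]],
    [[ 9.5,  9.7], [ 9.9, 10.1]],
    [[12.0, 12.4], [11.5, 11.7]],
])

# Analytical variance: pooled variance of duplicate analyses within samples.
d_anal = data[:, :, 0] - data[:, :, 1]
s2_analysis = np.mean(d_anal**2 / 2)

# Sampling variance: variance between duplicate-sample means, minus the
# analytical contribution (each sample mean carries s2_analysis / 2).
sample_means = data.mean(axis=2)
d_samp = sample_means[:, 0] - sample_means[:, 1]
s2_sampling = max(np.mean(d_samp**2 / 2) - s2_analysis / 2, 0.0)

# The measurement uncertainty combines both components.
s2_measurement = s2_sampling + s2_analysis
print(s2_analysis, s2_sampling, s2_measurement)
```

With these invented numbers the sampling component dominates, mirroring the paper's finding that sampling, not analysis, drives the measurement uncertainty.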

A Study of Family Caregiver's Burden for the Terminally Ill Patients (지역사회 말기질환자 가족 부담감에 관한 연구)

  • Han, Sung-Suk;Ro, You-Ja;Yang, Soo;Yoo, Yang-Sook;Kim, Sek-Il;Hwang, Hee-Hyung
    • Journal of Home Health Care Nursing
    • /
    • v.10 no.1
    • /
    • pp.58-72
    • /
    • 2003
  • The purpose of this study was to describe the burden perceived by caregivers of terminally ill patients and to analyze its relationship with various demographics, illness characteristics, family relationships, and economic factors of the family and patients. The sample comprised 132 caregivers caring for terminally ill patients in Kyung-Gi province and Seoul, Korea. The study period was from August to September 2002. The perceived burden of the family caregiver was measured by the burden scale (20 items, 4-point scale) developed by Montgomery et al. (1985). The data were analyzed with SAS using t-tests and ANOVA. The results were as follows: 1. The mean family caregiver's burden score was 3.02, indicating that caregivers perceived a severe level of burden. The highest-scoring items were 'I feel it is painful to watch the patient's disease' (3.77), 'I feel afraid of what the future holds for my patient' (3.66), and 'I feel it has reduced my amount of private time' (3.64). 2. The caregiver's burden was significantly related to the patient's gender (F=3.17, p=0.0020), the patient's job (F=2.49, p=0.0476), the caregiver's age (F=4.29, p=0.0030), and the caregiver's job (F=2.49, p=0.0476). 3. The caregiver's burden showed no significant difference according to illness characteristics. 4. The caregiver's burden was significantly associated with the patient's family relationship (F=4.05, p=0.0041) and the mean daily period of patient care (F=47.18,
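The group comparisons reported above (for example, burden by caregiver age group) rest on one-way ANOVA. A minimal sketch with hypothetical burden scores, since the original SAS data are not available:

```python
from scipy import stats

# Hypothetical mean burden scores (4-point scale) for three caregiver
# age groups; invented for illustration only.
group_30s = [2.8, 3.1, 2.9, 3.3, 3.0]
group_40s = [3.2, 3.4, 3.1, 3.5, 3.3]
group_50s = [3.6, 3.5, 3.8, 3.4, 3.7]

# One-way ANOVA tests whether the group means differ significantly.
f_stat, p_value = stats.f_oneway(group_30s, group_40s, group_50s)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```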


Corporate Bond Rating Using Various Multiclass Support Vector Machines (다양한 다분류 SVM을 적용한 기업채권평가)

  • Ahn, Hyun-Chul;Kim, Kyoung-Jae
    • Asia pacific journal of information systems
    • /
    • v.19 no.2
    • /
    • pp.157-178
    • /
    • 2009
  • Corporate credit rating is a very important factor in the market for corporate debt. Information concerning corporate operations is often disseminated to market participants through the changes in credit ratings that are published by professional rating agencies, such as Standard and Poor's (S&P) and Moody's Investor Service. Since these agencies generally require a large fee for the service, and the periodically provided ratings sometimes do not reflect the default risk of the company at the time, it may be advantageous for bond-market participants to be able to classify credit ratings before the agencies actually publish them. As a result, it is very important for companies (especially financial companies) to develop a proper model of credit rating. From a technical perspective, credit rating constitutes a typical multiclass classification problem, because rating agencies generally have ten or more categories of ratings. For example, S&P's ratings range from AAA for the highest-quality bonds to D for the lowest-quality bonds. The professional rating agencies emphasize the importance of analysts' subjective judgments in the determination of credit ratings. However, in practice, a mathematical model that uses the financial variables of companies plays an important role in determining credit ratings, since it is convenient to apply and cost-efficient. These financial variables include ratios that represent a company's leverage status, liquidity status, and profitability status. Several statistical and artificial intelligence (AI) techniques have been applied as tools for predicting credit ratings. Among them, artificial neural networks are most prevalent in the area of finance because of their broad applicability to many business problems and their preeminent ability to adapt.
However, artificial neural networks also have many defects, including the difficulty of determining the values of the control parameters and the number of processing elements in each layer, as well as the risk of over-fitting. Of late, because of their robustness and high accuracy, support vector machines (SVMs) have become popular as a solution for problems requiring accurate prediction. An SVM's solution may be globally optimal because SVMs seek to minimize structural risk, whereas artificial neural network models may tend to find locally optimal solutions because they seek to minimize empirical risk. In addition, no parameters need to be tuned in SVMs, barring the upper bound for non-separable cases in linear SVMs. However, since SVMs were originally devised for binary classification, they are not intrinsically geared for multiclass classifications such as credit ratings, and researchers have therefore tried to extend the original SVM to multiclass classification. A variety of techniques to extend standard SVMs to multiclass SVMs (MSVMs) have been proposed in the literature, but only a few types of MSVM have been tested in prior studies applying MSVMs to credit ratings. In this study, we examined six different MSVM techniques: (1) One-Against-One, (2) One-Against-All, (3) DAGSVM, (4) ECOC, (5) the method of Weston and Watkins, and (6) the method of Crammer and Singer. In addition, we examined the prediction accuracy of some modified versions of conventional MSVM techniques. To find the most appropriate MSVM technique for corporate bond rating, we applied all of the techniques to a real-world case of credit rating in Korea. Corporate bond rating is the most frequently studied area of credit rating for specific debt issues or other financial obligations. For our study, the research data were collected from National Information and Credit Evaluation, Inc., a major bond-rating company in Korea.
The data set comprises the bond ratings for the year 2002 and various financial variables for 1,295 companies from the manufacturing industry in Korea. We compared the results of these techniques with one another and with those of traditional methods for credit rating, such as multiple discriminant analysis (MDA), multinomial logistic regression (MLOGIT), and artificial neural networks (ANNs). As a result, we found that DAGSVM with an ordered list was the best approach for the prediction of bond ratings. In addition, we found that the modified version of the ECOC approach can yield higher prediction accuracy for cases showing clear patterns.
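Several of the multiclass strategies compared in the study are available off the shelf. The sketch below contrasts One-Against-One, One-Against-All, and an ECOC-style classifier on synthetic data standing in for the non-public financial ratios and rating classes; the feature and class counts are assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import (OneVsOneClassifier, OneVsRestClassifier,
                                OutputCodeClassifier)
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for financial-ratio features and five rating classes.
X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

strategies = {
    "One-Against-One": OneVsOneClassifier(SVC(kernel="rbf", C=10)),
    "One-Against-All": OneVsRestClassifier(SVC(kernel="rbf", C=10)),
    "ECOC": OutputCodeClassifier(SVC(kernel="rbf", C=10), code_size=2,
                                 random_state=0),
}
scores = {}
for name, strategy in strategies.items():
    clf = make_pipeline(StandardScaler(), strategy)  # scale, then classify
    clf.fit(X_tr, y_tr)
    scores[name] = clf.score(X_te, y_te)
print(scores)
```

DAGSVM and the Weston-Watkins and Crammer-Singer formulations are not in scikit-learn's standard toolbox and would need custom implementations.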

Personal Information Overload and User Resistance in the Big Data Age (빅데이터 시대의 개인정보 과잉이 사용자 저항에 미치는 영향)

  • Lee, Hwansoo;Lim, Dongwon;Zo, Hangjung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.1
    • /
    • pp.125-139
    • /
    • 2013
  • Big data refers to data that cannot be processed with conventional data technologies. As smart devices and social network services produce vast amounts of data, big data attracts much attention from researchers. There are strong demands from governments and industries for big data, as it can create new value by drawing business insights from data. Since various new technologies to process big data have been introduced, academic communities also show much interest in the big data domain. Notable advances related to big data technology have been made in various fields. Big data technology makes it possible to access, collect, and save individuals' personal data. These technologies enable the analysis of huge amounts of data at lower cost and in less time than is possible with traditional methods. They can even detect personal information that people do not want to disclose. Therefore, people using information technology such as the Internet or online services have some level of privacy concern, and such feelings can hinder continued use of information systems. For example, SNS offers various benefits, but users are sometimes highly exposed to privacy intrusions because they post too much personal information on it. Even though users post their personal information on the Internet themselves, the data is sometimes not under their control. Once private data is posted on the Internet, it can be transferred anywhere with a few clicks and can be abused to create a fake identity. In this way, privacy intrusion happens. This study aims to investigate how perceived personal information overload in SNS affects users' risk perception and information privacy concerns, and examines the relationship between those concerns and user resistance behavior. A survey approach and structural equation modeling are employed for data collection and analysis.
This study contributes meaningful insights for academic researchers and policy makers who are planning to develop guidelines for privacy protection. The study shows that information overload on social network services can significantly increase users' perceived level of privacy risk. In turn, perceived privacy risk leads to an increased level of privacy concern. If privacy concerns increase, users may form a negative or resistant attitude toward system use, and this resistant attitude may lead them to discontinue the use of social network services. Furthermore, information overload affects privacy concerns through the mediation of perceived risk rather than directly. This implies that resistance to system use can be diminished by reducing users' perceived risks. Given that users' resistant behavior becomes salient when they have high privacy concerns, measures to alleviate users' privacy concerns should be conceived. This study makes an academic contribution by integrating traditional information overload theory and user resistance theory to investigate perceived privacy concerns in current IS contexts; there is little big data research that has examined the technology with an empirical and behavioral approach, as the topic has only just emerged. It also makes practical contributions. Information overload leads to an increased level of perceived privacy risk and to discontinued use of the information system. To keep users from leaving the system, organizations should develop systems in which private data can be controlled and managed with ease. This study suggests that actions to lower the level of perceived risk and privacy concerns should be taken to support information systems continuance.
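The mediation structure described above (overload raises perceived risk, which raises privacy concern) can be illustrated with a simple Baron-Kenny-style regression decomposition on simulated scores; all coefficients and variable names here are invented for illustration, not the paper's SEM estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated survey scores following the hypothesized structure:
# information overload -> perceived risk -> privacy concern.
overload = rng.normal(size=n)
risk = 0.6 * overload + rng.normal(scale=0.8, size=n)
concern = 0.7 * risk + 0.05 * overload + rng.normal(scale=0.8, size=n)

def ols(y, *xs):
    """Return OLS coefficients [intercept, b1, b2, ...]."""
    X = np.column_stack([np.ones(len(y))] + list(xs))
    return np.linalg.lstsq(X, y, rcond=None)[0]

total = ols(concern, overload)[1]            # c: overload -> concern
a = ols(risk, overload)[1]                   # a: overload -> risk
_, b, direct = ols(concern, risk, overload)  # b and c' (direct effect)
indirect = a * b                             # mediated (indirect) effect
print(round(total, 3), round(direct, 3), round(indirect, 3))
```

For linear OLS with a single mediator, the total effect decomposes exactly into the direct and indirect parts; here the indirect path dominates, matching the mediation claim.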

A Study on the Decision Factors for AI-based SaMD Adoption Using Delphi Surveys and AHP Analysis (델파이 조사와 AHP 분석을 활용한 인공지능 기반 SaMD 도입 의사결정 요인에 관한 연구)

  • Byung-Oh Woo;Jay In Oh
    • The Journal of Bigdata
    • /
    • v.8 no.1
    • /
    • pp.111-129
    • /
    • 2023
  • With the diffusion of digital innovation, the adoption of innovative medical technologies based on artificial intelligence is increasing in the medical field. This is driving the launch and adoption of AI-based SaMD (Software as a Medical Device), but there is a lack of research on the factors that influence the adoption of SaMD by medical institutions. The purpose of this study is to identify the key factors that influence medical institutions' decisions to adopt AI-based SaMDs and to analyze the weights and priorities of these factors. For this purpose, we conducted Delphi surveys based on the results of literature studies on technology acceptance models in the healthcare industry, medical AI, and SaMD, and developed a research model by combining the HOTE (Human, Organization, Technology, and Environment) framework and the HABIO (Holistic Approach: Business, Information, Organizational) framework. Based on the research model with 5 main criteria and 22 sub-criteria, we conducted an AHP (Analytic Hierarchy Process) analysis among experts from domestic medical institutions and SaMD providers to empirically analyze SaMD adoption factors. The results showed that the priority of the main criteria for determining the adoption of AI-based SaMD was, in order: technical factors, economic factors, human factors, organizational factors, and environmental factors. The priority of the sub-criteria was, in order: reliability, cost reduction, medical staff's acceptance, safety, top management's support, security, and licensing and regulatory levels. Specifically, technical factors such as reliability, safety, and security were found to be the most important factors for SaMD adoption. In addition, comparisons of the weights and priorities across groups showed that the SaMD adoption factors varied by type of institution, type of medical institution, and type of job within the medical institution.
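The AHP weighting step can be sketched as follows: given a pairwise comparison matrix on Saaty's 1-9 scale, the priority weights are the normalized principal eigenvector, and a consistency ratio checks the coherence of the judgments. The matrix below is hypothetical, chosen so its ordering mirrors the reported priority (technical first, environmental last):

```python
import numpy as np

# Hypothetical pairwise comparisons for the five main criteria (technical,
# economic, human, organizational, environmental); not the paper's data.
A = np.array([
    [1,   2,   3,   4,   5],
    [1/2, 1,   2,   3,   4],
    [1/3, 1/2, 1,   2,   3],
    [1/4, 1/3, 1/2, 1,   2],
    [1/5, 1/4, 1/3, 1/2, 1],
])

# Priority weights: normalized principal eigenvector of the matrix.
eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio: CI = (lambda_max - n) / (n - 1); Saaty's RI for n=5 is 1.12.
n = A.shape[0]
lambda_max = eigvals[k].real
cr = ((lambda_max - n) / (n - 1)) / 1.12
print(weights.round(3), round(cr, 3))
```

A consistency ratio below 0.1 is conventionally taken to mean the pairwise judgments are acceptably consistent.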

Modeling Brand Equity for Lifestyle Brand Extensions: A Strategic Approach into Generation Y vs. Baby Boomer (生活方式品牌扩张的品牌资产建模: 针对Y世代和婴儿潮消费者的战略路径)

  • Kim, Eun-Young;Brandon, Lynn
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.1
    • /
    • pp.35-48
    • /
    • 2010
  • Today, the fashion market, challenged by a maturing retail market, needs a new paradigm in the "evolution of brand" to improve comparative advantages. An important issue in fashion marketing is lifestyle brand extension, with the specific aim of meeting consumers' needs for their changing lifestyles. For fashion brand extensions into lifestyle product categories, Gen Y and Baby Boomers are emerging as "prospects": Baby Boomers who are renovating their lifestyles, and Gen Y consumers experiencing changes in their life stage, both with demands for buying new products. Therefore, it is imperative that apparel companies pay special attention to these consumer cohorts when extending brands, in order to create and manage their brand equity in a new product category. The purposes of this study are to (a) evaluate brand equity between parent and extension brands; (b) identify consumers' perceived marketing elements for brand extension; and (c) estimate a structural equation model examining the causal relationships between marketing elements and brand equity for brand extensions into a lifestyle product category, including home fashion items, for the two selected groups (Gen Y and Baby Boomer). For its theoretical framework, this study focused on the traditional marketing 4P's mix to identify which marketing elements are most importantly related to brand extension equity. It is assumed that comparable marketing capability is critical to establishing brand extension equity, which leads to successfully entering new categories. Drawing from the relevant literature, this study developed research hypotheses incorporating brand equity factors and marketing elements, focusing on the selected consumers (Gen Y and Baby Boomer). In the context of brand extension into lifestyle products, the constructs of brand equity consist of brand awareness/association, brand perceptions (perceived quality, emotional value), and brand resonance, adapted from the CBBE factors (Keller, 2001).
It is postulated that the marketing elements create brand extension equity in terms of brand awareness/association and brand perceptions through the brand extension into lifestyle products, which in turn influence brand resonance. For data collection, the sample comprised Korean female consumers in the Gen Y and Baby Boomer cohorts, who have a high demand for lifestyle products due to their changing life cycles. A total of 651 usable questionnaires were obtained from female consumers of Gen Y (n=326) and Baby Boomer (n=325) in South Korea. Structural and measurement models using a correlation matrix were estimated with LISREL 8.8. Findings indicated that the perceived marketing elements for brand extension consisted of three factors: price/store image, product, and advertising. In the model for Gen Y consumers, price/store image had a positive effect on brand equity factors (brand awareness/association and perceived quality), while product had a positive effect on emotional value in the brand extensions; brand awareness/association was likely to increase perceived quality and emotional value, leading to brand resonance for brand extensions into lifestyle products. In the model for Baby Boomer consumers, price/store image had a positive effect on perceived quality, which created brand resonance for the brand extension; product had a positive effect on perceived quality and emotional value, which led to brand resonance for the brand extension into lifestyle products. However, advertising was negatively related to brand equity for both groups. This study provides insight for fashion marketers in developing a successful brand extension strategy that leads to a sustainable competitive advantage. It complements and extends prior work on brand extension by examining critical marketing factors that affect brand extension success.
Findings support a synergy effect from leveraging fashion brand extensions (Aaker and Keller, 1990; Tauber, 1988; Shine et al., 2007; Pitta and Katsanis, 1995) in conjunction with marketing actions for entering the new product category. Thus, marketers targeting both Gen Y and Baby Boomers can reduce the marketing cost of entering the new product category (e.g., home furnishings) through standardized marketing efforts; fashion marketers can (a) offer extension lines with premium price ranges; (b) emphasize upscale store-image positioning through the retail channel (e.g., specialty department stores) in Korea; and (c) combine apparel with lifestyle product assortments, including innovative styles and designers' limited editions. With respect to brand equity, one key to successful brand extension is consumers' brand awareness or association, which ensures brand identity in the new product category. It is imperative for marketers to understand what contributes to more concrete associations when entering new product categories. For fashion brands, a second key to brand extension can be a "luxury" lifestyle approach to new product categories, in that higher price or store image had an impact on perceived quality, which established brand resonance. More importantly, this study deepens the theoretical understanding of brand extension and suggests directions for marketers as they establish marketing programs aimed at Gen Y and Baby Boomers.

Automatic Quality Evaluation with Completeness and Succinctness for Text Summarization (완전성과 간결성을 고려한 텍스트 요약 품질의 자동 평가 기법)

  • Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.125-148
    • /
    • 2018
  • Recently, as demand for big data analysis increases, cases of analyzing unstructured data and using the results are also increasing. Among the various types of unstructured data, text is used as a means of communicating information in almost all fields. In addition, many analysts are interested in text because the amount of text data is very large and it is relatively easy to collect compared to other unstructured and structured data. Among the various text analysis applications, the following have been actively studied: document classification, which assigns documents to predetermined categories; topic modeling, which extracts the major topics from a large number of documents; sentiment analysis or opinion mining, which identifies the emotions or opinions contained in texts; and text summarization, which condenses the main contents of one or several documents. In particular, text summarization is actively applied in business through news summary services, privacy policy summary services, etc. Much research has also been done in academia on both the extractive approach, which selectively provides the main elements of a document, and the abstractive approach, which extracts elements of the document and composes new sentences by combining them. However, techniques for evaluating the quality of automatically summarized documents have not made much progress compared to techniques for automatic text summarization. Most existing studies dealing with the quality evaluation of summarization manually summarized documents, used them as reference documents, and measured the similarity between the automatic summary and the reference document. Specifically, automatic summarization is performed on the full text through various techniques, and the quality of the automatic summary is measured by comparison with the reference document, which serves as an ideal summary.
Reference documents are provided in two major ways. The most common is manual summarization, in which a person creates an ideal summary by hand. Since this method requires human intervention, it takes a lot of time and cost to write the summary, and the evaluation result may differ depending on who writes it. To overcome these limitations, attempts have been made to measure the quality of summary documents without human intervention. As a representative attempt, a method has recently been devised that reduces the size of the full text and measures the similarity between the reduced full text and the automatic summary. In this method, the more often the frequent terms of the full text appear in the summary, the better the quality of the summary is judged to be. However, since summarization essentially means condensing a large amount of content while minimizing omissions, a summary judged "good" based only on term frequency is not always a good summary in this essential sense. To overcome the limitations of these previous studies of summarization evaluation, this study proposes an automatic quality evaluation method for text summarization based on the essential meaning of summarization. Specifically, succinctness is defined as an element indicating how little duplicated content there is among the sentences of the summary, and completeness is defined as an element indicating how little of the original content is missing from the summary. Based on these two concepts, we propose a method for the automatic quality evaluation of text summarization.
To evaluate the practical applicability of the proposed methodology, 29,671 sentences were extracted from TripAdvisor's hotel reviews, the reviews were summarized for each hotel, and experiments on the quality evaluation of the summaries were conducted in accordance with the proposed methodology. The study also provides a way to integrate completeness and succinctness, which are in a trade-off relationship, into an F-score, and proposes a method for performing optimal summarization by changing the threshold of sentence similarity.
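The completeness/succinctness evaluation can be sketched with a simple sentence-similarity measure. The Jaccard word-overlap similarity and the threshold value below are placeholders for the paper's actual similarity measure:

```python
def similarity(s1, s2):
    """Jaccard similarity over word sets; a stand-in for the paper's
    sentence-similarity measure."""
    a, b = set(s1.lower().split()), set(s2.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

def completeness(original, summary, threshold=0.3):
    # Fraction of original sentences "covered" by some summary sentence.
    covered = sum(1 for o in original
                  if any(similarity(o, s) >= threshold for s in summary))
    return covered / len(original)

def succinctness(summary, threshold=0.3):
    # 1 minus the fraction of summary sentence pairs that duplicate content.
    pairs = [(i, j) for i in range(len(summary)) for j in range(i + 1, len(summary))]
    if not pairs:
        return 1.0
    dup = sum(1 for i, j in pairs if similarity(summary[i], summary[j]) >= threshold)
    return 1 - dup / len(pairs)

def f_score(original, summary, threshold=0.3):
    c = completeness(original, summary, threshold)
    s = succinctness(summary, threshold)
    return 2 * c * s / (c + s) if c + s else 0.0

original = [
    "the room was clean and spacious",
    "breakfast offered many fresh options",
    "staff were friendly and helpful",
]
summary = ["the room was clean", "staff were friendly"]
print(f_score(original, summary))  # covers 2 of 3 sentences, no duplication
```

Raising the similarity threshold makes coverage harder to achieve (lower completeness) while flagging fewer summary pairs as duplicates (higher succinctness), which is the trade-off the F-score balances.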

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.69-94
    • /
    • 2017
  • Recently, increasing demand for big data analysis has driven the vigorous development of related technologies and tools. In addition, the development of IT and the increased penetration of smart devices are producing large amounts of data. As a result, data analysis technology is rapidly becoming popular, and attempts to acquire insights through data analysis are continuously increasing. This means that big data analysis will become more important in various industries for the foreseeable future. Big data analysis is generally performed by a small number of experts and delivered to each party requesting the analysis. However, growing interest in big data analysis has spurred computer programming education and the development of many programs for data analysis. Accordingly, the entry barriers to big data analysis are gradually lowering and data analysis technology is spreading; as a result, big data analysis is increasingly expected to be performed by the requesters themselves. Along with this, interest in various kinds of unstructured data is continually increasing, with particular attention focused on text data. The emergence of new web-based platforms and techniques has brought about the mass production of text data and active attempts to analyze it, and the results of text analysis have been utilized in various fields. Text mining is a concept that embraces various theories and techniques for text analysis. Among the many text mining techniques used for various research purposes, topic modeling is one of the most widely used and studied. Topic modeling is a technique that extracts the major issues from a large set of documents, identifies the documents that correspond to each issue, and provides the identified documents as clusters. It is evaluated as a very useful technique in that it reflects the semantic elements of documents.
Traditional topic modeling is based on the distribution of key terms across the entire document collection. Thus, it is essential to analyze the entire collection at once to identify the topic of each document. This makes the analysis time-consuming when topic modeling is applied to a large number of documents. In addition, it has a scalability problem: processing time increases exponentially with the number of analysis objects. This problem is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied to topic modeling: a large number of documents is divided into sub-units, and topics are derived by repeating topic modeling on each unit. This method allows topic modeling on a large number of documents with limited system resources and can improve the processing speed of topic modeling. It can also significantly reduce analysis time and cost, because documents can be analyzed in each location without first combining them. Despite these advantages, however, the method has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified in each unit, but global topics cannot. Second, a method for measuring the accuracy of the proposed methodology must be established; that is, assuming the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured. Because of these difficulties, this approach has not been studied sufficiently compared with other topic modeling studies. In this paper, we propose a topic modeling approach that solves these two problems.
First, we divide the entire document cluster (the global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by detecting whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conducted experiments to evaluate the practical applicability of the proposed methodology. Through an additional experiment, we confirmed that the proposed methodology can provide results similar to topic modeling on the entire collection, and we proposed a reasonable method for comparing the results of the two approaches.
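The local-to-global topic mapping at the heart of the approach can be illustrated by matching topic-term distributions with cosine similarity; the tiny distributions below are invented for illustration and do not come from the paper's experiments:

```python
import numpy as np

# Hypothetical topic-term distributions (rows: topics, columns: terms).
# "Global" topics come from the reduced global set (RGS); each local set
# yields local topics that must be mapped back to a global topic.
global_topics = np.array([
    [0.60, 0.30, 0.05, 0.05],   # e.g. a politics-leaning topic
    [0.05, 0.05, 0.50, 0.40],   # e.g. a sports-leaning topic
])
local_topics = np.array([
    [0.10, 0.10, 0.45, 0.35],
    [0.55, 0.35, 0.05, 0.05],
    [0.50, 0.20, 0.20, 0.10],
])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Map each local topic to its most similar global topic.
mapping = [int(np.argmax([cosine(lt, gt) for gt in global_topics]))
           for lt in local_topics]
print(mapping)
```

Documents labeled with a local topic can then inherit the mapped global topic, which is also how agreement between local and global assignments can be checked.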

Analysis on Factors Influencing Welfare Spending of Local Authority : Implementing the Detailed Data Extracted from the Social Security Information System (지방자치단체 자체 복지사업 지출 영향요인 분석 : 사회보장정보시스템을 통한 접근)

  • Kim, Kyoung-June;Ham, Young-Jin;Lee, Ki-Dong
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.141-156
    • /
    • 2013
  • Research on the welfare services of local governments in Korea has largely focused on isolated issues such as the disabled, childcare, and the aging population (Kang, 2004; Jung et al., 2009). Lately, however, local officials realize that they need more comprehensive welfare services for all residents, not just the above-mentioned focus groups. Still, studies taking the focus-group approach have remained the main research stream for various reasons (Jung et al., 2009; Lee, 2009; Jang, 2011). The Social Security Information System is an information system that comprehensively manages 292 welfare benefits provided by 17 ministries and 40 thousand welfare services provided by 230 local authorities in Korea; its purpose is to improve the efficiency of the social welfare delivery process. The study of local government expenditure has been on the rise over the last few decades since the reintroduction of local autonomy, but these studies have limitations in data collection. Measurement of a local government's welfare effort (spending) has primarily relied on expenditures or budget per individual set aside for welfare. This practice of using monetary value per individual as a proxy for welfare effort is based on the assumption that expenditure is directly linked to welfare effort (Lee et al., 2007). This expenditure/budget approach commonly uses the total welfare amount or a percentage figure as the dependent variable (Wildavsky, 1985; Lee et al., 2007; Kang, 2000). However, using the actual amount spent or a percentage figure as a dependent variable has limitations: since the budget or expenditure is greatly influenced by the total budget of a local government, relying on such monetary values may inflate or deflate the true welfare effort (Jang, 2012). In addition, government budgets usually contain a large amount of administrative cost, i.e., salaries for local officials, which is largely unrelated to actual welfare expenditure (Jang, 2011).
This paper used local-government welfare service data from the detailed data sets linked to the Social Security Information System. Its purpose is to analyze the factors that affected the social welfare spending of 230 local authorities in 2012. A multiple-regression-based model was applied to the pooled financial data from the system, and the regression analysis identified the factors affecting self-funded welfare spending. In the research model, the ratio of a local government's welfare budget to its total budget (%) is used as the true measure of its welfare effort (spending). In doing so, central-government subsidies and support used for local welfare services are excluded, because central-government welfare support does not truly reflect the welfare effort (spending) of a locality. The dependent variable is the volume of welfare spending, and the independent variables comprise three categories: socio-demographic characteristics, the local economy, and the financial capacity of the local government. Local authorities were categorized into three groups: districts, cities, and suburban areas. The model also used a dummy variable as a control variable (a local political factor). The paper demonstrates that the volume of welfare spending is commonly influenced by the ratio of the welfare budget to the total local budget, the infant population, the self-reliance ratio, and the unemployment level; interestingly, the influential factors differ by the size of the local government. In analyzing the determinants of local-government self-funded welfare spending, significant effects were found for local-government finance characteristics (the degree of financial independence, the financial independence rate, and the share of the social welfare budget), for the regional economy (the job opening-to-application ratio), and for population demographics (the rate of infants). 
The result means that local authorities should adopt differentiated welfare strategies according to their conditions and circumstances. The contribution of this paper is that it identifies significant factors influencing the welfare spending of local governments in Korea.
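The regression design described above can be sketched in a few lines. The sketch below uses synthetic data and hypothetical variable names (infant rate, financial independence rate, unemployment rate, and a district dummy) purely for illustration; it is not the paper's actual dataset or model specification.

```python
import numpy as np

# Illustrative sketch of the paper's regression design (synthetic data only):
#   dependent variable y  = welfare budget / total budget (%)
#   regressors         X  = socio-demographic, economic, and fiscal factors,
#                           plus a dummy for local-government type
rng = np.random.default_rng(0)
n = 230  # number of local authorities analyzed in the paper

infant_rate = rng.uniform(2, 8, n)         # % of population that are infants
fin_independence = rng.uniform(10, 70, n)  # financial independence rate (%)
unemployment = rng.uniform(1, 6, n)        # unemployment rate (%)
is_district = rng.integers(0, 2, n)        # dummy: 1 = district, 0 = city/suburb

# Assumed "true" relationship, used only to generate the synthetic outcome
y = (5.0 + 1.2 * infant_rate - 0.3 * fin_independence
     + 0.8 * unemployment + 2.0 * is_district + rng.normal(0, 1, n))

# Ordinary least squares: design matrix with an intercept column
X = np.column_stack([np.ones(n), infant_rate, fin_independence,
                     unemployment, is_district])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta)  # fitted coefficients: intercept, then one per regressor
```

In practice a statistics package (e.g., statsmodels) would also report standard errors and p-values, which is how the paper's "significant effects" would be judged; the least-squares fit above only recovers the coefficient estimates.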

PRC Maritime Operational Capability and the Task for the ROK Military (중국군의 해양작전능력과 한국군의 과제)

  • Kim, Min-Seok
    • Strategy21
    • /
    • s.33
    • /
    • pp.65-112
    • /
    • 2014
  • Recent trends show that the PRC has moved away from its "army-centered approach" and placed greater emphasis on its Navy and Air Force for a wider range of operations, reducing its ground force and channeling its economic power and military technology into naval development. The quantitative growth of the PLA Navy itself is no surprise, as it is not a recent phenomenon. Now is the time to pay closer attention to the performance level of the PRC naval force and the extent of its warfighting capacity in the maritime domain. It is also worth asking what China can do with its widening naval power foundation. In short, it is time to delve into several possible scenarios in which the PRC poses a real threat. With this in mind, Chapter Two of this paper observes the construction progress of the PRC's naval power and its prospects up to the year 2020, and categorizes time frames according to major force-improvement trends. By analyzing qualitative improvements made over time, such as the scale of investment and the number of ships relative to the increase in displacement (tonnage), the paper identifies salient features in the construction of naval power. Chapter Three sets out a performance evaluation of each type of PRC naval ship, as well as the capabilities of the Navy, the Air Force, the Second Artillery (i.e., the strategic missile forces), and the satellites that could support maritime warfare. Finally, the concluding chapter estimates the PRC's maritime warfighting capability under the respective conflict scenarios, considers its impact on the Korean Peninsula, and proposes directions the ROK should steer in response. 
First of all, since the 1980s the PRC Navy has undergone transitions as the focus of its military strategic outlook shifted from ground warfare to maritime warfare. Within 30 years of its effort to construct naval power while greatly reducing the size of its ground forces, the PRC has succeeded in building a navy second only to the U.S. in terms of numbers, acquiring an aircraft carrier, a Chinese version of the Aegis system, submarines, and so on. The PRC also has great potential to qualitatively develop its forces, with indigenous aircraft carriers, next-generation strategic submarines, next-generation destroyers, and so forth, made possible by the independent production capabilities it has accumulated over those 30 years of effort. Secondly, one could argue that the ROK still has a chance of coping with the PRC in naval power since, despite the PRC's continuous efforts, many estimate that its naval force is roughly ten or more years behind that of superpowers such as the U.S. in areas including radar detection capability, EW capability, C4I and data-link systems, doctrines on force employment, and tactics, and such a gap cannot be easily overcome. The most probable scenarios involving the PRC in the sea areas surrounding the Korean Peninsula are: first, upon the outbreak of war on the peninsula, the PRC may pursue military intervention by sea, thereby undermining ROK-U.S. combined operations; second, ROK-PRC or PRC-Japan conflicts over maritime jurisdiction or ownership of the Senkaku/Diaoyu islands could inflict damage on ROK territorial sovereignty or economic interests. The PRC would likely attempt to resolve such a conflict with blitzkrieg tactics before U.S. forces arrive on the scene, while at the same time delaying and denying the access of incoming U.S. forces. 
If this proves unattainable, the PRC could adopt "long-term attrition warfare" to weaken its enemy's sustainability. All in all, this paper makes three proposals on how the ROK should respond. First, modern warfare, as well as emergent future warfare, demonstrates that the center stage of battle is no longer the domestic territory but lies further away, at sea and in space. In this respect, the ROKN should take advantage of the distinct feature of the battle space on the peninsula, which is surrounded by seas, and obtain the capability to intercept more than 50 percent of the enemy's ballistic missiles, including those of North Korea. In tandem with this capacity, large-scale employment of UAV/F Carriers for Kill Chain operations should enhance effectiveness, because conditions are more favorable for defending from the sea with regard to accuracy against enemy targets, minimized risk of friendly damage, and cost effectiveness. Second, to maintain readiness for a North Korean crisis in which timely deployment of U.S. forces is not possible, the ROKN ought to obtain the capability to hold the enemy attack at bay while deterring PRC naval intervention. The ROKN should also strengthen its power so as to protect national interests in the seas surrounding the peninsula without USN support, should a ROK-PRC or ROK-Japan conflict arise over maritime jurisdiction. Third, the ROK should fortify its infrastructure for the independent construction of naval power and expand its R&D efforts, making the most of the advantages of the ROK-U.S. alliance to induce active support from the United States. The rationale is that while relying on the alliance or bandwagoning is strategically effective, the ultimate goal is always to acquire as much independent response capability as possible.