• Title/Summary/Keyword: integration analysis

Search Results: 4,007

An Economic Factor Analysis of Air Pollutants Emission Using Index Decomposition Methods (대기오염 배출량 변화의 경제적 요인 분해)

  • Park, Dae Moon;Kim, Ki Heung
    • Environmental and Resource Economics Review, v.14 no.1, pp.167-199, 2005
  • The following policy implications can be drawn from this study: 1) The Air Pollution Emission Amount Report published by the Ministry of Environment since 1991 classifies industries into four sectors: heating, manufacturing, transportation, and power generation. The usability of the report is currently very low, and extra effort should be devoted to refining the statistics and improving the industrial classification. 2) The big polluting industries are s7, s17, and s20. The current air pollution control policy for these sectors is found to be inefficient compared to other sectors, which should be noted in the implementation of future air pollution policy. 3) s10 and s17 are found to be big polluting industrial sectors, and their pollution reduction effects are also significant. 4) The emission coefficient effect ($\Delta f$) has the biggest impact on the reduction of emissions, and the economic growth effect ($\Delta y$) has the biggest impact on their increase. The production technology effect ($\Delta D$) and the final demand structure effect ($\Delta u$) are insignificant in terms of the change in emission volume. 5) Further studies on emission estimation techniques for each industrial sector, together with economic analysis, are required to promote effective enforcement of the total volume control system for air pollutants, differential management of pollution-causing sectors, and the integration of environment and economy. 6) In terms of Barry Commoner's hypothesis, Korea's economic growth in 1990 was not pollution-driven, even though the overall industrial and demand structures were not environmentally friendly. This indicates that environmental policies for the improvement of air quality depend mainly on government initiatives, and that systematic national-level consideration of industrial structures and the development of green technologies are not fully incorporated.
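For illustration, the additive index decomposition idea behind the four effects above can be sketched in a two-factor form (emission intensity times economic activity) using the LMDI method. This is a minimal sketch, not the study's method: the paper performs an input-output structural decomposition with four factors, whereas the function and figures below are invented two-factor toys.

```python
import math

def lmdi_decompose(e0, e1, a0, a1):
    """Two-factor LMDI-I decomposition of an emission change.

    Writes E = (E/a) * a, i.e. intensity times activity, and splits the
    change e1 - e0 into an intensity effect (the analogue of the emission
    coefficient effect, Delta f) and an activity effect (the analogue of
    the economic growth effect, Delta y).  The two effects sum exactly to
    the total change, which is what makes LMDI an exact decomposition.
    """
    # the logarithmic mean of the two emission levels acts as the weight
    w = (e1 - e0) / math.log(e1 / e0) if e1 != e0 else e0
    i0, i1 = e0 / a0, e1 / a1          # emission intensities in each period
    d_intensity = w * math.log(i1 / i0)
    d_activity = w * math.log(a1 / a0)
    return d_intensity, d_activity

# invented figures: emissions fall 100 -> 90 while activity grows 50 -> 60
d_int, d_act = lmdi_decompose(100.0, 90.0, 50.0, 60.0)
print(round(d_int, 2), round(d_act, 2), round(d_int + d_act, 2))  # → -27.3 17.3 -10.0
```

The decomposition shows the pattern the study reports: the intensity (emission coefficient) effect pulls emissions down while the activity (growth) effect pushes them up, and the two sum to the observed change.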


A Study on the Accessability of the Bikeway Networks in the South of the Han River Using Space Syntax (Space Syntax를 이용한 한강이남 자전거도로망의 접근성 분석)

  • Oh, Chung-Won;Lim, Dong-Wook;Kim, Hyun-Jin;Park, Jun-Tae
    • International Journal of Highway Engineering, v.14 no.3, pp.97-110, 2012
  • Because the axis is formed along the riverside of the Han River, residents near the riverside use the axial-line bikeway frequently, but those in the city center hardly do. This situation arises from the inefficient connectivity between the bikeways that lead to the riverside and those in the city center, which indicates that administrative policies have focused on facilities rather than on raising the connectivity of the bikeway networks in the south of the Han River. A reasonable improvement plan is therefore needed to raise the accessibility of the bikeways in Seoul as a whole, not merely the utilization of those in the south of the Han River. This study analyzes the connectivity between the Gangseo (west) and Gangdong (east) districts by comparing the current spatial structure of the southern areas of the Han River with the structures after installation of the axial line and of the ordinary bikeway, respectively, using the Space Syntax Model, and predicts the resulting changes. The Space Syntax Model was the major method of analysis used to identify the characteristics of the bikeways in the south of the Han River. The analysis shows that when the bikeways in the Gangseo and Gangdong-gu districts are connected, the accessibility of the bikeways in the south of the Han River will improve while the intelligibility of the spatial structure will be lowered; that is, predictability in all areas is low compared with the current status. The intelligibility of the current bikeways in the south of the Han River appeared highest when compared with the plans for installing the axial line and the ordinary bikeway, because the current bikeways do not interconnect with other areas.
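The integration measure behind this kind of Space Syntax analysis can be sketched for a toy axial map. This is a simplified illustration assuming the common definition of integration as the reciprocal of relative asymmetry; the graph, node names, and figures are invented, not the study's data.

```python
from collections import deque

def integration(graph, node):
    """Space-syntax style integration of one axial line.

    graph maps each node (axial line) to the set of lines it intersects.
    Integration is taken as 1/RA, where RA = 2*(MD - 1)/(k - 2), MD is the
    mean shortest-path depth from `node` to every other node, and k is the
    number of nodes.  Higher values mean the line is more accessible.
    """
    # breadth-first search for topological depths from `node`
    depth = {node: 0}
    queue = deque([node])
    while queue:
        cur = queue.popleft()
        for nxt in graph[cur]:
            if nxt not in depth:
                depth[nxt] = depth[cur] + 1
                queue.append(nxt)
    k = len(graph)
    md = sum(depth.values()) / (k - 1)   # mean depth
    ra = 2 * (md - 1) / (k - 2)          # relative asymmetry
    return 1 / ra

# toy axial map: a riverside spine (a-b-c-d) with one city-centre spur (e)
axial = {
    "a": {"b"}, "b": {"a", "c", "e"}, "c": {"b", "d"},
    "d": {"c"}, "e": {"b"},
}
print(round(integration(axial, "b"), 3))  # → 6.0, the spine centre is best integrated
```

Lines deep in a spur (like "d" above) get low integration, which mirrors the finding that poorly connected city-centre bikeways are hard to reach from the riverside axis.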

An Examination of Knowledge Sourcing Strategies Effects on Corporate Performance in Small Enterprises (소규모 기업에 있어서 지식소싱 전략이 기업성과에 미치는 영향 고찰)

  • Choi, Byoung-Gu
    • Asia Pacific Journal of Information Systems, v.18 no.4, pp.57-81, 2008
  • Knowledge is an essential strategic weapon for sustaining competitive advantage and a key determinant of organizational growth. When knowledge is shared and disseminated throughout the organization, it increases an organization's value by providing the ability to respond to new and unusual situations. The growing importance of knowledge as a critical resource has forced executives to pay attention to their organizational knowledge, and organizations are increasingly undertaking knowledge management initiatives and making significant investments in them. Knowledge sourcing is considered the first important step in effective knowledge management, and most firms continue to make an effort to realize the benefits of knowledge management by using various knowledge sources effectively. Appropriate knowledge sourcing strategies enable organizations to create, acquire, and access knowledge in a timely manner by reducing search and transfer costs, which results in better firm performance. In response, the knowledge management literature has devoted substantial attention to the analysis of knowledge sourcing strategies. Many studies have categorized knowledge sourcing strategies into internal- and external-oriented. An internal-oriented sourcing strategy attempts to increase firm performance by integrating knowledge within the boundary of the firm. In contrast, an external-oriented strategy attempts to bring knowledge in from outside sources, via either acquisition or imitation, and then to transfer that knowledge across the organization. However, the extant literature on knowledge sourcing strategies focuses primarily on large organizations.
Although many studies have clearly highlighted major differences between large and small firms and the need to adopt different strategies for different firm sizes, scant attention has been given to how knowledge sourcing strategies affect firm performance in small firms and how small and large firms differ in their patterns of adopting knowledge sourcing strategies. This study attempts to advance the current literature by examining the impact of knowledge sourcing strategies on small firm performance from a holistic perspective. Drawing on knowledge-based theory from organization science and complementarity theory from the economics literature, this paper is motivated by the following questions: (1) what are the adoption patterns of different knowledge sourcing strategies in small firms (i.e., which sourcing strategies should be adopted, and which work well together in small firms)?; and (2) what are the performance implications of these adoption patterns? In order to answer these questions, this study developed three hypotheses. The first hypothesis, based on knowledge-based theory, is that internal-oriented knowledge sourcing is positively associated with small firm performance. The second hypothesis, also developed on the basis of knowledge-based theory, is that external-oriented knowledge sourcing is positively associated with small firm performance. The third, based on complementarity theory, is that pursuing both internal- and external-oriented knowledge sourcing simultaneously is negatively, or less positively, associated with small firm performance. As a sampling frame, 700 firms were identified from the Annual Corporation Report in Korea. Survey questionnaires were mailed to the owners or executives most knowledgeable about the firm's knowledge sourcing strategies and performance. A total of 188 companies replied, yielding a response rate of 26.8%.
Due to incomplete data, 12 responses were eliminated, leaving 176 responses for the final analysis. Since all independent variables were measured as continuous variables, a supermodularity function was used to test the hypotheses, based on the cross partial derivative of the payoff function. The results indicated no significant impact of the internal-oriented sourcing strategy, but a positive impact of the external-oriented sourcing strategy on small firm performance. This intriguing result can be explained by the various resource and capital constraints of small firms: small firms typically have restricted financial and human resources and do not have enough assets to always develop knowledge internally. Another possible explanation is competency traps or core rigidities: building up a knowledge base from internal knowledge creates core competences, but at the same time, excessively internal-focused knowledge exploration leads to behaviors blind to other knowledge. Interestingly, this study found that internal- and external-oriented knowledge sourcing strategies had a substitutive relationship, which is inconsistent with previous studies that suggested a complementary relationship between them. This result might be explained by organizational identification theory: internal organizational members may perceive external knowledge as a threat and tend to ignore knowledge from external sources, because they prefer to maintain their own knowledge, legitimacy, and homogeneous attitudes. Therefore, integrating knowledge from internal and external sources might not be effective, resulting in a failure to improve firm performance. Another possible explanation is small firms' resource and capital constraints and their lack of management expertise and absorptive capacity. Although the integration of different knowledge sources is critical, high levels of knowledge sourcing in many areas are quite expensive and so are often unrealistic for small enterprises.
This study provides several implications for research as well as practice. First, this study extends the existing knowledge by examining the substitutability (and complementarity) of knowledge sourcing strategies. Most prior studies have tended to investigate the independent effects of these strategies on performance without considering their combined impacts. Furthermore, this study tests complementarity based on the productivity approach, which has been considered a definitive test method for complementarity. Second, this study sheds new light on knowledge management research by identifying the relationship between knowledge sourcing strategies and small firm performance. Most current literature has insisted on a complementary relationship between knowledge sourcing strategies on the basis of data from large firms; contrary to this conventional wisdom, this study identifies a substitutive relationship using data from small firms. Third, for practice, the implications highlight that managers of small firms should focus on external-oriented knowledge sourcing strategies; moreover, adopting both sourcing strategies simultaneously impedes small firm performance.
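For binary adoption levels, the supermodularity test described above reduces to the sign of a discrete cross partial of the payoff function. The sketch below uses hypothetical group means on an unnamed performance score; the study's actual estimation is regression-based on continuous measures, so this only illustrates the logic of the test.

```python
def cross_partial(perf):
    """Discrete analogue of the cross partial derivative of the payoff.

    perf maps (internal, external) adoption levels in {0, 1} to average
    firm performance.  A positive value indicates supermodularity
    (complementary strategies); a negative value indicates
    substitutability, the pattern reported here for small firms.
    """
    return perf[(1, 1)] - perf[(1, 0)] - perf[(0, 1)] + perf[(0, 0)]

# hypothetical group means of a performance score by adoption pattern:
# adopting either strategy alone helps, adopting both helps less
performance = {(0, 0): 3.0, (1, 0): 3.4, (0, 1): 3.9, (1, 1): 3.6}
print(round(cross_partial(performance), 6))  # → -0.7, i.e. substitutes
```

A negative cross partial says the marginal benefit of one strategy falls when the other is also adopted, which is exactly the substitutive relationship the study reports.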

Optimal Selection of Classifier Ensemble Using Genetic Algorithms (유전자 알고리즘을 이용한 분류자 앙상블의 최적 선택)

  • Kim, Myung-Jong
    • Journal of Intelligence and Information Systems, v.16 no.4, pp.99-112, 2010
  • Ensemble learning is a method for improving the performance of classification and prediction algorithms: it finds a highly accurate classifier on the training set by constructing and combining an ensemble of weak classifiers, each of which needs only to be moderately accurate on the training set. Ensemble learning has received considerable attention in the machine learning and artificial intelligence fields because of its remarkable performance improvement and its flexible integration with traditional learning algorithms such as decision trees (DT), neural networks (NN), and support vector machines (SVM). Among these studies, DT ensemble studies have demonstrated impressive improvements in the generalization behavior of DT, while NN and SVM ensembles have not shown improvements as remarkable as those of DT ensembles. Recently, several works have reported that the performance of an ensemble can be degraded when its classifiers are highly correlated with one another, since the resulting multicollinearity problem degrades ensemble performance; they have also proposed differentiated learning strategies to cope with this problem. Hansen and Salamon (1990) insisted that it is necessary and sufficient for the performance enhancement of an ensemble that it contain diverse classifiers. Breiman (1996) found that ensemble learning can increase the performance of unstable learning algorithms, but does not show remarkable performance improvement on stable learning algorithms. Unstable learning algorithms such as decision tree learners are sensitive to changes in the training data, so small changes in the training data can yield large changes in the generated classifiers; therefore, an ensemble of unstable learners can guarantee some diversity among the classifiers.
To the contrary, stable learning algorithms such as NN and SVM generate similar classifiers in spite of small changes in the training data, so the correlation among the resulting classifiers is very high. This high correlation results in a multicollinearity problem, which leads to performance degradation of the ensemble. Kim's work (2009) compared bankruptcy prediction for Korean firms using traditional prediction algorithms such as NN, DT, and SVM. It reports that stable learning algorithms such as NN and SVM have higher predictability than the unstable DT; meanwhile, with respect to ensemble learning, the DT ensemble shows more improved performance than the NN and SVM ensembles. Further analysis with the variance inflation factor (VIF) empirically proves that the performance degradation of the ensemble is due to the multicollinearity problem, and proposes that optimization of the ensemble is needed to cope with it. This paper proposes a hybrid system for coverage optimization of NN ensembles (CO-NN) in order to improve the performance of NN ensembles. Coverage optimization is a technique of choosing a sub-ensemble from an original ensemble to guarantee the diversity of its classifiers. CO-NN uses a GA, which has been widely used for various optimization problems, to deal with the coverage optimization problem. The GA chromosomes for coverage optimization are encoded as binary strings, each bit of which indicates an individual classifier. The fitness function is defined as maximization of error reduction, and a constraint on the variance inflation factor (VIF), one of the generally used measures of multicollinearity, is added to ensure the diversity of classifiers by removing high correlation among them. We use Microsoft Excel and the GA software package called Evolver.
Experiments on company failure prediction have shown that CO-NN is effective for the stable performance enhancement of NN ensembles through a choice of classifiers that takes the correlations within the ensemble into account. The classifiers with a potential multicollinearity problem are removed by the coverage optimization process, and thereby CO-NN has shown higher performance than a single NN classifier and the NN ensemble at the 1% significance level, and than the DT ensemble at the 5% significance level. However, further research issues remain. First, a decision optimization process to find the optimal combination function should be considered. Second, various learning strategies to deal with data noise should be introduced in future research.
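The coverage-optimization loop described above can be sketched with a toy GA. This is an illustrative reconstruction, not the CO-NN implementation (which uses Evolver and adds a VIF constraint): the fitness here is plain majority-vote accuracy, and the predictions, labels, and parameter values are all made up. The toy data deliberately contains three highly correlated weak classifiers that drag down the full ensemble, so that sub-ensemble selection helps.

```python
import random

random.seed(42)

def vote_accuracy(preds, labels, mask):
    """Majority-vote accuracy of the sub-ensemble selected by mask."""
    chosen = [p for p, keep in zip(preds, mask) if keep]
    if not chosen:
        return 0.0
    correct = 0
    for i, y in enumerate(labels):
        votes = sum(p[i] for p in chosen)
        correct += int((1 if 2 * votes > len(chosen) else 0) == y)
    return correct / len(labels)

def ga_select(preds, labels, pop_size=20, generations=30, mut_rate=0.1):
    """Binary-encoded GA: each chromosome bit switches one classifier in or out."""
    n = len(preds)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: vote_accuracy(preds, labels, m), reverse=True)
        elite = pop[: pop_size // 2]                # keep the fitter half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)          # pick two parents
            cut = random.randrange(1, n)            # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([bit ^ int(random.random() < mut_rate)
                             for bit in child])     # bit-flip mutation
        pop = elite + children
    return max(pop, key=lambda m: vote_accuracy(preds, labels, m))

# toy predictions: two accurate classifiers and three correlated weak ones
labels = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
good1  = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
good2  = [1, 1, 1, 1, 1, 0, 0, 0, 0, 1]
bad    = [0, 0, 0, 0, 1, 1, 1, 1, 0, 0]
preds = [good1, good2, bad, bad[:], bad[:]]

best = ga_select(preds, labels)
full = [1] * len(preds)
print(vote_accuracy(preds, labels, full), vote_accuracy(preds, labels, best))
```

With all five classifiers, the correlated weak trio outvotes the accurate pair; the GA prunes them and the sub-ensemble's vote accuracy recovers, which is the behavior CO-NN exploits.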

Development of Marker-free Transgenic Rice Expressing the Wheat Storage Protein, Glu-1Dy10, for Increasing Quality Processing of Bread and Noodles (빵과 면의 가공적성 증진을 위한 밀 저장단백질 Glu-1Dy10을 발현하는 마커프리 형질전환 벼 개발)

  • Park, Soo-Kwon;Shin, DongJin;Hwang, Woon-Ha;Hur, Yeon-Jae;Kim, Tae-Heon;Oh, Se-Yun;Cho, Jun-Hyun;Han, Sang-Ik;Lee, Seung-Sik;Nam, Min-Hee;Park, Dong-Soo
    • Journal of Life Science, v.24 no.6, pp.618-625, 2014
  • Rice flour is used in many food products. However, dough made from rice lacks extensibility and elasticity, making it less suitable than wheat for many food products such as bread and noodles. The high-molecular-weight glutenin subunits (HMW-GS) of wheat play a crucial role in determining the processing properties of the wheat grain. This paper describes the development of marker-free transgenic rice plants expressing the wheat Glu-1Dy10 gene, encoding an HMW-GS from the Korean wheat cultivar 'Jokyeong', using Agrobacterium-mediated co-transformation. Two expression cassettes, consisting of separate DNA fragments containing the Glu-1Dy10 gene and the hygromycin phosphotransferase II (HPTII) resistance gene, were introduced separately into Agrobacterium tumefaciens EHA105 for co-infection. The EHA105 strains harboring Glu-1Dy10 or HPTII were infected into rice calli at a 3:1 ratio of Glu-1Dy10 to HPTII. Among 290 hygromycin-resistant $T_0$ plants, we obtained 29 transgenic lines with both the Glu-1Dy10 and HPTII genes inserted into the rice genome. We reconfirmed the integration of the Glu-1Dy10 gene into the rice genome by Southern blot analysis. Transcripts and proteins of Glu-1Dy10 in transgenic rice seeds were examined by semi-quantitative RT-PCR and Western blot analysis. Marker-free plants containing only the Glu-1Dy10 gene were successfully screened in the $T_1$ generation.

A Comparative Analysis on Inquiry Activities in Geology of High School Earth Science Textbooks of Korea and the U.S. (한국과 미국 고등학교 지구과학 교과서의 지질학 탐구활동의 비교 분석)

  • Bae, Hyun-Kyung;Chung, Gong-Soo
    • Journal of the Korean Earth Science Society, v.29 no.7, pp.626-639, 2008
  • To present suggestions for the improvement of high school science textbooks, scientific inquiry activities in the geology sections of earth science textbooks of Korea and the U.S. were assessed in the areas of contents, processes, and contexts. Regarding the contents of inquiry activities, Korean textbooks contain more inquiry activities (5.8 per section) than the U.S. textbooks (4 per section). Inquiry activities in Korean textbooks mostly involve the interpretation of diagrams and graphs, whereas those in the U.S. textbooks involve more hands-on experiments, data transformation, and self-designed investigation. As for the number of inquiry process skills per inquiry activity, Korean textbooks contain an average of 1.8, whereas the American ones contain 3, suggesting that the U.S. textbooks require more integrated process skills than the Korean earth science textbooks. Among the process skills in all the textbooks studied, the most frequent elements were inferring and data interpretation; the percentage of these two elements was an average of 73.3% in the Korean textbooks and 46.2% in the U.S. ones. This suggests that the Korean textbooks emphasize process skills in a particular area, and such an uneven distribution of process-skill elements may hinder the development of students' integration ability. In particular, for the integrated process skills, the U.S. textbooks presented all 7 elements, while the Korean ones presented only 2 to 4, indicating that the Korean textbooks may be weak in providing various inquiry activities for students compared to the American textbooks. In the inquiry context analysis, the Korean textbooks provide simplistic inquiry contexts with low applicability to real life, whereas the U.S. textbooks provide more integrated inquiry contexts with high applicability to real life.

Problems Analysis and Revitalization Plan of Urban Development Projects by the Land Readjustment Method (환지방식에 의한 도시개발사업의 문제분석 및 활성화대책)

  • Kim, Hyoung-Soo;Lee, Young-Dai;Lee, Jun-Yong
    • Korean Journal of Construction Engineering and Management, v.10 no.5, pp.37-46, 2009
  • This research focuses on the public agencies, designers, supervisors, building cooperatives, and contractors involved in urban development projects. By understanding the complexity of, and the priorities in, the urban development process, the problems of urban development projects can be solved or improved. These priorities are specified using the AHP (Analytic Hierarchy Process). A questionnaire survey was employed to identify the problems of the urban development process and methods for revitalizing urban development. Through the survey, 35 issues were drawn out, and factor analysis was applied to extract the underlying interrelationships possibly existing among them. Using the latent root criterion and the varimax rotation method, nine factors were extracted (from 34 issues, after deleting one issue with a factor loading below 0.4). These nine factors, named PIFs (Problem Improvement Factors), consist of integration estimation (PIF1), cooperative operation capability (PIF2), contractor corporation capability (PIF3), capital for infrastructure investment (PIF4), misunderstanding of effective land use (PIF5), financial capability (PIF6), obscure project goals (PIF7), shortage of cooperative expertise (PIF8), and administrative procedures (PIF9). PIF6 is the most important factor, and PIF1 the most widely effective factor, for the success of urban land development projects. It is recognized that the administrative office is most responsible for PIF1; the cooperative is most responsible for PIF2, 7, 8, and 9; contractors are most responsible for PIF3 and PIF6; administrative agencies are most responsible for PIF4; and the cooperative and consultants are responsible for PIF5. From these findings, some suggestions are proposed for revitalizing urban development projects through the land readjustment method.
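The AHP prioritization step can be illustrated with the row geometric-mean approximation of priority weights. The 3x3 judgment matrix below is hypothetical (the study's actual pairwise comparisons are not given in the abstract), and `ahp_weights` is a name introduced here for the sketch.

```python
from math import prod

def ahp_weights(pairwise):
    """Approximate AHP priority weights via the row geometric-mean method.

    pairwise[i][j] is how strongly criterion i is preferred over j on
    Saaty's 1-9 scale, with pairwise[j][i] = 1 / pairwise[i][j].  The row
    geometric means, normalized to sum to 1, closely approximate the
    principal-eigenvector priorities used in AHP.
    """
    n = len(pairwise)
    geo = [prod(row) ** (1.0 / n) for row in pairwise]  # row geometric means
    total = sum(geo)
    return [g / total for g in geo]

# hypothetical judgments for three factors, e.g. financial capability vs.
# integration estimation vs. administrative procedures
matrix = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
weights = ahp_weights(matrix)
print([round(w, 3) for w in weights])  # → [0.637, 0.258, 0.105]
```

The resulting weights rank the criteria, which is how AHP turns pairwise survey judgments into the factor priorities reported in the study.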

Analysis of Climate Change Adaptation Researches Related to Health in South Korea (한국의 건강 분야 기후변화적응 연구동향 분석)

  • Ha, Jongsik
    • Journal of Climate Change Research, v.5 no.2, pp.139-151, 2014
  • It is increasingly supported by scientific evidence that greenhouse gases caused by human activities are changing the global climate. In particular, the changing climate has affected human health, directly or indirectly, and its adverse impacts are estimated to increase in the future. In response, many countries have established and implemented a variety of mitigation and adaptation measures. However, it is significant to note that climate change will continue over the next few centuries, and its impacts on human health should be tackled urgently. The purpose of this paper is to examine domestic policies and research on adaptation to climate change in the health sector. It further aims to recommend future research directions for an enhanced response to climate change in the public health sector, by reviewing a series of adaptation policies in selected countries and taking into account the general features of health adaptation policies. In this regard, this study first evaluates the current adaptation policies in the public health sector by examining the National Climate Change Adaptation Master Plan (2011~2015) and the Comprehensive Plan for Environment and Health (2011~2020) and reviewing the research to date of the government and relevant institutions. For the literature review, two information service systems are used: the National Science and Technology Information Service (NTIS) and the Policy Research Information Service & Management (PRISM). Secondly, a series of foreign adaptation policies are selected based on the global research priorities set by the WHO (2009) and reviewed in order to draw implications for domestic research. Finally, the barriers and constraints in establishing and implementing health adaptation policies are analyzed qualitatively, considering the general characteristics of adaptation to climate change in the health sector, which include uncertainty, finance, technology, institutions, and public awareness.
This study provides four major recommendations: to mainstream the health sector in adaptation policy and research; to integrate cross-sectoral adaptation measures with an aim to improve the health and well-being of society; to enhance adaptation measures based on evidence and cost-effectiveness analysis; and to facilitate systemization in health adaptation by setting the key players and the agenda.

A Study on the Effect of the Document Summarization Technique on the Fake News Detection Model (문서 요약 기법이 가짜 뉴스 탐지 모형에 미치는 영향에 관한 연구)

  • Shim, Jae-Seung;Won, Ha-Ram;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems, v.25 no.3, pp.201-220, 2019
  • Fake news has emerged as a significant issue over the last few years, igniting discussions and research on how to solve this problem. In particular, studies on automated fact-checking and fake news detection using artificial intelligence and text analysis techniques have drawn attention. Fake news detection research entails a form of document classification; thus, document classification techniques have been widely used in this type of research. However, document summarization techniques have been inconspicuous in this field. At the same time, automatic news summarization services have become popular, and a recent study found that the use of news summarized through abstractive summarization strengthened the predictive performance of fake news detection models. Therefore, the need to study the integration of document summarization technology in the domestic news data environment has become evident. In order to examine the effect of extractive summarization on the fake news detection model, we first summarized news articles through extractive summarization, then created a summarized-news-based detection model, and finally compared our model with the full-text-based detection model. The study found that BPN (Back Propagation Neural Network) and SVM (Support Vector Machine) did not exhibit a large difference in performance; however, for DT (Decision Tree), the full-text-based model demonstrated somewhat better performance. In the case of LR (Logistic Regression), our model exhibited superior performance, although the results did not show a statistically significant difference between the two models. Therefore, when summarization is applied, at least the core information of the fake news is preserved, and the LR-based model confirms the possibility of performance improvement.
This study features an experimental application of extractive summarization in fake news detection research by employing various machine-learning algorithms. The study's limitations are, essentially, the relatively small amount of data and the lack of comparison between various summarization technologies. Therefore, an in-depth analysis that applies various analytical techniques to a larger data volume would be helpful in the future.
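As a rough illustration of the extractive step, a frequency-based sentence scorer of the kind often used as a baseline can be sketched as follows. The study does not specify its summarizer, so `extractive_summary`, the scoring rule, and the toy article are all assumptions made for this sketch.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """Pick the top-scoring sentences as an extractive summary.

    Each sentence is scored by the average document-wide frequency of its
    words, so sentences built from the article's dominant vocabulary are
    kept; the shortened text can then be fed to a detection model in
    place of the full article.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)
    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    # keep the chosen sentences in their original order
    return " ".join(s for s in sentences if s in top)

article = ("The election results were announced today. "
           "A viral post claims the results were reversed overnight. "
           "Officials denied the claim about the results. "
           "Meanwhile, the weather was unusually warm.")
summary = extractive_summary(article, n_sentences=2)
print(summary)
```

Because extraction copies sentences verbatim, the lexical cues a classifier relies on survive summarization, which is one plausible reason the summarized-news models in the study stayed competitive with the full-text models.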

An integrated Method of New Casuistry and Specified Principlism as Nursing Ethics Methodology (새로운 간호윤리학 방법론;통합된 사례방법론)

  • Um, Young-Rhan
    • Journal of Korean Academy of Nursing Administration, v.3 no.1, pp.51-64, 1997
  • The purpose of the study was to introduce an integrated approach of the new casuistry and specified principlism for resolving ethical problems and studying nursing ethics. In studying clinical ethics and nursing ethics, there is no systematic research method. While nurses often experience ethical dilemmas in practice, much of the previous research on nursing ethics has focused merely on describing the existing problems. In addition, ethicists have presented theoretical analyses and critiques rather than specific problem-solving strategies. In clinical situations, there is a need for an integrated method which can provide an objective description of existing problem situations as well as specific problem-solving methods. We inherit two distinct ways of discussing ethical issues. One of these frames the issues in terms of principles, rules, and other general ideas; the other focuses on the specific features of particular kinds of moral cases. In the first way, general ethical rules relate to specific moral cases in a theoretical manner, with universal rules serving as "axioms" from which particular moral judgments are deduced as theorems. In the second, this relation is frankly practical, with general moral rules serving as "maxims", which can be fully understood only in terms of the paradigmatic cases that define their meaning and force. Theoretical arguments are structured in ways that free them from any dependence on the circumstances of their presentation and ensure them a validity of a kind that is not affected by the practical context of use. In formal arguments, particular conclusions are deduced from ("entailed by") the initial axioms or universal principles that are the apex of the argument, so the truth or certainty that attaches to those axioms flows downward to the specific instances to be "proved".
In the language of formal logic, the axioms are major premises, the facts that specify the present instance are minor premises, and the conclusion to be "proved" is deduced from (follows necessarily from) the initial premises. Practical arguments, by contrast, involve a wider range of factors than formal deductions and are read with an eye to their occasion of use. Instead of aiming at strict entailments, they draw on the outcomes of previous experience, carrying over the procedures used to resolve earlier problems and reapplying them in new problematic situations. Practical arguments depend for their power on how closely the present circumstances resemble those of the earlier precedent cases for which this particular type of argument was originally devised. So, in practical arguments, the truths and certitudes established in the precedent cases pass sideways, so as to provide "resolutions" of later problems. In the language of rational analysis, the facts of the present case define the grounds on which any resolution must be based; the general considerations that carried weight in similar situations provide warrants that help settle future cases. So the resolution of any problem holds good presumptively; its strength depends on the similarities between the present case and the precedents, and its soundness can be challenged (or rebutted) in situations that are recognized as exceptional. Jonsen and Toulmin (1988) and Jonsen (1991) introduce the new casuistry as a practical method. The Oxford English Dictionary defines casuistry quite accurately as "that part of ethics which resolves cases of conscience, applying the general rules of religion and morality to particular instances in which circumstances alter cases or in which there appears to be a conflict of duties." They modified the casuistry of the medieval ages for use in clinical situations, characterized by "the typology of cases and the analogy as an inference method". A case is the unit of analysis.
The structure of a case is formed by the interaction of the situation and moral rules. The situation is what surrounds or stands around; the moral rule is the essence of the case. The analogy can be objective because "the grounds, the warrants, the theoretical backing, the modal qualifiers" are identified in the cases. Specified principlism is the method by which DeGrazia (1992) integrated principlism with the specification introduced by Richardson (1990). In this method, the principle is specified by adding information about the limitations of its scope and restricting its range; these should be substantive qualifications. The integrated method is a combination of the new casuistry and specified principlism. For example, consider the study "Ethical problems experienced by nurses in the care of terminally ill patients" (Um, 1994). A semi-structured in-depth interview was conducted with fifteen nurses who mainly took care of terminally ill patients. In the first stage, twenty-one cases were identified as relevant to the topic and then classified into four types of problems; one of these types, for instance, was the patient's refusal of care. In the second stage, the ethical problems in each case were defined and the case was analyzed, identifying the reasons, the ethical values, and the related ethical principles; the interpretation was then done synthetically by integrating the result of the analysis with the situation. The third stage was the ordering phase of the cases, carried out according to the result of the interpretation and the common principles in the cases. The first two stages follow the methodology of the new casuistry, and the final stage follows the methodology of specified principlism. The common principles were the principle of autonomy and the principle of caring. The principle of autonomy was specified: when competent patients refuse care, nurses should discontinue the care out of respect for the patients' decision.
The principle of caring was also specified: when competent patients refuse care, nurses should continue to provide the care in spite of the patients' refusal, in order to preserve their life. These specifications may lead to opposite behaviors, which emphasizes the importance of nurses' will and intentions in making decisions in clinical situations.
