Proposal of Establishing a New International Space Agency for Mining the Natural Resources in the Moon, Mars and Other Celestial Bodies

  • Kim, Doo-Hwan
    • The Korean Journal of Air & Space Law and Policy
    • /
    • v.35 no.2
    • /
    • pp.313-374
    • /
    • 2020
  • The idea of creating a new International Space Agency (ISA) is presented here as the author's academic and practical opinion. It is necessary to establish the ISA as an international organization for the efficient and rapid exploitation of natural resources on the Moon, Mars and other celestial bodies. The establishment of the ISA as a new international regime is based on Article 11(5) and Article 18 of the 1979 Moon Agreement. As a preliminary procedure, a "Draft Convention on the Establishment of an International Space Agency" needs to be prepared among the space-faring countries. In this paper, I examined the domestic space legislation of the United States, Luxembourg, the European Space Agency, China, Japan and the Republic of Korea, as well as plans for the exploration of the Moon, Mars, asteroids, Venus, Jupiter, Saturn, Titan and other celestial bodies. The creation of an ISA would strengthen the cooperation essentially needed by the developed countries for joint undertakings in space and would act as a catalyst for the exploration and exploitation of the Moon, Mars and other celestial bodies. As an independent organization, the ISA would manage the natural resources, technology, manpower and finances of space exploitation effectively and centrally for the benefit of the space-developed countries. It is desirable and necessary to establish the ISA in order to promote cooperation in space policy, law, science, technology and industry among the space-developed countries in the near future. The establishment of the ISA would promote international cooperation among the space-faring countries in the exploration and exploitation of the natural resources of the Moon and other celestial bodies. I propose a "Draft Convention for the Establishment of an International Space Agency," referring to the "Convention for the Establishment of a European Space Agency." This draft convention would have to be adopted by a two-thirds majority at a diplomatic conference within UNCOPUOS. Finally, a very important point is that a political drive at the highest level and a solemn statement by the heads of state of the space-developed countries, including the United Nations, are needed for medium- and long-term space exploitation. It should be noted that this political drive will be necessary not only to set up the organization but also during the subsequent period. It is desirable and necessary to establish the ISA in order to develop the space industry, to strengthen friendly relations and to promote research cooperation among the space-faring countries based on new ideology and creative ideas. If the heads of the superpowers, together with the United Nations, agree to establish the ISA at a summit conference, I am sure that it will be possible to establish an ISA in the near future.

The Evaluation of Reconstruction Method Using Attenuation Correction Position Shifting in 3D PET/CT (PET/CT 3D 영상에서 감쇠보정 위치 변화 방법을 이용한 영상 재구성법의 평가)

  • Hong, Gun-Chul;Park, Sun-Myung;Jung, Eun-Kyung;Choi, Choon-Ki;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.14 no.2
    • /
    • pp.172-176
    • /
    • 2010
  • Purpose: Patient motion during PET/CT scanning causes a mismatch between the transmission and emission data used for attenuation correction (AC), reducing the accuracy of quantitative evaluation. This study evaluated the utility of a reconstruction method that shifts the AC position when the emission scan position differs from the transmission scan position in 3D PET/CT imaging. Materials and Methods: On a GE Discovery STE16 scanner, a 1 mL syringe was placed at positions shifted by ${\pm}2$, 6 and 10 cm along the x and y axes from the central point of a polystyrene phantom ($20{\times}20{\times}110$ cm). After the syringe was filled with $^{18}F$-FDG at 5 kBq/mL, emission scans were acquired at the shifted positions and images were reconstructed with AC applied according to the position change. Iterative reconstruction was used with 2 iterations and 20 subsets, and decay correction for elapsed time was applied to every emission data set. ROIs were drawn at the syringe positions, and the radioactivity concentration (kBq/mL) and the %Difference (%D) relative to the central point were compared for each position. Results: The radioactivity concentration at the central point of the emission scan was 2.30 kBq/mL; it was 1.95, 1.82 and 1.75 kBq/mL along the +x axis, 2.07, 1.75 and 1.65 kBq/mL along the -x axis, 2.07, 1.87 and 1.90 kBq/mL along the +y axis, and 2.17, 1.85 and 1.67 kBq/mL along the -y axis. The corresponding %D values were 15, 20 and 23% (+x), 9, 23 and 28% (-x), 12, 21 and 20% (+y), and 8, 22 and 29% (-y). With the AC position shifting method, the concentrations were 2.00, 1.95 and 1.80 kBq/mL (+x), 2.25, 2.15 and 1.90 kBq/mL (-x), 2.07, 1.90 and 1.90 kBq/mL (+y), and 2.10, 2.02 and 1.72 kBq/mL (-y), and %D was 13, 15 and 21% (+x), 2, 6 and 17% (-x), 9, 17 and 17% (+y), and 8, 12 and 25% (-y). Conclusion: When the AC was mismatched, applying the AC position shifting method increased the radioactivity concentration by an average of 0.14 and 0.03 kBq/mL along the x and y axes, and %D improved by 6.1 and 4.2%, respectively. The farther the position from the central point, where spatial resolution is lower, the greater the reduction in radioactivity concentration. In actual clinical practice the degree of attenuation is larger, so this error is expected to increase when the AC is mismatched. Therefore, for lesions in regions where the AC is mismatched, applying the AC position shifting method can reduce the error in radioactivity concentration.
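
As a rough illustration of the %Difference comparison reported in this abstract, the sketch below assumes the common definition %D = |C_shifted - C_central| / C_central × 100; the paper does not state its exact formula, so this definition and the function name are assumptions.

```python
# Hedged sketch: %Difference (%D) between a shifted-position measurement and the
# central-point concentration. The formula below is an assumed, common definition;
# the paper's exact computation is not reproduced here.

CENTRAL_KBQ_PER_ML = 2.30  # reported radioactivity concentration at the central point

def percent_difference(shifted_kbq_per_ml: float,
                       central_kbq_per_ml: float = CENTRAL_KBQ_PER_ML) -> float:
    """Return %D of a shifted-position reading relative to the central point."""
    return abs(shifted_kbq_per_ml - central_kbq_per_ml) / central_kbq_per_ml * 100.0

# Example: the +2 cm reading on the +x axis without AC position shifting
print(round(percent_difference(1.95)))  # ~15, consistent with the reported 15%
```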

Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.53-69
    • /
    • 2011
  • Vision- and voice-based technologies are commonly utilized for human-robot interaction. But it is widely recognized that the performance of vision- and voice-based interaction systems deteriorates by a large margin in real-world situations due to environmental and user variances. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction for real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision- and voice-based technologies. In this paper, we propose the use of an accelerometer-based gesture interface as one of these alternative technologies, because accelerometers are effective in detecting the movements of the human body, while their performance is not limited by environmental contexts such as lighting conditions or a camera's field of view. Moreover, accelerometers are widely available nowadays in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 letters of the English alphabet, which is one of the essential repertoires for realizing robot-based education services. Recognizing 26 English handwriting patterns from accelerometers is a very difficult task to undertake because of the large number of pattern classes and the complexity of each pattern. The most difficult comparable problem previously undertaken was recognizing the acceleration signal patterns of 10 handwritten digits; most previous studies dealt with sets of 8~10 simple and easily distinguishable gestures useful for controlling home appliances, computer applications, robots, etc. Good features are essential for the success of pattern recognition. To promote discriminative power over the complex English alphabet patterns, we extracted 'motion trajectories' from the input acceleration signal and used them as the main feature. Investigative experiments showed that classifiers based on trajectories performed 3%~5% better than those with raw features, e.g. the acceleration signal itself or statistical figures. To minimize the distortion of trajectories, we applied a simple but effective set of smoothing filters and band-pass filters. It is well known that acceleration patterns for the same gesture are very different among different performers. To tackle this problem, online incremental learning is applied to make our system adaptive to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), where each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which is a problem because it not only slows down classification but also degrades recall performance. Regarding the latter phenomenon, we observed a tendency that as the number of reference patterns grows, some reference patterns contribute more to false positive classifications. Thus, we devised an algorithm for optimizing the reference pattern set based on the positive and negative contribution of each reference pattern; the algorithm is performed periodically to remove reference patterns that have a very low positive contribution or a high negative contribution, as sketched below. Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30 to 50. Each letter was performed 5 times per participant using a $Nintendo{(R)}$ $Wii^{TM}$ remote. The acceleration signal was sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded a very low recall rate and exhibited a very high pairwise confusion rate. Major confusion pairs were D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%). Though W was recalled perfectly, it contributed much to the false positive classification of N. By comparison with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures) and Samsung Electronics (97% for 10 digits and a control gesture), we find that the performance of our system is superior considering the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted 2 case studies of robot-based edutainment services. The services were implemented on various robot platforms and mobile devices including the $iPhone^{TM}$. The participating children exhibited improved concentration and active reaction to the service with our gesture interface. To prove the effectiveness of our gesture interface, a test was taken by the children after experiencing an English teaching service. The test results showed that those who played with the gesture interface-based robot content scored 10% better than those with conventional teaching. We conclude that the accelerometer-based gesture interface is a promising technology for flourishing real-world robot-based services and content by complementing the limits of today's conventional interfaces, e.g. touch screens, vision and voice.
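
The reference-pattern pruning idea mentioned in this abstract can be sketched roughly as follows. This is a minimal illustration only: the 1-NN matching, distance metric, contribution counters and thresholds are assumptions for exposition, not the authors' actual algorithm or parameters.

```python
# Minimal sketch of instance-based learning with periodic reference-pattern pruning.
# The matching rule, counters and thresholds are illustrative assumptions; the
# paper's actual optimization algorithm is not reproduced here.
from dataclasses import dataclass
import math

@dataclass
class ReferencePattern:
    label: str            # the alphabet letter this memorized sample represents
    trajectory: list      # smoothed, band-pass-filtered motion trajectory
    positive: int = 0     # times this pattern supported a correct classification
    negative: int = 0     # times this pattern caused a false positive

def distance(a, b):
    """Euclidean distance between two equal-length trajectories (placeholder metric)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(query, references):
    """1-nearest-neighbour classification over the memorized reference patterns."""
    best = min(references, key=lambda r: distance(query, r.trajectory))
    return best, best.label

def record_outcome(best, predicted, true_label):
    """Update the contribution counters once the true label becomes known."""
    if predicted == true_label:
        best.positive += 1
    else:
        best.negative += 1

def prune(references, min_positive=1, max_negative=3):
    """Periodically drop patterns with very low positive or high negative contribution."""
    return [r for r in references
            if r.positive >= min_positive and r.negative <= max_negative]
```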

A Study on Oriental Medical Diagnosis of Musculoskeletal Disorders using Moire Image (Moire 영상을 이용한 근골격계 질환의 한의학적 진단에 관한 연구)

  • Lee Eun-Kyoung;Yu Seung-Hyun;Lee Su-Kyung;Kang Sung-Ho;Han Jong-Min;Chong Myong-Soo;Chun Eun-Joo;Song Yung-Sun;Lee Ki-Nam
    • Journal of Society of Preventive Korean Medicine
    • /
    • v.4 no.2
    • /
    • pp.72-92
    • /
    • 2000
  • This research studied an Oriental medicine-based method of diagnosing occupational musculoskeletal disorders. The researcher searched through the existing relevant medical literature and took moire topograms using a moire topography device. Through this work, the following conclusions were reached regarding the possibility of using moire topography as a device for diagnosing musculoskeletal disorders in Oriental medicine. 1. Western medicine outlines its criteria for screening occupational musculoskeletal disorders as follows. A. The disorder must clearly include one or more of the subjective symptoms characterized by pain, hypoesthesia, dysesthesia, anesthesia, etc. B. There should be clinically accepted objective observations and diagnoses showing that the disorder presents symptoms such as tenderness, induration and edema that can appear with occupational musculoskeletal disorders; dyskinesia should be recognized, or abnormality should be found in electromyographic or nerve examinations. C. It should be established that, prior to the occurrence of the symptoms or findings, the patient was engaged in work requiring improper posture or movement. That is, this is an approach in which abnormality in the musculoskeletal system is seen as coming from material and structural defects, and the abnormality in the musculoskeletal system and related secretions is adjusted and controlled accordingly. 2. Oriental medicine sees that a patient develops the pain of occupational musculoskeletal disorders when the flow of life force and blood cannot be properly activated, causing lumps to form in the body and blocking the flow of life force and blood in some parts of the body. Hence, Oriental medicine focuses on resolving the cause of the weakened flow of life force and blood, instead of taking the material approach of correcting structural abnormality. Furthermore, Oriental medicine sees that when muscle tension builds up, it presses on the blood vessels and nerves passing by, triggering circulatory disturbance and neurological reactions and thus leading to lesions. Thus, instead of taking a skeletal or neurophysiological approach, it seeks to fundamentally resolve the cause of the inactivated flow of life force and blood in the muscles. As a result, Oriental medicine attributes the main cause of musculoskeletal disorders to muscle tension and its build-up, which stem from an individual's long-formed chronic habits and work environment. This approach considers not only the social-structural aspects, including company owners and the work environment, that the existing methods have looked at, but also individual workers' responsibility and their environmental factors. Hence, it is a step forward. 3. The diagnosis of musculoskeletal disorders in Oriental medicine is characterized by the fact that the Oriental medicine doctor uses not only photographs taken by himself, but also various detection devices to gather information and pass comprehensive judgment on it. Thus, the core of diagnosis in Oriental medicine is to develop diagnostic devices matching the characteristics of the information to be obtained and to interpret that information from the viewpoint of Oriental medicine. Diagnosis using such devices values the whole state of the patient as well as formal abnormality, and the whole balance and muscular state of the patient serve as the basis of diagnosis. Hence, this method, instead of depending on the information gathered from devices under Western medicine, requires devices that provide information on the whole state of the patient in addition to the local abnormality information that X-ray, CT, etc. can offer. This method sees muscle as the central part of abnormality in the musculoskeletal system and thus requires diagnostic devices that can show the muscular state. 4. A diagnostic device using moire topography in Oriental medicine has the advantages below and can be used for diagnosing musculoskeletal disorders in industrial workers. First, the device can provide information on the body in an unbalanced state, and thus identify the imbalance and difference in height between the left and right sides of the body that the patient cannot notice at normal times. Second, the device shows the twisting of muscles or regions of induration in a contour map; this is not possible with existing imaging machines such as X-ray, CT, etc., differentiating it from existing machines. Third, the device makes it possible for Oriental medicine to take its unique approach to abnormality in the musculoskeletal system. Oriental medicine sees the state and imbalance of the muscles as major factors in determining the lesions of the musculoskeletal system, and the device makes it possible to image the state of the muscles in detail; in this respect, the device is significant. Fourth, the device has the advantage of being a non-invasive diagnostic device.

Evaluating efficiency of Split VMAT plan for prostate cancer radiotherapy involving pelvic lymph nodes (골반 림프선을 포함한 전립선암 치료 시 Split VMAT plan의 유용성 평가)

  • Mun, Jun Ki;Son, Sang Jun;Kim, Dae Ho;Seo, Seok Jin
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.27 no.2
    • /
    • pp.145-156
    • /
    • 2015
  • Purpose : The purpose of this study is to evaluate the efficiency of Split VMAT planning (contouring the rectum divided into an upper and a lower part to reduce rectal dose) compared to Conventional VMAT planning (contouring the whole rectum) for prostate cancer radiotherapy involving the pelvic lymph nodes. Materials and Methods : A total of 9 cases were enrolled. Each case received radiotherapy with Split VMAT planning to the prostate and pelvic lymph nodes. Treatment was delivered using a TrueBeam STX (Varian Medical Systems, USA) and planned on Eclipse (Ver. 10.0.42, Varian, USA) with PRO3 (Progressive Resolution Optimizer 10.0.28) and AAA (Anisotropic Analytic Algorithm Ver. 10.0.28). The lower rectum contour was defined as starting 1 cm superior and ending 1 cm inferior to the prostate PTV; the upper rectum was defined as the remainder of the whole rectum excluding the lower rectum. The Split VMAT plan consisted of 10 MV coplanar $360^{\circ}$ arcs, each with a $30^{\circ}$ collimator angle. An SIB (Simultaneous Integrated Boost) prescription was employed, delivering 50.4 Gy to the pelvic lymph nodes and 63~70 Gy to the prostate in 28 fractions. The $D_{mean}$ of the whole rectum from the Split VMAT plan was applied as the DVC (Dose Volume Constraint) for the whole rectum in the Conventional VMAT plan. All other parameters were set to be the same as the existing treatment plans. To minimize the dose differences that appear randomly during optimization, all plans were optimized and calculated twice using a 0.2 cm grid. All plans were normalized to prostate $PTV_{100%}$ = 90% or 95%. The techniques were compared on the $D_{mean}$ of the whole rectum, upper rectum, lower rectum and bladder, the $V_{50%}$ of the upper rectum, total MU, and the H.I. (Homogeneity Index) and C.I. (Conformity Index) of the PTV. All Split VMAT plans were verified by gamma test with portal dosimetry using the EPID. Results : DVH analysis demonstrated a difference between the Conventional and Split VMAT plans. The Split VMAT plan was better for the $D_{mean}$ of the whole rectum (maximum 134.4 cGy, minimum 43.5 cGy, average difference 75.6 cGy), the $D_{mean}$ of the upper rectum (maximum 1113.5 cGy, minimum 87.2 cGy, average 550.5 cGy), the $D_{mean}$ of the lower rectum (maximum 100.5 cGy, minimum -34.6 cGy, average 34.3 cGy), the $D_{mean}$ of the bladder (maximum 271 cGy, minimum -55.5 cGy, average 117.8 cGy), and the $V_{50%}$ of the upper rectum (maximum 63.4%, minimum 3.2%, average 23.2%). There was no significant difference in the H.I. and C.I. of the PTV between the two plans. The Split VMAT plan required on average 77 MU more than the Conventional plan. All IMRT verification gamma tests for the Split VMAT plans passed over 90.0% at 2 mm/2%. Conclusion : The Split VMAT plan appeared more favorable in most cases than the Conventional VMAT plan for prostate cancer radiotherapy involving the pelvic lymph nodes. The split VMAT planning technique made it possible to reduce the upper rectum dose, and thus the whole rectal dose, compared to conventional VMAT planning, and it also increased treatment efficiency.
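
As a simple illustration of how the per-case $D_{mean}$ differences reported above could be summarized (maximum, minimum and average difference between the two plan types), the following sketch may help; the input values are hypothetical placeholders, not the study's data.

```python
# Illustrative summary of per-case mean-dose differences (Conventional - Split).
# The numbers below are hypothetical placeholders, not values from the study.

def summarize_difference(conventional_cgy, split_cgy):
    """Return (max, min, mean) of the per-case D_mean differences in cGy."""
    diffs = [c - s for c, s in zip(conventional_cgy, split_cgy)]
    return max(diffs), min(diffs), sum(diffs) / len(diffs)

conventional = [4321.0, 4250.5, 4410.2]  # hypothetical whole-rectum D_mean values (cGy)
split        = [4240.3, 4195.0, 4302.1]  # corresponding Split VMAT values (cGy)
print(summarize_difference(conventional, split))
```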

Effect of Dose Rate Variation on Dose Distribution in IMRT with a Dynamic Multileaf Collimator (동적다엽콜리메이터를 이용한 세기변조방사선 치료 시 선량분포상의 선량률 변화에 따른 효과)

  • Lim, Kyoung-Dal;Jae, Young-Wan;Yoon, Il-Kyu;Lee, Jae-Hee;Yoo, Suk-Hyun
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.24 no.1
    • /
    • pp.1-10
    • /
    • 2012
  • Purpose: To evaluate differences in dose distribution when the dose rate is changed in intensity-modulated radiation therapy delivered with a dynamic multileaf collimator. Materials and Methods: Two IMRT treatment plans, a small-field and a large-field plan, were made using a commercial treatment planning system (Eclipse, Varian, Palo Alto, CA). Each plan had three sub-plans with dose rates of 100, 400 and 600 MU/min. A chamber array (2D-Array seven29, PTW-Freiburg) was positioned between solid water phantom slabs to give a measurement depth of 5 cm and a backscatter depth of 5 cm. Beams were delivered to the array detector using the 6 MV beam of a linear accelerator (Clinac 21EX, Varian, Palo Alto, CA) equipped with a 120-leaf MLC (Millennium 120, Varian). First, each beam was delivered with the same dose rate as planned to obtain reference values. After these standard measurements, the dose rates were changed as follows: 1) for plans with 100 MU/min, the dose rate was varied to 200, 300, 400, 500 and 600 MU/min; 2) for plans with 400 MU/min, to 100, 200, 300, 500 and 600 MU/min; 3) for plans with 600 MU/min, to 100, 200, 300, 400 and 500 MU/min. Finally, using analysis software (Verisoft 3.1, PTW-Freiburg), the dose differences and distributions between the reference and dose-rate-varied measurements were evaluated. Results: For the small-field plan, the local dose differences were -0.8, -1.1, -1.3, -1.5 and -1.6% for dose rates of 200, 300, 400, 500 and 600 MU/min, respectively (100 MU/min reference); +0.9, +0.3, +0.1, -0.2 and -0.2% for 100, 200, 300, 500 and 600 MU/min (400 MU/min reference); and +1.4, +0.8, +0.5, +0.3 and +0.2% for 100, 200, 300, 400 and 500 MU/min (600 MU/min reference). For the large-field plan, the pass-rate differences were -1.3, -1.6, -1.8, -2.0 and -2.4% for 200, 300, 400, 500 and 600 MU/min (100 MU/min reference); +2.0, +1.8, +0.5, -1.2 and -1.6% for 100, 200, 300, 500 and 600 MU/min (400 MU/min reference); and +1.5, +1.9, +1.7, +1.9 and +1.2% for 100, 200, 300, 400 and 500 MU/min (600 MU/min reference). In short, the dose difference due to dose-rate variation ranged from -2.4% to +2.0%. Conclusion: With the Varian linear accelerator and 120-leaf MLC, the IMRT dose distribution differed only slightly (<${\pm}3%$) even when the dose rate was changed.
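
For clarity, the local dose-difference comparison described above can be illustrated with the short sketch below; the detector readings are hypothetical and the percentage definition (varied delivery relative to the reference delivery) is an assumption, since the paper relies on the Verisoft analysis software rather than an explicit formula.

```python
# Sketch of a per-detector local dose-difference comparison between a reference
# delivery and a dose-rate-varied delivery. Readings are hypothetical placeholders.

def local_dose_difference_percent(reference, varied):
    """Per-detector local difference (%) of the varied delivery vs. the reference."""
    return [100.0 * (v - r) / r for r, v in zip(reference, varied) if r > 0]

reference_gy = [1.02, 0.98, 1.10, 0.55]  # hypothetical readings at the planned dose rate
varied_gy    = [1.01, 0.99, 1.09, 0.54]  # same plan delivered at a different dose rate
diffs = local_dose_difference_percent(reference_gy, varied_gy)
print(f"mean local difference: {sum(diffs) / len(diffs):+.2f}%")
```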

An Intelligence Support System Research on KTX Rolling Stock Failure Using Case-based Reasoning and Text Mining (사례기반추론과 텍스트마이닝 기법을 활용한 KTX 차량고장 지능형 조치지원시스템 연구)

  • Lee, Hyung Il;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.1
    • /
    • pp.47-73
    • /
    • 2020
  • KTX rolling stock is a system consisting of several machines, electrical devices and components, and its maintenance requires considerable expertise and experience. In the event of a rolling stock failure, the maintainer's knowledge and experience make a difference in how quickly and how well the problem is solved, and hence in the resulting availability of the vehicle. Although problem solving is generally based on fault manuals, experienced and skilled professionals can quickly diagnose and take action by applying personal know-how. Since this knowledge exists in a tacit form, it is difficult to pass it on completely to a successor, and previous studies have developed case-based rolling stock expert systems to turn it into a data-driven resource. Nonetheless, research on the KTX rolling stock most commonly used on the main line, and on systems that extract the meaning of text and search for similar cases, is still lacking. Therefore, this study proposes an intelligent support system that provides an action guide for emerging failures by using the know-how of rolling stock maintenance experts as examples of problem solving. For this purpose, a case base was constructed by collecting rolling stock failure data generated from 2015 to 2017, and an integrated dictionary was built from the case base to include the essential terminology and failure codes specific to the railway rolling stock sector. Based on the deployed case base, a new failure was matched against past cases and the top three most similar failure cases were retrieved, with their actual actions proposed as a diagnostic guide. To compensate for the limitation of keyword-matching case retrieval in previous case-based rolling stock expert system studies, this study applied various dimensionality reduction techniques that take the semantic relationships among failure descriptions into account when calculating similarity, and verified their usefulness through experiments. Three algorithms, Non-negative Matrix Factorization (NMF), Latent Semantic Analysis (LSA) and Doc2Vec, were applied to extract the characteristics of each failure, and similar cases were retrieved by measuring the cosine distance between the resulting vectors. Precision, recall and F-measure were used to assess the performance of the proposed actions. To compare the dimensionality reduction techniques, these three algorithms were evaluated against an algorithm that randomly extracts failure cases with identical failure codes and an algorithm that applies cosine similarity directly to word-based vectors, and analysis of variance confirmed that the performance differences among the five algorithms were statistically significant. In addition, differences in performance depending on the number of dimensions used for dimensionality reduction were examined to derive settings suitable for practical application. The analysis showed that direct word-based cosine similarity performed better than the NMF- and LSA-based reductions, and that the Doc2Vec-based algorithm performed best. Furthermore, for the dimensionality reduction techniques, performance improved as the number of dimensions increased up to an appropriate level. Through this study, we confirmed the usefulness of effective methods for extracting the characteristics of data and converting unstructured data when applying case-based reasoning in the specialized field of KTX rolling stock, where most attributes are recorded as text. Text mining is being studied for use in many areas, but studies using such text data are still lacking in environments with many specialized terms and limited access to data, such as the one addressed here. In this regard, it is significant that this study first presented an intelligent diagnostic system that suggests actions by retrieving cases with text mining techniques that extract the characteristics of a failure, complementing keyword-based case search. This is expected to provide implications as a basic study for developing diagnostic systems that can be used immediately on site.
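
The retrieval step described above (dimensionality reduction of failure texts followed by cosine-similarity ranking) can be sketched roughly as follows, here using a TF-IDF plus truncated SVD (LSA-style) pipeline from scikit-learn. The corpus, preprocessing and parameters are illustrative assumptions, not the study's actual case base or tuned settings.

```python
# Minimal sketch of "retrieve the top-3 most similar past failure cases".
# LSA-style pipeline: TF-IDF -> truncated SVD -> cosine similarity.
# The failure texts and parameters are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

case_base = [
    "traction motor overheating alarm during acceleration",  # hypothetical failure texts
    "pantograph contact loss at high speed",
    "brake cylinder pressure drop in trailer car",
]
new_failure = "overheating warning from traction motor"

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(case_base + [new_failure])

svd = TruncatedSVD(n_components=2, random_state=0)  # dimensionality reduction (LSA)
vectors = svd.fit_transform(tfidf)

# Rank past cases by cosine similarity to the new failure and keep the top 3
similarities = cosine_similarity(vectors[-1:], vectors[:-1])[0]
top3 = similarities.argsort()[::-1][:3]
for rank, idx in enumerate(top3, 1):
    print(f"{rank}. case {idx}: similarity {similarities[idx]:.2f}")
```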

An integrated Method of New Casuistry and Specified Principlism as Nursing Ethics Methodology (새로운 간호윤리학 방법론;통합된 사례방법론)

  • Um, Young-Rhan
    • Journal of Korean Academy of Nursing Administration
    • /
    • v.3 no.1
    • /
    • pp.51-64
    • /
    • 1997
  • The purpose of this study was to introduce an integrated approach of new casuistry and specified principlism for resolving ethical problems and studying nursing ethics. In studying clinical ethics and nursing ethics, there is no systematic research method. While nurses often experience ethical dilemmas in practice, much of the previous research on nursing ethics has focused merely on describing the existing problems. In addition, ethicists have presented theoretical analysis and critique rather than specific problem-solving strategies. In clinical situations there is a need for an integrated method that can provide an objective description of existing problem situations as well as specific problem-solving methods. We inherit two distinct ways of discussing ethical issues. One of these frames the issues in terms of principles, rules and other general ideas; the other focuses on the specific features of particular kinds of moral cases. In the first way, general ethical rules relate to specific moral cases in a theoretical manner, with universal rules serving as "axioms" from which particular moral judgments are deduced as theorems. In the second, this relation is frankly practical, with general moral rules serving as "maxims" that can be fully understood only in terms of the paradigmatic cases that define their meaning and force. Theoretical arguments are structured in ways that free them from any dependence on the circumstances of their presentation and ensure them a validity of a kind that is not affected by the practical context of use. In formal arguments, particular conclusions are deduced from ("entailed by") the initial axioms or universal principles that are the apex of the argument, so the truth or certainty that attaches to those axioms flows downward to the specific instances to be "proved". In the language of formal logic, the axioms are major premises, the facts that specify the present instance are minor premises, and the conclusion to be "proved" follows necessarily from the initial premises. Practical arguments, by contrast, involve a wider range of factors than formal deductions and are read with an eye to their occasion of use. Instead of aiming at strict entailments, they draw on the outcomes of previous experience, carrying over the procedures used to resolve earlier problems and reapplying them in new problematic situations. Practical arguments depend for their power on how closely the present circumstances resemble those of the earlier precedent cases for which this particular type of argument was originally devised. So, in practical arguments, the truths and certitudes established in the precedent cases pass sideways to provide "resolutions" of later problems. In the language of rational analysis, the facts of the present case define the grounds on which any resolution must be based; the general considerations that carried weight in similar situations provide warrants that help settle future cases. So the resolution of any problem holds good presumptively; its strength depends on the similarities between the present case and the precedents, and its soundness can be challenged (or rebutted) in situations that are recognized as exceptional. Jonsen & Toulmin (1988) and Jonsen (1991) introduced new casuistry as a practical method. The Oxford English Dictionary defines casuistry quite accurately as "that part of ethics which resolves cases of conscience, applying the general rules of religion and morality to particular instances in which circumstances alter cases or in which there appears to be a conflict of duties." They modified the casuistry of the medieval ages for use in clinical situations, characterized by "the typology of cases and analogy as an inference method". A case is the unit of analysis; the structure of a case is formed by the interaction of the situation and moral rules. The situation is what surrounds or stands around; the moral rule is the essence of the case. The analogy can be objective because "the grounds, the warrants, the theoretical backing, the modal qualifiers" are identified in the cases. Specified principlism is the method by which DeGrazia (1992) integrated principlism with the specification introduced by Richardson (1990). In this method, a principle is specified by adding information about the limitations of its scope and restricting its range; these should be substantive qualifications. The integrated method is a combination of new casuistry and specified principlism. For example, consider the study "Ethical problems experienced by nurses in the care of terminally ill patients" (Um, 1994), in which semi-structured in-depth interviews were conducted with fifteen nurses who mainly took care of terminally ill patients. In the first stage, twenty-one cases were identified as relevant to the topic and classified into four types of problems; one of these types was the patient's refusal of care. In the second stage, the ethical problems in each case were defined and the case was analyzed, that is, the reasons, the ethical values and the related ethical principles in the case were analyzed, and the interpretation was made synthetically by integrating the results of the analysis with the situation. The third stage was the ordering phase of the cases, done according to the results of the interpretation and the principles common to the cases. The first two stages follow the methodology of new casuistry, and the final stage follows the methodology of specified principlism. The common principles were the principle of autonomy and the principle of caring. The principle of autonomy was specified: when competent patients refuse care, nurses should discontinue the care to respect the patients' decision. The principle of caring was also specified: when competent patients refuse care, nurses should continue to provide the care in spite of the patients' refusal in order to preserve their life. These specifications may lead to opposite behaviors, which emphasizes the importance of nurses' will and intention in making decisions in clinical situations.

Clinical Outcomes of Off-pump Coronary Artery Bypass Grafting (심폐바이패스 없는 관상동맥우회술의 임상성적)

  • Shin, Je-Kyoun;Kim, Jeong-Won;Jung, Jong-Pil;Park, Chang-Ryul;Park, Soon-Eun
    • Journal of Chest Surgery
    • /
    • v.41 no.1
    • /
    • pp.34-40
    • /
    • 2008
  • Background: Off-pump coronary artery bypass grafting (OPCAB) shows fewer side effects than surgery with cardiopulmonary bypass, with benefits in myocardial protection, pulmonary and renal protection, coagulation, inflammation and cognitive function. We analyzed the clinical results of our OPCAB cases. Material and Method: From May 1999 to August 2007, OPCAB was performed in 100 patients out of a total of 310 coronary artery bypass surgeries. There were 63 males and 37 females, aged 29 to 82 years, with a mean age of $62{\pm}10$ years. The preoperative diagnoses were unstable angina in 77 cases, stable angina in 16, and acute myocardial infarction in 7. The associated diseases were hypertension in 48 cases, diabetes in 42, chronic renal failure in 10, carotid artery disease in 6, and chronic obstructive pulmonary disease in 5. The preoperative cardiac ejection fraction ranged from 26% to 74% (mean $56.7{\pm}11.6%$). Preoperative angiograms showed three-vessel disease in 47 cases, two-vessel disease in 25, one-vessel disease in 24, and left main disease in 23. The internal thoracic artery was harvested by the pedicled technique through a median sternotomy in 97 cases. The radial artery and greater saphenous vein were harvested in 70 and 45 cases, respectively (endoscopic harvest in 53 and 41 cases, respectively). Result: The mean number of grafts was $2.7{\pm}1.2$ per patient, with grafts sourced from the unilateral internal thoracic artery in 95 (95%) cases, the radial artery in 62, the greater saphenous vein in 39, and the bilateral internal thoracic artery in 2. Sequential anastomoses were performed in 46 cases. The anastomosed vessels were the left anterior descending artery in 97 cases, the obtuse marginal branch in 63, the diagonal branch in 53, the right coronary artery in 30, the intermediate branch in 11, the posterior descending artery in 9, and the posterior lateral branch in 3. Conversion to cardiopulmonary bypass occurred in 4 cases. Graft patency was checked before discharge by coronary angiography or multi-slice coronary CT angiography in 72 cases, with a patency rate of 92.9% (184/198). There was one case of mortality, due to sepsis. No postoperative arrhythmias or myocardial infarctions were observed. Postoperative complications were a cerebral stroke in 1 case and a wound infection in 1. The mean duration of respirator care was $20{\pm}35$ hours and the mean stay in the intensive care unit was $68{\pm}47$ hours. The mean amount of blood transfused was $4.0{\pm}2.6$ packs per patient. Conclusion: We found good clinical outcomes after OPCAB, and suggest that OPCAB could expand the use of coronary artery bypass grafting.

Application of LCA on Lettuce Cropping System by Bottom-up Methodology in Protected Cultivation (시설상추 농가를 대상으로 하는 bottom-up 방식 LCA 방법론의 농업적 적용)

  • Ryu, Jong-Hee;Kim, Kye-Hoon;Kim, Gun-Yeob;So, Kyu-Ho;Kang, Kee-Kyung
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.44 no.6
    • /
    • pp.1195-1206
    • /
    • 2011
  • This study was conducted to apply LCA (Life cycle assessment) methodology to lettuce (Lactuca sativa L.) production systems in Namyang-ju as a case study. Five lettuce growing farms with three different farming systems (two farms with organic farming system, one farm with a system without agricultural chemicals and two farms with conventional farming system) were selected at Namyangju city of Gyeonggi-province in Korea. The input data for LCA were collected by interviewing with the farmers. The system boundary was set at a cropping season without heating and cooling system for reducing uncertainties in data collection and calculation. Sensitivity analysis was carried out to find out the effect of type and amount of fertilizer and energy use on GHG (Greenhouse Gas) emission. The results of establishing GTG (Gate-to-Gate) inventory revealed that the quantity of fertilizer and energy input had the largest value in producing 1 kg lettuce, the amount of pesticide input the smallest. The amount of electricity input was the largest in all farms except farm 1 which purchased seedlings from outside. The quantity of direct field emission of $CO_2$, $CH_4$ and $N_2O$ from farm 1 to farm 5 were 6.79E-03 (farm 1), 8.10E-03 (farm 2), 1.82E-02 (farm 3), 7.51E-02 (farm 4) and 1.61E-02 (farm 5) kg $kg^{-1}$ lettuce, respectively. According to the result of LCI analysis focused on GHG, it was observed that $CO_2$ emission was 2.92E-01 (farm 1), 3.76E-01 (farm 2), 4.11E-01 (farm 3), 9.40E-01 (farm 4) and $5.37E-01kg\;CO_2\;kg^{-1}\;lettuce$ (farm 5), respectively. Carbon dioxide contribute to the most GHG emission. Carbon dioxide was mainly emitted in the process of energy production, which occupied 67~91% of $CO_2$ emission from every production process from 5 farms. Due to higher proportion of $CO_2$ emission from production of compound fertilizer in conventional crop system, conventional crop system had lower proportion of $CO_2$ emission from energy production than organic crop system did. With increasing inorganic fertilizer input, the process of lettuce cultivation covered higher proportion in $N_2O$ emission. Therefore, farms 1 and 2 covered 87% of total $N_2O$ emission; and farm 3 covered 64%. The carbon footprints from farm 1 to farm 5 were 3.40E-01 (farm 1), 4.31E-01 (farm 2), 5.32E-01 (farm 3), 1.08E+00 (farm 4) and 6.14E-01 (farm 5) kg $CO_2$-eq. $kg^{-1}$ lettuce, respectively. Results of sensitivity analysis revealed the soybean meal was the most sensitive among 4 types of fertilizer. The value of compound fertilizer was the least sensitive among every fertilizer imput. Electricity showed the largest sensitivity on $CO_2$ emission. However, the value of $N_2O$ variation was almost zero.