• Title/Summary/Keyword: Flow failure

Search Results: 874

Hybrid Off-pump Coronary Artery Bypass Combined with Percutaneous Coronary Intervention: Indications and Early Results (심폐바이패스 없이 시행하는 관상동맥우회술과 경피적 관상동맥중재술의 병합요법 : 적응증 및 조기성적)

  • Hwang Ho Young;Kim Jin Hyun;Cho Kwang Ree;Kim Ki-Bong
    • Journal of Chest Surgery
    • /
    • v.38 no.11 s.256
    • /
    • pp.733-738
    • /
    • 2005
  • Background: The possibility of incomplete revascularization and the development of flow competition after revascularization of borderline lesions have made the hybrid strategy an option for complete revascularization. Material and Method: From January 1998 to July 2004, 25 patients (3.2%) underwent hybrid revascularization among 782 total OPCAB procedures. Clinical results and angiographic patencies were evaluated. Percutaneous coronary intervention (PCI) was performed before CABG in 8 patients and after CABG in 17 patients. Result: The reasons for PCI before CABG were to achieve complete revascularization with minimally invasive surgery (n=7) and emergent PCI for a culprit lesion (n=1). The indications for PCI after CABG were a high possibility of flow competition in a borderline lesion of the right coronary artery territory (n=8), a diffuse atheromatous lesion preventing graft anastomosis (n=5), a severely calcified ascending aorta with no additional arterial graft available (n=3), and an intramyocardial coronary lesion (n=1). The mean number of distal anastomoses was 2.3±1.0, and the mean number of lesions treated by PCI was 1.2±0.4. There was no operative or procedure-related mortality. The PCI-related complication was a periprocedural myocardial infarction in one patient; complications related to CABG were transient atrial fibrillation (n=5), perioperative myocardial infarction (n=1), and transient renal dysfunction (n=1). Early postoperative coronary angiography (1.8±1.6 days) revealed a 100% graft patency rate (57/57). Stenosis occurred in one patient who had undergone PCI before CABG and was successfully treated with re-ballooning. During midterm follow-up (mean 25±26 months), one patient died of congestive heart failure. All survivors (n=24) underwent follow-up coronary angiography, which showed that all grafts (56/57) were patent except for one string sign. In-stent restenosis developed in 2 patients who had received bare metal stents. Conclusion: In selected patients, complete revascularization was achieved with low risk using the hybrid strategy.

Effect of Pressure Rise Time on Tidal Volume and Gas Exchange During Pressure Control Ventilation (압력조절환기법에서 압력상승시간(Pressure Rise Time)이 흡기 일환기량 및 가스교환에 미치는 영향)

  • Jeoung, Byung-O;Koh, Youn-Suck;Shim, Tae-Sun;Lee, Sang-Do;Kim, Woo-Sung;Kim, Dong-Soon;Kim, Won-Dong;Lim, Chae-Man
    • Tuberculosis and Respiratory Diseases
    • /
    • v.48 no.5
    • /
    • pp.766-772
    • /
    • 2000
  • Background: Pressure rise time (PRT) is the time in which the ventilator achieves the set airway pressure in pressure-targeted modes such as pressure control ventilation (PCV). In principle, the peak inspiratory flow rate of the ventilator varies with PRT, and if PRT is set to a shorter duration, the effective duration at the target pressure level is prolonged, which in turn should increase the inspiratory tidal volume (Vti) and the mean airway pressure (Pmean). We also postulated that the increase in Vti with shortening of PRT may relate inversely to the patients' baseline airway resistance. Methods: In 13 paralyzed patients with respiratory failure of various causes on PCV (pressure control 18±9.5 cmH2O, FIO2 0.6±0.3, PEEP 5±3 cmH2O, f 20/min, I:E 1:2) delivered by a Servo 300 ventilator (Siemens-Elema, Solna, Sweden), PRTs of 10%, 5%, and 0% were applied in random order. At 30 min of each PRT trial, peak inspiratory flow (PIF, L/sec), Vti (ml), Pmean (cmH2O), and arterial blood gas analysis (ABGA) were determined. Results: At PRT 10%, 5%, and 0%, PIF was 0.69±0.13, 0.77±0.19, and 0.83±0.22 L/sec, respectively (p<0.001); Vti was 425±94, 439±101, and 456±106 ml, respectively (p<0.001); and Pmean was 11.2±3.7, 12.0±3.7, and 12.5±3.8 cmH2O, respectively (p<0.001). pH was 7.40±0.08, 7.40±0.92, and 7.41±0.96, respectively (p=0.00); PaCO2 (mmHg) was 47.4±15.8, 47.2±15.7, and 44.6±16.2, respectively (p=0.004); PAO2−PaO2 (mmHg) was 220±98, 224±95, and 227±94, respectively (p=0.004); and VD/VT, determined as (PaCO2−PECO2)/PaCO2, was 0.67±0.07, 0.67±0.08, and 0.66±0.08, respectively (p=0.007). The correlation between airway resistance and the change in Vti from PRT 10% to 0% was r = -0.243 (p=0.498). Conclusion: Shortening of the pressure rise time during PCV was associated with increased tidal volume, increased mean airway pressure, and lower PaCO2. (A worked example of the dead-space calculation is sketched after this entry.)

  • PDF
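
The dead-space ratio reported in the abstract above follows the Bohr-Enghoff relation, VD/VT = (PaCO2 − PECO2)/PaCO2. A minimal sketch of that calculation is given below; the mixed-expired CO2 value is hypothetical, chosen only so that the result reproduces the VD/VT of roughly 0.67 reported at PRT 10% (mean PaCO2 47.4 mmHg).

```python
# Bohr-Enghoff dead-space fraction: VD/VT = (PaCO2 - PECO2) / PaCO2.
# The PECO2 of 15.6 mmHg below is a hypothetical value chosen to reproduce the
# VD/VT of about 0.67 reported at PRT 10%; it is not a measurement from the study.
def bohr_dead_space_fraction(paco2_mmhg: float, peco2_mmhg: float) -> float:
    """Return the physiologic dead-space fraction VD/VT."""
    if paco2_mmhg <= 0:
        raise ValueError("PaCO2 must be positive")
    return (paco2_mmhg - peco2_mmhg) / paco2_mmhg

print(round(bohr_dead_space_fraction(47.4, 15.6), 2))  # -> 0.67
```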

A Study on Long-term Hemodialysis Patients' Hypotension and the Prevention of Blood Loss in the Coil during Hemodialysis (장기혈액투석환자의 투석중 혈압하강과 Coil내 혈액손실 방지를 위한 기초조사)

  • 박순옥
    • Journal of Korean Academy of Nursing
    • /
    • v.11 no.2
    • /
    • pp.83-104
    • /
    • 1981
  • Hemodialysis is an essential treatment for the long-term care of patients with chronic renal failure and for patient management before and after kidney transplantation. It sustains the lives of end-stage renal failure patients who do not improve despite a strict regimen, and it is essential for maintaining their daily lives. Nursing care during hemodialysis can therefore have a significant effect on the patient's life. The purpose of this study was to obtain basic data for addressing the hypotension that patients may experience during dialysis and the blood loss, caused by incomplete rinsing of blood from the coil during the hemodialysis process, that aggravates the anemic state of hemodialysis patients. The subjects were 44 patients who received hemodialysis 691 times in the hemodialysis unit. The data were collected at Gang Nam St. Mary's Hospital from January 1, 1981 to April 30, 1981 using direct observation, clinical laboratory tests, and body weight measurements, and were analyzed with chi-square tests, t-tests, and analysis of variance. The results were as follows.
A. Clinical laboratory data and other data by dialysis procedure. The average initial body weight was 2.37±0.97 kg, and the average body weight after each dialysis was 2.33±0.9 kg. The subjects' average hemoglobin was 7.05±1.93 g/dl and their average hematocrit was 20.84±3.82%. The average initial blood pressure was 174.03±23.75 mmHg, and the average blood pressure after dialysis was 158.45±25.08 mmHg. The subjects' average blood loss from blood sampling for laboratory data was 32.78±13.49 cc/month, and the average blood transfusion for blood replacement was 1.31±0.88 pints/month per patient.
B. The hypotensive state and coping approaches. The occurrence rate of hypotension was 28.08% (194 of 691 dialysis sessions). 1. Regarding initial blood pressure, the largest proportion (36.6%) was in the 150-179 mmHg group; regarding the degree of hypotension during dialysis, the largest proportion (28.9%) was in the 40-50 mmHg group. When the initial blood pressure was under 180 mmHg, clinical symptoms appeared in 59.8% of cases with a pressure drop of more than 20 mmHg; when it was above 180 mmHg, clinical symptoms appeared in 34.2% of cases with a drop of more than 40 mmHg. The higher the initial blood pressure, the stronger the degree of hypotension, and these differences were statistically significant (P=0.0000). 2. Regarding the time of onset of hypotension, 29.4% occurred after 3 hours; the longer the dialysis had proceeded, the stronger the degree of hypotension, and these differences were statistically significant (P=0.0142). 3. Among the observed symptoms, sweating and flushing accounted for 43.3%, and yawning and dizziness for 37.6%; these are therefore important symptoms suggesting hypotension during hemodialysis. The stages of coping with hypotension were as follows: 45.9% recovered after the blood flow rate was reduced from 200 cc/min to 100 cc/min and the venous pressure was reduced to 0-30 mmHg; 33.51% recovered after adjusting the blood flow rate and infusing 300 cc of 0.9% normal saline; 4.1% recovered after infusion of more than 300 cc of 0.9% normal saline; and 3.6% recovered with norepinephrine, 5.7% with blood transfusion, and 7.2% with albumin. The stronger the symptoms observed in hypotension, the more treatment was required for recovery, and these differences were statistically significant (P=0.0000).
C. Effects of albumin and hemofiltration on changes in blood pressure and osmolality. 1. The changes in blood pressure in the group that did not require treatment for hypotension and the group that did averaged 21.5 mmHg and 44.82 mmHg, respectively; the change was larger in the latter group and the difference was statistically significant (P=0.002). The changes in osmolality averaged 12.65 mOsm and 17.57 mOsm, respectively; the change was larger in the latter group, but the difference was not statistically significant (P=0.323). 2. The changes in blood pressure in the albumin-infused group and the group that did not require treatment for hypotension averaged 30 mmHg and 21.5 mmHg, respectively; the difference was not statistically significant (P=0.503). The changes in osmolality averaged 5.63 mOsm and 12.65 mOsm, respectively; the change was smaller in the former group, but the difference was not statistically significant (P=0.287). The changes in blood pressure in the albumin-infused group and the group that required treatment for hypotension averaged 30 mmHg and 44.82 mmHg, respectively; the change was smaller in the former group, but the difference was not significant (P=0.061). The changes in osmolality averaged 8.63 mOsm and 17.59 mOsm, respectively; the change was smaller in the former group, but the difference was not statistically significant (P=0.093). 3. The changes in blood pressure in the hemofiltration group and the group that did not require treatment for hypotension averaged 22 mmHg and 21.5 mmHg, respectively; the difference was not statistically significant (P=0.320). The changes in osmolality averaged 0.4 mOsm and 12.65 mOsm, respectively; the change was smaller in the former group, but the difference was not statistically significant (P=0.199). The changes in blood pressure in the hemofiltration group and the group that required treatment for hypotension averaged 22 mmHg and 44.82 mmHg, respectively; the change was smaller in the former group and the difference was statistically significant (P=0.035). The changes in osmolality averaged 0.4 mOsm and 17.59 mOsm, respectively; the change was smaller in the former group, but the difference was not statistically significant (P=0.086).
D. Changes in body weight and blood pressure between the hemofiltration and hemodialysis groups. 1. The changes in body weight in the hemofiltration and hemodialysis groups averaged 3.34 kg and 3.32 kg, respectively; the difference was not statistically significant (P=0.185), but the comparison of the standard deviations of the body weight changes was statistically significant (P=0.0000). The changes in blood pressure in the hemofiltration and hemodialysis groups averaged 17.81 mmHg and 19.47 mmHg, respectively; the difference was not statistically significant (P=0.119), but the comparison of the standard deviations of the blood pressure changes was statistically significant (P=0.0000).
E. The blood reinfusion method for the coil after hemodialysis and methods of preventing residual blood loss in the coil. 1. Comparing and analyzing the hematocrit of the residual blood in the coil by the factors influencing the reinfusion method: infusing 200 cc of saline left the least residual blood in the coil in a quantitative comparison of 0 cc, 50 cc, 100 cc, and 200 cc of saline, and the differences were statistically significant (P<0.001). The shaking-coil method left less residual blood in the coil than the non-shaking method, and the difference was statistically significant (P<0.05). Setting the coil pressure at 0 mmHg left less residual blood in the coil than 200 mmHg, and the difference was statistically significant (P<0.001). 2. When the reinfusion methods were divided into 10 combinations of these factors and compared, there was little difference within the group that received a 100 cc saline infusion with the coil pressure at 0 mmHg; the measured blood loss averaged 13.49 cc. The shaking-coil method with a 50 cc saline infusion and the coil pressure at 0 mmHg was the most effective in reducing residual blood; the measured blood loss averaged 15.18 cc.

  • PDF

Application of a Single-pulsatile Extracorporeal Life Support System for Extracorporeal Membrane Oxygenation -An experimental study - (단일 박동형 생명구조장치의 인공폐 적용 -실험연구-)

  • Kim, Tae-Sik;Sun, Kyung;Lee, Kyu-Baek;Park, Sung-Young;Hwang, Jae-Joon;Son, Ho-Sung;Kim, Kwang-Taik;Kim, Hyoung-Mook
    • Journal of Chest Surgery
    • /
    • v.37 no.3
    • /
    • pp.201-209
    • /
    • 2004
  • An extracorporeal life support (ECLS) system is a device for the treatment of respiratory and/or heart failure, and there have been many trials of its development and clinical application worldwide. Currently, a non-pulsatile blood pump is the standard for ECLS systems. Although a pulsatile blood pump is advantageous in physiologic respects, the high pressure generated in the circuits and the resultant blood cell trauma remain major concerns that make one reluctant to use a pulsatile blood pump in artificial lung circuits containing a membrane oxygenator. This study was designed to evaluate the hypothesis that placing a pressure-relieving compliance chamber between a pulsatile pump and a membrane oxygenator might reduce the above-mentioned side effects while providing physiologic pulsatile blood flow. The study was performed in a canine model of oleic acid-induced acute lung injury (N=16). The animals were divided into three groups according to the type of pump used and the presence of the compliance chamber. In group 1, a non-pulsatile centrifugal pump was used as a control (n=6). In group 2 (n=4), a single-pulsatile pump was used. In group 3 (n=6), a single-pulsatile pump equipped with a compliance chamber was used. The experimental model was a partial bypass between the right atrium and the aorta at a pump flow of 1.8∼2 L/min for 2 hours. The observed parameters focused on hemodynamic changes, intra-circuit pressure, laboratory studies of the blood profile, and the effect on blood cell trauma. In hemodynamics, the pulsatile groups 2 and 3 generated higher arterial pulse pressures (47±10 and 41±9 mmHg) than the non-pulsatile group 1 (17±7 mmHg, p<0.001). The intra-circuit pressure at the membrane oxygenator was 222±8 mmHg in group 1, 739±35 mmHg in group 2, and 470±17 mmHg in group 3 (p<0.001). At 2 hours of bypass, arterial oxygen partial pressures were significantly higher in the pulsatile groups 2 and 3 than in the non-pulsatile group 1 (77±41 mmHg in group 1, 96±48 mmHg in group 2, and 97±25 mmHg in group 3; p<0.05). The levels of plasma free hemoglobin, an indicator of blood cell trauma, were lowest in group 1, highest in group 2, and significantly decreased in group 3 (55.7±43.3, 162.8±113.6, and 82.5±25.1 mg%, respectively; p<0.05). Other laboratory findings of the blood profile did not differ. These results imply that a pulsatile blood pump is beneficial for oxygenation but deleterious in terms of high circuit pressure and blood cell trauma. However, when a pressure-relieving compliance chamber is placed between the pulsatile pump and the membrane oxygenator, it can significantly reduce the high circuit pressure and result in less blood cell trauma.

Deriving adoption strategies of deep learning open source framework through case studies (딥러닝 오픈소스 프레임워크의 사례연구를 통한 도입 전략 도출)

  • Choi, Eunjoo;Lee, Junyeong;Han, Ingoo
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.27-65
    • /
    • 2020
  • Many information and communication technology companies have released their internally developed AI technology to the public, for example Google's TensorFlow, Facebook's PyTorch, and Microsoft's CNTK. By releasing deep learning open source software, a company can strengthen its relationship with the developer community and the artificial intelligence (AI) ecosystem, and users can experiment with, implement, and improve the software. Accordingly, the field of machine learning is growing rapidly, and developers are using and reproducing various learning algorithms in each field. Although open source software has been analyzed in various ways, there is a lack of studies that help industry develop or use deep learning open source software. This study therefore attempts to derive an adoption strategy through case studies of deep learning open source frameworks. Based on the technology-organization-environment (TOE) framework and a literature review on open source software adoption, we employed a case study framework that includes technological factors (perceived relative advantage, perceived compatibility, perceived complexity, and perceived trialability), organizational factors (management support and knowledge and expertise), and environmental factors (availability of technology skills and services, and platform long-term viability). We conducted a case study analysis of three companies' adoption cases (two successes and one failure) and found that seven of the eight TOE factors, as well as several factors regarding the company, team, and resources, are significant for the adoption of a deep learning open source framework. By organizing the case study results, we identified five important success factors for adopting a deep learning framework: the knowledge and expertise of the developers on the team, the hardware (GPU) environment, a data enterprise cooperation system, a deep learning framework platform, and a deep learning framework tool service. In order for an organization to successfully adopt a deep learning open source framework, at the stage of using the framework, first, the hardware (GPU) environment for the AI R&D group must support the knowledge and expertise of the developers on the team. Second, it is necessary to support research developers' use of deep learning frameworks by collecting and managing data inside and outside the company through a data enterprise cooperation system. Third, deep learning research expertise must be supplemented through cooperation with researchers from academic institutions such as universities and research institutes. By satisfying these three procedures in the usage stage, companies will increase the number of deep learning research developers, their ability to use the deep learning framework, and the availability of GPU resources. In the proliferation stage, fourth, the company builds a deep learning framework platform that improves the research efficiency and effectiveness of developers, for example by automatically optimizing the hardware (GPU) environment. Fifth, the deep learning framework tool service team complements the developers' expertise by sharing information from the external deep learning open source framework community with the in-house community and by organizing developer retraining and seminars.
To implement the five identified success factors, a step-by-step enterprise procedure for adopting the deep learning framework was proposed: defining the project problem, confirming whether the deep learning methodology is the right method, confirming whether the deep learning framework is the right tool, using the deep learning framework in the enterprise, and spreading the framework within the enterprise. The first three steps (defining the project problem, confirming whether the deep learning methodology is the right method, and confirming whether the deep learning framework is the right tool) are pre-considerations for adopting a deep learning open source framework. Once these three pre-consideration steps are clear, the next two steps (using the deep learning framework in the enterprise and spreading it within the enterprise) can proceed. In the fourth step, the knowledge and expertise of the developers on the team are important, in addition to the hardware (GPU) environment and the data enterprise cooperation system. In the final step, the five important factors are realized for a successful adoption of the deep learning open source framework. This study provides strategic implications for companies adopting or using a deep learning framework according to the needs of each industry and business.
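
The abstract above repeatedly names the hardware (GPU) environment as a prerequisite for the usage stage. As a hedged illustration, not part of the study, a team piloting PyTorch (one of the frameworks named) might start with a small trialability check such as the sketch below; the function names and the tiny smoke test are purely illustrative.

```python
# Illustrative sketch only: a minimal GPU-environment check a team might run when
# piloting PyTorch, one of the frameworks named in the abstract. Nothing here
# comes from the study itself.
import torch

def report_gpu_environment() -> None:
    """Print whether a CUDA-capable GPU environment is available to the framework."""
    if torch.cuda.is_available():
        count = torch.cuda.device_count()
        print(f"{count} CUDA device(s) detected")
        for i in range(count):
            print(f"  device {i}: {torch.cuda.get_device_name(i)}")
    else:
        print("No CUDA device detected; falling back to CPU")

def tiny_smoke_test() -> None:
    """Run a small matrix multiply on the best available device as a sanity check."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.randn(512, 512, device=device)
    y = x @ x
    print(f"smoke test ran on {device}, result norm = {y.norm().item():.2f}")

if __name__ == "__main__":
    report_gpu_environment()
    tiny_smoke_test()
```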

Numerical Simulation of Dynamic Response of Seabed and Structure due to the Interaction among Seabed, Composite Breakwater and Irregular Waves (II) (불규칙파-해저지반-혼성방파제의 상호작용에 의한 지반과 구조물의 동적응답에 관한 수치시뮬레이션 (II))

  • Lee, Kwang-Ho;Baek, Dong-Jin;Kim, Do-Sam;Kim, Tae-Hyung;Bae, Ki-Seong
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.26 no.3
    • /
    • pp.174-183
    • /
    • 2014
  • The seabed beneath and near coastal structures may develop large excess pore water pressure, composed of oscillatory and residual components, under long durations of high wave loading. This excess pore water pressure may reduce the effective stress and, consequently, the seabed may liquefy. If liquefaction occurs in the seabed, the structure may sink or overturn, which increases its failure potential. In this study, to evaluate the liquefaction potential of the seabed, numerical analysis was conducted using a two-dimensional numerical wave tank expanded to account for an irregular wave field. Under the irregular wave field, the dynamic wave pressure and water flow velocity acting on the seabed and on the surface boundary of the composite breakwater were estimated. The simulation results were used as input data for a finite element program for elastoplastic seabed response. The simulations evaluated the time and spatial variations in excess pore water pressure, effective stress, and liquefaction potential in the seabed. Additionally, the deformation of the seabed and the displacement of the structure as functions of time were quantitatively evaluated. From the results of the analysis, the liquefaction potential of the seabed in front of and behind the composite breakwater was identified. Since liquefied seabed particles have no resistance to force, the scour potential of the seabed could increase. In addition, the strength decrease of the seabed due to liquefaction can increase the structural motion and significantly influence the stability of the composite breakwater. Owing to limitations on paper length, the results were divided into two parts: (I), focusing on the dynamic response of the structure, acceleration, and deformation of the seabed, and (II), focusing on the time variation in excess pore water pressure, liquefaction, and the effective stress path in the seabed. This paper corresponds to (II).
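
The liquefaction mechanism described above (excess pore water pressure eroding the initial effective stress) is commonly screened with the excess pore pressure ratio. The sketch below illustrates that screening check under stated assumptions; the threshold and the sample numbers are illustrative and are not taken from the paper.

```python
# Illustrative sketch: screening liquefaction potential with the excess pore
# pressure ratio r_u = excess pore pressure / initial vertical effective stress.
# The r_u >= 1.0 criterion and the sample values are assumptions for illustration,
# not results from the paper.
def excess_pore_pressure_ratio(excess_pore_pressure_kpa: float,
                               initial_effective_stress_kpa: float) -> float:
    if initial_effective_stress_kpa <= 0:
        raise ValueError("initial effective stress must be positive")
    return excess_pore_pressure_kpa / initial_effective_stress_kpa

def is_liquefied(excess_pore_pressure_kpa: float,
                 initial_effective_stress_kpa: float,
                 threshold: float = 1.0) -> bool:
    """A point is flagged as liquefied once r_u reaches the threshold."""
    return excess_pore_pressure_ratio(
        excess_pore_pressure_kpa, initial_effective_stress_kpa) >= threshold

# Hypothetical point in the seabed: 18 kPa of excess pore pressure against an
# initial vertical effective stress of 25 kPa -> r_u = 0.72, not yet liquefied.
print(excess_pore_pressure_ratio(18.0, 25.0))  # 0.72
print(is_liquefied(18.0, 25.0))                # False
```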

Numerical Simulation of Dynamic Response of Seabed and Structure due to the Interaction among Seabed, Composite Breakwater and Irregular Waves (I) (불규칙파-해저지반-혼성방파제의 상호작용에 의한 지반과 구조물의 동적응답에 관한 수치시뮬레이션 (I))

  • Lee, Kwang-Ho;Baek, Dong-Jin;Kim, Do-Sam;Kim, Tae-Hyung;Bae, Ki-Seong
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.26 no.3
    • /
    • pp.160-173
    • /
    • 2014
  • The seabed beneath and near coastal structures may develop large excess pore water pressure, composed of oscillatory and residual components, under long durations of high wave loading. This excess pore water pressure may reduce the effective stress and, consequently, the seabed may liquefy. If liquefaction occurs in the seabed, the structure may sink or overturn, which increases its failure potential. In this study, to evaluate the liquefaction potential of the seabed, numerical analysis was conducted using a two-dimensional numerical wave tank expanded to account for an irregular wave field. Under the irregular wave field, the dynamic wave pressure and water flow velocity acting on the seabed and on the surface boundary of the composite breakwater were estimated. The simulation results were used as input data for a finite element program for elastoplastic seabed response. The simulations evaluated the time and spatial variations in excess pore water pressure, effective stress, and liquefaction potential in the seabed. Additionally, the deformation of the seabed and the displacement of the structure as functions of time were quantitatively evaluated. From the results of the analysis, the liquefaction potential of the seabed in front of and behind the composite breakwater was identified. Since liquefied seabed particles have no resistance to force, the scour potential of the seabed could increase. In addition, the strength decrease of the seabed due to liquefaction can increase the structural motion and significantly influence the stability of the composite breakwater. Owing to limitations on paper length, the results were divided into two parts: (I), focusing on the dynamic response of the structure, acceleration, and deformation of the seabed, and (II), focusing on the time variation in excess pore water pressure, liquefaction, and the effective stress path in the seabed. This paper corresponds to (I).

Bankruptcy prediction using an improved bagging ensemble (개선된 배깅 앙상블을 활용한 기업부도예측)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.121-139
    • /
    • 2014
  • Predicting corporate failure has been an important topic in accounting and finance. The costs associated with bankruptcy are high, so the accuracy of bankruptcy prediction is very important for financial institutions. Many researchers have dealt with bankruptcy prediction over the past three decades. The current research attempts to use ensemble models to improve the performance of bankruptcy prediction. Ensemble classification combines individually trained classifiers to obtain more accurate predictions than individual models, and ensemble techniques have been shown to be very useful for improving the generalization ability of a classifier. Bagging is the most commonly used method for constructing ensemble classifiers. In bagging, different training data subsets are randomly drawn with replacement from the original training dataset, and base classifiers are trained on the different bootstrap samples. Instance selection selects critical instances while removing irrelevant and harmful instances from the original set. Instance selection and bagging are both well known in data mining; however, few studies have dealt with their integration. This study proposes an improved bagging ensemble based on instance selection using a genetic algorithm (GA) to improve the performance of SVM. GA is an efficient optimization procedure based on the theory of natural selection and evolution. It uses the idea of survival of the fittest by progressively accepting better solutions to a problem, and it searches by maintaining a population of solutions from which better solutions are created, rather than making incremental changes to a single solution. The initial solution population is generated randomly and evolves into the next generation through genetic operators such as selection, crossover, and mutation; solutions encoded as strings are evaluated by the fitness function. The proposed model consists of two phases: GA-based instance selection and instance-based bagging. In the first phase, GA is used to select an optimal instance subset that is used as the input data of the bagging model. The chromosome is encoded as a binary string representing the instance subset. In this phase, the population size was set to 100 and the maximum number of generations to 150; the crossover rate and mutation rate were set to 0.7 and 0.1, respectively. We used the prediction accuracy of the model as the fitness function of the GA: an SVM model is trained on the training data set using the selected instance subset, and its prediction accuracy over the test data set is used as the fitness value in order to avoid overfitting. In the second phase, we used the optimal instance subset selected in the first phase as the input data of the bagging model, with SVM as the base classifier and majority voting as the combining method. This study applies the proposed model to the bankruptcy prediction problem using a real data set from Korean companies. The research data contain 1,832 externally non-audited firms, comprising bankrupt (916) and non-bankrupt (916) cases. Financial ratios categorized as stability, profitability, growth, activity, and cash flow were investigated through a literature review and basic statistical methods, and 8 financial ratios were selected as the final input variables.
We separated the whole data set into three subsets: training, test, and validation. We compared the proposed model with several benchmark models, including a single SVM model, a simple bagging model, and an instance selection based SVM model. McNemar tests were used to examine whether the proposed model significantly outperforms the other models. The experimental results show that the proposed model outperforms the comparative models.
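
The two-phase design described above (a GA that selects a training-instance subset, followed by a bagging ensemble of SVMs combined by majority voting) can be sketched compactly. The code below is a hedged illustration assuming scikit-learn and NumPy: the data are synthetic, and the small population size, generation count, and mutation scheme are simplifications of the settings reported in the abstract (population 100, 150 generations, crossover 0.7, mutation 0.1).

```python
# Hedged sketch of the two-phase idea above: GA-based instance selection feeding a
# bagging ensemble of SVMs with majority voting. The synthetic data and the small
# GA settings are illustrative simplifications, not the study's data or settings.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for the bankruptcy data: 8 features, balanced classes (purely synthetic).
X, y = make_classification(n_samples=600, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(mask: np.ndarray) -> float:
    """Accuracy of an SVM trained on the selected instances, scored on held-out data."""
    if mask.sum() < 10:                      # avoid degenerate subsets
        return 0.0
    selected = mask.astype(bool)
    clf = SVC().fit(X_train[selected], y_train[selected])
    return accuracy_score(y_test, clf.predict(X_test))

def ga_instance_selection(pop_size=20, generations=15, cx_rate=0.7, mut_rate=0.01):
    """Tiny GA over binary masks of training instances (selection, crossover, mutation)."""
    n = len(y_train)
    pop = rng.integers(0, 2, size=(pop_size, n))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Tournament selection: the better of two random individuals becomes a parent.
        winners = [max(rng.choice(pop_size, 2, replace=False), key=lambda i: scores[i])
                   for _ in range(pop_size)]
        parents = pop[np.array(winners)]
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):  # one-point crossover
            if rng.random() < cx_rate:
                cut = int(rng.integers(1, n))
                children[i, cut:], children[i + 1, cut:] = parents[i + 1, cut:], parents[i, cut:]
        flip = rng.random(children.shape) < mut_rate   # per-bit mutation
        children[flip] = 1 - children[flip]
        pop = children
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()].astype(bool)

best_mask = ga_instance_selection()

# Phase 2: bagging ensemble of SVMs (majority voting) trained on the selected instances.
bagged_svm = BaggingClassifier(SVC(), n_estimators=10, random_state=0)
bagged_svm.fit(X_train[best_mask], y_train[best_mask])
print("test accuracy:", accuracy_score(y_test, bagged_svm.predict(X_test)))
```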

Field Control of Paulownia Witches' Broom with Oxytetracycline Hydrochloride (옥시테트라사이클린에 의(依)한 오동나무·빗자루병(病) 방제(防除))

  • La, Yong Joon;Shin, Hyeon Dong
    • Journal of Korean Society of Forest Science
    • /
    • v.49 no.1
    • /
    • pp.6-10
    • /
    • 1980
  • Witches' broom disease of Paulownia tomentosa (Thunb.) Steud., with which mycoplasma-like organisms are associated, is widespread throughout Korea and poses a serious threat to the cultivation of paulownia. An attempt was made to investigate the feasibility of field control of the disease with oxytetracycline hydrochloride (OTC). A total of 84 paulownia trees (6 years old, DBH 10-15 cm) exhibiting severe symptoms of witches' broom were selected and treated from March to September. A solution of 1-10 g of OTC dissolved in 0.1-2.0 l of water was transfused into infected trees by the gravity flow method from a dark-brown plastic reservoir (1 l volume) through plastic tubes (1.2 m long) connected to 2-4 holes (5 mm in diameter and 4-5 cm in depth) bored in the basal part of the tree trunk (Fig. 1 and 2). Of 60 diseased paulownia trees injected with 2 g of OTC in 0.1-2.0 l of water from May to September 1979, 58 showed complete remission of symptom development and resumption of healthy new growth at least until September 1980, when the last observation of the effect of OTC treatment for this experiment was made. The remaining two trees died, probably because the infection was too severe. Of 24 paulownia trees treated in March and April 1979, complete remission of symptom development was obtained in 8 trees, and symptom development was partially prevented in 9 trees in the following season. The remaining 7 trees died, owing to failure of OTC uptake and partly because the infection was at too advanced a stage. Application of a highly concentrated solution of 2 g of OTC dissolved in 0.1-0.2 l of water per tree was just as effective as the 2 g/1-2 l treatment. Injection of 2 g/1-2 l required 3-4 days, whereas the 2 g/0.1-0.2 l treatment reduced the injection time per tree to less than 24 hours. The results of this experiment demonstrate that basal trunk injection of 2 g of OTC in 0.1-0.2 l of water per tree is feasible for field control of paulownia witches' broom, provided that the injection is performed during the actively growing season (May-September) and at the initial stage of disease development.

  • PDF

COMPARISON OF FLUX AND RESIDENT CONCENTRATION BREAKTHROUGH CURVES IN STRUCTURED SOIL COLUMNS (구조토양에서의 침출수와 잔존수농도의 파과곡선에 관한 비교연구)

  • Kim, Dong-Ju
    • Journal of Korea Soil Environment Society
    • /
    • v.2 no.2
    • /
    • pp.81-94
    • /
    • 1997
  • In many solute transport studies, either the flux or the resident concentration has been used, and the choice of concentration mode has depended on the monitoring device used in solute displacement experiments. It has been accepted that neither concentration mode has priority in the study of solute transport. It is questionable, however, whether the solute transport parameters obtained from flux and resident concentrations can be regarded as equivalent in structured soils exhibiting preferential solute movement. In this study, we investigate how the two modes differ in the monitored breakthrough curves (BTCs) and transport parameters for a given boundary and flow condition by performing solute displacement experiments on a number of undisturbed soil columns. Flux and resident concentrations were obtained simultaneously by monitoring the effluent and the resistance of horizontally positioned TDR probes. Two solute transport models, the convection-dispersion equation (CDE) and the convective lognormal transfer function (CLT), were fitted to the observed breakthrough data to quantify the difference between the two concentration modes. The study reveals that soil columns with relatively high flux densities exhibited large differences between flux and resident concentrations in the peak concentration and the travel time of the peak. The peak concentration in flux mode was several times higher than that in resident mode. Accordingly, the estimated parameters in flux mode differed greatly from those in resident mode, and the difference was more pronounced for the CDE model than for the CLT model. In the CDE model in particular, the parameters in flux mode were much higher than those in resident mode. This was mainly due to solute bypassing through soil macropores and the failure of the equilibrium CDE model to adequately describe solute transport in the studied soils. In the domain of the relationship between the ratio of hydrodynamic dispersion to molecular diffusion and the Peclet number, both concentration modes fall in a zone of predominant mechanical dispersion. However, molecular diffusion appears to contribute more to solute spreading in the matrix region than in the macropore region, owing to the nonlinearity in the relationship between pore water velocity and the dispersion coefficient. (A sketch of a one-dimensional CDE breakthrough curve follows this entry.)

  • PDF
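
The equilibrium CDE named in the entry above has a classical closed-form breakthrough solution (Ogata-Banks) for a semi-infinite column with a constant-concentration inlet, which is what is typically fitted to BTC data. The sketch below evaluates that solution; the column length, pore-water velocity, and dispersion coefficient are illustrative values, not parameters estimated in the study.

```python
# Hedged sketch: the Ogata-Banks solution of the one-dimensional equilibrium
# convection-dispersion equation (CDE) for a constant-concentration inlet,
# C(x,t)/C0 = 1/2 [ erfc((x - v t)/(2 sqrt(D t))) + exp(v x / D) erfc((x + v t)/(2 sqrt(D t))) ].
# The parameter values below are illustrative, not fitted values from the study.
import numpy as np
from scipy.special import erfc

def cde_breakthrough(t, x, v, D, c0=1.0):
    """Relative concentration C/C0 at depth x (cm) and times t (h).

    v: pore-water velocity (cm/h), D: hydrodynamic dispersion coefficient (cm^2/h).
    """
    t = np.asarray(t, dtype=float)
    a = erfc((x - v * t) / (2.0 * np.sqrt(D * t)))
    b = np.exp(v * x / D) * erfc((x + v * t) / (2.0 * np.sqrt(D * t)))
    return 0.5 * c0 * (a + b)

# Illustrative 20 cm column with v = 2 cm/h and D = 1.5 cm^2/h.
times = np.linspace(0.5, 40.0, 9)
for t, c in zip(times, cde_breakthrough(times, x=20.0, v=2.0, D=1.5)):
    print(f"t = {t:5.1f} h   C/C0 = {c:.3f}")
```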