• Title/Summary/Keyword: initial value method

Search Results: 1,003

A Deep Learning Based Approach to Recognizing Accompanying Status of Smartphone Users Using Multimodal Data (스마트폰 다종 데이터를 활용한 딥러닝 기반의 사용자 동행 상태 인식)

  • Kim, Kilho;Choi, Sangwoo;Chae, Moon-jung;Park, Heewoong;Lee, Jaehong;Park, Jonghun
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.163-177 / 2019
  • As smartphones become widely used, human activity recognition (HAR) tasks for recognizing the personal activities of smartphone users from multimodal data have been actively studied. The research area is expanding from the recognition of simple body movements of an individual user to the recognition of low-level and high-level behavior. However, HAR tasks for recognizing interaction behavior with other people, such as whether the user is accompanying or communicating with someone else, have received less attention so far. Previous research on recognizing interaction behavior has usually depended on audio, Bluetooth, and Wi-Fi sensors, which are vulnerable to privacy issues and require much time to collect enough data, whereas physical sensors such as the accelerometer, magnetic field sensor, and gyroscope are less vulnerable to privacy issues and can collect a large amount of data within a short time. In this paper, a deep learning based method for detecting accompanying status using only multimodal physical sensor data (accelerometer, magnetic field, and gyroscope) is proposed. Accompanying status was defined as a subset of user interaction behavior, namely whether the user is accompanied by an acquaintance at close distance and whether the user is actively conversing with that acquaintance. A framework based on convolutional neural networks (CNN) and long short-term memory (LSTM) recurrent networks for classifying accompanying and conversation is proposed. First, a data preprocessing method consisting of time synchronization of multimodal data from different physical sensors, data normalization, and sequence data generation is introduced. Nearest-neighbor interpolation was applied to synchronize the timestamps of data collected from different sensors. Normalization was performed for each x, y, z axis value of the sensor data, and sequence data were generated with a sliding window. The sequence data then become the input to the CNN, where feature maps representing local dependencies of the original sequence are extracted. The CNN consists of three convolutional layers and has no pooling layer, in order to maintain the temporal information of the sequence data. Next, LSTM recurrent networks receive the feature maps, learn long-term dependencies from them, and extract features. The LSTM recurrent networks consist of two layers, each with 128 cells. Finally, the extracted features are used for classification by a softmax classifier. The loss function of the model is the cross entropy function, and the weights of the model were randomly initialized from a normal distribution with a mean of 0 and a standard deviation of 0.1. The model was trained using the adaptive moment estimation (ADAM) optimization algorithm with a mini-batch size of 128. Dropout was applied to the input values of the LSTM recurrent networks to prevent overfitting. The initial learning rate was set to 0.001 and decayed exponentially by a factor of 0.99 at the end of each training epoch. An Android smartphone application was developed and released to collect data, and smartphone data were collected for a total of 18 subjects. Using the data, the model classified accompanying and conversation with 98.74% and 98.83% accuracy, respectively. Both the F1 score and the accuracy of the model were higher than those of the majority vote classifier, support vector machine, and deep recurrent neural network. In future research, we will focus on more rigorous multimodal sensor data synchronization methods that minimize timestamp differences. In addition, we will further study transfer learning methods that enable trained models tailored to the training data to be transferred to evaluation data that follows a different distribution. It is expected that a model capable of exhibiting robust recognition performance against changes in data not considered in the model training stage will be obtained.
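
The abstract describes the network and training setup in enough detail to sketch it. The following is a minimal, illustrative PyTorch sketch, not code from the paper: the three convolutional layers without pooling, two LSTM layers of 128 cells each, dropout on the LSTM inputs, cross-entropy loss, normal(0, 0.1) weight initialization, Adam with an initial learning rate of 0.001, and the per-epoch exponential decay of 0.99 follow the abstract, while the kernel sizes, channel widths, and the nine-channel input (x/y/z of three sensors) are assumptions.

```python
import torch
import torch.nn as nn

class AccompanyingStatusNet(nn.Module):
    def __init__(self, in_channels=9, num_classes=2):
        # in_channels=9 assumes x/y/z of accelerometer, magnetometer, and gyroscope.
        super().__init__()
        # Three 1-D convolutional layers without pooling, as described in the abstract.
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.dropout = nn.Dropout(0.5)           # dropout applied to the LSTM inputs
        self.lstm = nn.LSTM(input_size=64, hidden_size=128,
                            num_layers=2, batch_first=True)   # two layers of 128 cells
        self.fc = nn.Linear(128, num_classes)    # softmax classifier (via cross entropy)

    def forward(self, x):                        # x: (batch, channels, window_length)
        feats = self.cnn(x).transpose(1, 2)      # -> (batch, window_length, 64)
        out, _ = self.lstm(self.dropout(feats))
        return self.fc(out[:, -1])               # logits from the last time step

model = AccompanyingStatusNet()
for p in model.parameters():                     # normal(0, 0.1) initialization
    nn.init.normal_(p, mean=0.0, std=0.1)

criterion = nn.CrossEntropyLoss()                            # cross entropy loss
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)   # ADAM, initial lr 0.001
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.99)  # decay per epoch
```

Training would then iterate over mini-batches of 128 sliding-window sequences, as stated in the abstract, calling scheduler.step() once at the end of each epoch.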

Quality Characteristics of Kiwi Wine and Optimum Malolactic Fermentation Conditions (참다래 와인의 최적 malolactic fermentation 조건과 품질 특성)

  • Kang, Sang-Dong;Ko, Yu-Jin;Kim, Eun-Jung;Son, Yong-Hwi;Kim, Jin-Yong;Seol, Hui-Gyeong;Kim, Ig-Jo;Cho, Hyoun-Kook;Ryu, Chung-Ho
    • Journal of Life Science / v.21 no.4 / pp.509-514 / 2011
  • Malolactic fermentation (MLF) occurs after completion of alcoholic fermentation and is mediated by lactic acid bacteria (LAB), mainly Oenococcus oeni. Kiwi wine has a higher-acidity problem than commercial grape wine. Therefore, we investigated the optimal MLF conditions for regulating strong acidity and improving the quality properties of wine fermented with Kiwi fruit cultivated in Korea. For alcohol fermentation, the industrial wine yeast Saccharomyces cerevisiae KCCM 12650 and LAB known as MLF strains were used to alleviate wine acidity. First, various experimental conditions for the Kiwi fruit, initial pH (2.5, 3.5, 4.5), fermentation temperature (20, 25, 30°C), and sugar content (24 °Brix), were adjusted, and after the fermentation period we measured acidity, pH, and the change in organic acid content by the AOAC method and HPLC analysis. The alcohol content of the fermented Kiwi wine was 12.75%. Total acidity and pH of the Kiwi wine were 0.78% and 3.5, respectively. Total sugar and total polyphenol contents of the Kiwi wine were 38.72 mg/ml and 60.18 mg/ml, respectively. With regard to organic acid content, the control contained 0.63 mg/ml of oxalic acid, 2.99 mg/ml of malic acid, and 0.71 mg/ml of lactic acid, whereas the MLF wine contained 0.69 mg/ml of oxalic acid, 0.06 mg/ml of malic acid, and 3.12 mg/ml of lactic acid. Kiwi wine had lower malic acid values and total acidity than the control after MLF processing. In MLF, the optimum initial pH value and fermentation temperature were 3.5 and 25°C, respectively. Therefore, these studies suggest that establishing optimal MLF conditions could improve the properties of Kiwi wine manufactured in Korea.

Performance Analysis of Frequent Pattern Mining with Multiple Minimum Supports (다중 최소 임계치 기반 빈발 패턴 마이닝의 성능분석)

  • Ryang, Heungmo;Yun, Unil
    • Journal of Internet Computing and Services / v.14 no.6 / pp.1-8 / 2013
  • Data mining techniques are used to find important and meaningful information from huge databases, and pattern mining is one of the significant data mining techniques. Pattern mining is a method of discovering useful patterns from huge databases. Frequent pattern mining, one branch of pattern mining, extracts patterns whose frequencies exceed a minimum support threshold; such patterns are called frequent patterns. Traditional frequent pattern mining applies a single minimum support threshold to the whole database. This single-support model implicitly assumes that all items in the database have the same nature. In real-world applications, however, each item in a database can have its own characteristics, and thus a pattern mining technique that reflects these characteristics is required. In the single-support framework, where the natures of items are not considered, the minimum support threshold must be set very low to mine patterns containing rare items, which in turn yields too many patterns including meaningless items; conversely, no such pattern can be mined if the threshold is set too high. This dilemma is called the rare item problem. To solve this problem, early studies proposed approximate approaches that split the data into several groups according to item frequencies or group related rare items. However, being approximate, these methods cannot find all frequent patterns, including rare frequent patterns. Hence, the pattern mining model with multiple minimum supports was proposed to solve the rare item problem. In this model, each item has its own minimum support threshold, called MIS (Minimum Item Support), which is calculated from the item's frequency in the database. By applying the MIS, the multiple minimum supports model finds all rare frequent patterns without generating meaningless patterns or losing significant patterns. Meanwhile, candidate patterns are extracted during the mining process, and in the single minimum support model only the single threshold is compared with the frequencies of the candidate patterns. Therefore, the characteristics of the items that constitute a candidate pattern are not reflected, and the rare item problem occurs. To address this issue, the multiple minimum supports model uses the smallest MIS value among the items of a candidate pattern as the minimum support threshold for that pattern, so that its characteristics are considered. To efficiently mine frequent patterns, including rare frequent patterns, with this concept, tree-based algorithms of the multiple minimum supports model sort the items in a tree in MIS-descending order, in contrast to those of the single minimum support model, where items are ordered in frequency-descending order. In this paper, we study the characteristics of frequent pattern mining based on multiple minimum supports and conduct a performance evaluation against a general frequent pattern mining algorithm in terms of runtime, memory usage, and scalability. Experimental results show that the multiple minimum supports based algorithm outperforms the single minimum support based one while demanding more memory for MIS information. Moreover, the compared algorithms show good scalability.
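
As an illustration of the MIS idea described above, the brute-force Python sketch below assigns each item its own minimum support and tests every candidate pattern against the smallest MIS among its items. The toy database and the MIS assignment rule MIS(i) = max(beta * freq(i), floor) are assumptions for illustration only; the paper evaluates tree-based algorithms, which this sketch does not reproduce.

```python
from itertools import combinations

# Toy transaction database; items 'd' and 'e' are rare.
transactions = [{'a', 'b', 'c'}, {'a', 'c'}, {'a', 'd'}, {'b', 'c', 'e'}, {'a', 'b', 'c'}]

def item_frequencies(db):
    freq = {}
    for t in db:
        for item in t:
            freq[item] = freq.get(item, 0) + 1
    return {item: count / len(db) for item, count in freq.items()}

def assign_mis(freq, beta=0.5, floor=0.2):
    # One common way to derive MIS values from item frequencies:
    # MIS(i) = max(beta * freq(i), floor); beta and floor are assumed values.
    return {item: max(beta * f, floor) for item, f in freq.items()}

def support(pattern, db):
    return sum(1 for t in db if pattern <= t) / len(db)

freq = item_frequencies(transactions)
mis = assign_mis(freq)
items = sorted(freq)

frequent = []
for size in range(1, len(items) + 1):
    for combo in combinations(items, size):
        pattern = set(combo)
        threshold = min(mis[i] for i in pattern)   # smallest MIS within the pattern
        if support(pattern, transactions) >= threshold:
            frequent.append((pattern, threshold))

for pattern, threshold in frequent:
    print(sorted(pattern), f"passes its MIS threshold {threshold:.2f}")
```

Because rare items receive a lower threshold, patterns containing them can still qualify as frequent without lowering the threshold applied to common items.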

A Method to Quantify Breast MRI for Predicting Tumor Invasion in Patients with Preoperative Biopsy-Proven Ductal Carcinoma in Situ (DCIS) (유방 자기공명영상법을 이용한 수술 전 관상피내암으로 진단된 환자의 침윤성 유방암을 예측하는 정량적 분석법)

  • Ko, Myung-Su;Kim, Sung Hun;Kang, Bong Joo;Choi, Byung Gil;Song, Byung Joo;Cha, Eun Suk;Kiraly, Atilla Peter;Kim, In Seong
    • Investigative Magnetic Resonance Imaging / v.17 no.2 / pp.73-82 / 2013
  • Purpose: To determine the quantitative parameters of breast MRI that predict tumor invasion in biopsy-proven DCIS. Materials and Methods: From January 2009 to March 2010, 42 MRI examinations of 41 patients with biopsy-proven DCIS were included. The quantitative parameters, which include the initial percentage enhancement (E1), peak percentage enhancement (Epeak), time to peak enhancement (TTP), signal enhancement ratio (SER), arterial enhancement fraction (AEF), apparent diffusion coefficient (ADC) value, long diameter, and volume of the lesion, were calculated as parameters that might predict invasion. Univariate and multivariate analyses were used to identify the parameters associated with invasion. Results: Of 42 lesions, 23 were confirmed to be invasive ductal carcinoma (IDC) and 19 were confirmed to be pure DCIS. Tumor size (p = 0.003; 6.5 ± 3.2 cm vs. 3.6 ± 2.6 cm, respectively) and SER (p = 0.036; 1.1 ± 0.3 vs. 0.9 ± 0.3, respectively) were significantly higher in IDC. In contrast, E1, Epeak, TTP, ADC, AEF, and volume of the lesion were not statistically significant. Tumor size and SER had statistically significant associations with invasion, with odds ratios of 1.04 and 22.93, respectively. Conclusion: Of the quantitative parameters analyzed, SER and the long diameter of the lesion could be specific parameters for predicting invasion in biopsy-proven DCIS.
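
For readers unfamiliar with the kinetic parameters named above, the sketch below computes them from a dynamic signal-intensity curve using commonly cited definitions. The exact time points and formulas used in the study are not given in the abstract, so the definitions and the toy curve here are assumptions.

```python
import numpy as np

def kinetic_parameters(signal, times):
    """signal: 1-D array of lesion signal intensity over the dynamic series,
    with signal[0] the pre-contrast baseline; times: acquisition times in seconds."""
    s0 = signal[0]                       # pre-contrast baseline
    s1 = signal[1]                       # first (initial) post-contrast phase
    s_last = signal[-1]                  # delayed phase
    e1 = 100.0 * (s1 - s0) / s0                     # initial percentage enhancement
    e_peak = 100.0 * (signal.max() - s0) / s0       # peak percentage enhancement
    ttp = times[int(np.argmax(signal))]             # time to peak enhancement
    ser = (s1 - s0) / (s_last - s0)                 # signal enhancement ratio
    return {"E1": e1, "Epeak": e_peak, "TTP": ttp, "SER": ser}

# Toy dynamic curve (arbitrary units) sampled every 60 s.
curve = np.array([100.0, 180.0, 210.0, 205.0, 195.0])
print(kinetic_parameters(curve, times=np.array([0, 60, 120, 180, 240])))
```

With these definitions, an SER above 1 means the delayed signal has fallen below the early post-contrast signal, i.e. a washout-type curve.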

Effect of bronchial artery embolization in the management of massive hemoptysis : factors influencing rebleeding (대량객혈 환자에서 기관지 동맥색전술의 효과 : 색전술후 재발의 원인과 예측인자)

  • Kim, Byeong Cheol;Kim, Jeong Mee;Kim, Yeon Soo;Kim, Seong Min;Choi, Wan Young;Lee, Kyeong Sang;Yang, Suck Cheol;Yoon, Ho Joo;Shin, Dong Ho;Park, Sung Soo;Lee, Jung Hee;Kim, Chang Soo;Seo, Heung Suk
    • Tuberculosis and Respiratory Diseases / v.43 no.4 / pp.590-599 / 1996
  • Background: Bronchial artery embolization has been established as an effective means to control hemoptysis, especially in patients with decreased pulmonary function and those with advanced chronic obstructive pulmonary disease. We evaluated the effect of arterial embolization in the immediate control of massive hemoptysis and investigated the clinical and angiographic characteristics and the course of patients with recurrent hemoptysis after initially successful embolization. Another purpose of this study was to find predictive factors for rebleeding after bronchial artery embolization. Method: We retrospectively reviewed 47 cases that underwent bronchial artery embolization for the management of massive hemoptysis. We analyzed the angiographic findings in all cases before bronchial artery embolization and also reviewed the angiographic findings of patients who underwent additional bronchial artery embolization for the control of recurrent hemoptysis, to find the causes of rebleeding. Results: 1) The underlying causes of hemoptysis were pulmonary tuberculosis (n=35), bronchiectasis (n=5), aspergilloma (n=2), lung cancer (n=2), pulmonary A-V malformation (n=1), and unknown causes (n=2). 2) The overall immediate success rate was 94% (n=44), and the recurrence rate was 40% (n=19). 3) Prognostic factors such as bilaterality, systemic-pulmonary artery shunt, multiple feeding arteries, and degree of neovascularity were not statistically correlated with rebleeding tendency (p > 0.05). 4) At additional bronchial artery embolization, recanalization of previously embolized arteries was revealed in 14/18 cases (78%) and new feeding arteries were present in 8/18 cases (44%). 5) Complications (31 cases, 66%) such as fever, chest pain, cough, voiding difficulty, paralytic ileus, motor and sensory changes of the lower extremities, atelectasis, and splenic infarction occurred. Conclusion: Recanalization of previously embolized arteries is the major cause of recurrence after bronchial artery embolization. Despite the high recurrence rate of hemoptysis, bronchial artery embolization for the management of massive hemoptysis is an effective and safe procedure for immediate bleeding control.


Bankruptcy prediction using an improved bagging ensemble (개선된 배깅 앙상블을 활용한 기업부도예측)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems / v.20 no.4 / pp.121-139 / 2014
  • Predicting corporate failure has been an important topic in accounting and finance. The costs associated with bankruptcy are high, so the accuracy of bankruptcy prediction is greatly important for financial institutions. Many researchers have dealt with bankruptcy prediction over the past three decades. The current research attempts to use ensemble models to improve the performance of bankruptcy prediction. Ensemble classification combines individually trained classifiers to obtain more accurate predictions than individual models, and ensemble techniques have been shown to be very useful for improving the generalization ability of a classifier. Bagging is the most commonly used method for constructing ensemble classifiers. In bagging, different training data subsets are randomly drawn with replacement from the original training dataset, and base classifiers are trained on the different bootstrap samples. Instance selection selects critical instances while removing irrelevant and harmful instances from the original set. Instance selection and bagging are quite well known in data mining; however, few studies have dealt with their integration. This study proposes an improved bagging ensemble based on instance selection using genetic algorithms (GA) for improving the performance of SVM. GA is an efficient optimization procedure based on the theory of natural selection and evolution. GA uses the idea of survival of the fittest by progressively accepting better solutions to the problem, and searches by maintaining a population of solutions from which better solutions are created, rather than making incremental changes to a single solution. The initial solution population is generated randomly and evolves into the next generation through genetic operators such as selection, crossover, and mutation. Solutions coded as strings are evaluated by the fitness function. The proposed model consists of two phases: GA-based instance selection and instance-based bagging. In the first phase, GA is used to select the optimal instance subset that is used as the input data of the bagging model. In this study, the chromosome is encoded as a binary string representing the instance subset. In this phase, the population size was set to 100 and the maximum number of generations to 150, with crossover and mutation rates of 0.7 and 0.1, respectively. The prediction accuracy of the model was used as the fitness function of the GA: an SVM model is trained on the training data set using the selected instance subset, and its prediction accuracy over the test data set is used as the fitness value in order to avoid overfitting. In the second phase, the optimal instance subset selected in the first phase is used as the input data of the bagging model, with SVM as the base classifier and majority voting as the combining method. This study applies the proposed model to the bankruptcy prediction problem using a real data set from Korean companies. The research data contain 1,832 externally non-audited firms, of which 916 filed for bankruptcy and 916 did not. Financial ratios categorized as stability, profitability, growth, activity, and cash flow were investigated through a literature review and basic statistical methods, and 8 financial ratios were selected as the final input variables. The whole data set was separated into training, test, and validation subsets. We compared the proposed model with several comparative models, including the simple individual SVM model, the simple bagging model, and the instance selection based SVM model. McNemar tests were used to examine whether the proposed model significantly outperforms the other models. The experimental results show that the proposed model outperforms the other models.
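
A compact Python sketch of the two-phase model described above follows: phase one runs a GA over binary instance-selection chromosomes with SVM accuracy on a held-out set as the fitness, and phase two trains a bagging ensemble of SVMs on the selected instances with majority voting. The GA parameters (population 100, 150 generations, crossover 0.7, mutation 0.1) follow the abstract; the selection scheme, crossover operator, data loading, and number of bagging estimators are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def fitness(mask, X_tr, y_tr, X_te, y_te):
    # Fitness = accuracy of an SVM trained on the selected instances,
    # evaluated on a held-out set (as the abstract describes).
    keep = mask.astype(bool)
    if keep.sum() < 2 or len(set(y_tr[keep])) < 2:
        return 0.0
    clf = SVC().fit(X_tr[keep], y_tr[keep])
    return accuracy_score(y_te, clf.predict(X_te))

def ga_instance_selection(X_tr, y_tr, X_te, y_te, pop_size=100,
                          generations=150, crossover_rate=0.7, mutation_rate=0.1):
    # Binary chromosome: one bit per training instance (expensive; shrink to experiment).
    n = len(X_tr)
    pop = rng.integers(0, 2, size=(pop_size, n))
    for _ in range(generations):
        scores = np.array([fitness(c, X_tr, y_tr, X_te, y_te) for c in pop])
        parents = pop[np.argsort(scores)[::-1][: pop_size // 2]]   # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            if rng.random() < crossover_rate:                      # one-point crossover
                cut = rng.integers(1, n)
                child = np.concatenate([a[:cut], b[cut:]])
            else:
                child = a.copy()
            flip = rng.random(n) < mutation_rate                   # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        pop = np.array(children)
    best = pop[np.argmax([fitness(c, X_tr, y_tr, X_te, y_te) for c in pop])]
    return best.astype(bool)

def bagging_svm_predict(X_sel, y_sel, X_new, n_estimators=10):
    # Bagging of SVM base classifiers with majority voting (labels assumed 0/1).
    votes = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X_sel), size=len(X_sel))         # bootstrap sample
        votes.append(SVC().fit(X_sel[idx], y_sel[idx]).predict(X_new))
    return (np.array(votes).mean(axis=0) >= 0.5).astype(int)       # majority vote
```

In practice the resulting ensemble would be evaluated on the separate validation subset mentioned in the abstract, since the test subset is already consumed by the GA fitness.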

Two Dimensional Size Effect on the Compressive Strength of Composite Plates Considering Influence of an Anti-buckling Device (좌굴방지장치 영향을 고려한 복합재 적층판의 압축강도에 대한 이차원 크기 효과)

  • ;;C. Soutis
    • Composites Research / v.15 no.4 / pp.23-31 / 2002
  • The two-dimensional size effect of the specimen gauge section (length × width) on the compressive behavior of a T300/924 [45/-45/0/90]3s carbon fiber-epoxy laminate was investigated. A modified ICSTM compression test fixture was used together with an anti-buckling device to test 3 mm thick specimens with 30 mm × 30 mm, 50 mm × 50 mm, 70 mm × 70 mm, and 90 mm × 90 mm gauge (length by width) sections. In all cases failure was sudden and occurred mainly within the gauge length. Post-failure examination suggests that 0° fiber microbuckling is the critical damage mechanism that causes final failure. This is a matrix-dominated failure mode whose triggering depends very much on initial fiber waviness, which suggests that manufacturing process and quality may play a significant role in determining the compressive strength. When the anti-buckling device was used, the measured compressive strength was slightly greater than that without the device, owing to surface friction between the specimen and the device caused by the pretorque in the bolts of the device. Finite element analysis of the influence of the anti-buckling device showed that the compressive strength with the device and loaded bolts was about 7% higher than the actual compressive strength. Additionally, compressive tests were performed on specimens with an open hole. The local stress concentration arising from the hole dominates the strength of the laminate rather than the stresses in the bulk of the material. The remote failure stress decreases with increasing hole size and specimen width but is generally well above the value one might predict from the elastic stress concentration factor. This suggests that the material is not ideally brittle and some stress relief occurs around the hole. X-ray radiography reveals that damage in the form of fiber microbuckling and delamination initiates at the edge of the hole at approximately 80% of the failure load and extends stably under increasing load before becoming unstable at a critical length of 2-3 mm (depending on specimen geometry). This damage growth and failure are analysed with a linear cohesive zone model. Using the independently measured laminate parameters of unnotched compressive strength and in-plane fracture toughness, the model successfully predicts the notched strength as a function of hole size and width.
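
To illustrate the "ideally brittle" reference point mentioned in the abstract, the sketch below computes the notched strength that would follow if failure occurred as soon as the hole-edge stress reached the unnotched strength. The quasi-isotropic lay-up is approximated with the isotropic infinite-plate stress concentration factor Kt = 3 plus a standard finite-width correction; the unnotched strength value and hole sizes are placeholders, not figures from the paper, and this is not the cohesive zone model the authors use.

```python
def finite_width_correction(d, w):
    """Standard isotropic finite-width correction for a central hole of diameter d
    in a strip of width w: Kt(finite) / Kt(infinite)."""
    r = d / w
    return (2.0 + (1.0 - r) ** 3) / (3.0 * (1.0 - r))

def brittle_notched_strength(sigma_un, d, w, kt_inf=3.0):
    # Remote stress at which the hole-edge stress reaches the unnotched strength.
    return sigma_un / (kt_inf * finite_width_correction(d, w))

sigma_un = 500.0                      # MPa, placeholder unnotched compressive strength
for d in (3.0, 6.0, 9.0):             # placeholder hole diameters in mm
    s = brittle_notched_strength(sigma_un, d=d, w=30.0)
    print(f"d = {d} mm: ideally brittle prediction ~ {s:.0f} MPa")
```

Measured open-hole strengths sitting well above such values is exactly the stress-relief behavior the abstract attributes to stable microbuckling and delamination growth at the hole edge.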

ATHEROSCLEROSIS, CHOLESTEROL AND EGG - REVIEW -

  • Paik, I.K.;Blair, R.
    • Asian-Australasian Journal of Animal Sciences / v.9 no.1 / pp.1-25 / 1996
  • The pathogenesis of atherosclerosis cannot be summarized as a single process. The lipid infiltration hypothesis and the endothelial injury hypothesis have been proposed and investigated. Recent developments show that there are many points of potential interaction between them and that they can actually be regarded as two phases of a single, unifying hypothesis. Among the many risk factors of atherosclerosis, plasma homocysteine and lipoprotein(a) draw considerable interest because they are independent indicators of atherogenicity. Triglyceride (TG)-rich lipoproteins (chylomicrons and VLDL) are not considered to be atherogenic, but they are related to the metabolism of HDL cholesterol and indirectly related to coronary heart disease (CHD). LDL can of itself be atherogenic, but the oxidative products of this lipoprotein are more detrimental. HDL cholesterol has been considered a favorable cholesterol. The so-called 'causalist view' claims that HDL traps excess cholesterol from cellular membranes and transfers it to TG-rich lipoproteins that are subsequently removed by hepatic receptors. In the so-called 'noncausalist view', HDL does not interfere directly with cholesterol deposition in the arterial wall but instead reflects the metabolism of TG-rich lipoproteins and their conversion to atherogenic remnants. Approximately 70-80% of the human population shows an effective feedback control mechanism in cholesterol homeostasis. The type of dietary fat has a significant effect on lipoprotein cholesterol metabolism and atherosclerosis. Generally, saturated fatty acids elevate and PUFA lower serum cholesterol, whereas MUFA have no specific effect. EPA and DHA inhibit the synthesis of TG, VLDL and LDL, and may have favourable effects on some of the risk factors. Phospholipids, particularly lecithin, have an antiatherosclerotic effect. Essential phospholipids (EPL) may enhance the formation of polyunsaturated cholesteryl ester (CE), which is less sclerotic and more easily dispersed, via enhanced hydrolysis of CE in the arterial wall. Also, neutral fecal steroid elimination may be enhanced and cholesterol absorption reduced following EPL treatment. Antioxidants protect lipoproteins from oxidation, and cells from the injury of toxic, oxidized LDL. The rationale for lowering serum cholesterol is the strong association between elevation of plasma or serum cholesterol and CHD. Cholesterol lowering, especially of LDL cholesterol, to the target level could be achieved using diet and combinations of drug therapy. Information on the link between cholesterol and CHD has decreased egg consumption by 16-25%. Some clinical studies have indicated that dietary cholesterol and egg have a significant hypercholesterolemic effect, while others have indicated no effect. These studies differed in the use of purified cholesterol or cholesterol in eggs, in the range of baseline and challenge cholesterol levels, in the quality and quantity of concomitant dietary fat, in the study population demographics and initial serum cholesterol levels, and in clinical settings. The cholesterol content of eggs varies to a certain extent depending on the age, breed, and diet of hens. However, egg yolk cholesterol level is very resistant to change because of the particular mechanism involved in yolk formation. Egg yolk contains a factor or factors responsible for accelerated cholesterol metabolism and excretion compared with crystalline cholesterol. One of these factors could be egg lecithin. Egg lecithin may not be as effective as soybean lecithin in lowering serum cholesterol level, probably due to differences in fatty acid composition. However, egg lecithin may have positive effects in hypercholesterolemia by increasing serum HDL level and excretion of fecal cholesterol. The association of serum cholesterol with egg consumption has been widely studied. When the basal or control diet contained little or no cholesterol, consumption of 1 or 2 eggs daily increased the concentration of plasma cholesterol, whereas that of normolipemic persons on a normal diet was not significantly influenced by consuming 2 to 3 eggs daily. At higher levels of egg consumption, the concentration of HDL tends to increase as well as LDL. There exist hyper- and hypo-responders to dietary (egg) cholesterol. Identifying individuals in both categories would be useful from the point of view of nutrition guidelines. Dietary modification of fatty acid composition has been pursued as a viable method of modifying the fat composition of eggs and adding value to eggs. In many cases beneficial effects of PUFA-enriched eggs have been demonstrated. Generally, consumption of n-3 fatty acid enriched eggs lowered the concentration of plasma TG and total cholesterol compared to the consumption of regular eggs. Due to the highly oxidative nature of PUFA, the stability of this fat is essential. The implication of hepatic lipid accumulation, which was observed in hens fed fish oils, should be explored. Nutritional manipulations, such as supplementation with iodine, inhibitors of cholesterol biosynthesis, garlic products, amino acids, and high-fibre ingredients, have met with limited success in lowering egg cholesterol.

Pulmonary Valve Replacement with Tissue Valves After Pulmonary Outflow Tract Repair in Children (소아에서 폐동맥유출로 재건 후 시행한 조직판막을 이용한 폐동맥판 대치술)

  • Lee, Jeong-Ryul;Hwang, Ho-Young;Chang, Ji-Min;Lee, Cheul;Choi, Jae-Sung;Kim, Yong-Jin;Rho, Joon-Ryang;Bae, Eun-Jung
    • Journal of Chest Surgery / v.35 no.5 / pp.350-355 / 2002
  • Background: Most pulmonary regurgitation with or without stenosis appears to be well tolerated early after repair of the pulmonary outflow tract. However, it may result in symptomatic right ventricular dilatation, dysfunction, and arrhythmias over a long period of time. We studied the early outcome of pulmonary valve replacement with tissue valves for patients with the above clinical features. Material and Method: Sixteen consecutive patients who underwent pulmonary valve replacement from September 1999 to February 2002 were reviewed (9 males and 7 females). The initial diagnoses included tetralogy of Fallot (n=11) and other congenital heart anomalies with pulmonary outflow obstruction (n=5). Carpentier-Edwards PERIMOUNT pericardial bioprostheses and Hancock porcine valves were used. The posterior two thirds of the bioprosthetic rim was placed on the native pulmonary valve annulus, and the anterior one third was covered with a bovine pericardial patch. Preoperative pulmonary regurgitation was of moderate degree or greater in 13 patients. Three patients had severe pulmonary stenosis. Tricuspid regurgitation was present in 12 patients. Result: Follow-up was complete, with a mean duration of 15.8 ± 8.5 months. There was no operative mortality. The cardiothoracic ratio decreased from 66.0 ± 6.5% to 57.6 ± 4.5% (n=16, p=0.001). All patients remained in NYHA class I at the most recent follow-up (n=16, p=0.016). Pulmonary regurgitation was mild or absent in all patients. Tricuspid regurgitation was less than trivial in all patients. Conclusion: In this study we demonstrated that early pulmonary valve replacement for residual pulmonary regurgitation with or without right ventricular dysfunction was a reasonable option. This technique reduced heart size and decreased pulmonary and tricuspid regurgitation, as well as improving the patients' functional status. However, long-term outcomes should be cautiously investigated.

Improvement of Fontan Circulatory Failure after Conversion to Total Cavopulmonary Connection (완전 대정맥-폐동맥 연결수술로 전환 후의 폰탄순환장애 개선)

  • Han Ki Park;Gijong Yi;Suk Won Song;Sak Lee;Bum Koo Cho;Young hwan Park
    • Journal of Chest Surgery / v.36 no.8 / pp.559-565 / 2003
  • By improving the flow pattern in the Fontan circuit, total cavopulmonary connection (TCPC) can result in a better outcome than the atriopulmonary connection Fontan operation. For patients with impaired hemodynamics after an atriopulmonary Fontan connection, conversion to TCPC can be expected to bring hemodynamic and functional improvement. We studied the results of revising the previous Fontan connection to TCPC in patients with failed Fontan circulation. Material and method: From October 1979 to June 2002, eight patients with failed Fontan circulation underwent revision of the previous Fontan operation to TCPC at Yonsei University Hospital. The intracardiac anomalies of the patients were tricuspid atresia (n=4) and other functional single ventricles (n=4). Mean age at TCPC conversion was 14.0 ± 7.0 years (range, 4.6-26.2 years), and the median interval between the initial Fontan operation and TCPC was 7.5 years (range, 2.4-14.3 years). All patients had various degrees of symptoms and signs of right heart failure. NYHA functional class was III or IV in six patients. Paroxysmal atrial fibrillation (n:f), cyanosis (n=2), intraatrial thrombi (n=2), and protein losing enteropathy (PLE) (n=3) were also combined. The previous Fontan operation was revised to extracardiac conduit placement (n=7) or an intraatrial lateral tunnel (n=1). Result: There was no operative death. Major morbidities included deep sternal infection (n=1), prolonged pleural effusion over two weeks (n=1), and temporary junctional tachyarrhythmia (n=1). Postoperative central venous pressure was lower than the preoperative value (17.9 ± 3.5 vs. 14.9 ± 1.0, p=0.049). Follow-up was complete in all patients and extended to 50.1 months (mean, 30.3 ± 12.8 months). There was no late death. All patients were in NYHA class I or II. Paroxysmal supraventricular tachycardia developed in one patient who underwent conversion to the intraatrial lateral tunnel procedure. PLE recurred in two of the three patients who had had PLE before the conversion. There was no newly developed PLE. Conclusion: Hemodynamic and functional improvement can be expected for patients with Fontan circulatory failure after atriopulmonary connection by revising their previous circulation to TCPC. The conversion can be performed with a low risk of morbidity and mortality.