
Mid-Term Results of 292 Cases of Coronary Artery Bypass Grafting (관상동맥 우회술 292례의 중기 성적)

  • 김태윤;김응중;이원용;지현근;신윤철;김건일
    • Journal of Chest Surgery, v.35 no.9, pp.643-652, 2002
  • As the prevalence of coronary artery disease increases, surgical treatment has become widespread and operative outcomes have improved. We analyzed the short- and mid-term results of 292 CABG procedures performed at Kangdong Sacred Heart Hospital. Material and Method: From June 1994 to December 2001, 292 patients underwent coronary artery bypass grafting. There were 173 men and 119 women, with ages ranging from 39 to 84 years (mean 61.8 ± 9.1 years). We analyzed the preoperative risk factors, operative procedures, and operative outcomes. In addition, we analyzed the recurrence of symptoms, long-term mortality, and complications via outpatient follow-up of discharged patients. Result: Preoperative clinical diagnoses were unstable angina in 137 (46.9%), stable angina in 34 (11.6%), acute myocardial infarction in 40 (13.7%), non-Q myocardial infarction in 25 (8.6%), postinfarction angina in 22 (7.5%), cardiogenic shock in 30 (10.3%), and PTCA failure in 4 (1.4%) patients. Preoperative angiographic diagnoses were three-vessel disease in 157 (53.8%), two-vessel disease in 35 (12.0%), one-vessel disease in 11 (3.8%), and left main disease in 89 (30.5%) patients. We used saphenous veins in 630, internal thoracic arteries in 257, radial arteries in 50, and right gastroepiploic arteries in 2 distal anastomoses. The mean number of distal anastomoses per patient was 3.2 ± 1.0. There were 18 concomitant procedures: valve replacement in 8 (2.7%), left main coronary artery angioplasty in 6 (2.1%), patch closure of postinfarction ventricular septal defect (PMI-VSD) in 2 (0.7%), replacement of the ascending aorta in 1 (0.3%), and coronary endarterectomy in 1 (0.3%) patient. The mean ACC time was 96.6 ± 35.3 minutes and the mean CPB time was 179.2 ± 94.6 minutes. Total early mortality was 8.6%, but it was 3.1% for elective operations. The most common cause of early mortality was low cardiac output syndrome, in 6 (2.1%) patients. The statistically significant risk factors for early mortality were hypertension, old age (≥70 years), poor LV function (EF < 40%), congestive heart failure, preoperative intra-aortic balloon pump, emergency operation, and chronic renal failure. The most common complication was arrhythmia, in 52 (17.8%) patients. The mean follow-up period was 39.0 ± 27.0 months. Most patients were free of symptoms during follow-up. Fourteen patients (5.8%) had recurrent symptoms and 7 patients (2.9%) died during the follow-up period. Follow-up coronary angiography was performed in 13 patients with recurrent symptoms, who were then managed surgically or medically according to the angiographic findings. Conclusion: The operative and late results of CABG at our hospital were acceptable. However, further refinement of operative technique and postoperative management is needed to improve outcomes.
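The univariate risk-factor screening reported in this abstract (flagging, e.g., emergency operation or poor LV function as significant predictors of early mortality) is commonly done with chi-square tests on 2x2 tables. The minimal Python sketch below illustrates that idea only; the counts are invented for demonstration and are not the paper's data.

```python
# Hypothetical univariate risk-factor screen for early mortality after CABG.
# The counts below are invented for illustration; they are not from the paper.
from scipy.stats import chi2_contingency

# 2x2 table: rows = factor present / absent, columns = died / survived.
table = [[10, 40],    # e.g., emergency operation: 10 deaths, 40 survivors
         [15, 227]]   # elective operation: 15 deaths, 227 survivors

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # p < 0.05 would flag the factor
```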

Clinical Results and Optimal Timing of OPCAB in Patients with Acute Myocardial Infarction (급성 심근경색증 환자에서 시행한 OPCAB의 수술시기와 검색의 정도에 따른 임상성적)

  • Youn Young-Nam;Yang Hong-Suk;Shim Yeon-Hee;Yoo Kyung-Jong
    • Journal of Chest Surgery, v.39 no.7 s.264, pp.534-543, 2006
  • Background: There is much debate regarding the optimal timing of surgery for acute myocardial infarction (AMI). Off-pump coronary artery bypass grafting (OPCAB) avoids the adverse effects of cardiopulmonary bypass, but its efficacy in AMI has not yet been confirmed. The purpose of this study was to retrospectively evaluate the early and mid-term results of OPCAB in patients with AMI according to the transmurality of infarction and the timing of operation. Material and Method: Data were collected on 126 AMI patients who underwent OPCAB between January 2002 and July 2005. The mean patient age was 61.2 years; there were 92 men (73.0%) and 34 women (27.2%). 106 patients (85.7%) had three-vessel coronary artery disease or left main disease. Urgent or emergent operations were performed in 25 patients (19.8%). 72 patients (57.1%) had non-transmural myocardial infarction (group 1) and 52 patients (42.9%) had transmural myocardial infarction (group 2). The incidence of cardiogenic shock and of intra-aortic balloon pump (IABP) insertion was higher in group 2. The time between the occurrence of AMI and the operation was divided into 4 subgroups (<1 day, 1-3 days, 4-7 days, >8 days). OPCAB was performed a mean of 5.3 ± 7.1 days after AMI overall: 4.2 ± 5.9 days in group 1 and 6.6 ± 8.3 days in group 2. Result: The mean number of distal anastomoses was 3.21, and a postoperative IABP was inserted in 3 patients. There was 1 perioperative death, in group 1, due to low cardiac output syndrome, but no perioperative new MI occurred in this study. There was no difference in major postoperative complications between the two groups or according to the timing of operation. The mean follow-up time was 21.3 months (range 4-42 months). The 42-month actuarial survival rate was 94.9 ± 2.4% overall: 91.4 ± 4.7% in group 1 and 98.0 ± 2.0% in group 2 (p=0.26). The 42-month freedom from cardiac death was 97.6 ± 1.4% overall: 97.0 ± 2.0% in group 1 and 98.0 ± 2.0% in group 2 (p=0.74). The 42-month freedom from cardiac events was 95.4 ± 2.0% overall: 94.8 ± 2.9% in group 1 and 95.9 ± 2.9% in group 2 (p=0.89). Conclusion: OPCAB in AMI not only reduces morbidity but also yields favorable hospital outcomes irrespective of the timing of operation. The transmurality of myocardial infarction did not affect the surgical or mid-term outcomes of OPCAB. Therefore, there may be no need to delay off-pump surgical revascularization in patients with AMI when surgical revascularization is indicated.
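The 42-month actuarial survival and freedom-from-event rates quoted above are the kind of figures typically obtained with the Kaplan-Meier estimator. A minimal sketch using the lifelines library is shown below; the follow-up data are invented for illustration, not taken from the study.

```python
# Kaplan-Meier sketch with the lifelines library; the follow-up times and
# event flags are invented for illustration, not taken from the study.
from lifelines import KaplanMeierFitter

durations = [4, 9, 15, 21, 28, 33, 38, 42, 42, 42]  # months of follow-up
events    = [0, 1, 0, 0, 1, 0, 0, 0, 0, 0]          # 1 = cardiac death, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events)
print(kmf.survival_function_)              # estimated S(t) over follow-up
print(kmf.survival_function_at_times(42))  # e.g., the 42-month actuarial survival
```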

Sentiment Analysis of Movie Review Using Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems, v.25 no.4, pp.141-154, 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representation in a variety of applications. Sentiment analysis is an important technology that can distinguish low- from high-quality content through the text data of products, and it has proliferated within text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied in various directions in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels; in fact, sentiment analysis is one of the most active research areas in natural language processing and is widely studied in text mining. Real online reviews are openly available and easy to collect, and they directly affect business. In marketing, real-world information from customers is gathered on websites rather than through surveys: whether a website's posts are positive or negative is reflected in customer response and sales, so companies try to identify this information. However, reviews on a website are not always of good quality and can be difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, while more recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, accuracy remains limited because sentiment calculations change according to the subject, the paragraph, the orientation of the sentiment lexicon, and sentence strength. This study aims to classify the polarity in sentiment analysis into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the pretrained IMDB review data set. First, as comparative models, the text classification algorithms adopted are popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data. Representative algorithms are CNNs (convolutional neural networks), RNNs (recurrent neural networks), and LSTM (long short-term memory). A CNN can be used similarly to a bag-of-words (BoW) model when processing a sentence in vector format, but it does not consider the sequential attributes of the data. An RNN handles ordered data well because it takes the temporal information of the data into account, but it suffers from the long-term dependency problem; LSTM is used to solve that problem. For comparison, CNN and LSTM were chosen as simple deep learning models. In addition to the classical machine learning algorithms, CNN, LSTM, and the integrated models were analyzed. Although the algorithms have many hyperparameters, we examined the relationship between their values and accuracy to find the optimal combination, and we tried to understand how and why these models work well for sentiment analysis. This study proposes an integrated CNN-LSTM algorithm to extract the positive and negative features in text analysis. The reasons for combining these two algorithms are as follows. A CNN can extract features for classification automatically by applying convolution layers and massively parallel processing; an LSTM is not capable of such highly parallel processing. Like faucets, the LSTM has input, output, and forget gates that can be opened and closed at the desired time, and these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can compensate for the CNN's inability to model long-range dependencies. Furthermore, when the LSTM is attached after the CNN's pooling layer, the model has an end-to-end structure, so spatial and temporal features can be learned simultaneously. The combined CNN-LSTM achieved 90.33% accuracy; it is slower than CNN alone but faster than LSTM alone, and it was more accurate than the other models. In addition, the word embedding layer can be improved as the kernels are trained step by step. CNN-LSTM remedies the weaknesses of each individual model and has the advantage of layer-wise learning through the end-to-end structure. For these reasons, this study attempts to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
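As an illustration of the CNN-feeding-LSTM layout described above, a minimal Keras sketch on the IMDB dataset bundled with Keras might look like the following. The layer sizes, vocabulary size, and training settings here are assumptions for demonstration, not the paper's reported architecture.

```python
# Minimal CNN-LSTM sketch for IMDB sentiment classification (illustrative only;
# layer sizes and hyperparameters are assumptions, not the paper's settings).
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, Conv1D, MaxPooling1D, LSTM, Dense

vocab_size, max_len = 10_000, 200
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=vocab_size)
x_train = pad_sequences(x_train, maxlen=max_len)
x_test = pad_sequences(x_test, maxlen=max_len)

model = Sequential([
    Embedding(vocab_size, 128),        # word embedding layer
    Conv1D(64, 5, activation="relu"),  # CNN extracts local n-gram features
    MaxPooling1D(4),                   # pooling layer whose output feeds the LSTM
    LSTM(64),                          # LSTM models the sequential dependencies
    Dense(1, activation="sigmoid"),    # binary polarity: positive vs. negative
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=64,
          validation_data=(x_test, y_test))
```

This end-to-end stacking is the design choice the abstract emphasizes: the convolution and pooling layers learn spatial (local n-gram) features while the downstream LSTM learns the temporal ones, in a single trainable pipeline.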

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim;Kwon, Oh-Byung
    • Asia Pacific Journal of Information Systems, v.20 no.2, pp.63-86, 2010
  • Personalized services directly and indirectly acquire personal data, in part, to provide customers with higher-value services that are specifically context-relevant (such as place and time). Information technologies continue to mature and develop, providing greatly improved performance. Sensor networks and intelligent software can now obtain context data, and that is the cornerstone for providing personalized, context-specific services. Yet the danger of personal information exposure is increasing, because the data retrieved by the sensors usually contain private information. Various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns, such as the unrestricted availability of context information, have also increased. Those privacy concerns are consistently regarded as a critical issue facing the success of context-aware personalized services. The entire field of information privacy is growing as an important area of research, with many new definitions and terminologies, because of the need for a better understanding of information privacy concepts. In particular, the factors of information privacy should be revised according to the characteristics of new technologies. However, previous work on the information privacy factors of context-aware applications has at least two shortcomings. First, there has been little overview of the technological characteristics of context-aware computing; existing studies have focused on only a small subset of them, so there has not been a mutually exclusive set of factors that uniquely and completely describes information privacy in context-aware applications. Second, user surveys have been widely used to identify information privacy factors in most studies, despite the limitations of users' knowledge and experience of context-aware computing technology. Since context-aware services have not yet been widely deployed on a commercial scale, very few people have prior experience with context-aware personalized services, and it is difficult to build users' knowledge of the technology even by increasing their understanding in various ways: scenarios, pictures, flash animations, etc. Hence, conducting a survey on the assumption that the participants have sufficient experience with or understanding of the technologies shown in the survey may not be valid. Moreover, some surveys are based on simplifying and hence unrealistic assumptions (e.g., they consider only location information as context data). A better understanding of information privacy concern in context-aware personalized services is therefore highly needed. The purpose of this paper is to identify a generic set of factors for elemental information privacy concern in context-aware personalized services and to develop a rank-ordered list of those factors. We consider the overall technology characteristics to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was deployed to obtain reliable opinions from experts and to produce a rank-ordered list; it therefore lends itself well to obtaining a set of universal factors of information privacy concern and their priority.
An international panel of researchers and practitioners with expertise in privacy and context-aware systems was involved in our research. The Delphi rounds faithfully followed the procedure proposed by Okoli and Pawlowski, involving three general rounds: (1) brainstorming for important factors; (2) narrowing down the original list to the most important ones; and (3) ranking the list of important factors. For the brainstorming round only, experts were treated as individuals, not as a panel. Adapting the approach of Okoli and Pawlowski, we outlined the process of administering the study and performed three rounds. In the first and second rounds of the Delphi questionnaire, we gathered a set of mutually exclusive factors for information privacy concern in context-aware personalized services. In the first round, respondents were asked to provide at least five main factors for the most appropriate understanding of information privacy concern; to aid them, some of the main factors found in the literature were presented to the participants. The second round of the questionnaire discussed the main factors provided in the first round, fleshed out with relevant sub-factors drawn from the literature survey. Respondents were then requested to evaluate each sub-factor's suitability against the corresponding main factor to determine the final sub-factors from the candidates; the final factors were those selected by over 50% of the experts. In the third round, a list of factors with corresponding questions was provided, and the respondents were requested to assess the importance of each main factor and its corresponding sub-factors. Finally, we calculated the mean rank of each item to produce the final result. While analyzing the data, we focused on group consensus rather than individual insistence. To do so, a concordance analysis, which measures the consistency of the experts' responses over successive rounds of the Delphi, was adopted during the survey process. As a result, the experts reported that context data collection and the highly identifiable level of identical data were the most important main factor and sub-factor, respectively. Additional important sub-factors included the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The average score of each factor is very useful for future context-aware personalized service development from the viewpoint of information privacy. The final factors differ from those proposed in other studies in the following ways. First, the concern factors differ from those of existing studies, which are based on privacy issues that may occur during the lifecycle of acquired user information; our study helped to clarify these sometimes vague issues by determining which privacy concerns are viable given the specific technical characteristics of context-aware personalized services. Since a context-aware service differs in its technical characteristics from other services, we selected the specific characteristics with the higher potential to increase users' privacy concerns. Second, by introducing IPOS as the factor division, this study considered privacy issues in terms of service delivery and display, which were almost overlooked in existing studies. Lastly, for each factor, it correlated the level of importance with professionals' opinions on the extent to which users have privacy concerns.
The traditional questionnaire method was not selected because, for context-aware personalized services, users were considered to lack understanding of and experience with the new technology. In identifying users' privacy concerns, the professionals in the Delphi questionnaire process selected context data collection, tracking and recording, and sensor networks as the most important factors among the technological characteristics of context-aware personalized services. For the creation of context-aware personalized services, this study demonstrates the importance and relevance of determining an optimal methodology: which technologies are needed, in what sequence, to acquire which types of users' context information. Most studies, following the development of context-aware technology, focus on which services and systems should be provided and developed by utilizing context information. However, the results of this study show that, in terms of users' privacy, greater attention must be paid to the activities that acquire context information. To build on the sub-factor evaluation results, additional studies will be needed on approaches to reducing users' privacy concerns regarding technological characteristics such as the highly identifiable level of identical data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which is related to output. The results show that the delivery and display of services to users in context-aware personalized services, under the anywhere-anytime-any-device concept, are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services will help to increase the service success rate and, hopefully, user acceptance of those services. Our future work will adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
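The concordance analysis mentioned above is typically computed as Kendall's coefficient of concordance (W). The short Python sketch below illustrates the calculation on invented expert rankings; these are not the study's actual responses.

```python
# Kendall's W (coefficient of concordance) over expert rankings, as used for
# concordance analysis in Delphi studies; the rankings below are invented.
import numpy as np

# rows = experts, columns = privacy-concern factors; entries are ranks 1..n
ranks = np.array([
    [1, 2, 3, 4, 5],
    [1, 3, 2, 4, 5],
    [2, 1, 3, 5, 4],
])
m, n = ranks.shape                    # m experts ranking n factors
rank_sums = ranks.sum(axis=0)         # total rank received by each factor
S = ((rank_sums - rank_sums.mean()) ** 2).sum()
W = 12 * S / (m**2 * (n**3 - n))      # W in [0, 1]; 1 = perfect agreement
print(f"Kendall's W = {W:.3f}")       # ~0.84 here: strong consensus
```

A W close to 1 across successive rounds indicates that the experts' rank-orderings have converged, which is the usual stopping criterion for further Delphi rounds.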