• Title/Summary/Keyword: Big Five

Search Results: 506

Various Cultural Factors Associated with Disease Development of Garlic White Rot Caused by Two Species of Sclerotium (마늘 흑색썩음균핵병 발생에 관여하는 여러가지 경종적 요인)

  • Kim, Yong-Ki;Kwon, Mi-Kyung;Shim, Hong-Sik;Kim, Tack-Soo;Yeh, Wan-Hae;Cho, Weon-Dae;Choi, In-Hu;Lee, Seong-Chan;Ko, Sug-Ju;Lee, Yong-Hwan;Lee, Chan-Jung
    • Research in Plant Disease
    • /
    • v.11 no.1
    • /
    • pp.28-34
    • /
    • 2005
  • This study was conducted to investigate whether garlic white rot, which causes severe yield losses in Allium species and cultivars, can be controlled by cultural practices such as optimal sowing date, planting depth, and lime application. Inoculum density was investigated at different soil depths in infested field soil and on diseased plant debris. Inoculum density and sclerotial recovery ratio differed markedly between the two white rot pathogens of garlic, Sclerotium cepivorum, which forms comparatively small sclerotia, and Sclerotium sp., which forms comparatively large ones. S. cepivorum formed more sclerotia on garlic bulbs than Sclerotium sp., and its sclerotial recovery was higher. Inoculum density in the infested field at the garlic seeding period ranged from one to thirteen sclerotia per 30 g of soil. Inoculum density decreased remarkably with increasing soil depth, and more than 95% of sclerotia were distributed within the top 5 cm of soil. Disease severity was higher on shallowly planted garlic than on deeply planted garlic. Garlic seed bulbs infected by white rot pathogens were confirmed to be one of the main inoculum sources in the field, and disease incidence caused by seed transmission differed greatly among garlic varieties. When nine garlic varieties harvested from infested plots were sown in the field, the highly susceptible varieties 'Wando', 'Daeseo', 'Namdo', and 'Kodang' showed high disease incidence, whereas the other five varieties were not infected at all. White rot incidence was higher on garlic sown early (before mid-October) than on garlic sown late (after late October). Meanwhile, increasing the lime application rate from 100 to 300 g reduced disease severity.

The Present Situation and Challenges of the Russian Music Industry: Centered on the Digital Sound Sources (러시아 음악 산업 현황과 과제 - 디지털 음원을 중심으로 -)

  • Kwon, ki-bae;Kim, Se-il
    • Cross-Cultural Studies
    • /
    • v.50
    • /
    • pp.395-424
    • /
    • 2018
  • The purpose of this paper is to examine the current situation and background of the Russian consumer music market, where digital sound sources have made great strides in recent years. Changes in music storage technology and media are also considered. Russia is the 12th largest music market in the world, and the Russian music industry is following the global trend of rapid growth in the digital music market. The explosive growth of digital sound sources in Russia is attributed to the sharp increase in consumer downloads and streaming services, and to the growing number of digital sound sources delivered over mobile technologies as the Internet has developed. In particular, sales of streaming sound sources are expected to grow rapidly through 2020, by which time they are expected to account for more than 85% of total digital music sales. The spread of smartphones and the resulting changes in Russian lifestyles have driven this shift: anyone can now access and listen to music easily, without a separate audio device or digital player. The Russian government's strong policy against illegal copying of music has also become an effective deterrent and has contributed to the growing share and sales of digital sound sources in Russia. Today, the Russian music industry is leading this change rather than simply adapting to the digital age. Music is a key cultural asset and a form of content that drives the overall growth of the digital economy. If the following five improvements are carried out effectively (first, strengthening Russians' awareness of copyright protection; second, utilizing big data and Internet resources in the digital music industry; third, improving the monopoly situation among digital music distributors; fourth, distributing music revenues fairly; and fifth, revitalizing reinvestment in the Russian music industry), Russia's role and position in the world music market is likely to expand.

Development of the Regulatory Impact Analysis Framework for the Convergence Industry: Case Study on Regulatory Issues by Emerging Industry (융합산업 규제영향분석 프레임워크 개발: 신산업 분야별 규제이슈 사례 연구)

  • Song, Hye-Lim;Seo, Bong-Goon;Cho, Sung-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.199-230
    • /
    • 2021
  • Innovative new products and services are being launched through convergence between heterogeneous industries, and social interest and investment in convergence industries such as AI, big-data-based future cars, and robots are continuously increasing. However, in the process of commercializing new convergence products and services, many do not conform to the existing regulatory and legal system, which creates serious difficulties for companies trying to bring their products and services to market. In response to these industrial changes, the current government is promoting improvements to the existing regulatory mechanisms applied to the relevant industries along with expanded investment in new industries. Against this background, this study aimed to analyze the existing regulatory system that obstructs market entry of innovative new products and services in order to preemptively identify regulatory issues that will arise in emerging industries, and to establish a regulatory impact analysis system for evaluating regulatory adequacy and preparing improvement measures. The study proceeds in three parts. In the first part, previous studies on regulatory impact analysis and evaluation systems are reviewed; these serve as the basis for the development direction of the regulatory impact framework and its indicators and items. In the second part, the framework is developed: indicators and items are derived from the reviewed literature and applied to each stage of the framework. In the last part, a case study is presented in which the developed framework is applied to resolve regulatory issues faced by actual companies. The case study covers the autonomous/electric vehicle industry and the Internet of Things (IoT) industry, because they are among the emerging industries in which the Korean government has recently shown the greatest interest and are judged to be most relevant to the realization of an intelligent information society. Specifically, the regulatory impact analysis framework proposed in this study consists of five steps. The first step is to identify the industry size of the target products and services, related policies, and regulatory issues. In the second step, regulatory issues are discovered through a review of regulatory improvement items at each stage of commercialization (planning, production, commercialization). In the third step, factors related to regulatory compliance costs are derived and the costs incurred for existing regulatory compliance are calculated. In the fourth step, alternatives are prepared by gathering opinions from the relevant industry and field experts, and the necessity, validity, and adequacy of each alternative are reviewed. Finally, the adopted alternatives are formulated so that they can be applied to legislation and are reviewed by legal experts. The implications of this study are as follows. From a theoretical point of view, it is meaningful in that it clearly presents a series of procedures for regulatory impact analysis as a framework. While previous studies mainly discussed the importance and necessity of regulatory impact analysis, this study presents a systematic framework that incorporates the various factors those studies suggested.
From a practical point of view, this study is significant in that the proposed framework was applied to actual regulatory issues. Proposals related to these issues were submitted to government departments and the relevant law was ultimately revised, suggesting that the framework can be an effective way to resolve regulatory issues. The regulatory impact analysis framework proposed in this study is expected to serve as a meaningful guideline for technology policy researchers and policy makers.
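
The compliance-cost calculation (step three) is the most mechanical part of the framework, so a minimal sketch of how it could be organized in code is given below. The class names, cost factors, and amounts are hypothetical illustrations, not the framework's prescribed data model.

```python
# Hypothetical sketch of step three (compliance-cost calculation) of the
# proposed framework; field names, cost factors, and amounts are invented
# for illustration and are not part of the authors' framework.
from dataclasses import dataclass

@dataclass
class CostFactor:
    name: str            # e.g. testing, certification, documentation
    annual_cost: float   # cost of complying with the existing regulation

@dataclass
class RegulatoryIssue:
    stage: str                       # planning, production, or commercialization
    description: str
    cost_factors: list[CostFactor]

    def compliance_cost(self) -> float:
        # Total annual cost incurred to comply with the existing regulation.
        return sum(f.annual_cost for f in self.cost_factors)

issue = RegulatoryIssue(
    stage="commercialization",
    description="Type approval required before an autonomous-driving pilot service",
    cost_factors=[CostFactor("testing", 40_000_000),
                  CostFactor("documentation", 5_000_000)],
)
print(f"{issue.stage}: {issue.compliance_cost():,.0f} KRW per year")
```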

A study on the rock mass classification in boreholes for a tunnel design using machine learning algorithms (머신러닝 기법을 활용한 터널 설계 시 시추공 내 암반분류에 관한 연구)

  • Lee, Je-Kyum;Choi, Won-Hyuk;Kim, Yangkyun;Lee, Sean Seungwon
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.23 no.6
    • /
    • pp.469-484
    • /
    • 2021
  • Rock mass classification results have a great influence on the construction schedule and budget as well as on tunnel stability in tunnel design. A total of 3,526 tunnels have been constructed in Korea and the associated design and construction techniques have been continuously developed; however, few studies have examined how to assess rock mass quality and grade more accurately, so classification results often differ substantially depending on the inspector's experience and judgment. Hence, this study aims to suggest a more reliable rock mass classification model based on the rock mass rating (RMR), using machine learning algorithms, which are becoming increasingly accessible, and analyses of various rock and rock mass information collected from boring investigations. For this, 11 learning parameters (depth, rock type, RQD, electrical resistivity, UCS, Vp, Vs, Young's modulus, unit weight, Poisson's ratio, RMR) from 13 local tunnel cases were selected, 337 training data sets and 60 test data sets were prepared, and six machine learning algorithms (DT, SVM, ANN, PCA & ANN, RF, XGBoost) were tested over various hyperparameters. The results show that the mean absolute errors in the RMR value of the five algorithms other than the decision tree were less than 8, and that the support vector machine model performed best. The applicability of the model established in this study was thereby confirmed, and the prediction model can be applied for more reliable rock mass classification as additional data are continuously accumulated.
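
As a rough illustration of the modeling setup described above, the sketch below trains a support-vector regression model on borehole features to predict RMR with scikit-learn. The column names, data file, preprocessing, and hyperparameter grid are assumptions for illustration; the paper's actual pipeline and tuning are not specified in the abstract.

```python
# Illustrative sketch only: an SVR model for predicting RMR from borehole
# features with scikit-learn. Column names, the CSV file, and the grid of
# hyperparameters are assumptions, not the paper's actual pipeline.
import pandas as pd
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

# Assumed column names; rock_type is assumed to be numerically encoded.
FEATURES = ["depth", "rock_type", "RQD", "resistivity", "UCS",
            "Vp", "Vs", "E", "unit_weight", "poisson_ratio"]

df = pd.read_csv("borehole_data.csv")                 # hypothetical data file
X, y = df[FEATURES], df["RMR"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=60,
                                                    random_state=0)

# Scale the features, then search an RBF-kernel SVR over a small grid,
# scoring by mean absolute error as in the paper's evaluation.
model = GridSearchCV(
    make_pipeline(StandardScaler(), SVR(kernel="rbf")),
    param_grid={"svr__C": [1, 10, 100], "svr__gamma": ["scale", 0.1, 0.01]},
    scoring="neg_mean_absolute_error", cv=5,
)
model.fit(X_train, y_train)
print("test MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```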

Study on Basic Elements for Smart Content through the Market Status-quo (스마트콘텐츠 현황분석을 통한 기본요소 추출)

  • Kim, Gyoung Sun;Park, Joo Young;Kim, Yi Yeon
    • Korea Science and Art Forum
    • /
    • v.21
    • /
    • pp.31-43
    • /
    • 2015
  • Information and communications technology (ICT) is one of the technologies that represent the core value of the creative economy. It has served as a vehicle connecting existing industries and corporate infrastructure, improving existing products and services, and creating new ones. Beyond ICT, emerging technologies and devices such as big data, mobile gadgets, and wearable products are attracting great attention and raising expectations of pioneering new markets. Furthermore, the Internet of Things (IoT) is helping to solidify ICT-based social development by connecting human to human, human to things, and things to things. This means that manufacturing-based hardware development needs to proceed simultaneously with software development through convergence. The essential element in the convergence of hardware and software is the operating system (OS), and leading global companies such as Google and Apple have pursued intense OS development in recognition of the importance of software. Against this backdrop, the status quo of the software market was examined for this study (Korea Evaluation Institute of Industrial Technology: Professional Design Technology Development Project). The examination shows that Google's Android and Apple's iOS software platforms dominate the global market, while latecomers are trying to enter the market through various paths, such as releasing web-based or similar operating systems, to offer a new paradigm to the market. The present study aims to find ways to utilize smart content, through which anyone can become a developer on top of an OS, in response to such social changes; to newly define smart content so that it can be universally utilized; and to analyze the market in order to cope with rapid market change. The study method, scope, and details are as follows: a literature investigation; an analysis of the app market according to a smart classification system; a trend analysis of the current content market; and identification of five common trends through comparison of the universal definition of smart content, the status of applications represented in the app market, and the content market situation. In conclusion, the smart content market is independent but is expected to develop as a single organic body whose parts are connected to each other. Therefore, future classification systems and development priorities should view the area from multiple perspectives, including a social point of view encompassing existing technology, culture, business, and consumers.

Text Mining-Based Emerging Trend Analysis for e-Learning Contents Targeting for CEO (텍스트마이닝을 통한 최고경영자 대상 이러닝 콘텐츠 트렌드 분석)

  • Kyung-Hoon Kim;Myungsin Chae;Byungtae Lee
    • Information Systems Review
    • /
    • v.19 no.2
    • /
    • pp.1-19
    • /
    • 2017
  • Original scripts of e-learning lectures for the CEOs of corporation S were analyzed using topic analysis, a text mining method. Twenty-two topics were extracted based on keywords chosen from five years of records spanning 2011 to 2015, and analyses were then conducted on various issues. Promising topics were selected through evaluation and analysis of the elements of each topic. In management and economics, members showed high satisfaction with and interest in topics on marketing strategy, human resource management, and communication. In the humanities, philosophy, the history of war, and history drew high interest and satisfaction, while in lifestyle, mind health did. Topics were also examined in terms of their proportion of the content, but a high proportion alone did not increase member satisfaction. In the IT field, educational content responds sensitively to changes of the times, but this does not necessarily increase members' interest and satisfaction. The present study found that content production for CEOs should draw out deeper implications for value innovation through technology application, rather than stopping at the technical aspect of information delivery. Previous studies classified content superficially based on the names of content programs when analyzing the status of content operation, whereas text mining can derive deeper content and subject classifications from unstructured script data. This approach can reveal current shortages and necessary fields when the service content of each theme is displayed by year. This study was based on data obtained from influential e-learning companies in Korea; obtaining broadly generalizable results was difficult because the data were not acquired from portal sites or social networking services. The study analyzed the e-learning content trends of CEOs and the intellectual interests of CEOs in each field.
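
The abstract refers to "topic analysis" without naming a specific algorithm, so the sketch below uses latent Dirichlet allocation (LDA) in scikit-learn as a plausible stand-in, with 22 topics to match the abstract. The input file and vectorizer settings are assumptions.

```python
# Illustrative stand-in for the topic analysis: LDA over lecture scripts with
# scikit-learn, using 22 topics as reported in the abstract. The input file
# and vectorizer settings are assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# One lecture script per line (hypothetical, pre-tokenized text file).
with open("lecture_scripts.txt", encoding="utf-8") as f:
    scripts = [line.strip() for line in f if line.strip()]

vectorizer = CountVectorizer(max_df=0.9, min_df=5)
dtm = vectorizer.fit_transform(scripts)

lda = LatentDirichletAllocation(n_components=22, random_state=0)
doc_topics = lda.fit_transform(dtm)              # document-topic proportions

# Show the top keywords that characterize each extracted topic.
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-10:][::-1]]
    print(f"Topic {k:2d}:", ", ".join(top))
```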

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for accommodating computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failure. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment and may cause enormous damage. IT facilities in particular fail irregularly because of their interdependence, and the cause is often difficult to identify. Previous studies on failure prediction in data centers treated each server as an isolated unit and predicted failures from its state alone, without considering interactions among devices. In this study, data center failures were therefore classified into failures occurring inside a server (Outage A) and failures occurring outside a server (Outage B), and the analysis focused on complex failures occurring within servers. Failures outside the server include power, cooling, and user errors; since such failures can be prevented in the early stages of data center construction, various solutions are already being developed. The causes of failures inside a server, on the other hand, are difficult to determine, and adequate prevention has not yet been achieved, in particular because server failures rarely occur in isolation: a failure in one server can cause failures in other servers, or be triggered by them. In other words, whereas existing studies analyzed failures under the assumption that one server does not affect other servers, this study assumes that failures propagate between servers. To define complex failure situations in the data center, failure history data for each piece of equipment in the data center were used. Four major failure types were considered: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures of each device were sorted in chronological order, and when failures occurred on different pieces of equipment within 5 minutes of each other, they were defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, five devices that frequently failed together within the constructed sequences were selected, and the cases in which the selected devices failed simultaneously were confirmed through visualization. Because the server resource information collected for failure analysis is a time series with temporal flow, Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from previous states, was used. In addition, unlike the single-server case, a Hierarchical Attention Network model structure was used to reflect the fact that each server contributes differently to a complex failure; this approach improves prediction accuracy by assigning greater weight to servers with a larger impact on the failure. The study began by defining the failure types and selecting the analysis targets.
In the first experiment, the same collected data were analyzed under both a single-server assumption and a multiple-server assumption and the results were compared. The second experiment improved the prediction accuracy for complex failures by optimizing a threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted not to fail even though failures actually occurred, whereas under the multiple-server assumption all five servers were correctly predicted to fail. These results support the hypothesis that servers affect one another, and the study confirmed that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, under the assumption that each server's influence differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. The results are expected to help prevent such failures in advance.
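
As a simplified illustration of the server-level attention idea described above, the sketch below encodes each server's resource time series with a shared LSTM and applies an attention layer over servers before predicting per-server failure probabilities. The tensor shapes, layer sizes, and number of resource metrics are assumptions; the authors' actual Hierarchical Attention Network is more elaborate.

```python
# Simplified sketch of server-level attention for complex-failure prediction:
# each server's resource time series is encoded by a shared LSTM, attention
# weights the servers by their influence, and a head outputs per-server
# failure probabilities. Shapes and sizes are assumptions, not the authors'
# exact model.
import torch
import torch.nn as nn

class ServerAttentionNet(nn.Module):
    def __init__(self, n_features, hidden=64, n_servers=5):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)          # scores each server encoding
        self.head = nn.Linear(hidden, n_servers)  # per-server failure logit

    def forward(self, x):
        # x: (batch, n_servers, time_steps, n_features)
        b, s, t, f = x.shape
        _, (h, _) = self.encoder(x.reshape(b * s, t, f))
        h = h[-1].reshape(b, s, -1)                 # (batch, servers, hidden)
        w = torch.softmax(self.attn(h), dim=1)      # attention over servers
        ctx = (w * h).sum(dim=1)                    # weighted context vector
        return torch.sigmoid(self.head(ctx))        # failure prob. per server

model = ServerAttentionNet(n_features=8)            # 8 resource metrics (assumed)
x = torch.randn(16, 5, 60, 8)                       # 16 windows, 5 servers, 60 steps
print(model(x).shape)                               # torch.Size([16, 5])
```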

The Pattern Analysis of Financial Distress for Non-audited Firms using Data Mining (데이터마이닝 기법을 활용한 비외감기업의 부실화 유형 분석)

  • Lee, Su Hyun;Park, Jung Min;Lee, Hyoung Yong
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.4
    • /
    • pp.111-131
    • /
    • 2015
  • Only a handful of studies have been conducted on pattern analysis of corporate distress, compared with research on bankruptcy prediction, and the few that exist focus mainly on audited firms because financial data are easier to collect for them. In reality, however, corporate financial distress is a far more common and critical phenomenon for non-audited firms, which are mainly small and medium-sized enterprises. The purpose of this paper is to classify distressed non-audited firms according to their financial ratios using a data mining technique, the Self-Organizing Map (SOM). A SOM is a type of artificial neural network trained with unsupervised learning to produce a lower-dimensional, discretized representation of the input space of the training samples, called a map. It differs from other artificial neural networks in that it applies competitive learning rather than error-correction learning (such as backpropagation with gradient descent) and in that it uses a neighborhood function to preserve the topological properties of the input space; it is one of the most popular and successful clustering algorithms. In this study, we classify the distress types of non-audited firms. In the empirical test, we collected 10 financial ratios of 100 non-audited firms that became distressed in 2004, covering the previous two years (2002 and 2003). Using these financial ratios and the SOM algorithm, five distinct patterns were distinguished. In pattern 1, financial distress was very serious in almost all financial ratios; 12% of the firms fell into this pattern. In pattern 2, financial distress was weak in almost all financial ratios; 14% of the firms fell into this pattern. In pattern 3, the growth ratio was the worst among all patterns, and it is speculated that these firms may have been distressed by severe competition in their industries; approximately 30% of the firms fell into this group. In pattern 4, the growth ratio was higher than in any other pattern, but the cash ratio and profitability ratio did not keep up with it, suggesting that these firms became distressed while pursuing business expansion; about 25% of the firms were in this pattern. Finally, pattern 5 encompassed very solvent firms, which may have become distressed because of a bad short-term strategic decision or problems with the entrepreneur; approximately 18% of the firms fell under this pattern. The study makes both academic and empirical contributions. Academically, it uses a data mining technique (the Self-Organizing Map) to classify non-audited companies, which tend to go bankrupt easily and whose financial data are unstructured or easily manipulated, rather than large audited firms with well-prepared and reliable financial data. Empirically, even though only the financial data of non-audited firms were analyzed, the approach is useful for detecting the first symptoms of financial distress, which helps forecast bankruptcy and manage early warning and alert signals. The limitation of this research is that only 100 companies were analyzed because of the difficulty of collecting financial data on non-audited firms, which made it hard to analyze by category or size.
Non-financial qualitative data are also crucial for bankruptcy analysis, so non-financial qualitative factors should be taken into account in future work. This study sheds some light on distress prediction for non-audited small and medium-sized firms.
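
A minimal sketch of the SOM clustering step, using the third-party minisom package, is shown below. The input file, standardization, 5-node map (one node per distress pattern), and training settings are illustrative assumptions rather than the study's actual configuration.

```python
# Illustrative SOM clustering of distressed firms by financial ratios, using
# the third-party minisom package. The CSV file, 5x1 map (one node per
# pattern), and training settings are assumptions, not the study's setup.
import pandas as pd
from minisom import MiniSom

df = pd.read_csv("nonaudited_firm_ratios.csv")   # hypothetical: 100 firms x 10 ratios
X = ((df - df.mean()) / df.std()).to_numpy()     # standardize each ratio

som = MiniSom(x=5, y=1, input_len=X.shape[1],
              sigma=0.8, learning_rate=0.5, random_seed=0)
som.train_random(X, num_iteration=5000)

# Assign each firm to its best-matching unit, i.e. its distress pattern.
patterns = [som.winner(row) for row in X]
for node in sorted(set(patterns)):
    share = patterns.count(node) / len(patterns)
    print(f"pattern at node {node}: {share:.0%} of firms")
```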

A Study on the Critical Success Factors of Social Commerce through the Analysis of the Perception Gap between the Service Providers and the Users: Focused on Ticket Monster in Korea (서비스제공자와 사용자의 인식차이 분석을 통한 소셜커머스 핵심성공요인에 대한 연구: 한국의 티켓몬스터 중심으로)

  • Kim, Il Jung;Lee, Dae Chul;Lim, Gyoo Gun
    • Asia pacific journal of information systems
    • /
    • v.24 no.2
    • /
    • pp.211-232
    • /
    • 2014
  • Recently, interest in social commerce using social networking services (SNS) has grown, and the size of its market is expanding with the popularization of smartphones, tablet PCs, and other smart devices. Various studies have been attempted accordingly, but most previous studies were conducted from the users' perspective. The purpose of this study is to derive user-centered critical success factors (CSFs) of social commerce from previous studies and to analyze the gap in CSF perception between social commerce service providers and users. The perception gap between the two groups shows that there is a difference between the ideal image the service providers hope for and the actual image users have of social commerce companies. This study provides effective improvement directions for social commerce companies by presenting current business problems and plans for solving them. As the service-provider group, this study selected Korea's representative social commerce business, Ticket Monster, which is dominant in sales and staff size and secured strong funding through an M&A by stock exchange in August 2011 with the US social commerce business LivingSocial, which has Amazon.com as a shareholder. We gathered questionnaires from both service providers and users from October 22 to October 31, 2012, for an empirical analysis: 160 service providers of Ticket Monster and 160 social commerce users with experience of the Ticket Monster service were surveyed. Of the 320 questionnaires, 20 that were unfit or unreliable were discarded, and the remaining 300 (150 service providers, 150 users) were used for the empirical study. The statistics were analyzed using SPSS 12.0. The implications of the empirical analysis are as follows. First, the two groups rank the importance of the social commerce CSFs differently. Whereas service providers regard price economy as the CSF with the greatest influence on purchase intention, users regard trust as the most important. This means that service providers should utilize the unique strength of social commerce, which is earning customers' trust, rather than focusing only on selling products at a discount. Service providers need to enhance effective communication through SNS and play a vital role as trusted advisers who provide curation services and explain the value of products through information filtering. They also need to pay attention to preventing consumer damage from deceptive and false advertising, and should create a detailed compensation system for consumer damage caused by such problems, which can build strong ties with customers. Second, both service providers and users consider price economy, utility, trust, and word-of-mouth effect to be the CSFs influencing purchase intention. Users therefore expect price and economic benefits when using social commerce, and service providers should offer individualized discount benefits through diverse methods using social networking services. From the aspect of usefulness, service providers need to make users aware of the time savings, efficiency, and convenience of using social commerce.
It is therefore necessary to increase the usefulness of social commerce through new management strategies, such as strengthening the website's search engine, facilitating payment through a shopping basket, and package distribution. Trust, as mentioned above, is the most important variable in consumers' minds, so it must be managed for sustainable operation. If trust in social commerce falls because of consumer damage from false and exaggerated advertising, it could negatively affect the image of the social commerce industry as a whole. Instead of advertising with famous celebrities and spending enormous amounts on marketing, the industry should use the word-of-mouth effect among users through social networking services, the main marketing method of early social commerce. Word of mouth arising from consumers spontaneously acting as marketers can reduce a service provider's advertising costs and provide the basis for offering discounted prices to consumers; in this sense, the word-of-mouth effect should be managed as a CSF of social commerce. Third, trade safety was not derived as a CSF. Recently, as e-commerce such as social commerce and Internet shopping has grown in various forms, the importance of online trade safety has also increased, but in this study trade safety was not evaluated as a CSF of social commerce by either group. We judge that this is because both groups perceive that reliable payment gateways (PGs) already handle the electronic payment of Internet transactions. In other words, both groups feel that social commerce companies can differentiate themselves through their websites and the products and services they sell, but see little difference between businesses in their e-payment systems; trade safety is perceived as a natural, basic, universal service. Fourth, service providers should intensify communication with users through social networking services, the main marketing method of social commerce, and make use of the word-of-mouth effect among users for the reasons given above. This paper limits the characteristics of social commerce to five independent variables; a follow-up study with a wider range of independent variables would yield more in-depth results. In addition, this research targets social commerce service providers and users, but considering that social commerce is a two-sided market, deriving CSFs through an analysis of the perception gap between social commerce service providers and their advertising clients would be worth addressing in a follow-up study.
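
The perception-gap analysis above was performed with SPSS 12.0; as a rough stand-in, the sketch below compares the two groups' importance ratings for one CSF item with Welch's t-test in SciPy. The file name, column names, and rating-scale assumption are hypothetical.

```python
# Hypothetical stand-in for the SPSS perception-gap analysis: Welch's t-test
# comparing provider and user ratings of the "trust" CSF. The file, column
# names, and rating scale are assumptions.
import pandas as pd
from scipy import stats

df = pd.read_csv("csf_survey.csv")   # hypothetical columns: group, trust, price, utility, wom
providers = df.loc[df["group"] == "provider", "trust"]
users = df.loc[df["group"] == "user", "trust"]

t, p = stats.ttest_ind(providers, users, equal_var=False)
print(f"trust: provider mean={providers.mean():.2f}, "
      f"user mean={users.mean():.2f}, t={t:.2f}, p={p:.3f}")
```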

Second allogeneic hematopoietic stem cell transplantation in children to overcome graft failure or relapse after initial transplant (조혈모세포이식 후 생착 실패나 재발한 소아환자에서 2차 이식의 의의)

  • Kim, Dong-Yeon;Kim, Do Kyun;Kim, Soo Young;Kim, Seok Joo;Han, Dong Gyun;Baek, Hee Jo;Kook, Hoon;Hwang, Tai-Ju
    • Clinical and Experimental Pediatrics
    • /
    • v.49 no.12
    • /
    • pp.1329-1339
    • /
    • 2006
  • Purpose: Failure of hematopoietic stem cell transplantation (HSCT) may be encountered in practice because of either relapse of the malignancy or dysfunction of the graft. A second HSCT may be the only option for some patients whose initial HSCT has failed. Methods: From May 1991 to December 2004, 115 HSCTs were performed at the Pediatric Blood & Marrow Transplantation Center, Chonnam National University. This study is a retrospective analysis of the medical records of 15 patients who received a second HSCT after their initial transplant. Results: Among the eight patients with nonmalignant diseases, two underwent a second HSCT because of primary graft failure and five because of late graft rejection; the remaining patient, with Fanconi anemia, was re-transplanted because of the development of AML. Two patients died, and one experienced primary graft failure but is still alive. In the nonmalignant group, the Kaplan-Meier 5-year overall survival rate was 75 percent and the disease-free survival rate was 62.5 percent. All patients with malignant diseases underwent a second transplant because of relapse; four died of relapse and one of treatment-related complications. In the malignant group, the Kaplan-Meier 2-year overall and event-free survival rates were each 28.6 percent. Conclusion: A second HSCT for graft dysfunction in nonmalignant disease appears feasible and should be considered standard practice. Relapse of malignant disease remains a major obstacle even after a second HSCT, although a small proportion of patients may be salvaged. Novel therapeutic strategies, as well as a better understanding of the underlying biology, should be explored.
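
For readers unfamiliar with the Kaplan-Meier estimates cited above, the sketch below shows how such a survival rate could be computed with the lifelines package. The follow-up times and event indicators are made-up values chosen only to be consistent with two deaths among eight nonmalignant patients; they are not the study's data.

```python
# Hedged illustration of a Kaplan-Meier overall-survival estimate with the
# lifelines package. The follow-up times and event flags are made-up values,
# chosen only to be consistent with two deaths among eight patients.
from lifelines import KaplanMeierFitter

durations = [6, 14, 60, 60, 60, 60, 60, 60]   # months of follow-up (hypothetical)
events    = [1, 1, 0, 0, 0, 0, 0, 0]          # 1 = death observed, 0 = censored

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=events)
print(kmf.predict(60))   # estimated 5-year overall survival, here 0.75
```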