• Title/Summary/Keyword: Flow System


Captive Affects, Elastic Sufferings, Vicarious Objects in Melodrama - Refiguring Melodrama by Agustin Zarzosa (멜로드라마 속의 사로잡힌 정동(Captive Affects), 탄력적 고통(Elastic Sufferings), 대리적 대상(Vicarious Objects) -어구스틴 잘조사의 멜로드라마 재고)

  • Ahn, Min-Hwa
    • Journal of Popular Narrative
    • /
    • v.25 no.1
    • /
    • pp.429-462
    • /
    • 2019
  • This paper examines how the concept of melodrama can be articulated with affect theory and posthumanism in relation to the representation of animals and the environment, which have emerged as new topics in recent scholarship. The argument is developed through a discussion of Agustin Zarzosa's book, Refiguring Melodrama in Film and Television: Captive Affects, Elastic Sufferings, Vicarious Objects. Using a genealogical approach, the book revisits the notions of mode, affect, suffering (hysteria), and excess that have been treated in existing studies of melodrama. In chapter one, Zarzosa broadens the concept of melodrama as a mode into a means of redistributing suffering across society through the mechanism of the pairing of evil and virtue. This opposes Brooks's argument that melodrama functions as a means of proving the distinction between evil and virtue. Chapter two focuses on melodrama as an elastic system of specification rather than a system of signification, from the perspective of Deleuzian metaphysics. Through an analysis of Home from the Hill (Vincente Minnelli, 1959), this chapter attends to the 'affect' generated by encounters between bodies and the mise-en-scène as a flow not of meaning but of affect. Chapter three argues that melodrama should reveal an unloved (woman's) suffering, opposing accounts of melodrama's role as the recovery of moral order. Safe (Todd Haynes, 1995), which deals with female suffering caused by the industrial and social environment, elaborates the arguments on melodrama in relation to female hysteria from an ecocritical standpoint. The remaining two chapters discuss the role of melodrama in limiting and extending the notion of the human through 'animal' and 'posthuman' melodrama, arguing that the concept of melodrama as 'excess' and 'sacrifice' blurs the boundary between the human and the inhuman. In summary, although Zarzosa partly agrees with Peter Brooks's notions of mode, affect, and suffering, he elaborates the concept of melodrama by articulating it with philosophical arguments such as Deleuzianism, feminism, and posthumanism (Akira Lippit and Cary Wolfe). Zarzosa thereby challenges the concepts of melodrama established by Brooks, which had been canonical in the field.

A Study on Parents' Transnational Educational Passion in the Tendency of Globalization: The Potential and Limitations of Educational Nomadism (세계화의 흐름에서 학부모의 초국가적 교육열 - 교육노마디즘의 가능성과 한계를 중심으로 -)

  • Kim, So-Hee
    • Korean Journal of Culture and Arts Education Studies
    • /
    • v.5 no.1
    • /
    • pp.97-147
    • /
    • 2010
  • Under the recent trend of globalization, new proposals on education cannot avoid the demand for multiculturalism. Furthermore, education has been exposed to circumstances far different from previous situations, in which global cooperation and intercultural understanding are increasingly emphasized. 'Educational nomadism' is a metaphor for creating new value and significance in education. Transnational education, which can be both a crisis and an opportunity, has recently become mainstream throughout the world. In terms of education, Korea faces a hollowing-out in which excessive dependence on US education and autonomous education coexist. A great deal of time and money has been spent, through redundant expenditure by governments and parents, on building better educational credentials for a resume. In this critical situation, it is urgent to transform Korea's modern education into a creative educational system connected with advanced foreign educational systems and to further develop the strengths of Korean education. A parent's investment in his or her child is a support for creating new culture as well as for the hope and better future of Korean education. A new direction for parents' educational passion that opens a door to global communitas can stir up the infinite potential through which the flow of educational fever can be turned into a resource for a new civilization. Global cooperation and effort toward communitas mean communication with the world. Through this communication, a culture in which people are forced into zero-sum competition can leap toward an education for civilizational change that creates the pleasure of self-sufficiency and donation.

Comparison and Analysis of Field Hydraulic Tests to Evaluate Hydraulic Characteristics in Deep Granite Rockmass (심부 화강암반의 수리특성 평가를 위한 현장수리시험 비교 및 해석 연구)

  • Dae-Sung Cheon;Heejun Suk;Seong Kon Lee;Tae-Hee Kim;Ki Seog Kim;Seong-Chun Jun;SeongHo Bae
    • Tunnel and Underground Space
    • /
    • v.34 no.4
    • /
    • pp.393-412
    • /
    • 2024
  • In selecting a disposal site for high-level radioactive waste, hydrogeological investigation of the site is very important, and the hydraulic conductivity and the storage coefficient are key parameters. In this study, the hydraulic conductivities obtained with two different types of field hydraulic test equipment and methods were compared and analyzed for the deep granite rockmass in the Wonju area in order to understand its hydraulic characteristics. With the first system, the auto pressure/flow injection system, Lugeon tests, constant pressure injection tests, and slug tests were performed to a maximum depth of 602.0 m, and the calculated hydraulic conductivity ranged from 1.26E-9 to 4.16E-8 m/s. Over the full depth range, the maximum hydraulic conductivity was about 33 times the minimum, and within the same test section the difference between test or analysis methods was 1.13 to 8.25 times. With the second system, the deep borehole hydraulic testing system, constant pressure injection tests and pulse tests were performed to a maximum depth of 705.1 m; the calculated hydraulic conductivity was 1.60E-10 to 2.05E-8 m/s, and the maximum was about 130 times the minimum. In the constant pressure injection test, the difference depending on the analysis method was 1.02 to 2.8 times. The hydraulic conductivities calculated with the two sets of test equipment and methods generally fell within similar ranges, on the order of E-9 to E-8 m/s, and no clear trend with depth was observed. The granite rockmass in the Wonju area where the field hydraulic tests were conducted showed low or very low rockmass permeability, and although the measurable range of hydraulic conductivity and the applicable depth differ depending on the test equipment and test method, the results presented are generally believed to be reliable.
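
As a quick illustration of the spread reported in this abstract, a minimal Python sketch (conductivity values taken directly from the abstract; the helper function name is ours) computes the ratio of maximum to minimum hydraulic conductivity for each test system:

```python
# Ratio of maximum to minimum hydraulic conductivity for each field test system.
# Values (m/s) are taken from the abstract; only the ratio calculation is illustrative.

def conductivity_spread(k_min: float, k_max: float) -> float:
    """Return how many times larger the maximum conductivity is than the minimum."""
    return k_max / k_min

# Auto pressure/flow injection system (Lugeon, constant pressure injection, slug tests)
print(conductivity_spread(1.26e-9, 4.16e-8))   # ~33, matching the reported "about 33 times"

# Deep borehole hydraulic testing system (constant pressure injection, pulse tests)
print(conductivity_spread(1.60e-10, 2.05e-8))  # ~128, matching the reported "about 130 times"
```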

Summer-Time Behaviour and Flux of Suspended Sediments at the Entrance to Semi-Closed Hampyung Bay, Southwestern Coast of Korea (만 입구에서 부유퇴적물 거동과 플럭스: 한반도 서해 남부 함평만의 여름철 특성)

  • Lee, Hee-Jun;Park, Eun-Sun;Lee, Yeon-Gyu;Jeong, Kap-Sik;Chu, Yong-Shik
    • The Sea: Journal of the Korean Society of Oceanography
    • /
    • v.5 no.2
    • /
    • pp.105-118
    • /
    • 2000
  • Anchored measurements (12.5 hr) of suspended sediment concentration and other hydrodynamic parameters were carried out at two stations located at the entrance to Hampyung Bay in summer (August 1999). Tidal variations in water temperature and salinity were in the ranges of 26.0-27.9°C and 30.9-31.5, respectively, indicating exchange between bay and offshore water masses. Active tidal mixing processes at the entrance appear to destroy the vertical stratification in temperature and salinity that would otherwise develop despite strong solar heating in summer. In contrast, suspended sediment concentrations show a marked stratification, with concentrations increasing toward the bottom layer. Clastic particles in the suspended sediments consist mostly of very fine to fine silt (4-16 µm), with poorly sorted values of 14.7-25.9 µm. At slack water, however, when turbulent energy is lower, flocs larger than 40 µm are formed by cohesion and collision of particles, resulting in a higher settling velocity. Strong ebb-dominated and weak flood-dominated tidal currents in the southwestern and northeastern parts, respectively, result in a seaward residual flow of -10 to -20 cm s⁻¹ at station H1 and a bayward residual flow of less than 5.0 cm s⁻¹ at station H2. However, the mean concentration of suspended sediments at station H1 is higher during flood (95.0-144.1 mg l⁻¹) than during ebb (75.8-120.9 mg l⁻¹). At station H2 the trend is reversed, with higher concentrations during ebb (84.7-158.4 mg l⁻¹) than during flood (53.0-107.9 mg l⁻¹). As a result, seaward net suspended sediment fluxes (f_s) through the whole water column are calculated to be -1.7 to -15.6 × 10⁻³ kg m⁻² s⁻¹. Stations H1 and H2 show clearly different flux values, with higher values at the former than at the latter. Depth-integrated net suspended sediment loads (Q_s) for one tidal cycle are also directed offshore, amounting to 0.37 × 10³ kg m⁻¹ and 0.21 × 10³ kg m⁻¹ at stations H1 and H2, respectively. This seaward transport of suspended sediment in summer suggests that summer-time erosion of the Hampyung muddy tidal flats is a rather exceptional phenomenon compared with the general deposition reported for many other tidal flats on the west coast of Korea.
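
For context, a rough order-of-magnitude check of the reported flux, a hedged sketch using representative values from the abstract rather than the paper's actual depth- and time-integrated computation: an instantaneous suspended sediment flux is simply the product of concentration and velocity.

```python
# Order-of-magnitude check: instantaneous suspended sediment flux = concentration x velocity.
# Representative values are taken from the abstract; the paper itself integrates over depth
# and a full tidal cycle, so this is only a consistency sketch, not the authors' calculation.

concentration_mg_per_l = 100.0                             # mid-range concentration reported (mg/l)
concentration_kg_per_m3 = concentration_mg_per_l * 1e-3    # 1 mg/l = 1e-3 kg/m3

residual_velocity_m_per_s = -0.15                          # seaward residual flow of -10 to -20 cm/s at H1

flux_kg_per_m2_s = concentration_kg_per_m3 * residual_velocity_m_per_s
print(f"{flux_kg_per_m2_s:.1e} kg m^-2 s^-1")              # about -1.5e-02, i.e. -15 x 10^-3,
                                                           # consistent with the reported -1.7 to -15.6 x 10^-3 range
```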


A Study on the Differences of Information Diffusion Based on the Type of Media and Information (매체와 정보유형에 따른 정보확산 차이에 대한 연구)

  • Lee, Sang-Gun;Kim, Jin-Hwa;Baek, Heon;Lee, Eui-Bang
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.133-146
    • /
    • 2013
  • While internet use is now routine, users receive and share information through a variety of media. Through the internet, information delivery media have diversified from traditional one-way media such as newspapers, TV, and radio into two-way media. In contrast to traditional media, blogs enable individuals to upload and share news directly, so the speed of information diffusion can be expected to differ from that of news media that convey information unilaterally. This study therefore focuses on the difference between online news and social media blogs. Moreover, the speed of information diffusion varies because information closely related to a person boosts communication between individuals; users' standards of evaluation change with the type of information, and the speed of diffusion changes with the level of proximity. The purpose of this study is thus to examine differences in information diffusion by type of media and then, segmenting the information, to examine how diffusion differs by type of information. This study used the Bass diffusion model, which has been applied frequently in information diffusion research because it explains market diffusion through an innovation effect and an imitation effect and has higher explanatory power than other models. The innovation effect measures the early-stage impact, while the imitation effect measures the impact of word of mouth at a later stage. According to Mahajan et al. (2000), the innovation effect is driven by usefulness and ease of use, while the imitation effect is driven by subjective norm and word of mouth. According to Lee et al. (2011), the innovation effect is driven by mass communication, and according to Moore and Benbasat (1996), by relative advantage; the imitation effect arises from within-group influences, whereas the innovation effect arises from the innovativeness of the product or service itself. Our study therefore compared online news and social media blogs to examine the differences between media. We also chose different types of information, including entertainment-related information ("Psy's Gentleman"), current-affairs news (the earthquake in Sichuan, China), and product-related information ("Galaxy S4"), in order to examine variations in information diffusion. We considered that users' information proximity changes with the type of information, and hence chose these three types, which have different levels of proximity from the users' standpoint, to examine the flow of information diffusion. The first conclusion of this study is that different media have a similar effect on information diffusion even when the types of information provider differ; diffusion was distinguished only by differences in information proximity. Second, information diffusion differs by type of information. From the users' standpoint, product- and entertainment-related information shows a high imitation effect because of word of mouth. For current-affairs news, on the other hand, the imitation effect dominates the innovation effect.
From the results of this study, the changing flow of information diffusion can be examined and applied in practice. This study has some limitations, which provide opportunities and suggestions for future research. Because of the small sample size, the differences in information diffusion by media and proximity are difficult to generalize into theory. If future studies increase the sample size and media diversity, differences in information diffusion by media type and information proximity could be understood in more detail.
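
Because the abstract leans on the Bass diffusion model, a minimal sketch of its cumulative-adoption formula may help; the coefficient values below are illustrative only and are not estimated from the paper's data.

```python
import math

def bass_cumulative_adopters(t: float, m: float, p: float, q: float) -> float:
    """Cumulative adopters at time t under the Bass diffusion model.

    m: market potential, p: coefficient of innovation (external influence),
    q: coefficient of imitation (word of mouth / internal influence).
    """
    e = math.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

# Illustrative parameters only (not taken from the study):
m, p, q = 10_000, 0.03, 0.38
for day in (1, 5, 10, 20):
    print(day, round(bass_cumulative_adopters(day, m, p, q)))
```

A large q relative to p produces the S-shaped, word-of-mouth-driven curve the abstract associates with product and entertainment information, while a relatively large p front-loads adoption, as with mass-media-driven diffusion.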

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon;Kim, Dongsung;Lee, Hong Joo;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.1-19
    • /
    • 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. Technologies associated with AI have already shown abilities equal to or better than humans in many fields, including image and speech recognition. Many efforts are being made to identify current technology trends and analyze development directions, because AI technologies can be utilized in a wide range of fields, including medicine, finance, manufacturing, services, and education. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been released to the public as open source projects, and technologies and services that utilize them have increased rapidly; this is regarded as one of the major reasons for the fast development of AI technologies. The spread of the technology also owes a great deal to open source software developed by major global companies to support natural language, speech, and image recognition. Therefore, this study aimed to identify practical trends in AI technology development by analyzing OSS projects associated with AI, which are developed through the online collaboration of many parties. This study searched for and collected major AI-related projects created on Github from 2000 to July 2018, and examined the development trends of major technologies in detail by applying text mining to the topic information that characterizes the collected projects and their technical fields. The analysis showed that the number of software development projects per year remained below 100 until 2013, rising to 229 projects in 2014 and 597 in 2015. The number of AI-related open source projects increased rapidly in 2016 (2,559 projects); the number initiated in 2017 was 14,213, almost four times the total number generated from 2009 to 2016 (3,555 projects), and the number initiated from January to July 2018 was 8,737. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, using the appearance frequency of topics to indicate the technology trends of AI-related OSS projects. Natural language processing remained the top topic in all years, implying continuous OSS development in that area. Until 2015, the programming languages Python, C++, and Java were among the ten most frequent topics; after 2016, programming languages other than Python dropped out of the top ten, while platforms supporting the development of AI algorithms, such as TensorFlow and Keras, showed high appearance frequency. Reinforcement learning algorithms and convolutional neural networks, which are used in various fields, were also frequent topics. Topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency, the main difference being that visualization and medical imaging topics reached the top of the list even though they had not been at the top from 2009 to 2012; this indicates that OSS was being developed in the medical field in order to utilize AI technology.
Moreover, although computer vision was in the top 10 of the appearance frequency list from 2013 to 2015, it was not in the top 10 by degree centrality. The topics at the top of the degree centrality list were otherwise similar to those at the top of the appearance frequency list, with slight changes in the ranks of convolutional neural networks and reinforcement learning. Examining both appearance frequency and degree centrality, machine learning showed the highest frequency and the highest degree centrality in all years. Notably, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both machine learning and deep learning have had high appearance frequency and degree centrality. TensorFlow first appeared in the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018 to reach the top of the lists after deep learning and Python. Computer vision and reinforcement learning showed no abrupt increase or decrease and had relatively low appearance frequency and degree centrality compared with the topics above. Based on these results, it is possible to identify the fields in which AI technologies are actively being developed, and the results of this study can serve as a baseline dataset for more empirical analysis of future technology trends and their convergence.
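
As a sketch of the degree-centrality measure used in this abstract, the following uses networkx on a small, entirely illustrative topic co-occurrence graph; the nodes and edges are examples, not the study's actual project-topic network.

```python
import networkx as nx

# Small, illustrative topic co-occurrence graph; edges are examples only,
# not the study's actual Github topic network.
G = nx.Graph()
G.add_edges_from([
    ("machine-learning", "deep-learning"),
    ("machine-learning", "python"),
    ("deep-learning", "tensorflow"),
    ("deep-learning", "keras"),
    ("deep-learning", "computer-vision"),
    ("reinforcement-learning", "deep-learning"),
])

# Degree centrality: the fraction of other nodes each topic is directly connected to.
for topic, score in sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]):
    print(f"{topic:25s} {score:.2f}")
```

Ranking topics by this score, as opposed to raw appearance frequency, is what lets hub-like topics (here, deep-learning) stand out even when they are not the most frequent ones.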

Initial Experience of the Emergency Bypass System (EBS®) for the Patients with Cardiogenic Shock due to an Acute Myocardial Infarction (급성 심근경색으로 인한 심인성 쇼크 환자에 대한 경피적 순환 보조장치(EBS®) 적용의 초기경험)

  • Ryu, Kyoung-Min;Kim, Sam-Hyun;Seo, Pil-Won;Ryu, Jae-Wook;Kim, Seok-Kon;Kim, Young-Hwa;Park, Seong-Sik
    • Journal of Chest Surgery
    • /
    • v.41 no.3
    • /
    • pp.329-334
    • /
    • 2008
  • Background: Percutaneous cardiopulmonary support (PCPS) has the potential to rescue patients in cardiogenic shock who might otherwise die. PCPS has been a therapeutic option in a variety of clinical settings, such as myocardial infarction, high-risk coronary intervention, and postcardiotomy cardiogenic shock, and the PCPS device is easy to install. We report our early experience with PCPS as a life-saving procedure in patients in cardiogenic shock due to acute myocardial infarction. Material and Method: From January 2005 to December 2006, eight patients in cardiogenic shock with acute myocardial infarction underwent PCPS using the CAPIOX emergency bypass system (EBS®, Terumo, Tokyo, Japan). Uptake cannulae were inserted deep into the femoral vein up to the right atrium and return cannulae were inserted into the femoral artery with the Seldinger technique, using 20- and 16-French cannulae, respectively; autopriming of the EBS® circuit was performed simultaneously. The EBS® flow rate was maintained between 2.5 and 3.0 L/min/m², and anticoagulation was performed with intravenous heparin to keep the ACT above 200 seconds. Result: The mean age of the patients was 61.1±14.2 years (range, 39 to 77 years). Three patients were placed on EBS® support before percutaneous coronary intervention (PCI), three during PCI, one after PCI, and one after coronary bypass surgery. The mean support time was 47.5±27.9 hours (range, 8 to 76 hours). Five patients (62.5%) could be weaned from the EBS® after 53.6±27.2 hours (range, 12 to 68 hours) of support, and all of the successfully weaned patients were discharged from the hospital. There were three complications: one case of gastrointestinal bleeding and two cases of acute renal failure. Two of the three mortality cases had suffered cardiac arrest before EBS® support, and one patient had an intractable ventricular arrhythmia during support. All of the discharged patients were still alive at 16.8±3.1 months (range, 12 to 20 months) of follow-up. Conclusion: The use of the EBS® for cardiogenic shock caused by acute myocardial infarction could rescue patients who might otherwise have died. Patients who recovered after EBS® treatment have survived without severe complications. More experience and additional clinical investigation are necessary to establish the proper timing of initiation and the management protocol for the EBS®.

Dry etching of polycarbonate using O2/SF6, O2/N2 and O2/CH4 plasmas (O2/SF6, O2/N2와 O2/CH4 플라즈마를 이용한 폴리카보네이트 건식 식각)

  • Joo, Y.W.;Park, Y.H.;Noh, H.S.;Kim, J.K.;Lee, S.H.;Cho, G.S.;Song, H.J.;Jeon, M.H.;Lee, J.W.
    • Journal of the Korean Vacuum Society
    • /
    • v.17 no.1
    • /
    • pp.16-22
    • /
    • 2008
  • We studied plasma etching of polycarbonate in O2/SF6, O2/N2, and O2/CH4. A capacitively coupled plasma system was employed for the research. For patterning, we used photolithography with UV exposure after coating a photoresist on the polycarbonate. The main variables in the experiment were the mixing ratio of O2 and the other gases, and the RF chuck power. Notably, only a mechanical pump was used to operate the system, and the chamber pressure was fixed at 100 mTorr. Surface profilometry, atomic force microscopy, and scanning electron microscopy were all used to characterize the etched polycarbonate samples. According to the results, O2/SF6 plasmas gave a higher polycarbonate etch rate than pure O2 or SF6 plasmas. For example, at 100 W RF chuck power and 100 mTorr chamber pressure, a 20 sccm O2 plasma provided a polycarbonate etch rate of about 0.4 µm/min and 20 sccm SF6 only 0.2 µm/min, whereas a mixed plasma with 60% O2 and 40% SF6 flow produced about 0.56 µm/min even with a lower induced -DC bias than that of O2. Adding more SF6 to the mixture reduced the polycarbonate etch rate. Atomic force microscopy showed that the surface roughness of the etched polycarbonate was about three times worse, but scanning electron microscopy indicated that the surface was comparable to that of the photoresist. Increasing the RF chuck power raised the -DC bias on the chuck and the polycarbonate etch rate almost linearly. The etch selectivity of polycarbonate to photoresist was about 1:1. These results mean that a simple capacitively coupled plasma system with O2/SF6 plasmas can be used to fabricate microstructures on polymers, and the approach can be applied to plasma processing of other polymers.
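
As a side note on how etch rate and selectivity figures like those above are commonly derived from step-height (profilometry) measurements, here is a hedged sketch; the step heights and etch time below are made-up illustrations, not measurements from the paper.

```python
# How etch rate and selectivity are commonly derived from step-height measurements.
# The step heights and time below are made-up illustrations, not data from the paper.

def etch_rate_um_per_min(step_height_um: float, etch_time_min: float) -> float:
    """Etch rate = etched depth divided by etch time."""
    return step_height_um / etch_time_min

pc_rate = etch_rate_um_per_min(step_height_um=1.12, etch_time_min=2.0)  # polycarbonate
pr_rate = etch_rate_um_per_min(step_height_um=1.10, etch_time_min=2.0)  # photoresist mask

selectivity = pc_rate / pr_rate
print(f"PC: {pc_rate:.2f} um/min, PR: {pr_rate:.2f} um/min, selectivity ~ {selectivity:.2f}:1")
# A selectivity near 1:1 matches what the abstract reports for polycarbonate vs. photoresist.
```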

A Study on the Use of GIS-based Time Series Spatial Data for Streamflow Depletion Assessment (하천 건천화 평가를 위한 GIS 기반의 시계열 공간자료 활용에 관한 연구)

  • YOO, Jae-Hyun;KIM, Kye-Hyun;PARK, Yong-Gil;LEE, Gi-Hun;KIM, Seong-Joon;JUNG, Chung-Gil
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.21 no.4
    • /
    • pp.50-63
    • /
    • 2018
  • Rapid urbanization has distorted the natural hydrological cycle. This change in the hydrological cycle is causing streamflow depletion and altering existing patterns of water resource use. Managing such phenomena requires a streamflow depletion impact assessment technology that can forecast depletion, and such technology depends on GIS-based spatial data as fundamental input, yet related research is scarce. This study was therefore conducted to examine the use of GIS-based time series spatial data for streamflow depletion assessment. GIS data covering decades of change on a national scale were constructed for six streamflow depletion impact factors (weather, soil depth, forest density, road network, groundwater usage, and land use), and these data were used as the basic input for a continuous hydrologic model. Focusing on these impact factors, the causes of streamflow depletion were analyzed over time. Then, using DrySAT, a distributed continuous hydrologic model, the annual runoff associated with each streamflow depletion impact factor was estimated and the depletion assessment was conducted. The baseline annual runoff was 977.9 mm under the given weather conditions without considering other factors. When the decrease in soil depth, the increase in forest density, road development, groundwater usage, and the change in land use and development were each considered, the annual runoff was estimated at 1,003.5 mm, 942.1 mm, 961.9 mm, 915.5 mm, and 1,003.7 mm, respectively. The results indicated the following major causes of streamflow depletion: reduced soil depth, which decreases infiltration and surface runoff and thereby streamflow; increased forest density, which decreases surface runoff; an expanded road network, which decreases sub-surface flow; increased groundwater use from indiscriminate development, which decreases baseflow; and increased impervious area, which increases surface runoff. Each standard watershed was also assigned a depletion grade, based on the definition of streamflow depletion and the grade ranges. Considering the weather, the decrease in soil depth, the increase in forest density, road development, groundwater usage, and the change in land use and development, the depletion grades were 2.1, 2.2, 2.5, 2.3, 2.8, and 2.2, respectively. Among the five impact factors other than rainfall, the change in groundwater usage had the largest influence on depletion, followed by the changes in forest density, road construction, land use, and soil depth. In conclusion, a national streamflow depletion assessment system to be developed in the future is expected to provide customized depletion management and prevention plans based on assessments of future changes in the six impact factors and the projected progress of depletion.
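
A minimal sketch comparing each factor's simulated annual runoff with the weather-only baseline; the runoff values are taken from the abstract, while the dictionary labels are descriptive names added here.

```python
# Change in simulated annual runoff relative to the weather-only baseline.
# Values in mm are taken from the abstract; the factor labels are descriptive additions.

baseline_mm = 977.9

scenario_runoff_mm = {
    "decreased soil depth": 1003.5,
    "increased forest density": 942.1,
    "road development": 961.9,
    "increased groundwater usage": 915.5,
    "land use change and development": 1003.7,
}

for factor, runoff in scenario_runoff_mm.items():
    delta = runoff - baseline_mm
    print(f"{factor:35s} {runoff:7.1f} mm ({delta:+.1f} mm vs. baseline)")
```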

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.4
    • /
    • pp.127-148
    • /
    • 2020
  • A data center is a physical facility for accommodating computer systems and related components, and is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failures. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment and cause enormous damage. Failures in IT facilities, in particular, occur irregularly because of interdependence, and their causes are difficult to identify. Previous studies on failure prediction in data centers treated each server as a single independent state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), with the focus on analyzing complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center facility construction, various solutions are being developed. The causes of failures occurring inside the server, on the other hand, are difficult to determine, and adequate prevention has not yet been achieved, in particular because server failures do not occur in isolation: they can cause failures in other servers or be triggered by failures elsewhere. In other words, whereas existing studies analyzed failures under the assumption that servers do not affect one another, this study assumes that failures propagate between servers. To define the complex failure situation in the data center, failure history data for each piece of equipment in the data center were used. Four major failure types were considered: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures for each device were sorted in chronological order, and when a failure in one piece of equipment was followed by a failure in another within 5 minutes, the failures were defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, five devices that frequently failed simultaneously within those sequences were selected, and the cases in which the selected devices failed at the same time were confirmed through visualization. Because the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that predicts the next state from previous states. In addition, the Hierarchical Attention Network deep learning model structure was used, in consideration of the fact that each server contributes to a complex failure to a different degree. This algorithm increases prediction accuracy by giving greater weight to servers whose impact on the failure is larger. The study proceeded by defining the failure types and selecting the analysis targets.
In the first experiment, the same collected data were analyzed under both a single-server assumption and a multiple-server assumption, and the results were compared. The second experiment improved the prediction accuracy for complex failures by optimizing the threshold of each server. In the first experiment, under the single-server assumption, three of the five servers were predicted not to have failed even though failures actually occurred; under the multiple-server assumption, all five servers were predicted to have failed. These results support the hypothesis that servers affect one another. This study thus confirmed that prediction performance is superior when multiple servers are assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, which assumes that each server has a different effect, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and it presents a model that can predict failures occurring in data center servers. It is expected that failures can be prevented in advance using the results of this study.
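
A hedged sketch of one simple way to implement the 5-minute co-occurrence rule described in this abstract, using pandas; the column names and sample events are illustrative, not taken from the study's failure logs, and the study may group events differently.

```python
import pandas as pd

# Illustrative failure log; column names and events are examples, not the study's data.
events = pd.DataFrame(
    {
        "device": ["srv-01", "srv-02", "db-01", "srv-03", "net-01"],
        "failure_time": pd.to_datetime(
            ["2020-01-01 10:00", "2020-01-01 10:03", "2020-01-01 10:04",
             "2020-01-01 10:20", "2020-01-01 10:22"]
        ),
    }
).sort_values("failure_time")

# Start a new group whenever the gap to the previous failure exceeds 5 minutes,
# so failures within 5 minutes of one another are treated as one simultaneous event.
gap = events["failure_time"].diff()
events["group"] = (gap > pd.Timedelta(minutes=5)).cumsum()

for group_id, members in events.groupby("group"):
    print(group_id, list(members["device"]))
```

Groups produced this way can then be turned into per-server sequences and fed to the LSTM-based model, with the hierarchical attention layer weighting servers by their estimated contribution to the complex failure.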