• Title/Summary/Keyword: Monitoring Systems


An Analysis on the Usability of Unmanned Aerial Vehicle(UAV) Image to Identify Water Quality Characteristics in Agricultural Streams (농업지역 소하천의 수질 특성 파악을 위한 UAV 영상 활용 가능성 분석)

  • Kim, Seoung-Hyeon;Moon, Byung-Hyun;Song, Bong-Geun;Park, Kyung-Hun
    • Journal of the Korean Association of Geographic Information Studies / v.22 no.3 / pp.10-20 / 2019
  • Irregular rainfall caused by climate change, combined with non-point-source pollution, causes water systems worldwide to suffer frequent eutrophication and algal blooms. This type of water pollution is more common in agricultural areas, which are prone to the inflow of non-point-source pollution into water systems. Therefore, in this study, the correlation between Unmanned Aerial Vehicle (UAV) multi-spectral images and total phosphorus, total nitrogen, and chlorophyll-a, which are indirectly associated with algal blooms, was analyzed to assess the usability of UAV imagery for identifying water quality characteristics in agricultural streams. Multi-spectral images were collected from the target regions, Yangcheon and Hamyang Wicheon, and the vegetation indices Normalized Difference Vegetation Index (NDVI), Normalized Difference Red Edge (NDRE), and Chlorophyll Index Red Edge (CIRE) were calculated for the detection of algal blooms. In the analysis of the correlation between image values and water quality measurements at the sampling points, total phosphorus was correlated with the CIRE (0.66) at a significance level of 0.05, and chlorophyll-a showed correlations with Blue (-0.67), Green (-0.66), NDVI (0.75), NDRE (0.67), and CIRE (0.74). Total nitrogen was correlated with the Red (-0.64), Red Edge (-0.64), and Near-Infrared (NIR) (-0.72) wavelengths at the same significance level. These results confirm significant correlations between multi-spectral images collected by UAV and the factors responsible for water pollution. In the case of the vegetation indices used for algal bloom detection, the possibility of identifying not only chlorophyll-a but also total phosphorus was confirmed. These findings can serve as meaningful data for countermeasures such as selecting areas of concern for non-point-source pollution in agricultural regions.
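The correlation screening described above pairs a per-pixel spectral index with in situ water quality measurements. A minimal sketch, using illustrative numbers rather than the paper's data (a significance test would additionally use e.g. `scipy.stats.pearsonr` for the p-value):

```python
import numpy as np

# Hypothetical CIRE values at water sampling points and measured total
# phosphorus (mg/L); the numbers are illustrative, not from the paper.
cire = np.array([0.21, 0.35, 0.18, 0.42, 0.30, 0.27, 0.39, 0.24])
total_p = np.array([0.04, 0.09, 0.03, 0.12, 0.08, 0.06, 0.11, 0.05])

def cire_index(nir, red_edge):
    """Chlorophyll Index Red Edge: CIRE = NIR / RedEdge - 1."""
    return nir / red_edge - 1.0

# Pearson correlation coefficient between image values and water
# quality values, the quantity screened at the 0.05 level above
r = float(np.corrcoef(cire, total_p)[0, 1])
```

The same loop over NDVI, NDRE, and each raw band against each water quality parameter reproduces the correlation table the abstract summarizes.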

Conservation Scientific Diagnosis and Evaluation of Bird Track Sites from the Haman Formation at Yongsanri in Haman, Korea (함안 용산리 함안층 새발자국 화석산지의 보존과학적 진단 및 평가)

  • Lee, Gyu Hye;Park, Jun Hyoung;Lee, Chan Hee
    • Korean Journal of Heritage: History & Science / v.52 no.3 / pp.74-93 / 2019
  • The Bird Track Site in the Haman Formation at Yongsanri (Natural Monument No. 222) is where the bird tracks Koreanaornis hamanensis and Jindongornipes kimi, the sauropod footprint Brontopodus, and the ichnospecies Ochlichnus formed by Nematoda have been reported. The site has outstanding academic value, as it has yielded the second-highest number of bird tracks reported in the world. However, only 25% of the site remains after its designation as a natural monument in 1969, owing to artificial damage caused by its worldwide fame and by quarrying for the flat stone used in Korean floor heating systems. The Haman Formation, including this fossil site, shows lithofacies of alternating reddish-grey siltstone and black shale. The boundary between the two rocks is gradational, and sedimentary structures such as ripple marks and sun cracks are clearly visible. The site was divided into seven units according to sedimentary sequences and structures. A nondestructive deterioration evaluation showed that chemical and biological damage rates were very low for all units. Physical damage also displayed low rates, with 0.49% exfoliation, 0.04% blistering, and 0.28% break-out; however, the joint crack index was high at 6.20. Additionally, efflorescence was observed on outcrops at the back and northwestern sides. Physical properties measured by indirect ultrasonic analysis indicated a moderately weathered (MW) state. Overall, the southeastern side was much fresher, though some areas around the columns of the protection facility appeared more weathered. Furthermore, five kinds of discontinuity surface were found at this site, with the bedding plane showing the highest share. Stereographic projection analysis indicates a possibility of toppling failure, but stability against plane and wedge failure. We conclude that the overall levels of deterioration and stability are relatively acceptable. However, continuous monitoring, conservation treatment, and management should be performed, as conditions such as the physicochemical weathering of the fossil layer and the efflorescence of the mortar adjoining the protection facility's columns appear difficult to control.

A Recidivism Prediction Model Based on XGBoost Considering Asymmetric Error Costs (비대칭 오류 비용을 고려한 XGBoost 기반 재범 예측 모델)

  • Won, Ha-Ram;Shim, Jae-Seung;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.127-137 / 2019
  • Recidivism prediction has been a subject of constant research by experts since the early 1970s, but it has become more important as crimes committed by recidivists steadily increase. In particular, in the 1990s, after the US and Canada adopted the 'Recidivism Risk Assessment Report' as a decisive criterion during trials and parole screening, research on recidivism prediction became more active, and in the same period empirical studies on recidivism factors began in Korea as well. Although most recidivism prediction studies have so far focused on the factors of recidivism or on prediction accuracy, it is important to minimize the misclassification cost, because recidivism prediction has an asymmetric error cost structure. In general, the cost of misclassifying people who will not recidivate as likely recidivists is lower than the cost of misclassifying people who will recidivate as unlikely to do so: the former only adds monitoring costs, while the latter incurs large social and economic costs. Therefore, in this paper, we propose an XGBoost (eXtreme Gradient Boosting; XGB) based recidivism prediction model that considers asymmetric error costs. In the first step of the model, XGB, recognized as a high-performance ensemble method in the field of data mining, was applied, and its results were compared with various prediction models such as LOGIT (logistic regression), DT (decision trees), ANN (artificial neural networks), and SVM (support vector machines). In the next step, the classification threshold was optimized to minimize the total misclassification cost, a weighted combination of the FNE (False Negative Error) and FPE (False Positive Error). To verify the usefulness of the model, it was applied to a real recidivism prediction dataset. As a result, it was confirmed that the XGB model not only showed better prediction accuracy than the other models but also reduced the misclassification cost most effectively.
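The threshold-optimization step can be sketched independently of the classifier. A minimal sketch, assuming an illustrative 5:1 false-negative-to-false-positive cost ratio and toy probabilities rather than the paper's data:

```python
import numpy as np

def total_misclassification_cost(y_true, proba, threshold, c_fn=5.0, c_fp=1.0):
    """Weighted misclassification cost: false negatives (missed
    recidivists) are costed higher than false positives (extra
    monitoring). The 5:1 ratio is an illustrative assumption."""
    pred = (proba >= threshold).astype(int)
    fn = np.sum((y_true == 1) & (pred == 0))
    fp = np.sum((y_true == 0) & (pred == 1))
    return float(c_fn * fn + c_fp * fp)

def optimal_threshold(y_true, proba):
    """Grid-search the classification threshold minimizing total cost."""
    grid = np.linspace(0.05, 0.95, 19)
    costs = [total_misclassification_cost(y_true, proba, t) for t in grid]
    return float(grid[int(np.argmin(costs))])

# Toy labels and predicted probabilities (e.g. from an XGBoost model)
y = np.array([1, 1, 1, 0, 0, 0, 0, 1])
p = np.array([0.8, 0.4, 0.3, 0.2, 0.6, 0.1, 0.2, 0.7])
t = optimal_threshold(y, p)  # below 0.5: missing a recidivist costs more
```

Because false negatives dominate the cost, the optimal threshold lands below the default 0.5, flagging more borderline cases for monitoring.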

Analysis of News Agenda Using Text mining and Semantic Network Analysis: Focused on COVID-19 Emotions (텍스트 마이닝과 의미 네트워크 분석을 활용한 뉴스 의제 분석: 코로나 19 관련 감정을 중심으로)

  • Yoo, So-yeon;Lim, Gyoo-gun
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.47-64 / 2021
  • The global spread of COVID-19 has affected many parts of our daily life and has had a huge impact on many areas, including the economy and society. As the number of confirmed cases and deaths increases, medical staff and the public are reported to be experiencing psychological problems such as anxiety, depression, and stress. The collective tragedy that accompanies the epidemic raises fear and anxiety, which are known to cause enormous disruption to the behavior and psychological well-being of many people. Long-term negative emotions can reduce people's immunity and destroy their physical balance, so it is essential to understand the psychological state of people during COVID-19. This study suggests a method of monitoring media news that reflects the present situation, in which not only physical but also psychological quarantine must be pursued in the prolonged COVID-19 situation, and presents an easier method of semantic network analysis applicable to such cases. The aim of this study is to assist health policymakers in fast and complex decision-making processes. News plays a major role in setting the policy agenda, and among the various media, news headlines are considered important in communication science as summaries of the core content the media wants to convey to its audience. The news data used in this study were collected easily using 'Bigkinds', a service built on big data technology. From the collected news data, keywords were classified through text mining, and the relationships between words were visualized through semantic network analysis. Using the KrKwic program, a Korean semantic network analysis tool, text mining was performed and word frequencies were calculated to identify keywords easily. The frequency of words appearing in the keywords of articles related to COVID-19 emotions was checked and visualized in a word cloud: 'China', 'anxiety', 'situation', 'mind', 'social', and 'health' appeared frequently in relation to the emotions of COVID-19. In addition, UCINET, a specialized social network analysis program, was used for connection centrality and cluster analysis, and graphs were visualized using NetDraw. Analysis of connection centrality showed that the most central keywords in the keyword network were 'psychology', 'COVID-19', 'blue', and 'anxiety'. The co-occurrence network of the keywords appearing in the news headlines was visualized as a graph, in which the thickness of a line is proportional to co-occurrence frequency: if two words frequently appear together, they are connected by a thick line. The 'COVID-blue' pair is drawn with the boldest line, and the 'COVID-emotion' and 'COVID-anxiety' pairs with relatively thick lines. 'Blue' here denotes depression, confirming that COVID-19 and depression are keywords that deserve attention now. The methodology used in this study can measure social phenomena and changes quickly while reducing costs. By analyzing news headlines, we were able to identify people's feelings and perceptions on issues related to COVID-19 depression and to identify the main agendas by deriving important keywords. By presenting and visualizing the topics and important keywords related to COVID-19 emotions at a glance, medical policymakers can be given a variety of perspectives when investigating the phenomenon. The results are expected to serve as basic data for support, treatment, and service development for psychological quarantine issues related to COVID-19.
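The word-frequency and co-occurrence-network construction described above can be sketched in a few lines. The headlines and keywords below are toy stand-ins for the KrKwic output, not the study's data:

```python
from collections import Counter
from itertools import combinations

# Toy keyword lists per headline; in the study, keywords come from
# KrKwic text mining of headlines collected via Bigkinds.
headlines = [
    ["covid", "blue", "psychology"],
    ["covid", "anxiety", "health"],
    ["covid", "blue", "anxiety"],
]

# Word frequencies (the word-cloud input)
word_freq = Counter(w for h in headlines for w in h)

# Co-occurrence edges: keyword pairs appearing in the same headline;
# the edge weight drives line thickness in the network graph.
edges = Counter(
    tuple(sorted(pair)) for h in headlines for pair in combinations(set(h), 2)
)
top_edge, weight = edges.most_common(1)[0]  # heaviest pair -> boldest line
```

Feeding `edges` into a graph tool (the study used UCINET/NetDraw) reproduces the visualization in which the 'COVID-blue' pair is boldest.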

Present Status of the Quality Assurance and Control (QA/QC) for Korean Macrozoobenthic Biological Data and Suggestions for its Improvement (해양저서동물의 정량적 자료에 대한 정도관리 현실과 개선안)

  • CHOI, JIN-WOO;KHIM, JONG SEONG;SONG, SUNG JOON;RYU, JONGSEONG;KWON, BONG-OH
    • The Sea: Journal of the Korean Society of Oceanography / v.26 no.3 / pp.263-276 / 2021
  • Marine benthic organisms have been used as indicators for environmental assessment and have recently been considered a very important component of biodiversity and ecosystem restoration. In Korean waters, quantitative data on marine benthos have been used as a major component of marine pollution assessment for 50 years, since the 1970s. Species identification, an important factor in quantitative biological data, was mainly performed by marine benthic ecologists. Owing to the lack of taxonomic expertise in Korea, this has led to deterioration of data quality through the misidentification of major taxonomic groups. This taxonomic problem remains unsolved to date and persists in most data from national research projects on marine ecosystems in Korean waters. Here we introduce the quality assurance and control (QA/QC) system for marine biological data in the UK, the NMBAQC (Northeast Atlantic Marine Biological Analytical and Quality Control) Scheme, which has been run by private companies to solve similar species identification problems there. The scheme requires all marine laboratories that want to participate in any national monitoring program in the UK to maintain a high level of identification competency through internal quality assurance systems, and provides a series of taxonomic workshops and literature to increase their capability. It also performs external quality control of the marine laboratories through ring tests using standard specimens of various faunal groups. In Korea, the two existing national institutions have few taxonomic experts and so cannot solve the taxonomic problems in marine benthic fauna data. We offer a few suggestions for solving the taxonomic problems in Korean marine biological data in the short and long term: (1) the identification of all dominant species in marine biological data should be confirmed by taxonomic experts; (2) all national research programs should include taxonomic experts; and (3) a private organization, such as a Korea marine organism identification association (KMOIA), should be established to perform QA/QC on marine organisms and to support all Korean marine laboratories by providing taxonomic literature and species identification workshops. The last suggestion requires more effort and time, including gathering detailed requirements and opinions from diverse stakeholders in Korea.

A Study on the Retrieval of River Turbidity Based on KOMPSAT-3/3A Images (KOMPSAT-3/3A 영상 기반 하천의 탁도 산출 연구)

  • Kim, Dahui;Won, You Jun;Han, Sangmyung;Han, Hyangsun
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1285-1300 / 2022
  • Turbidity, a measure of the cloudiness of water, is an important index for water quality management. Turbidity can vary greatly in small river systems, which affects water quality in national rivers; therefore, the generation of high-resolution spatial information on turbidity is very important. In this study, a turbidity retrieval model using Korea Multi-Purpose Satellite-3 and -3A (KOMPSAT-3/3A) images was developed for high-resolution turbidity mapping of the Han River system, based on the eXtreme Gradient Boosting (XGBoost) algorithm. To this end, the top-of-atmosphere (TOA) spectral reflectance was calculated from a total of 24 KOMPSAT-3/3A images and 150 Landsat-8 images, and the Landsat-8 TOA spectral reflectance was cross-calibrated to the KOMPSAT-3/3A bands. Turbidity measured by the National Water Quality Monitoring Network was used as the reference dataset. The input variables were the TOA spectral reflectance at the locations of the in situ turbidity measurements; the spectral indices (normalized difference vegetation index, normalized difference water index, and normalized difference turbidity index); and Moderate Resolution Imaging Spectroradiometer (MODIS)-derived atmospheric products (atmospheric optical thickness, water vapor, and ozone). Furthermore, by analyzing the KOMPSAT-3/3A TOA spectral reflectance at different turbidities, a new spectral index, the new normalized difference turbidity index (nNDTI), was proposed and added as an input variable to the model. The XGBoost model showed excellent turbidity retrieval performance, with a root mean square error (RMSE) of 2.70 NTU and a normalized RMSE (NRMSE) of 14.70% against in situ turbidity, and the proposed nNDTI was the most important variable. The developed model was applied to the KOMPSAT-3/3A images to map river turbidity at high resolution, making it possible to analyze spatiotemporal variations in turbidity. Through this study, we confirmed that KOMPSAT-3/3A images are very useful for retrieving high-resolution, accurate spatial information on river turbidity.
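All of the spectral indices named above share one normalized-difference form. A minimal sketch of that form, shown with the conventional NDTI band pairing; the exact band combination of the paper's proposed nNDTI is defined in the paper and not reproduced here:

```python
import numpy as np

def normalized_difference(band_a, band_b):
    """Generic normalized-difference form shared by NDVI/NDWI/NDTI-type
    indices: (A - B) / (A + B), bounded to [-1, 1]."""
    a = np.asarray(band_a, dtype=float)
    b = np.asarray(band_b, dtype=float)
    return (a - b) / (a + b)

# Conventional NDTI uses (Red - Green) / (Red + Green): turbid water
# reflects relatively more in the red band, so the index rises with
# turbidity. Reflectance values below are illustrative only.
red = np.array([0.12, 0.20, 0.30])
green = np.array([0.10, 0.15, 0.18])
ndti = normalized_difference(red, green)
```

Computed per pixel over a cross-calibrated TOA reflectance image, such indices become input features to the XGBoost regressor alongside the raw bands and atmospheric products.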

A Checklist to Improve the Fairness in AI Financial Service: Focused on the AI-based Credit Scoring Service (인공지능 기반 금융서비스의 공정성 확보를 위한 체크리스트 제안: 인공지능 기반 개인신용평가를 중심으로)

  • Kim, HaYeong;Heo, JeongYun;Kwon, Hochang
    • Journal of Intelligence and Information Systems / v.28 no.3 / pp.259-278 / 2022
  • With the spread of Artificial Intelligence (AI), various AI-based services are expanding in the financial sector, such as service recommendation, automated customer response, fraud detection systems (FDS), and credit scoring services. At the same time, problems related to reliability and unexpected social controversy are also arising from the nature of data-based machine learning. Against this background, this study aimed to contribute to improving trust in AI-based financial services by proposing a checklist for securing fairness in AI-based credit scoring services, which directly affect consumers' financial lives. Among the key elements of trustworthy AI, such as transparency, safety, accountability, and fairness, fairness was selected as the subject of the study so that everyone can enjoy the benefits of automated algorithms from the perspective of inclusive finance, without social discrimination. Through a literature review, we divided the entire fairness-related operation process into three areas: data, algorithm, and user. For each area we constructed four detailed considerations for evaluation, resulting in a 12-item checklist. The relative importance and priority of the categories were evaluated through the analytic hierarchy process (AHP), using three groups that together represent the financial stakeholders: financial-sector workers, artificial intelligence practitioners, and general users. The groups were classified and analyzed according to the importance each stakeholder assigned, and from a practical perspective, specific checks were identified, such as feasibility verification for the use of training data and non-financial information, and the monitoring of newly inflowing data. Moreover, general financial consumers were found to attach high importance to the accuracy of result analysis and to bias checks. We expect these results to contribute to the design and operation of fair AI-based financial services.
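The AHP step above derives weights for the checklist areas from pairwise comparisons. A minimal sketch using the common geometric-mean approximation; the 3x3 comparison matrix is hypothetical, not the study's survey data:

```python
import numpy as np

# Hypothetical pairwise comparison matrix for the three checklist
# areas (data, algorithm, user); a_ij = how much more important area i
# is than area j on Saaty's 1-9 scale. Values are illustrative.
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
])
n = A.shape[0]

# Geometric-mean approximation of the AHP priority (weight) vector
gm = np.prod(A, axis=1) ** (1.0 / n)
weights = gm / gm.sum()

# Consistency ratio: CR < 0.1 means the pairwise judgments are
# consistent enough (random index RI = 0.58 for n = 3)
lam = float(np.mean((A @ weights) / weights))
cr = (lam - n) / (n - 1) / 0.58
```

Running the same computation per stakeholder group, then comparing the resulting weight vectors, yields the group-wise priority differences the abstract reports.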

A Study on Improvement Plans for Local Safety Assessment in Korea (국내 지역안전도 평가의 개선방안 연구)

  • Kim, Yong-Moon
    • Journal of Korean Society of Disaster and Security / v.14 no.4 / pp.69-80 / 2021
  • This study sought to suggest improvement measures by identifying problems and matters requiring improvement in the annual local safety assessment system. The introduction briefly describes the structure and contents of the study and the local safety assessment method newly applied by the Ministry of Public Administration and Security in 2020, and explains how the safety level finally assigned to each local government is utilized. The paper then summarizes the views of previous researchers on local safety. In addition, problems were identified in the composition of the local safety indicators, the method of calculating the index, and the application of the current indicators. Next, the problems of specific assessment indicators were analyzed and solutions presented. First, in the disaster risk factor field, the "number of semi-basement households" in the 「Social Vulnerability Index」 is replaced with the "number of households receiving basic livelihood support". In addition, the "vinyl house area" indicator is instead evaluated using data on the number of households living in vinyl houses, the number of container households, and the number of households in jjok-bang villages. Second, in the management and evaluation of areas habitually subject to drought disasters, Counties, Cities, and Districts with a water supply rate of 95% or higher are treated as "not applicable", because drought disasters rarely occur in metropolitan areas and urbanized local governments. Third, the activities of safety sheriffs, safety monitor volunteers, and disaster safety silver monitoring groups, along with the local autonomous disaster prevention foundation, are added to the 「Regional Autonomous Prevention Foundation Activation」 indicator in the disaster prevention and response field. However, since the name of the local autonomous disaster prevention organization may differ by local government, it would be appropriate to evaluate the combined activities of any autonomous organization organized and active for disaster prevention. Fourth, from the Scorecard items used as a safe-city evaluation tool by the United Nations Office for Disaster Risk Reduction (UNDRR), the item "preservation of natural buffers to strengthen the protection functions provided by natural ecosystems", which is closely related to natural disasters, is borrowed. The Scorecard is an assessment index focused on improving the disaster resilience of local governments, developed as part of the UNDRR campaign "Creating cities resilient to climate crises and disasters". Finally, since the terms "local safety level" and "local safety index" are confusingly similar, the former should be renamed "natural disaster safety level" or "natural calamity safety level", so that the general public can distinguish the two.

Video Analysis System for Action and Emotion Detection by Object with Hierarchical Clustering based Re-ID (계층적 군집화 기반 Re-ID를 활용한 객체별 행동 및 표정 검출용 영상 분석 시스템)

  • Lee, Sang-Hyun;Yang, Seong-Hun;Oh, Seung-Jin;Kang, Jinbeom
    • Journal of Intelligence and Information Systems / v.28 no.1 / pp.89-106 / 2022
  • Recently, the amount of video data collected from smartphones, CCTVs, black boxes, and high-definition cameras has increased rapidly, and with it the requirements for analysis and utilization. Due to the lack of skilled manpower to analyze videos in many industries, machine learning and artificial intelligence are actively used to assist human analysts, and demand for computer vision technologies such as object detection and tracking, action detection, emotion detection, and re-identification (Re-ID) has also grown rapidly. However, object detection and tracking suffers from many conditions that degrade performance, such as occlusion and the re-appearance of an object after it leaves the recording location. Accordingly, action and emotion detection models built on object detection and tracking also have difficulty extracting data for each object. In addition, deep learning architectures consisting of multiple models suffer performance degradation from bottlenecks and lack of optimization. In this study, we propose a video analysis system consisting of a YOLOv5-based DeepSORT object tracking model, a SlowFast-based action recognition model, a Torchreid-based Re-ID model, and AWS Rekognition, an emotion recognition service. The proposed system uses single-linkage hierarchical clustering for Re-ID and several processing methods that maximize hardware throughput. It achieves higher accuracy than a re-identification model using simple metrics, offers near real-time processing performance, and prevents tracking failures due to object departure and re-appearance, occlusion, and similar conditions. By continuously linking the action and facial emotion detection results of each object to the same identity, videos can be analyzed efficiently. The re-identification model extracts a feature vector from the bounding box of each object image detected by the tracking model in each frame, and applies single-linkage hierarchical clustering against the feature vectors from past frames to identify objects whose tracks were lost. Through this process, an object can be re-tracked after re-appearance or occlusion, so that the action and facial emotion detection results of a newly recognized object can be linked to those of the object that appeared in the past. To improve processing performance, we introduce a per-object Bounding Box Queue and a Feature Queue method that reduce RAM requirements while maximizing GPU throughput, as well as an IoF (Intersection over Face) algorithm that links facial emotions recognized through AWS Rekognition with object tracking information. The academic significance of this study is that, through these processing techniques, the two-stage re-identification model achieves real-time performance even in the high-cost setting of simultaneous action and facial emotion detection, without sacrificing accuracy by falling back on simple metrics. The practical implication is that industrial fields that require action and facial emotion detection, but face many difficulties due to tracking failures, can analyze videos effectively with the proposed system. With its high re-tracking accuracy and processing performance, it can be used in fields such as intelligent monitoring, observation services, and behavioral or psychological analysis services, where integrating tracking information with extracted metadata creates great industrial and business value. In the future, to measure tracking performance more precisely, experiments using the MOT Challenge dataset, widely used in international competitions, are needed. We will also investigate the cases the IoF algorithm cannot handle in order to develop a complementary algorithm, and plan to apply the model to datasets from various fields related to intelligent video analysis.
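The single-linkage matching rule at the core of the Re-ID step can be sketched as follows. The cosine metric, the 0.3 threshold, and the toy two-dimensional feature vectors are illustrative assumptions; the system itself uses Torchreid embeddings:

```python
import numpy as np

def single_linkage_match(new_feat, track_features, threshold=0.3):
    """Assign a detection to a past track by the single-linkage rule:
    the candidate track is the one whose *closest* stored feature is
    nearest in cosine distance; a match requires distance <= threshold."""
    new = np.asarray(new_feat, dtype=float)
    best_id, best_dist = None, np.inf
    for track_id, feats in track_features.items():
        feats = np.asarray(feats, dtype=float)
        sims = feats @ new / (np.linalg.norm(feats, axis=1) * np.linalg.norm(new))
        dist = float(np.min(1.0 - sims))  # single linkage: nearest member
        if dist < best_dist:
            best_id, best_dist = track_id, dist
    return best_id if best_dist <= threshold else None

# Toy feature bank accumulated from past frames for two track ids
bank = {1: [[1.0, 0.0], [0.9, 0.1]], 2: [[0.0, 1.0]]}
matched = single_linkage_match([0.95, 0.05], bank)
```

When no stored feature falls within the threshold, the detection starts a new identity; otherwise its action and emotion results are appended to the matched track's history.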

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • A data center is a physical facility for accommodating computer systems and related components, and an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failures. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, causing enormous damage. IT facilities in particular fail irregularly because of their interdependence, making causes difficult to determine. Previous studies predicting failures in data centers treated each server as a single, isolated state, without assuming that devices interact. In this study, therefore, data center failures were classified into failures occurring inside a server (Outage A) and failures occurring outside it (Outage B), and the analysis focused on complex failures occurring within servers. Failures external to servers include power, cooling, and user errors; since these can be prevented in the early stages of data center construction, various solutions already exist. The cause of failures within servers, on the other hand, is difficult to determine, and adequate prevention has not yet been achieved, largely because server failures do not occur in isolation: one server's failure may trigger, or be triggered by, failures in other servers. In other words, while existing studies analyzed failures under the assumption of a single server with no inter-server effects, this study assumes that failures propagate between servers. To define complex failure situations in the data center, failure history data for each piece of equipment were used. Four major failure types were considered: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures for each device were sorted in chronological order, and when failures occurred in two pieces of equipment within 5 minutes of each other, they were defined as occurring simultaneously. After constructing sequences of devices that failed at the same time, five devices that frequently failed simultaneously within those sequences were selected, and the cases in which the selected devices failed together were confirmed through visualization. Since the server resource information collected for failure analysis is a time series with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that predicts the next state from previous states. In addition, unlike the single-server case, a Hierarchical Attention Network model structure was used, in consideration of the fact that each server's degree of involvement in a complex failure differs; this method increases prediction accuracy by weighting servers according to their impact on the failure. The study began by defining failure types and selecting analysis targets. In the first experiment, the same collected data were analyzed under both a single-server assumption and a multiple-server assumption, and the results were compared. The second experiment improved prediction accuracy for complex failures by optimizing a threshold for each server. In the first experiment, under the single-server assumption, three of the five servers were predicted not to have failed even though failures actually occurred, whereas under the multiple-server assumption all five servers were correctly predicted to have failed, supporting the hypothesis that there are effects between servers. Overall, prediction performance was superior when multiple servers were assumed. In particular, applying the Hierarchical Attention Network, which assumes the effect of each server differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study shows that failures whose causes are difficult to determine can be predicted from historical data, and presents a model for predicting failures occurring on servers in data centers. The results are expected to help prevent failures in advance.
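The 5-minute simultaneity rule used to define complex failures can be sketched directly. The log records below are toy stand-ins for the equipment failure history:

```python
from datetime import datetime, timedelta

# Toy failure log: (timestamp, equipment id) -- illustrative records
log = [
    ("2020-01-01 00:00:10", "server-A"),
    ("2020-01-01 00:03:00", "server-B"),
    ("2020-01-01 00:20:00", "server-C"),
    ("2020-01-01 00:22:00", "server-A"),
]

def simultaneous_groups(log, window_minutes=5):
    """Group failures into 'simultaneous' events: a failure joins the
    current group if it occurs within the window of the previous one,
    matching the 5-minute rule used to define complex failures."""
    events = sorted(
        (datetime.strptime(t, "%Y-%m-%d %H:%M:%S"), eq) for t, eq in log
    )
    groups, current = [], [events[0]]
    for ts, eq in events[1:]:
        if ts - current[-1][0] <= timedelta(minutes=window_minutes):
            current.append((ts, eq))
        else:
            groups.append([e for _, e in current])
            current = [(ts, eq)]
    groups.append([e for _, e in current])
    return groups

groups = simultaneous_groups(log)
```

The resulting groups become the sequences from which frequently co-failing devices are selected for the multiple-server experiments.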