• Title/Summary/Keyword: maintenance performance (유지관리 성능)


Index-based Searching on Timestamped Event Sequences (타임스탬프를 갖는 이벤트 시퀀스의 인덱스 기반 검색)

  • 박상현;원정임;윤지희;김상욱
    • Journal of KIISE:Databases / v.31 no.5 / pp.468-478 / 2004
  • It is essential in various application areas of data mining and bioinformatics to effectively retrieve the occurrences of interesting patterns from sequence databases. For example, consider a network event management system that records the types and timestamp values of events occurring in a specific network component (e.g. a router). A typical query for finding temporal causal relationships among the network events is as follows: 'Find all occurrences of CiscoDCDLinkUp that are followed by MLMStatusUP and subsequently followed by TCPConnectionClose, under the constraint that the interval between the first two events is not larger than 20 seconds and the interval between the first and third events is not larger than 40 seconds.' This paper proposes an indexing method that enables such queries to be answered efficiently. Unlike previous methods that rely on inefficient sequential scans or on data structures not easily supported by DBMSs, the proposed method uses a multi-dimensional spatial index, which is proven to be efficient both in storage and search, to find the answers quickly without false dismissals. Given a sliding window W, the input to the multi-dimensional spatial index is an n-dimensional vector whose i-th element is the interval between the first event of W and the first occurrence of the event type Ei in W. Here, n is the number of event types that can occur in the system of interest. The 'curse of dimensionality' may arise when n is large, so we use dimension selection or event type grouping to avoid this problem. The experimental results reveal that the proposed technique can be a few orders of magnitude faster than the sequential scan and ISO-Depth index methods.
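The window-to-vector mapping is simple enough to sketch. The Python fragment below is a minimal illustration of the idea, not the paper's implementation: it builds the n-dimensional vector for a window and checks the example query's interval constraints as a range predicate. The event names come from the query above; a real system would store the vectors in a spatial index such as an R-tree rather than evaluating the predicate directly.

    # Event types of interest; n = len(EVENT_TYPES) is the vector dimension.
    EVENT_TYPES = ["CiscoDCDLinkUp", "MLMStatusUP", "TCPConnectionClose"]

    def window_vector(window):
        """Map a window (list of (timestamp, event_type), sorted by time) to a
        vector: element i is the offset of the first occurrence of
        EVENT_TYPES[i] from the window's first event (None if absent)."""
        t0 = window[0][0]
        vec = [None] * len(EVENT_TYPES)
        for ts, ev in window:
            if ev not in EVENT_TYPES:
                continue
            i = EVENT_TYPES.index(ev)
            if vec[i] is None:
                vec[i] = ts - t0
        return vec

    def matches(vec):
        """Range predicate for the example query: CiscoDCDLinkUp at offset 0,
        MLMStatusUP within 20 s, then TCPConnectionClose within 40 s."""
        up, mlm, close = vec
        return (up == 0 and mlm is not None and close is not None
                and 0 < mlm <= 20 and mlm < close <= 40)

    events = [(0, "CiscoDCDLinkUp"), (12, "MLMStatusUP"), (35, "TCPConnectionClose")]
    print(matches(window_vector(events)))  # True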

Design of a Crowd-Sourced Fingerprint Mapping and Localization System (군중-제공 신호지도 작성 및 위치 추적 시스템의 설계)

  • Choi, Eun-Mi;Kim, In-Cheol
    • KIPS Transactions on Software and Data Engineering / v.2 no.9 / pp.595-602 / 2013
  • WiFi fingerprinting is well known as an effective localization technique for indoor environments. However, this technique requires a large amount of pre-built fingerprint maps covering the entire space. Moreover, due to environmental changes, these maps have to be rebuilt or updated periodically by experts. As a way to avoid this problem, crowd-sourced fingerprint mapping has attracted much interest from researchers. This approach lets many volunteer users share the WiFi fingerprints they collect in a specific environment, so crowd-sourced fingerprinting can keep fingerprint maps automatically up to date. In most previous systems, however, individual users were asked to enter their positions manually to build their local fingerprint maps. Moreover, those systems have no principled mechanism for keeping fingerprint maps clean by detecting and filtering out erroneous fingerprints collected from multiple users. In this paper, we present the design of a crowd-sourced fingerprint mapping and localization (CMAL) system. The proposed system can not only automatically build and/or update WiFi fingerprint maps from fingerprint collections provided by multiple smartphone users, but also simultaneously track their positions using the up-to-date maps. The CMAL system consists of multiple clients running on individual smartphones to collect fingerprints and a central server that maintains a database of fingerprint maps. Each client contains a particle filter-based WiFi SLAM engine, tracking the smartphone user's position and building a local fingerprint map. The server adopts a Gaussian interpolation-based error filtering algorithm to maintain the integrity of the fingerprint maps. Through various experiments, we show the high performance of our system.
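The abstract does not spell out the server's Gaussian interpolation-based error filter, but its core idea can be sketched: estimate each fingerprint's expected RSSI from its neighbours with Gaussian distance weights, and drop samples whose residual is too large. A minimal leave-one-out version in Python; sigma and the residual threshold are assumed parameters, not values from the paper.

    import numpy as np

    def gaussian_interpolate(pos, positions, rssi, sigma=2.0):
        """Expected RSSI at `pos` as a Gaussian-weighted average of nearby
        fingerprints (positions in metres, rssi in dBm)."""
        d2 = np.sum((positions - pos) ** 2, axis=1)
        w = np.exp(-d2 / (2 * sigma ** 2))
        return np.sum(w * rssi) / np.sum(w)

    def filter_fingerprints(positions, rssi, max_residual=15.0, sigma=2.0):
        """Keep fingerprints whose RSSI is within `max_residual` dBm of the
        value interpolated from the other samples (leave-one-out)."""
        keep = []
        for i in range(len(rssi)):
            mask = np.arange(len(rssi)) != i
            est = gaussian_interpolate(positions[i], positions[mask], rssi[mask], sigma)
            if abs(rssi[i] - est) <= max_residual:
                keep.append(i)
        return keep

    positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    rssi = np.array([-60.0, -62.0, -61.0, -30.0])   # last sample is an outlier
    print(filter_fingerprints(positions, rssi))      # -> [0, 1, 2]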

Studies on the Improvement of the Fishing Efficiency of Purse Seine in the Sea Area of Cheju Island -The Changes of Seine Volume and Tension in the Purseline During Pursing- (제주도 주변해역 선망의 어획성능 향상에 관한 연구 -짐줄 체결 중 선망의 용적과 짐줄의 장력 변화 -)

  • 김석종
    • Journal of the Korean Society of Fisheries and Ocean Technology / v.35 no.2 / pp.93-101 / 1999
  • A simple experimental method was used in an attempt to improve the fishing ability of purse seines in the sea area of Cheju Island by measuring the changes of seine volume and tension in the purseline during pursing. Experiments were carried out on six types of simplified, reduced-scale model seines made of knotless netting. The nettings were woven with leg lengths of 4.3, 5.0, 5.5, 6.0, 6.6 and 7.7 mm from polyester 28 tex, two-thread, two-ply twine, and the seines were named I, II, III, IV, V and VI, respectively. The dimensions of the model seines were 450 cm for the corkline and 85 cm for the seine depth; each seine was rigged with 160 g of floats on the floatline and 50 g (underwater weight) of lead on the leadline. The model purse seines were built at a scale of 1/200 of the full-scale seine, a 120-ton purse seine used in the near sea of Cheju Island. Design and testing of the model purse seines were based on Tauti's law. Measurements were made in the observation channel of a flume tank under static conditions, with shooting and pursing equipment set up. The motion of the purse seine during pursing was recorded by two video cameras placed at the top and front of the model seine. Coordinates for the seine volume were read with a video digitization system, and the purseline tension was taken from disk data. Analyses were performed on the changes of seine volume and tension in the purseline during pursing. The results obtained were as follows: 1. The seine volume during pursing was largest for seine VI, with the smallest d/l, followed by seines V, IV, III, II and I, and the tension in the purseline was small. 2. The seine volume during pursing can be expressed by the following equation: $CV_t = 1 - \exp[\{2.79(d/l) + 0.35\}t - 33.37(d/l) + 0.57]$, where $CV_t$ is the volume ratio, d is the twine diameter, l is the leg length, and t is the pursing time (sec). 3. The tension in the purseline during pursing can be expressed by the following equation: $T = 1 - \exp\{0.57t + 13.36(d/l) + 2.97\}$, where T is the tension (kg) in the purseline during pursing.


Quality Assessment and Comparison of Several Radioimmunoassay Kits and Chemiluminescence Immunoassay Methods for Evaluating Serum Estradiol (혈중 Estradiol 농도 측정을 위한 여러 방사면역측정 검사키트 및 화학면역발광 검사법의 성능평가 및 상호비교)

  • Choi, Sung Hee;Noh, Gyeong Woon;Kim, Jin Eui;Song, Yoo Sung;Paeng, Jin Chul;Kang, Keon Wook;Lee, Dong Soo
    • The Korean Journal of Nuclear Medicine Technology / v.19 no.1 / pp.72-80 / 2015
  • Purpose: Serum estradiol ($E_2$) measurement is requested for evaluating menstrual cycles, ovulation induction, infertility, and menopause. $E_2$ is measured using several methods and kits, including radioimmunoassay (RIA) and chemiluminescence immunoassay (CLIA). The purpose of this study was to evaluate the quality of these methods and to compare them with each other. Materials and Methods: Seven radioimmunoassay kits and two CLIA methods were included in the analysis. Using standard samples and patient samples, intra-assay precision, inter-assay precision, correlation with the other methods, sensitivity, and recovery rate were evaluated. Results: For all tested kits and methods, coefficients of variance (CVs) of the intra-assay precision test were 10.9~13.6% in low-level samples and less than 10% in medium- and high-level samples. CVs of the inter-assay precision test were 10.8~12.3% in low-level samples and less than 10% in medium- and high-level samples for all tested kits and methods. Recovery rates were $92.7{\pm}12.4%$ for SIEMENS, $101.4{\pm}18.4%$ for DIAsource, $95.1{\pm}11.5%$ for AMP, $108.4{\pm}18.5%$ for BECKMAN COULTER, $104.2{\pm}13.5%$ for BECKMAN COULTER Ultra Sensitive, $101.3{\pm}11.6%$ for CIS Bio, and $93.1{\pm}13.2%$ for MP kits. Sensitivity was 7.5, 6.2, 5.7, 6.2, 5.3, 4.5, and 5.5 pg/mL for the SIEMENS, DIAsource, AMP, BECKMAN COULTER, BECKMAN COULTER Ultra Sensitive, CIS Bio, and MP kits, respectively. The measurement by the MP kit was slightly higher than those by the other kits in low-level samples, and the measurement by E170 was slightly higher than those of the other kits in medium- and high-level samples. In the measurement of the standard sample for external quality control, the SIEMENS kit produced relatively lower values, whereas the E170, Architect, and MP kits produced relatively higher values compared with the other kits. Conclusion: All tested kits for $E_2$ measurement have satisfactory performance for clinical use. However, correlation between kits should be considered when test kits are to be changed, because some pairs of kits do not correlate well with each other.
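For reference, the two figures of merit used throughout the Results follow standard definitions and can be computed as below. The replicate values and spike amounts in this sketch are illustrative only, not taken from the study.

    import statistics

    def coefficient_of_variance(values):
        """Intra-/inter-assay precision: CV(%) = 100 * SD / mean."""
        return 100 * statistics.stdev(values) / statistics.mean(values)

    def recovery_rate(measured, endogenous, added):
        """Recovery(%) = 100 * (measured - endogenous) / added, where `added`
        is the known amount of analyte spiked into the sample."""
        return 100 * (measured - endogenous) / added

    replicates = [48.2, 51.0, 49.5, 50.3, 47.8]   # pg/mL, one sample assayed 5x
    print(f"CV = {coefficient_of_variance(replicates):.1f}%")
    print(f"Recovery = {recovery_rate(152.0, 50.0, 100.0):.1f}%")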


An Analysis of Environmental Factors of Abandoned Paddy Wetlands as References and Changes in Land Cover Types in the Influence Area (묵논습지 환경요인 및 생태영향권 내 토지피복유형 변화 분석)

  • Park, MiOk;Kwon, SoonHyo;Back, SeungJun;Seo, JooYoung;Koo, BonHak
    • Journal of Wetlands Research / v.24 no.4 / pp.331-344 / 2022
  • This study analyzed the characteristics of the soil and hydrological environment of abandoned paddy wetlands, analyzed their environmental factors, and examined the changes in land cover type in the surrounding ecological impact area. The ecological environment characteristics of the reference abandoned paddy wetlands were investigated through literature research, the environmental spatial information service, and preliminary exploration of the sites, and basic data for the restoration of abandoned paddy wetlands were provided by examining the changes in land cover type in the ecological impact area over 40 years. Through this study, it will be possible to manage the rapidly increasing amount of abandoned farmland being converted into wetlands so that it can perform functions equivalent to or greater than those of natural wetlands. In particular, since the land cover changes provided evidence that abandoned paddy wetlands can spread into the surrounding ecological impact areas, the study sites are highly likely to be reference wetlands, and if the topography, soil, water circulation system, and carbon reduction performance are analyzed carefully, it will be possible to standardize the development process. Among the land cover types in the ecological impact area, forest was the most widely distributed, but it generally decreased rapidly in the last 10-20 years, changing from coniferous forest to broad-leaved forest, mixed forest, or grassland. The sites have not yet fully converted to wetland and have maintained the form of barren land or grassland; as can be seen in natural wetlands more than 30 years after abandonment, the succession is expected to proceed gradually toward wetlands that are structurally and functionally similar to natural wetlands.
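A multi-decade land cover change of the kind examined here is conveniently summarized as a cross-tabulation of class labels at two epochs. A small illustrative sketch with pandas, using hypothetical per-cell class labels rather than the study's data:

    import pandas as pd

    # Hypothetical land cover labels for five cells at two epochs; the
    # study's own 40-year series would replace these.
    cover_1980 = ["coniferous", "coniferous", "paddy", "paddy", "grassland"]
    cover_2020 = ["broadleaf", "mixed", "wetland", "grassland", "wetland"]

    transition = pd.crosstab(pd.Series(cover_1980, name="1980"),
                             pd.Series(cover_2020, name="2020"))
    print(transition)  # rows: past class, columns: present class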

A study on Broad Quantification Calibration to various isotopes for Quantitative Analysis and its SUVs assessment in SPECT/CT (SPECT/CT 장비에서 정량분석을 위한 핵종 별 Broad Quantification Calibration 시행 및 SUV 평가를 위한 팬텀 실험에 관한 연구)

  • Ko, Hyun Soo;Choi, Jae Min;Park, Soon Ki
    • The Korean Journal of Nuclear Medicine Technology / v.26 no.2 / pp.20-31 / 2022
  • Purpose: Broad Quantification Calibration (B.Q.C) is the procedure for quantitative analysis to measure the Standard Uptake Value (SUV) in a SPECT/CT scanner. B.Q.C was performed with Tc-99m, I-123, I-131, and Lu-177, and we then acquired phantom images to check whether the SUVs were measured accurately. Because there is no standard for an SUV test in SPECT, we used the ACR Esser PET phantom as an alternative. The purpose of this study was to lay the groundwork for quantitative analysis with various isotopes in SPECT/CT scanners. Materials and Methods: A Siemens SPECT/CT Symbia Intevo 16 and an Intevo Bold were used for this study. The B.Q.C procedure has two steps: first, point source sensitivity calibration, and second, volume sensitivity calibration to calculate the Volume Sensitivity Factor (VSF) using a cylinder phantom. To verify the SUV, we acquired images of the ACR Esser PET phantom and measured the SUVmean on the background and the SUVmax on the hot vials (25, 16, 12, 8 mm). SPSS was used to analyze the difference in SUV between the Intevo 16 and Intevo Bold by the Mann-Whitney test. Results: The sensitivity (CPS/MBq) of Detectors 1 and 2 and the VSF for the four isotopes (Intevo 16 D1 sensitivity/D2 sensitivity/VSF, then Intevo Bold) were 87.7/88.6/1.08 and 91.9/91.2/1.07 for Tc-99m, 79.9/81.9/0.98 and 89.4/89.4/0.98 for I-123, 124.8/128.9/0.69 and 130.9/126.8/0.71 for I-131, and 8.7/8.9/1.02 and 9.1/8.9/1.00 for Lu-177, respectively. The results of the SUV test with the ACR Esser PET phantom (Intevo 16 BKG SUVmean/25 mm SUVmax/16 mm/12 mm/8 mm, then Intevo Bold) were 1.03/2.95/2.41/1.96/1.84 and 1.03/2.91/2.38/1.87/1.82 for Tc-99m, 0.97/2.91/2.33/1.68/1.45 and 1.00/2.80/2.23/1.57/1.32 for I-123, 0.96/1.61/1.13/1.02/0.69 and 0.94/1.54/1.08/0.98/0.66 for I-131, and 1.00/6.34/4.67/2.96/2.28 and 1.01/6.21/4.49/2.86/2.21 for Lu-177. There was no statistically significant difference in SUV between the Intevo 16 and Intevo Bold (p>0.05). Conclusion: Only qualitative analysis was possible with gamma cameras in the past. A SPECT/CT scanner, on the other hand, provides not only anatomic localization and 3D tomography but also quantitative analysis with SUV measurements. We laid the groundwork for quantitative analysis with various isotopes (Tc-99m, I-123, I-131, Lu-177) by carrying out B.Q.C, and we verified the SUV measurement with the ACR phantom. Periodic calibration is needed to maintain the precision of quantitative evaluation. As a result, we can provide quantitative analysis on follow-up SPECT/CT exams and evaluate the therapeutic response in theranostics.
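The quantities involved in the verification step follow from standard definitions. The sketch below shows how a volume sensitivity factor and an SUV are computed; the VSF definition is assumed from the workflow described in the abstract, and the numbers are illustrative.

    def volume_sensitivity_factor(true_conc_bq_ml, measured_conc_bq_ml):
        """VSF from the cylinder-phantom step: ratio of the known activity
        concentration to the concentration the scanner reports (definition
        assumed from the two-step procedure described above)."""
        return true_conc_bq_ml / measured_conc_bq_ml

    def suv(voxel_conc_bq_ml, injected_activity_bq, body_weight_g):
        """Standard Uptake Value: tissue activity concentration normalised by
        injected activity per gram of body weight (tissue density of
        1 g/mL assumed)."""
        return voxel_conc_bq_ml / (injected_activity_bq / body_weight_g)

    # Example: 5 kBq/mL in a hot vial, 100 MBq injected, 70 kg "patient"
    print(suv(5_000, 100_000_000, 70_000))  # -> 3.5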

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. Since Black Monday in 1987, however, stock market prices have become very complex and noisy, and recent studies have begun to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial and radial. We analyzed the suggested models with the KOSPI 200 Index, which comprises 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1487 observations; 1187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH or GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in KOSPI 200 Index return volatility forecasting; the polynomial kernel function shows exceptionally lower forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility results. The IVTS entry rules are as follows: if tomorrow's forecasted volatility increases, buy volatility today; if it decreases, sell volatility today; if the forecasted direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because we cannot trade historical volatility values themselves, but our simulation results are meaningful since the Korea Exchange introduced a volatility futures contract in November 2014. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH models in the testing period. The profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH shows a +526.4% return. MLE-based asymmetric E-GARCH shows a -72% return and SVR-based asymmetric E-GARCH shows a +245.6% return. MLE-based asymmetric GJR-GARCH shows a -98.7% return and SVR-based asymmetric GJR-GARCH shows a +126.3% return. The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, and that of the MLE-based IVTS is +150.2%. The SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be examined in search of better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage costs. The IVTS trading performance is unrealistic since we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential for real trading as well as for asset pricing models. Further studies on other machine learning-based GARCH models can give better information to stock market investors.
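The SVR-based estimation replaces the MLE step with a regression on lagged volatility information. Below is a minimal sketch of a GARCH(1,1)-style SVR fit with scikit-learn, using squared returns as the noisy volatility proxy and synthetic data in place of the KOSPI 200 series; the authors' exact feature construction is not given in the abstract, so the features here are assumptions.

    import numpy as np
    from sklearn.svm import SVR

    def svr_garch_fit(returns, kernel="linear"):
        """Fit a GARCH(1,1)-style recursion with SVR: regress the squared
        return at t on (r_{t-1}^2, variance proxy at t-1)."""
        r2 = returns ** 2
        # 5-day rolling mean of squared returns as a simple sigma^2 proxy
        proxy = np.convolve(r2, np.ones(5) / 5, mode="valid")  # length n-4
        X = np.column_stack([r2[4:-1], proxy[:-1]])            # features at t-1
        y = r2[5:]                                             # target at t
        model = SVR(kernel=kernel, C=1.0, epsilon=1e-5).fit(X, y)
        return model, X, y

    rng = np.random.default_rng(0)
    returns = rng.normal(0, 0.01, 1487)   # stand-in for KOSPI 200 daily returns
    model, X, y = svr_garch_fit(returns)
    print(model.predict(X[-1:]))          # one-step-ahead variance forecast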

Development of a complex failure prediction system using Hierarchical Attention Network (Hierarchical Attention Network를 이용한 복합 장애 발생 예측 시스템 개발)

  • Park, Youngchan;An, Sangjun;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems / v.26 no.4 / pp.127-148 / 2020
  • A data center is a physical facility for housing computer systems and related components, and it is an essential foundation for next-generation core industries such as big data, smart factories, wearables, and smart homes. In particular, with the growth of cloud computing, a proportional expansion of data center infrastructure is inevitable. Monitoring the health of data center facilities is a way to maintain and manage the system and prevent failures. If a failure occurs in some element of the facility, it may affect not only the relevant equipment but also other connected equipment, and may cause enormous damage. In particular, failures in IT facilities occur irregularly because of interdependence, and their causes are difficult to identify. Previous studies on predicting failures in data centers treated each server as a single, isolated state, without assuming interactions among devices. In this study, therefore, data center failures were classified into failures occurring inside the server (Outage A) and failures occurring outside the server (Outage B), and we focused on analyzing complex failures occurring within servers. Server-external failures include power, cooling, and user errors; since such failures can be prevented in the early stages of data center facility construction, various solutions are being developed. On the other hand, the cause of failures occurring inside the server is difficult to determine, and adequate prevention has not yet been achieved. In particular, server failures do not occur in isolation: a failure in one server may cause failures in other servers or be triggered by them. In other words, while existing studies analyzed failures on the assumption of a single server that did not affect other servers, this study assumes that failures have effects between servers. In order to define the complex failure situation in the data center, failure history data for each piece of equipment in the data center was used. Four major failures are considered in this study: Network Node Down, Server Down, Windows Activation Services Down, and Database Management System Service Down. The failures occurring on each device are sorted in chronological order, and when a failure occurs in one piece of equipment and another failure occurs within 5 minutes, the two are defined as occurring simultaneously. After constructing sequences of the devices that failed at the same time, five devices that frequently failed simultaneously within the constructed sequences were selected, and the cases where the selected devices failed at the same time were confirmed through visualization. Since the server resource information collected for failure analysis is time-series data with temporal flow, we used Long Short-Term Memory (LSTM), a deep learning algorithm that can predict the next state from the previous state. In addition, unlike the single-server case, the Hierarchical Attention Network deep learning model structure was used in consideration of the fact that the level of multiple failures differs for each server. This algorithm increases prediction accuracy by giving more weight to a server as its impact on the failure increases. The study began by defining the types of failure and selecting the analysis targets. In the first experiment, the same collected data was treated as a single-server state and as a multiple-server state, and the two were compared and analyzed. The second experiment improved the prediction accuracy in the complex-server case by optimizing the threshold of each server. In the first experiment, in the single-server case, three of the five servers were predicted to have no failure even though failures actually occurred; assuming multiple servers, all five servers were correctly predicted to have failed. This result supports the hypothesis that there is an effect between servers. The study confirmed that prediction performance was superior when multiple servers were assumed rather than a single server. In particular, applying the Hierarchical Attention Network algorithm, on the assumption that the effect of each server differs, improved the analysis, and applying a different threshold for each server further improved prediction accuracy. This study showed that failures whose causes are difficult to determine can be predicted from historical data, and it presents a model that can predict failures occurring on servers in data centers. It is expected that failures can be prevented in advance using the results of this study.
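The hierarchy described above, a sequence encoder per server plus an attention layer that weights servers by their contribution to a complex failure, can be sketched compactly in PyTorch. This is an illustrative reconstruction, not the authors' exact architecture; the layer sizes and input dimensions are assumed.

    import torch
    import torch.nn as nn

    class ServerAttentionNet(nn.Module):
        """Minimal sketch: one LSTM encodes each server's resource time
        series; an attention layer weights servers before the failure head."""
        def __init__(self, n_features, hidden=64):
            super().__init__()
            self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
            self.attn = nn.Linear(hidden, 1)   # score per server summary
            self.head = nn.Linear(hidden, 1)   # failure probability (logit)

        def forward(self, x):
            # x: (batch, n_servers, seq_len, n_features)
            b, s, t, f = x.shape
            _, (h, _) = self.encoder(x.reshape(b * s, t, f))
            h = h[-1].reshape(b, s, -1)              # (batch, servers, hidden)
            w = torch.softmax(self.attn(h), dim=1)   # per-server weights
            pooled = (w * h).sum(dim=1)              # weighted server summary
            return torch.sigmoid(self.head(pooled)).squeeze(-1)

    net = ServerAttentionNet(n_features=8)
    x = torch.randn(4, 5, 60, 8)   # 4 samples, 5 servers, 60 steps, 8 metrics
    print(net(x).shape)            # torch.Size([4])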