• Title/Summary/Keyword: 시스템 성능예측 (system performance prediction)

A Study on Efficient AI Model Drift Detection Methods for MLOps (MLOps를 위한 효율적인 AI 모델 드리프트 탐지방안 연구)

  • Ye-eun Lee;Tae-jin Lee
    • Journal of Internet Computing and Services
    • /
    • v.24 no.5
    • /
    • pp.17-27
    • /
    • 2023
  • Today, as AI (Artificial Intelligence) technology develops and becomes more practical, it is widely used in various real-life application fields. An AI model is trained on the statistical properties of its training data and then deployed to a system, but unexpected changes in rapidly shifting data degrade the model's performance. In the security field in particular, it has become important to detect drift signals from deployed models in order to respond to new and unknown attacks that are constantly being created, so the need for lifecycle management of the entire model is gradually emerging. In general, drift can be detected through changes in the model's accuracy and error rate (loss), but this approach has limitations in practice: it requires actual labels for the model's predictions, and the point at which drift actually occurs remains uncertain. Because the model's error rate is strongly influenced by external environmental factors, model selection and parameter settings, and new input data, it is difficult to pinpoint when real drift in the data occurs from that value alone. Therefore, this paper proposes a method to detect when actual drift occurs through an anomaly analysis technique based on XAI (eXplainable Artificial Intelligence). In tests on a classification model that detects DGA (Domain Generation Algorithm) domains, anomaly scores were extracted from the SHAP (SHapley Additive exPlanations) values of post-deployment data, and the results confirmed that efficient detection of the drift point is possible.
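
The abstract does not include implementation details, but the core idea of scoring post-deployment data by its SHAP profile can be sketched as follows. This is a minimal, hypothetical illustration (synthetic features standing in for DGA features, a random forest as the classifier, and a simple distance-to-centroid anomaly score), not the authors' method.

```python
# Minimal sketch (not the paper's code): score post-deployment samples by the
# distance of their SHAP vectors from the SHAP profile of the training data,
# and flag drift when the batch departs from the baseline range.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

def shap_matrix(explainer, X):
    """Return per-sample SHAP vectors for the positive class, across shap versions."""
    sv = explainer.shap_values(X)
    return sv[1] if isinstance(sv, list) else sv[..., 1]

def anomaly_scores(shap_new, shap_ref):
    """Euclidean distance of each new SHAP vector from the reference SHAP centroid."""
    centroid = shap_ref.mean(axis=0)
    return np.linalg.norm(shap_new - centroid, axis=1)

# --- toy data standing in for DGA features (hypothetical) ---
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 8))
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
X_drift = rng.normal(loc=1.5, size=(200, 8))        # shifted batch -> drift

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
explainer = shap.TreeExplainer(model)

ref = shap_matrix(explainer, X_train)
baseline = anomaly_scores(ref, ref)
threshold = np.percentile(baseline, 99)              # tolerance learned from training data

new = anomaly_scores(shap_matrix(explainer, X_drift), ref)
print(f"fraction of samples flagged as drifted: {np.mean(new > threshold):.2f}")
```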

A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye;Kim, Min-Seung;Lee, Chan-Ho;Choi, Jung-Hwan;Lee, Jeong-Hee;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.131-145
    • /
    • 2020
  • In line with the trend of industrial innovation, IoT technology is being applied in a variety of fields and, combined with big data, is emerging as a key element in the creation of new business models and the provision of user-friendly services. Data accumulated from Internet-of-Things (IoT) devices are widely used to build convenience-oriented smart systems, since they enable customized intelligent services through analysis of user environments and patterns. Recently, such technology has been applied to innovation in the public domain, for example in smart cities and smart transportation, such as addressing traffic and crime problems using CCTV. In particular, when planning underground services or building passenger-flow control systems to enhance the convenience of citizens and commuters in congested public transportation such as subways and urban railways, both the ease of securing real-time service data and the stability of security must be considered comprehensively. However, previous studies that utilize image data face limitations: object-detection performance degrades under privacy constraints and abnormal conditions. The IoT sensor data used in this study are free from privacy issues because they do not identify individuals, and can therefore be used effectively to build intelligent public services for unspecified users. We utilize IoT-based infrared sensor devices for an intelligent pedestrian tracking system in a metro service used by many people daily, with the temperature data measured by the sensors transmitted in real time. The experimental environment for real-time data collection was established at the equally spaced midpoints of a 4×4 grid on the ceiling of subway entrances where passenger movement is high, and the temperature change of objects entering and leaving each detection spot was measured. The measured data were preprocessed by setting reference values for the 16 areas and calculating, per unit of time, the difference between each area's temperature and its reference value; this maximizes the measured movement within the detection area. In addition, the data were scaled up by a factor of 10 to reflect temperature differences between areas more sensitively. For example, if the temperature collected from a sensor at a given time was 28.5℃, the value was changed to 285 for analysis. The data collected from the sensors thus have the characteristics of both time-series data and image data with 4×4 resolution. Reflecting these characteristics, we propose a hybrid algorithm, CNN-LSTM (Convolutional Neural Network-Long Short Term Memory), that combines a CNN, which performs well for image classification, with an LSTM, which is especially suitable for analyzing time-series data. In this study, the CNN-LSTM algorithm is used to predict the number of people passing through one of the 4×4 detection areas.
We verified the validity of the proposed model through a performance comparison with other artificial intelligence algorithms, namely Multi-Layer Perceptron (MLP), Long Short Term Memory (LSTM), and RNN-LSTM (Recurrent Neural Network-Long Short Term Memory). The experiments showed that the proposed CNN-LSTM hybrid model has the best predictive performance among them. By utilizing the proposed devices and models, various metro services free from personal-information concerns are expected, such as real-time monitoring of public transport facilities and congestion-based emergency response services. However, the data were collected from only one side of the entrances, and data collected over a short period were used for prediction, so verification in other environments remains a limitation. In the future, the reliability of the proposed model is expected to improve if experimental data are collected in a sufficient variety of environments or if training data are supplemented with measurements from other sensors.
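
A hedged sketch of the kind of CNN-LSTM described above is given below; the window length, layer widths, and training settings are illustrative assumptions rather than the authors' configuration, and the data here are random placeholders shaped like the preprocessed 4×4 temperature-difference frames.

```python
# Hedged sketch of a CNN-LSTM for 4x4 sensor frames (layer sizes, window length,
# and optimizer are illustrative assumptions, not the authors' configuration).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

TIMESTEPS = 10          # assumed window of consecutive 4x4 frames

def build_cnn_lstm():
    model = models.Sequential([
        layers.Input(shape=(TIMESTEPS, 4, 4, 1)),
        # CNN part: extract spatial features from each 4x4 temperature-difference frame
        layers.TimeDistributed(layers.Conv2D(16, (2, 2), activation="relu", padding="same")),
        layers.TimeDistributed(layers.Flatten()),
        # LSTM part: model the temporal pattern across consecutive frames
        layers.LSTM(32),
        layers.Dense(1),    # predicted passenger count for one detection area
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

# Toy data shaped like the preprocessed sensor stream: temperature differences from
# per-area reference values, scaled by 10 (e.g. 28.5C recorded as 285).
X = np.random.randn(256, TIMESTEPS, 4, 4, 1).astype("float32")
y = np.random.poisson(2.0, size=(256, 1)).astype("float32")

model = build_cnn_lstm()
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(X[:3], verbose=0).ravel())
```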

Simultaneous Optimization of a KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.1
    • /
    • pp.139-157
    • /
    • 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace from the original feature set, and the predictions from the ensemble members are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for them play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of the base classifiers in the ensemble. This study proposed a new ensemble method that improves the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve its prediction accuracy. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1,800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as the output variable. Of these, 24 financial ratios were selected using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for preventing overfitting. The prediction accuracy on the latter portion was used as the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model.
A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models. The Q-statistic values and average classification accuracies of base classifiers were investigated. The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
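
The paper does not list its genetic-algorithm settings, but the idea of jointly searching the k parameter and feature subset of each KNN base classifier can be sketched as below; the population size, operators, and synthetic dataset are all illustrative assumptions.

```python
# Hedged sketch (not the paper's implementation): a simple GA that jointly searches
# the k parameter and feature subset of each KNN base classifier in a random-subspace
# ensemble; fitness is accuracy on a held-out split, mimicking the setup described above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
N_BASE, N_FEAT, K_CHOICES = 10, 24, (1, 3, 5, 7, 9)   # assumed ensemble settings

X, y = make_classification(n_samples=600, n_features=N_FEAT, random_state=42)
X_tr, X_fit, y_tr, y_fit = train_test_split(X, y, test_size=0.3, random_state=42)

def random_individual():
    # one (k, feature mask) pair per base classifier
    return [(rng.choice(K_CHOICES), rng.random(N_FEAT) < 0.5) for _ in range(N_BASE)]

def ensemble_accuracy(ind):
    votes = np.zeros((len(y_fit), 2))
    for k, mask in ind:
        if mask.sum() == 0:
            continue
        clf = KNeighborsClassifier(n_neighbors=int(k)).fit(X_tr[:, mask], y_tr)
        pred = clf.predict(X_fit[:, mask])
        votes[np.arange(len(pred)), pred] += 1        # majority voting
    return np.mean(votes.argmax(axis=1) == y_fit)

def mutate(ind):
    child = []
    for k, mask in ind:
        if rng.random() < 0.2:
            k = rng.choice(K_CHOICES)                 # mutate k
        mask = mask.copy()
        flip = rng.random(N_FEAT) < 0.05              # mutate feature subset
        mask[flip] = ~mask[flip]
        child.append((k, mask))
    return child

pop = [random_individual() for _ in range(20)]
for gen in range(10):
    scored = sorted(pop, key=ensemble_accuracy, reverse=True)
    parents = scored[:5]                              # truncation selection
    pop = parents + [mutate(parents[rng.integers(5)]) for _ in range(15)]

best = max(pop, key=ensemble_accuracy)
print("held-out accuracy of best ensemble:", round(ensemble_accuracy(best), 3))
```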

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted and is showing remarkable results in various fields such as classification, summarization, and generation. Among the various text analysis fields, text classification is the most widely used technology in academia and industry. Text classification includes binary classification with one label from two classes, multi-class classification with one label from several classes, and multi-label classification with multiple labels from several classes. In particular, multi-label classification requires a different training method from binary and multi-class classification because each instance can have multiple labels. In addition, since the number of labels to be predicted grows with the number of labels and classes, prediction becomes more difficult and performance improvement is limited. To overcome these limitations, research on label embedding is being actively conducted, which (i) compresses the initially given high-dimensional label space into a low-dimensional latent label space, (ii) trains a model to predict the compressed labels, and (iii) restores the predicted labels to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only linear relationships between labels or compress the labels by random transformation, they have difficulty capturing non-linear relationships between labels and therefore cannot create a latent label space that sufficiently contains the information of the original label space. Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding. Label embedding using an autoencoder, a deep learning model that is effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers a large amount of information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space. This is related to the vanishing gradient problem that occurs during backpropagation. Skip connections were devised to solve this problem: by adding a layer's input to its output, gradients are preserved during backpropagation and efficient learning is possible even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies that use skip connections in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that well reflects the information of the high-dimensional label space. The proposed methodology was applied to actual paper keywords to derive the high-dimensional keyword label space and the low-dimensional latent label space.
Using these, we conducted an experiment that predicts the compressed keyword vector in the latent label space from the paper abstract and evaluates the multi-label classification by restoring the predicted keyword vector back to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators were far superior for multi-label classification based on the proposed methodology compared to traditional multi-label classification methods. This indicates that the low-dimensional latent label space derived through the proposed methodology reflected the information of the high-dimensional label space well, which ultimately improved the performance of the multi-label classification itself. In addition, the utility of the proposed methodology was examined by comparing its performance across domain characteristics and the number of dimensions of the latent label space.
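
A minimal sketch of an autoencoder with skip (residual) connections in both the encoder and decoder, along the lines described above, is shown below; the label count, layer widths, and latent dimensionality are assumed values, and the multi-hot label matrix is synthetic.

```python
# Hedged sketch of a label autoencoder with residual (skip) connections in both the
# encoder and decoder; dimensions and layer widths are illustrative assumptions.
import numpy as np
from tensorflow.keras import layers, Model

N_LABELS, LATENT_DIM, HIDDEN = 500, 32, 128   # assumed sizes

def residual_block(x, units):
    """Two dense layers whose output is added back to the block input (skip connection)."""
    h = layers.Dense(units, activation="relu")(x)
    h = layers.Dense(units, activation="relu")(h)
    return layers.Add()([x, h])

inp = layers.Input(shape=(N_LABELS,))
# encoder: project to hidden width, pass through a skip block, compress to the latent space
e = layers.Dense(HIDDEN, activation="relu")(inp)
e = residual_block(e, HIDDEN)
latent = layers.Dense(LATENT_DIM, activation="relu", name="latent")(e)
# decoder: mirror of the encoder, ending in a sigmoid over the multi-hot label vector
d = layers.Dense(HIDDEN, activation="relu")(latent)
d = residual_block(d, HIDDEN)
out = layers.Dense(N_LABELS, activation="sigmoid")(d)

autoencoder = Model(inp, out)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# toy multi-hot label matrix standing in for paper-keyword labels
Y = (np.random.rand(2000, N_LABELS) < 0.02).astype("float32")
autoencoder.fit(Y, Y, epochs=2, batch_size=64, verbose=0)

encoder = Model(inp, latent)                      # used to obtain the compressed label space
print(encoder.predict(Y[:2], verbose=0).shape)    # (2, LATENT_DIM)
```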

Adaptive RFID anti-collision scheme using collision information and m-bit identification (충돌 정보와 m-bit인식을 이용한 적응형 RFID 충돌 방지 기법)

  • Lee, Je-Yul;Shin, Jongmin;Yang, Dongmin
    • Journal of Internet Computing and Services
    • /
    • v.14 no.5
    • /
    • pp.1-10
    • /
    • 2013
  • RFID (Radio Frequency Identification) is a non-contact identification technology. A basic RFID system consists of a reader and a set of tags. RFID tags can be divided into active and passive tags. Active tags have their own power source and can perform their own operations, while passive tags are small and low-cost, which makes passive tags more suitable for the distribution industry. A reader processes the information received from tags, and an RFID system achieves fast identification of multiple tags using radio frequency. RFID systems have been applied in a variety of fields such as distribution, logistics, transportation, inventory management, access control, and finance. To encourage the introduction of RFID systems, several problems (price, size, power consumption, and security) should be resolved. In this paper, we propose an algorithm that significantly alleviates the collision problem caused by the simultaneous responses of multiple tags. Anti-collision schemes for RFID systems fall into three categories: probabilistic, deterministic, and hybrid. In this paper, we consider ALOHA-based protocols as probabilistic methods and tree-based protocols as deterministic ones. In ALOHA-based protocols, time is divided into multiple slots, and each tag randomly selects a slot in which to transmit its ID. However, because they are probabilistic, ALOHA-based protocols cannot guarantee that all tags are identified. In contrast, tree-based protocols guarantee that a reader identifies all tags within its transmission range. In tree-based protocols, a reader sends a query and tags respond with their IDs. When a reader sends a query and two or more tags respond, a collision occurs, and the reader then creates and sends a new query. Frequent collisions degrade identification performance, so collisions must be reduced efficiently to identify tags quickly. Each RFID tag has a 96-bit EPC (Electronic Product Code) ID, and tags from the same company or manufacturer have similar IDs sharing the same prefix. Unnecessary collisions therefore occur when identifying multiple tags with the Query Tree protocol, which increases the number of query-responses and idle time and thus significantly increases the identification time. To solve this problem, the Collision Tree protocol and the M-ary Query Tree protocol have been proposed. However, in the Collision Tree and Query Tree protocols, only one bit is identified per query-response, and when similar tag IDs exist, the M-ary Query Tree protocol generates unnecessary query-responses. In this paper, we propose the Adaptive M-ary Query Tree protocol, which improves identification performance using m-bit recognition, collision information of tag IDs, and a prediction technique. We compare the proposed scheme with other tree-based protocols under the same conditions and show that it outperforms them in terms of identification time and identification efficiency.
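
For readers unfamiliar with tree-based anti-collision, the following sketch simulates a plain binary Query Tree identification round and counts query-responses; it illustrates only the baseline mechanism, not the proposed Adaptive M-ary protocol with m-bit recognition and collision-information prediction.

```python
# Hedged sketch: a basic binary Query Tree identification round, to illustrate the
# query-response mechanism the paper builds on (the Adaptive M-ary variant is not
# reproduced here).
def query_tree_identify(tag_ids, id_len=8):
    """Identify all tag IDs (bit strings); return the identified list and query count."""
    identified, queries, stack = [], 0, [""]      # start with the empty prefix
    while stack:
        prefix = stack.pop()
        queries += 1
        matching = [t for t in tag_ids if t.startswith(prefix)]
        if len(matching) == 1:                    # exactly one response: tag identified
            identified.append(matching[0])
        elif len(matching) > 1:                   # collision: split the prefix by one bit
            if len(prefix) < id_len:
                stack.extend([prefix + "0", prefix + "1"])
            else:
                identified.extend(matching)       # identical IDs (should not happen)
        # len(matching) == 0 -> idle slot, nothing to do
    return identified, queries

# Tags from the same manufacturer share a prefix, which causes many extra queries.
tags = ["11010010", "11010011", "11010100", "11010111", "00101100"]
ids, n_queries = query_tree_identify(tags)
print(f"identified {len(ids)} tags with {n_queries} query-responses")
```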

Development of a Small Animal Positron Emission Tomography Using Dual-layer Phoswich Detector and Position Sensitive Photomultiplier Tube: Preliminary Results (두층 섬광결정과 위치민감형광전자증배관을 이용한 소동물 양전자방출단층촬영기 개발: 기초실험 결과)

  • Jeong, Myung-Hwan;Choi, Yong;Chung, Yong-Hyun;Song, Tae-Yong;Jung, Jin-Ho;Hong, Key-Jo;Min, Byung-Jun;Choe, Yearn-Seong;Lee, Kyung-Han;Kim, Byung-Tae
    • The Korean Journal of Nuclear Medicine
    • /
    • v.38 no.5
    • /
    • pp.338-343
    • /
    • 2004
  • Purpose: The purpose of this study was to develop a small animal PET using a dual-layer phoswich detector to minimize the parallax error that degrades spatial resolution at the outer part of the field-of-view (FOV). Materials and Methods: The simulation tool GATE (Geant4 Application for Tomographic Emission) was used to derive optimal parameters for the small PET, and the PET was developed using those parameters. Lutetium Oxyorthosilicate (LSO) and Lutetium-Yttrium Aluminate-Perovskite (LuYAP) were used to construct the dual-layer phoswich crystal. 8×8 arrays of LSO and LuYAP pixels, 2 mm × 2 mm × 8 mm in size, were coupled to a 64-channel position-sensitive photomultiplier tube. The system consisted of 16 detector modules arranged in a single ring configuration (ring inner diameter 10 cm, FOV of 8 cm). The data from the phoswich detector modules were fed into an ADC board in the data acquisition and preprocessing PC via sockets, a decoder block, an FPGA board, and a bus board. These were linked to the master PC, which stored the event data on a hard disk. Results: In a preliminary test of the system, reconstructed images were obtained using a pair of detectors, and sensitivity and spatial resolution were measured. Spatial resolution was 2.3 mm FWHM and sensitivity was 10.9 cps/μCi at the center of the FOV. Conclusion: The radioactivity distribution patterns were accurately represented in the sinograms and images obtained by the PET with a pair of detectors. These preliminary results indicate that the development of a high-performance small animal PET is promising.
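
As an aside on how the reported spatial resolution can be obtained from measurements, the sketch below estimates FWHM from a point-source profile by linear interpolation at half maximum; the profile here is a synthetic Gaussian, not the authors' data.

```python
# Hedged sketch: estimating spatial resolution (FWHM) from a measured point-source
# profile by linear interpolation at half maximum; the profile values are illustrative.
import numpy as np

def fwhm(positions_mm, counts):
    """Full width at half maximum of a single-peaked profile, via linear interpolation."""
    counts = np.asarray(counts, dtype=float)
    half = counts.max() / 2.0
    above = np.where(counts >= half)[0]
    lo, hi = above[0], above[-1]
    # interpolate the left and right half-maximum crossings
    left = np.interp(half, [counts[lo - 1], counts[lo]],
                     [positions_mm[lo - 1], positions_mm[lo]])
    right = np.interp(half, [counts[hi + 1], counts[hi]],
                      [positions_mm[hi + 1], positions_mm[hi]])
    return right - left

x = np.arange(-5, 5.5, 0.5)                  # profile sample positions in mm
profile = np.exp(-x**2 / (2 * 1.0**2))       # toy Gaussian point-spread profile
print(f"FWHM = {fwhm(x, profile):.2f} mm")   # ~2.35 mm for sigma = 1 mm
```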

Estimation of Fresh Weight and Leaf Area Index of Soybean (Glycine max) Using Multi-year Spectral Data (다년도 분광 데이터를 이용한 콩의 생체중, 엽면적 지수 추정)

  • Jang, Si-Hyeong;Ryu, Chan-Seok;Kang, Ye-Seong;Park, Jun-Woo;Kim, Tae-Yang;Kang, Kyung-Suk;Park, Min-Jun;Baek, Hyun-Chan;Park, Yu-hyeon;Kang, Dong-woo;Zou, Kunyan;Kim, Min-Cheol;Kwon, Yeon-Ju;Han, Seung-ah;Jun, Tae-Hwan
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.23 no.4
    • /
    • pp.329-339
    • /
    • 2021
  • Soybean (Glycine max), one of the major upland crops, requires precise management of environmental conditions such as temperature, water, and soil during cultivation because it is sensitive to environmental changes. Spectral technologies that remotely measure the physiological state of crops have great potential for improving the quality and productivity of soybean by estimating yields, physiological stresses, and diseases. In this study, we developed and validated a soybean growth prediction model using multispectral imagery. We conducted a linear regression analysis between vegetation indices and soybean growth data (fresh weight and LAI) obtained at fields in Miryang, and the linear regression model was validated at fields in Goesan. The model based on the green ratio vegetation index (GRVI) had the best performance in predicting fresh weight at the calibration stage (R2=0.74, RMSE=246 g/m2, RE=34.2%). In the validation stage, the RMSE and RE of the model were 392 g/m2 and 32%, respectively. The errors of the model differed by cropping system: the RMSE and RE in single-crop fields were 315 g/m2 and 26%, respectively, whereas the model had larger RMSE (381 g/m2) and RE (31%) values in double-crop fields. When separate fresh-weight models were developed for the two years with similar accumulated temperature (AT) (2018 and 2020) and for the single year with different AT (2019), the single-year model predicted better than the two-year model. Compared with the three-year model, dividing the models by AT improved the RMSE in single-crop fields by about 29.1%, whereas that in double-crop fields worsened by about 19.6%. When environmental factors are used together with spectral data, reliable soybean growth prediction should be achievable under various environmental conditions.
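
The regression workflow described above can be sketched as follows; note that the exact band-ratio definition of GRVI and the RE formula should be taken from the paper itself, so the definitions and all numbers below are assumptions for illustration only.

```python
# Hedged sketch: fitting and validating a linear model from a vegetation index to
# fresh weight, with RMSE and relative error (RE) computed from held-out data.
# GRVI is taken here as the NIR/green band ratio; the band definition and all numbers
# are illustrative assumptions, not the paper's data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)
nir, green = rng.uniform(0.3, 0.6, 200), rng.uniform(0.05, 0.15, 200)
grvi = nir / green                                    # assumed GRVI definition
fresh_weight = 150 * grvi + rng.normal(0, 60, 200)    # synthetic g/m^2 response

X = grvi.reshape(-1, 1)
model = LinearRegression().fit(X[:150], fresh_weight[:150])   # calibration set
pred = model.predict(X[150:])                                  # validation set
obs = fresh_weight[150:]

rmse = mean_squared_error(obs, pred) ** 0.5
re = 100 * rmse / obs.mean()     # RE assumed as RMSE relative to mean observed value
print(f"R2={r2_score(obs, pred):.2f}, RMSE={rmse:.0f} g/m2, RE={re:.1f}%")
```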

Real-time Nutrient Monitoring of Hydroponic Solutions Using an Ion-selective Electrode-based Embedded System (ISE 기반의 임베디드 시스템을 이용한 실시간 수경재배 양액 모니터링)

  • Han, Hee-Jo;Kim, Hak-Jin;Jung, Dae-Hyun;Cho, Woo-Jae;Cho, Yeong-Yeol;Lee, Gong-In
    • Journal of Bio-Environment Control
    • /
    • v.29 no.2
    • /
    • pp.141-152
    • /
    • 2020
  • The rapid on-site measurement of hydroponic nutrients allows for the more efficient use of crop fertilizers. This paper reports on the development of an embedded on-site system consisting of multiple ion-selective electrodes (ISEs) for the real-time measurement of the concentrations of macronutrients in hydroponic solutions. The system included a combination of PVC ISEs for the detection of NO3, K, and Ca ions, a cobalt-electrode for the detection of H2PO4, a double-junction reference electrode, a solution container, and a sampling system consisting of pumps and valves. An Arduino Due board was used to collect data and to control the volume of the sample. Prior to the measurement of each sample, a two-point normalization method was employed to adjust the sensitivity followed by an offset to minimize potential drift that might occur during continuous measurement. The predictive capabilities of the NO3 and K ISEs based on PVC membranes were satisfactory, producing results that were in close agreement with the results of standard analyzers (R2 = 0.99). Though the Ca ISE fabricated with Ca ionophore II underestimated the Ca concentration by an average of 55%, the strong linear relationship (R2 > 0.84) makes it possible for the embedded system to be used in hydroponic NO3, K, and Ca sensing. The cobalt-rod-based phosphate electrodes exhibited a relatively high error of 24.7±9.26% in the phosphate concentration range of 45 to 155 mg/L compared to standard methods due to inconsistent signal readings between replicates, illustrating the need for further research on the signal conditioning of cobalt electrodes to improve their predictive ability in hydroponic P sensing.
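
The two-point normalization mentioned above is, in essence, a standard two-standard ISE calibration; a hedged sketch with hypothetical NO3 standards is given below (the paper's actual standards, electrodes, and drift-compensation details may differ).

```python
# Hedged sketch of a two-point normalization for an ion-selective electrode: two
# standard solutions fix the Nernstian slope and offset, and subsequent EMF readings
# are converted to concentration. Values are illustrative, not the paper's calibration.
import math

def two_point_calibration(c1, e1, c2, e2):
    """Return (slope, offset) of EMF = offset + slope * log10(concentration)."""
    slope = (e2 - e1) / (math.log10(c2) - math.log10(c1))
    offset = e1 - slope * math.log10(c1)
    return slope, offset

def emf_to_concentration(emf, slope, offset):
    """Invert the calibrated electrode response to recover concentration."""
    return 10 ** ((emf - offset) / slope)

# e.g. NO3 standards of 10 and 100 mg/L reading -120 mV and -174 mV (hypothetical)
slope, offset = two_point_calibration(10, -120.0, 100, -174.0)
print(f"slope = {slope:.1f} mV/decade")
print(f"sample at -150 mV -> {emf_to_concentration(-150.0, slope, offset):.1f} mg/L")
```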

Development of Planar Active Array Antenna System for Radar (평면형 능동 위상 배열 레이더용 안테나 시스템 개발)

  • Chon, Sang-Mi;Na, Hyung-Gi;Kim, Soo-Bum;Lee, Jeong-Won;Kim, Dong-Yoon;Kim, Seon-Joo;Ahn, Chang-Soo;Lee, Chang-Hee
    • The Journal of Korean Institute of Electromagnetic Engineering and Science
    • /
    • v.20 no.12
    • /
    • pp.1340-1350
    • /
    • 2009
  • The design and implementation of a planar active phased array antenna system are described in this paper. The antenna system operates at X-band with a bandwidth of 10%, and dual polarization is realized using a dual-slot-fed microstrip patch antenna and an SPDT (Single Pole Double Throw) switch. The array is a 16×16 triangular lattice structure composed of TR (Transmit & Receive) modules with more than 40 dBm of output power. Each TR module includes a digital attenuator and a phase shifter so that the antenna beam can be electronically steered over a scan angle of ±60°. The antenna pattern was measured in a near-field chamber, and the results coincide with the expected beam pattern. These results confirm that the antenna can be used with beam steering and beam shaping control.
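
To illustrate how a digital phase shifter per TR module enables electronic beam steering, the sketch below computes per-element steering phases using the standard planar-array relation; it assumes a rectangular half-wavelength grid and a nominal 10 GHz frequency, whereas the actual antenna uses a triangular lattice, so this is illustrative only.

```python
# Hedged sketch: per-element phase shifts for electronically steering a planar array to
# a given direction, using the standard phased-array relation
#   phi(m, n) = -k * (m*dx*sin(theta)*cos(phi0) + n*dy*sin(theta)*sin(phi0)).
# Spacing and frequency are illustrative X-band assumptions, not the paper's design.
import numpy as np

C = 3e8
FREQ = 10e9                       # assumed X-band frequency
LAM = C / FREQ
DX = DY = 0.5 * LAM               # assumed half-wavelength element spacing

def steering_phases(theta_deg, phi_deg, n_elem=16):
    """Phase settings (degrees) for each element of an n_elem x n_elem grid."""
    k = 2 * np.pi / LAM
    th, ph = np.radians(theta_deg), np.radians(phi_deg)
    m = np.arange(n_elem)
    n = np.arange(n_elem)
    mm, nn = np.meshgrid(m, n, indexing="ij")
    phase = -k * (mm * DX * np.sin(th) * np.cos(ph) + nn * DY * np.sin(th) * np.sin(ph))
    return np.degrees(phase) % 360

phases = steering_phases(theta_deg=30, phi_deg=0)   # steer 30 degrees off boresight
print(phases[:2, :4].round(1))                      # phase settings for a few elements
```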

Estimation of Small Hydropower Resources by Hydrologic Analysis of Han-River Standard Basin (한강수계 표준유역의 수문특성분석을 통한 소수력 자원량 산정)

  • Seo, Sung Ho;Oh, Kuk Ryul;Park, Wan Soon;Jeong, Sang Man
    • 한국방재학회:학술대회논문집
    • /
    • 2011.02a
    • /
    • pp.47-47
    • /
    • 2011
  • Korea lacks domestic energy resources and depends on imports for more than about 80% of its energy, so it is necessary to make the most of domestic energy resources in order to reduce this dependence and stabilize the energy supply. In addition, to prepare for worldwide regulations addressing global warming, clean energy must be actively developed to improve energy self-sufficiency. Small hydropower, one of the renewable energy sources, is an environmentally friendly clean energy with a higher energy density than other alternative energy sources and is therefore regarded as a domestic resource well worth developing. Because of its various social and environmental advantages, small hydropower has recently attracted great interest in developed countries as well, and Korea, which is poor in energy resources and relies mostly on imported oil, should actively develop it as a regional energy source. Korea has abundant small hydropower potential, and since the policy for promoting small hydropower development was announced in 1982, research on the construction of small hydropower plants has been actively supported under government leadership. Compared with large hydropower, small hydropower offers a relatively short planning and construction period, low investment cost, investment participation by individuals or companies, easy installation using local labor and materials, and little environmental damage. To develop and utilize this clean energy, various application systems incorporating IT technology are being built; in particular, the Korea Institute of Energy Research established a renewable energy resource map system in 2006 with the goal of expanding the development and deployment of renewable energy, and provides it on the web. For the active use of small hydropower facilities, the initial design should make maximum use of the basin's water resources through an analysis of long-term runoff characteristics and should use topographic factors to maximize electricity production. Therefore, in this study, for the optimal design of small hydropower facilities, the annual mean flow of the Seomgang confluence, one of the 258 standard basins in the Han River system, was estimated using the resource map, and the small hydropower resources were then calculated; the resulting installed capacity and annual electricity production were estimated at 1,633 kW and 6,224 MWh, respectively. In addition, a characterization method based on the Weibull distribution was examined as a way to predict small hydropower performance in ungauged basins. Flow-duration curves were constructed from ten years (1999-2008) of rainfall data at the Mokgye and Angseong stations located within the Seomgang-confluence standard basin, and correlation analysis yielded coefficients of 0.994701 at Mokgye and 0.992616 at Angseong, indicating that the calculated values agree closely with the observations.
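
The resource calculation described above presumably rests on the standard small-hydro relations between flow, head, and power; the sketch below shows that calculation with purely hypothetical flow-duration and head values (it does not reproduce the paper's 1,633 kW / 6,224 MWh figures).

```python
# Hedged sketch of the standard small-hydro sizing relations presumably underlying the
# resource estimate: installed capacity P = rho * g * Q * H * eta and annual generation
# integrated over a flow-duration curve. All parameter values are illustrative assumptions.
RHO, G = 1000.0, 9.81            # water density (kg/m^3), gravitational acceleration (m/s^2)

def capacity_kw(flow_m3s, head_m, efficiency=0.8):
    """Installed capacity in kW for a design flow and effective head."""
    return RHO * G * flow_m3s * head_m * efficiency / 1000.0

def annual_energy_mwh(design_flow_m3s, head_m, duration_curve, efficiency=0.8):
    """Sum generation over a flow-duration curve given as (flow in m3/s, hours) pairs,
    capping the usable flow at the design flow."""
    total_kwh = 0.0
    for flow, hours in duration_curve:
        usable = min(flow, design_flow_m3s)
        total_kwh += capacity_kw(usable, head_m, efficiency) * hours
    return total_kwh / 1000.0

# hypothetical design flow, head, and simplified flow-duration curve
curve = [(12.0, 1000), (8.0, 2000), (5.0, 3000), (2.0, 2760)]
print(f"capacity  : {capacity_kw(10.0, 20.0):.0f} kW")
print(f"generation: {annual_energy_mwh(10.0, 20.0, curve):.0f} MWh/yr")
```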
