• Title/Summary/Keyword: Sensor based


An Intelligent Intrusion Detection Model Based on Support Vector Machines and the Classification Threshold Optimization for Considering the Asymmetric Error Cost (비대칭 오류비용을 고려한 분류기준값 최적화와 SVM에 기반한 지능형 침입탐지모형)

  • Lee, Hyeon-Uk; Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems, v.17 no.4, pp.157-173, 2011
  • As Internet use has exploded in recent years, malicious attacks and hacking against networked systems have become frequent, and such intrusions can cause fatal damage to government agencies, public offices, and companies operating various systems. For this reason, there is growing interest in and demand for intrusion detection systems (IDS), the security systems that detect, identify, and respond appropriately to unauthorized or abnormal activities. The intrusion detection models applied in conventional IDS are generally built by modeling experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. Such models perform well in normal situations but poorly when they encounter new or unknown attack patterns. Several recent studies have therefore adopted artificial intelligence techniques that can respond proactively to unknown threats. Artificial neural networks (ANNs) in particular have been popular in prior studies because of their superior prediction accuracy, but they have intrinsic limitations such as the risk of overfitting, the need for large samples, and an opaque prediction process (the "black box" problem). Consequently, the most recent IDS studies have turned to the support vector machine (SVM), a classification technique that is more stable and powerful than ANNs and known for relatively high predictive power and generalization capability. Against this background, this study proposes a novel intelligent intrusion detection model that uses SVM as the classifier to improve the predictive ability of IDS and that accounts for asymmetric error costs by optimizing the classification threshold. There are two common error types in intrusion detection. The first is the false-positive error (FPE), in which normal activity is misjudged as an attack, which may trigger unnecessary countermeasures. The second is the false-negative error (FNE), in which a malicious program is misjudged as normal. Compared to FPE, FNE is more fatal, so when considering the total misclassification cost of an IDS it is more reasonable to weight FNE more heavily than FPE. We therefore designed the proposed model to optimize the classification threshold so as to minimize the total misclassification cost. Conventional SVM cannot be applied directly here because it produces only a discrete output (a class label), so we used the revised SVM technique proposed by Platt (2000), which produces probability estimates. To validate the practical applicability of the model, we applied it to a real-world network intrusion dataset collected from the IDS sensor of an official institution in Korea from January to June 2010; of the 15,000 log records collected in total, 1,000 samples were selected by random sampling. The SVM model was compared with logistic regression (LOGIT), decision trees (DT), and ANN to confirm its superiority. LOGIT and DT were run in PASW Statistics v18.0, ANN in Neuroshell 4.0, and SVM was trained with LIBSVM v2.90, a freeware SVM trainer. Empirical results showed that the proposed SVM-based model outperformed all comparative models in detecting network intrusions in terms of accuracy, and that it reduced the total misclassification cost relative to the ANN-based intrusion detection model. The proposed model is therefore expected not only to enhance IDS performance but also to lead to better management of FNE.
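To make the threshold-optimization step concrete, the sketch below trains an SVM with Platt-style probability outputs (scikit-learn's SVC with probability=True, which calibrates libsvm decision values with a sigmoid) and then grid-searches the classification threshold that minimizes an asymmetric misclassification cost. The 10:1 FNE:FPE cost ratio and the synthetic data are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: SVM with Platt-scaled probabilities plus a cost-sensitive
# decision threshold. Costs and data are illustrative, not the paper's.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

C_FNE, C_FPE = 10.0, 1.0  # assumed: a missed intrusion costs 10x a false alarm

X, y = make_classification(n_samples=1000, n_features=20, weights=[0.8, 0.2],
                           random_state=0)  # 1 = intrusion, 0 = normal
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# probability=True enables Platt (2000)-style sigmoid calibration in libsvm
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)
p = clf.predict_proba(X_te)[:, 1]

def total_cost(threshold):
    pred = (p >= threshold).astype(int)
    fne = np.sum((pred == 0) & (y_te == 1))  # intrusions missed
    fpe = np.sum((pred == 1) & (y_te == 0))  # normal traffic flagged as attack
    return C_FNE * fne + C_FPE * fpe

thresholds = np.linspace(0.05, 0.95, 91)
best = min(thresholds, key=total_cost)
print(f"cost-optimal threshold: {best:.2f}  "
      f"(cost at 0.50: {total_cost(0.5):.0f}, optimized: {total_cost(best):.0f})")
```

Because FNE is weighted more heavily, the optimal threshold typically falls below 0.5, trading extra false alarms for fewer missed intrusions.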

A Study on the Calculation of Evapotranspiration Crop Coefficient in the Cheongmi-cheon Paddy Field (청미천 논지에서의 증발산량 작물계수 산정에 관한 연구)

  • Kim, Kiyoung; Lee, Yongjun; Jung, Sungwon; Lee, Yeongil
    • Korean Journal of Remote Sensing, v.35 no.6_1, pp.883-893, 2019
  • In this study, crop coefficients were calculated by two different methods and the results were evaluated. In the first method, the appropriateness of GLDAS-based evapotranspiration was assessed by comparing it with observations from the Cheongmi-cheon (CMC) flux tower; the crop coefficient was then calculated by dividing the actual evapotranspiration by the potential evapotranspiration derived from GLDAS. In the second method, the crop coefficient was estimated by multiple linear regression (MLR) on vegetation indices (NDVI, EVI, LAI, and SAVI) derived from MODIS together with in-situ soil moisture observed at CMC. Comparing the two crop coefficients over the entire period, GLDAS Kc and SM&VI Kc showed mean values of 0.412 and 0.378, biases of 0.031 and -0.004, RMSEs of 0.092 and 0.069, and Index of Agreement (IOA) values of 0.944 and 0.958, respectively. Overall, both methods followed the pattern of the observed evapotranspiration, but the SM&VI-based method performed better. Going one step further, GLDAS Kc and SM&VI Kc were statistically evaluated by crop growth phase: GLDAS Kc was better in the early and middle phases of crop growth, whereas SM&VI Kc was better in the late phase. This appears to result from reduced accuracy of the MODIS sensor due to yellow dust in spring and rain clouds in summer. If the observational accuracy of the MODIS sensor is improved in subsequent studies, the accuracy of the SM&VI-based method will also improve, making it applicable for determining the crop coefficient of ungauged basins or predicting the crop coefficient of a given area.
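A minimal sketch of the two estimation routes described above, on synthetic arrays: the ratio-based crop coefficient (Kc = actual ET / potential ET) and a multiple linear regression of Kc on vegetation indices plus soil moisture, evaluated with the bias, RMSE, and Index of Agreement statistics the paper reports. All variable names and values are stand-ins; GLDAS/MODIS data handling is omitted.

```python
# Sketch of the two Kc estimation routes on synthetic arrays.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 120  # e.g., periodic values over the study period

# Method 1: Kc as the ratio of actual to potential evapotranspiration
et_actual = rng.uniform(1.0, 5.0, n)                  # mm/day, stand-in for ET
et_potential = et_actual / rng.uniform(0.3, 0.6, n)   # mm/day, stand-in for PET
kc_gldas = et_actual / et_potential

# Method 2: multiple linear regression on vegetation indices + soil moisture
X = np.column_stack([
    rng.uniform(0.2, 0.9, n),   # NDVI
    rng.uniform(0.1, 0.7, n),   # EVI
    rng.uniform(0.5, 5.0, n),   # LAI
    rng.uniform(0.1, 0.8, n),   # SAVI
    rng.uniform(0.1, 0.4, n),   # in-situ soil moisture
])
mlr = LinearRegression().fit(X, kc_gldas)  # trained against a Kc reference
kc_smvi = mlr.predict(X)

# Evaluation statistics used in the paper: bias, RMSE, Index of Agreement
bias = np.mean(kc_smvi - kc_gldas)
rmse = np.sqrt(np.mean((kc_smvi - kc_gldas) ** 2))
ioa = 1 - np.sum((kc_gldas - kc_smvi) ** 2) / np.sum(
    (np.abs(kc_smvi - kc_gldas.mean()) + np.abs(kc_gldas - kc_gldas.mean())) ** 2)
print(f"bias={bias:.3f}  RMSE={rmse:.3f}  IOA={ioa:.3f}")
```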

Analysis of the Effect of Corner Points and Image Resolution in a Mechanical Test Combining Digital Image Processing and Mesh-free Method (디지털 이미지 처리와 강형식 기반의 무요소법을 융합한 시험법의 모서리 점과 이미지 해상도의 영향 분석)

  • Junwon Park; Yeon-Suk Jeong; Young-Cheol Yoon
    • Journal of the Computational Structural Engineering Institute of Korea, v.37 no.1, pp.67-76, 2024
  • In this paper, we present a DIP-MLS testing method that combines digital image processing with a strong form-based MLS (moving least squares) differencing approach to measure mechanical variables, and we analyze the influence of target location and image resolution. The method measures the displacement of targets attached to the specimen through digital image processing and assigns those displacements to the nodes of the MLS differencing method, which uses nodes alone to compute mechanical variables such as the stress and strain of the object under study. We propose an effective way to measure the displacement of a target's center of gravity using digital image processing. Computing mechanical variables through the MLS differencing method with image-based target displacements makes it easy to evaluate mechanical variables at arbitrary positions, free of mesh or grid constraints; this is achieved by acquiring an accurate displacement history of the test specimen and utilizing the displacement of tracking points of low rigidity. The developed method was validated by comparing sensor measurements with DIP-MLS results in a three-point bending test of a rubber beam. Numerical results simulated by the MLS differencing method alone were also compared, confirming that the developed method accurately reproduces the actual test and agrees well with the numerical analysis before large deformation occurs. Furthermore, we analyzed the effect of boundary points by applying 46 tracking points, including corner points, to the DIP-MLS testing method and comparing this with using only the target's internal points, and we determined the optimal image resolution for the method. These results demonstrate that the developed method efficiently addresses the limitations of direct experiments and existing mesh-based simulations, and suggest that the experiment-simulation pipeline can be digitalized to a considerable extent.
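The target-displacement measurement can be illustrated with a short OpenCV sketch: threshold each frame, take the binary image moments, and difference the resulting centroids (centers of gravity) between frames. The file names, the Otsu thresholding choice, and the pixel-to-millimeter scale are assumptions for illustration; the mapping of target displacements onto MLS nodes is not shown.

```python
# Sketch: measuring a target's center of gravity in two frames with OpenCV
# image moments, then taking the displacement. Paths and scale are assumed.
import cv2
import numpy as np

def target_centroid(gray):
    """Centroid (x, y) of the dark target blob via binary image moments."""
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    m = cv2.moments(binary, binaryImage=True)
    return np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

# Hypothetical frames before and after loading the specimen
before = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
after = cv2.imread("frame_100.png", cv2.IMREAD_GRAYSCALE)

disp_px = target_centroid(after) - target_centroid(before)
mm_per_px = 0.05  # assumed scale obtained from a calibration target
print("target displacement [mm]:", disp_px * mm_per_px)
```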

Information Privacy Concern in Context-Aware Personalized Services: Results of a Delphi Study

  • Lee, Yon-Nim; Kwon, Oh-Byung
    • Asia Pacific Journal of Information Systems, v.20 no.2, pp.63-86, 2010
  • Personalized services directly and indirectly acquire personal data, in part to provide customers with higher-value services that are context-relevant (such as place and time). As information technologies mature, sensor networks and intelligent software can now obtain context data, which is the cornerstone of personalized, context-specific services. Yet the danger of personal information overflow is increasing, because the data retrieved by sensors usually contain private information, and various technical characteristics of context-aware applications have troubling implications for information privacy. In parallel with the increasing use of context for service personalization, information privacy concerns, such as the unrestricted availability of context information, have also grown, and they are consistently regarded as a critical issue for the success of context-aware personalized services. The field of information privacy is growing as a research area, with many new definitions and terminologies, out of the need to understand information privacy concepts better; in particular, the factors of information privacy need to be revised to reflect the characteristics of new technologies. Previous work on the information privacy factors of context-aware applications, however, has at least two shortcomings. First, there has been little systematic treatment of the technological characteristics of context-aware computing: existing studies consider only a small subset of them, so no mutually exclusive set of factors uniquely and completely describes information privacy in context-aware applications. Second, most studies have relied on user surveys despite users' limited knowledge of and experience with context-aware computing technology. Because context-aware services have not yet been deployed commercially at scale, very few people have prior experience with context-aware personalized services, and it is difficult to build users' knowledge of the technology even with scenarios, pictures, flash animations, and the like. A survey that assumes its participants sufficiently understand the technologies it describes may therefore not be valid, and some surveys rest on simplifying and hence unrealistic assumptions (e.g., treating location as the only context data). A better understanding of information privacy concern in context-aware personalized services is therefore needed. Hence, the purpose of this paper is to identify a generic set of factors underlying information privacy concern in context-aware personalized services and to develop a rank-ordered list of those factors. We consider the full range of technology characteristics in order to establish a mutually exclusive set of factors. A Delphi survey, a rigorous data collection method, was used to obtain reliable opinions from experts and to produce the rank-ordered list; it lends itself well to deriving a universal set of information privacy concern factors and their priorities. An international panel of researchers and practitioners with expertise in privacy and context-aware systems participated in the study. The Delphi rounds followed the procedure proposed by Okoli and Pawlowski, involving three general rounds: (1) brainstorming important factors; (2) narrowing the original list to the most important ones; and (3) ranking the list of important factors. Throughout, experts were treated as individuals rather than as panels. In the first and second rounds of the Delphi questionnaire, we gathered a mutually exclusive set of factors for information privacy concern in context-aware personalized services. In the first round, respondents were asked to provide at least five main factors for understanding information privacy concern, with some of the main factors found in the literature presented to them as prompts. The second-round questionnaire took the main factors provided in the first round and fleshed them out with relevant sub-factors drawn from the literature survey; respondents evaluated each sub-factor's suitability against its main factor, and sub-factors selected by more than 50% of the experts were retained. In the third round, a list of factors with corresponding questions was provided, and respondents assessed the importance of each main factor and its sub-factors; we then calculated the mean rank of each item to produce the final result. In analyzing the data, we focused on group consensus rather than individual insistence, adopting a concordance analysis, which measures the consistency of the experts' responses over successive Delphi rounds, as shown in the sketch after this abstract. The experts judged context data collection and the highly identifiable level of identity data to be the most important main factor and sub-factor, respectively. Other important sub-factors included the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The average score of each factor should be useful for future context-aware personalized service development from the information privacy perspective. The final factors differ from those proposed in other studies in several ways. First, unlike existing studies based on privacy issues that may occur during the lifecycle of acquired user information, our study clarifies these sometimes vague issues by determining which privacy concerns are viable given the specific technical characteristics of context-aware personalized services; because a context-aware service differs technically from other services, we selected the characteristics most likely to heighten users' privacy concerns. Second, by introducing IPOS as the factor division, this study considered privacy issues in service delivery and display that were largely overlooked in existing studies. Lastly, for each factor, it relates the level of importance to professionals' opinions on the extent to which users have privacy concerns. A traditional questionnaire was not used because users were judged to lack the understanding of and experience with the new technologies underlying context-aware personalized services. Regarding users' privacy concerns, the experts in the Delphi process selected context data collection, tracking and recording, and sensor networks as the most important technological characteristics of context-aware personalized services. For creating context-aware personalized services, this study demonstrates the importance of determining an optimal methodology, and of deciding which technologies, in what sequence, are needed to acquire which types of users' context information. Most studies, following the development of context-aware technology, focus on which services and systems should be provided by utilizing context information; the results of this study show that, in terms of users' privacy, greater attention must be paid to the activities that acquire context information. Following up on the sub-factor evaluation, additional studies will be needed on approaches to reducing users' privacy concerns about technological characteristics such as the highly identifiable level of identity data, the diverse types of context data collected, tracking and recording functionality, and embedded and disappearing sensor devices. The factor ranked next in importance after input is context-aware service delivery, which relates to output: delivery and display of services to users in context-aware personalized services aiming at the anywhere-anytime-any-device concept are regarded as even more important than in previous computing environments. Considering these concern factors when developing context-aware personalized services should raise the service success rate and, with it, user acceptance. Our future work is to adopt these factors for qualifying context-aware service development projects, such as u-city development projects, in terms of service quality and hence user acceptance.
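The concordance analysis mentioned above is commonly implemented as Kendall's coefficient of concordance (W), which rises toward 1 as the panel's rankings converge across rounds. A minimal sketch with a hypothetical ranking matrix (rows are experts, columns are candidate privacy-concern factors):

```python
# Sketch: Kendall's coefficient of concordance (W) for one Delphi ranking
# round. The ranking matrix is hypothetical: rows = experts, columns = factors.
import numpy as np

ranks = np.array([  # 5 experts ranking 6 factors (1 = most important)
    [1, 2, 3, 4, 5, 6],
    [2, 1, 3, 5, 4, 6],
    [1, 3, 2, 4, 6, 5],
    [2, 1, 4, 3, 5, 6],
    [1, 2, 3, 5, 4, 6],
])
m, n = ranks.shape                    # m judges, n items
R = ranks.sum(axis=0)                 # column rank sums
S = np.sum((R - R.mean()) ** 2)       # sum of squared deviations
W = 12 * S / (m ** 2 * (n ** 3 - n))  # Kendall's W in [0, 1], no ties assumed
print(f"Kendall's W = {W:.3f}")       # values near 1 indicate strong consensus
```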

Sea Surface pCO2 and Its Variability in the Ulleung Basin, East Sea Constrained by a Neural Network Model (신경망 모델로 구성한 동해 울릉분지 표층 이산화탄소 분압과 변동성)

  • Park, Soyeona; Lee, Tongsup; Jo, Young-Heon
    • The Sea: Journal of the Korean Society of Oceanography, v.21 no.1, pp.1-10, 2016
  • The surface seawater partial pressure of carbon dioxide (pCO₂) data currently available for the East Sea are insufficient to quantify statistically the carbon dioxide flux across the air-sea interface. To compensate for the scarcity of pCO₂ measurements, we constructed a neural network (NN) model based on satellite data to map pCO₂ over unobserved areas. The NN model was built for the Ulleung Basin, where pCO₂ data are most abundant, to map and estimate the variability of pCO₂ from in situ pCO₂ for the years 2003 to 2012, together with sea surface temperature (SST) and chlorophyll data from the MODIS (Moderate Resolution Imaging Spectroradiometer) sensor on the Aqua satellite and geographic information. The NN model was trained until the correlation between in situ and predicted pCO₂ values exceeded 95%. The RMSE (root mean square error) of the NN model output was 19.2 μatm, much smaller than the variability of the in situ pCO₂. The variability of pCO₂ shows a stronger negative correlation with SST than with chlorophyll: as SST decreases, the variability of pCO₂ increases. When SST is below 15°C, pCO₂ variability is clearly affected by both SST and chlorophyll; when SST is above 15°C, the variability is less sensitive to changes in either. The mean annual rate of pCO₂ increase estimated from the NN model output in the Ulleung Basin is 0.8 μatm yr⁻¹ from 2003 to 2014. Because the NN model successfully maps pCO₂ over the whole study area at higher resolution and with lower RMSE than previous studies, it is a potentially useful tool for understanding the carbon cycle in the East Sea, where accessibility is limited by international affairs.
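A minimal sketch of the kind of NN mapping described above, using scikit-learn's MLPRegressor on synthetic stand-ins for SST, chlorophyll, position, and in situ pCO₂; the network size, the synthetic relationship, and the value ranges are assumptions, not the paper's configuration.

```python
# Sketch: neural-network mapping from satellite predictors to surface pCO2.
# All inputs and targets are synthetic stand-ins.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2000
sst = rng.uniform(5, 28, n)           # sea surface temperature, deg C
chl = rng.lognormal(0.0, 0.5, n)      # chlorophyll, mg m^-3
lon = rng.uniform(129.5, 131.5, n)    # geographic information
lat = rng.uniform(35.5, 37.5, n)
pco2 = 380 - 4.0 * (sst - 15) + 5.0 * np.log(chl) + rng.normal(0, 8, n)

X = np.column_stack([sst, chl, lon, lat])
X_tr, X_te, y_tr, y_te = train_test_split(X, pco2, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16),
                                   max_iter=3000, random_state=0))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

r = np.corrcoef(y_te, pred)[0, 1]
rmse = np.sqrt(np.mean((y_te - pred) ** 2))
print(f"correlation = {r:.3f} (paper's training target: > 0.95), "
      f"RMSE = {rmse:.1f} uatm")
```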

Detection of Irrigation Timing and the Mapping of Paddy Cover in Korea Using MODIS Images Data (MODIS 영상자료를 이용한 관개시기 탐지와 논 피복지도 제작)

  • Jeong, Seung-Taek; Jang, Keun-Chang; Hong, Seok-Yeong; Kang, Sin-Kyu
    • Korean Journal of Agricultural and Forest Meteorology, v.13 no.2, pp.69-78, 2011
  • Rice is one of the world's staple foods, and paddy rice fields have the unique biophysical characteristic that rice, unlike other crops, is grown on flooded soils. Information on the spatial distribution of paddy fields and the timing of irrigation is important for determining the hydrological balance and the efficiency of water resource management. In this paper, we detected the timing of irrigation and the spatial distribution of paddy fields using the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard the NASA EOS Aqua satellite. The timing of irrigation was detected by the combined use of a MODIS-based vegetation index and the Land Surface Water Index (LSWI), and the detected timing showed good agreement with field observations from two flux sites in Korea and Japan. Based on the irrigation detection, a land cover map of paddy fields was generated with subsidiary information on the seasonal pattern of the MODIS enhanced vegetation index (EVI). When the MODIS-based paddy field map was compared with a land cover map from the Ministry of Environment, Korea, it overestimated regions with large paddies but underestimated those with small and fragmented paddies. Potential reasons for these spatial discrepancies include the coarse pixel resolution (500 m) of MODIS images, uncertainty in the threshold values used to discard forest and water pixels, and the application of an LSWI threshold developed for paddy fields in China. Nevertheless, this study showed that improved utilization of the seasonal patterns of MODIS vegetation and water-related indices could support water resource management and enhanced estimation of evapotranspiration from paddy fields.
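The combined use of a vegetation index and LSWI for flood/irrigation detection is commonly expressed in the paddy-mapping literature as the threshold rule LSWI + 0.05 ≥ EVI (the same family of thresholds the abstract notes was developed for paddies in China). A sketch on synthetic MODIS-like reflectance arrays; the band choices and the 0.05 offset are assumptions from that literature, not values confirmed by this paper.

```python
# Sketch of the flooding/irrigation signal used with MODIS time series:
# compute LSWI and EVI from reflectance bands and flag pixels where
# LSWI + 0.05 >= EVI. Arrays are synthetic.
import numpy as np

rng = np.random.default_rng(0)
shape = (100, 100)
blue = rng.uniform(0.02, 0.10, shape)  # MODIS band 3 surface reflectance
red = rng.uniform(0.03, 0.15, shape)   # band 1
nir = rng.uniform(0.10, 0.45, shape)   # band 2
swir = rng.uniform(0.05, 0.30, shape)  # band 6 (shortwave infrared)

lswi = (nir - swir) / (nir + swir)                          # Land Surface Water Index
evi = 2.5 * (nir - red) / (nir + 6 * red - 7.5 * blue + 1)  # Enhanced Vegetation Index

flooded = lswi + 0.05 >= evi  # candidate irrigated/transplanting pixels
print(f"flooded fraction: {flooded.mean():.2%}")
```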

Estimation of Fresh Weight, Dry Weight, and Leaf Area Index of Soybean Plant using Multispectral Camera Mounted on Rotor-wing UAV (회전익 무인기에 탑재된 다중분광 센서를 이용한 콩의 생체중, 건물중, 엽면적 지수 추정)

  • Jang, Si-Hyeong; Ryu, Chan-Seok; Kang, Ye-Seong; Jun, Sae-Rom; Park, Jun-Woo; Song, Hye-Young; Kang, Kyeong-Suk; Kang, Dong-Woo; Zou, Kunyan; Jun, Tae-Hwan
    • Korean Journal of Agricultural and Forest Meteorology, v.21 no.4, pp.327-336, 2019
  • Soybean is one of the most important crops; its grain is high in protein and is consumed in various forms of food. Soybean plants are generally cultivated in the field, and their yield and quality are strongly affected by climate change. Recently, abnormal climate conditions, including heat waves and heavy rainfall, have occurred frequently, increasing the risk to farm management. Real-time techniques for assessing the quality and growth of soybean would reduce losses of the crop in both quantity and quality. The objective of this work was to develop a simple model for estimating the growth of soybean plants using a multispectral sensor mounted on a rotor-wing unmanned aerial vehicle (UAV). The soybean growth model was developed by simple linear regression analysis with three phenotypic variables (fresh weight, dry weight, and leaf area index) and two types of vegetation indices (VIs). The accuracy and precision of the LAI model using GNDVI (R²=0.789, RMSE=0.73 ㎡/㎡, RE=34.91%) were greater than those of the model using NDVI (R²=0.587, RMSE=1.01 ㎡/㎡, RE=48.98%). Models based on simple ratio indices, such as RRVI (R²=0.760, RMSE=0.78 ㎡/㎡, RE=37.26%) and GRVI (R²=0.828, RMSE=0.66 ㎡/㎡, RE=31.59%), were more accurate and precise than those based on normalized vegetation indices. The outcome of this study could aid the production of soybeans of high and uniform quality when a variable-rate fertilization system is introduced to cope with adverse climate conditions.
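A sketch of the simple-linear-regression modeling and the R², RMSE, and relative-error metrics above, with synthetic band reflectances standing in for the UAV multispectral mosaic. The exact band-ratio definitions of RRVI and GRVI here (NIR/red and NIR/green simple ratios) are assumptions for illustration.

```python
# Sketch: simple linear LAI models from vegetation indices, with the
# R^2 / RMSE / relative-error metrics reported in the abstract.
import numpy as np

rng = np.random.default_rng(0)
n = 60
green = rng.uniform(0.04, 0.12, n)  # band reflectances (stand-ins)
red = rng.uniform(0.03, 0.10, n)
nir = rng.uniform(0.20, 0.50, n)
lai = rng.uniform(0.5, 6.0, n)      # stand-in for measured leaf area index

indices = {
    "NDVI": (nir - red) / (nir + red),
    "GNDVI": (nir - green) / (nir + green),
    "RRVI": nir / red,    # assumed: red simple-ratio vegetation index
    "GRVI": nir / green,  # assumed: green simple-ratio vegetation index
}

for name, vi in indices.items():
    slope, intercept = np.polyfit(vi, lai, 1)  # simple linear regression
    pred = slope * vi + intercept
    ss_res = np.sum((lai - pred) ** 2)
    r2 = 1 - ss_res / np.sum((lai - lai.mean()) ** 2)
    rmse = np.sqrt(ss_res / n)
    re = 100 * rmse / lai.mean()               # relative error, %
    print(f"{name:6s} R2={r2:.3f}  RMSE={rmse:.2f}  RE={re:.1f}%")
```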

Novel Method for Urinary 1-Hydroxypyrene Measurement Using Molecular Imprinting (분자주형을 이용한 요중 1-hydroxypyrene의 측정 방법 개발)

  • Yim, Dong-Hyuk; Moon, Sun-In; Choi, Young-Sook; Park, Hee-Jin; Kim, Dae-Seon; Yu, Seung-Do; Lee, Chul-Ho; Kim, Yong-Dae; Kim, Heon
    • Journal of Life Science, v.21 no.4, pp.549-553, 2011
  • This study was performed to determine whether urinary 1-hydroxypyrene (1-OHP) levels can be accurately measured by the TiO₂-bead HPLC assay we developed based on molecular imprinting. The method showed a within-day coefficient of variation of 4.97% and a between-day coefficient of variation of 4.43%, suggesting that it is very stable. In addition, the recovery rate of 1-OHP from a mixture of 1-OHP and similar substances using the TiO₂-bead HPLC method was 105.6%. When urine samples were tested, the correlation coefficient between the conventional enzyme-HPLC method and the new method was 0.74 (p<0.01). These results suggest that the method could be a useful technique for measuring urinary 1-OHP levels, with the advantages of being easier and less expensive than the conventional method. They further suggest that the method could facilitate the development of a urinary 1-OHP sensor using TiO₂-coated beads, and that beads prepared by molecular imprinting can be applied to the analysis of chemicals other than 1-OHP.
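The two validation statistics quoted above reduce to short calculations. The sketch below computes a coefficient of variation from hypothetical replicate measurements and a spike recovery rate; all numbers are illustrative, not the paper's data.

```python
# Sketch of the two validation statistics: coefficient of variation of
# repeated measurements and spike recovery rate. Values are hypothetical.
import numpy as np

replicates = np.array([0.52, 0.49, 0.51, 0.54, 0.50])  # repeated 1-OHP readings
cv = 100 * replicates.std(ddof=1) / replicates.mean()  # coefficient of variation, %

spiked_amount = 0.50                        # known amount of 1-OHP added
recovered = 0.528                           # amount measured back by the assay
recovery = 100 * recovered / spiked_amount  # recovery rate, %

print(f"CV = {cv:.2f}%   recovery = {recovery:.1f}%")
```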

An Integrated Model based on Genetic Algorithms for Implementing Cost-Effective Intelligent Intrusion Detection Systems (비용효율적 지능형 침입탐지시스템 구현을 위한 유전자 알고리즘 기반 통합 모형)

  • Lee, Hyeon-Uk; Kim, Ji-Hun; Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems, v.18 no.1, pp.125-141, 2012
  • These days, malicious attacks and hacks on networked systems are increasing dramatically, and their patterns are changing rapidly. Consequently, handling such attacks appropriately has become more important, and there is substantial interest in and demand for effective network security systems such as intrusion detection systems, which detect, identify, and respond appropriately to unauthorized or abnormal activities. Conventional intrusion detection systems have generally been designed from experts' implicit knowledge of network intrusions or hackers' abnormal behaviors; they perform well in normal situations but cannot handle new or unknown attack patterns. Recent studies on intrusion detection therefore use artificial intelligence techniques that can respond proactively to unknown threats. Researchers have long adopted and tested various artificial intelligence techniques, such as artificial neural networks, decision trees, and support vector machines, to detect network intrusions, but most have applied these techniques singly, even though combining them may yield better detection. For this reason, we propose a new integrated model for intrusion detection. Our model combines the prediction results of four different binary classifiers, logistic regression (LOGIT), decision trees (DT), artificial neural networks (ANN), and support vector machines (SVM), which may complement each other, using genetic algorithms (GA) to find the optimal combining weights. The proposed model is built in two steps. In the first step, the integration model with the lowest prediction error (erroneous classification rate) is generated. In the second step, the model searches for the classification threshold for flagging intrusions that minimizes the total misclassification cost. Calculating the total misclassification cost of an intrusion detection system requires understanding its asymmetric error cost scheme. There are two common error types in intrusion detection. The first is the false-positive error (FPE), in which normal activity is misjudged as an attack, which may trigger unnecessary countermeasures. The second is the false-negative error (FNE), in which a malicious program is misjudged as normal. Compared to FPE, FNE is more fatal, so the total misclassification cost is affected more by FNE than by FPE. To validate the practical applicability of our model, we applied it to a real-world network intrusion dataset collected from the IDS sensor of an official institution in Korea from January to June 2010; of the 15,000 log records collected in total, 10,000 samples were selected by random sampling. We also compared the results of our model with those of the single techniques to confirm its superiority. LOGIT and DT were run in PASW Statistics v18.0, ANN in Neuroshell R4.0, and SVM was trained with LIBSVM v2.90, a freeware SVM trainer. Empirical results showed that the proposed GA-based model outperformed all comparative models in detecting network intrusions in terms of accuracy. They also showed that the proposed model outperformed all comparative models in terms of total misclassification cost. We therefore expect this study to contribute to building cost-effective intelligent intrusion detection systems.
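A compact sketch of the first step, the GA search for combining weights over the four classifiers' probability outputs, minimizing the classification error rate. The per-model probability columns are synthetic stand-ins (a real run would use the LOGIT/DT/ANN/SVM outputs), and the GA operators shown are one simple textbook choice, not the paper's exact configuration.

```python
# Sketch: a small genetic algorithm searching for convex combining weights
# over four classifiers' probability outputs to minimize classification error.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)  # 1 = intrusion, 0 = normal
# Four imperfect "model" probability columns, each informative about y
probs = np.clip(y[:, None] * 0.55 + rng.normal(0.22, 0.25, (n, 4)), 0, 1)

def error_rate(w):
    w = w / w.sum()  # normalize to convex combining weights
    return np.mean(((probs @ w) >= 0.5).astype(int) != y)

pop = rng.random((40, 4)) + 1e-6  # initial population of weight vectors
for generation in range(100):
    fitness = np.array([error_rate(w) for w in pop])
    elite = pop[np.argsort(fitness)[:10]]  # selection: keep the 10 best
    children = []
    for _ in range(30):
        a, b = elite[rng.integers(0, 10, 2)]
        child = np.where(rng.random(4) < 0.5, a, b)  # uniform crossover
        mutate = rng.random(4) < 0.2                 # mutation mask
        child = np.clip(child + mutate * rng.normal(0, 0.1, 4), 1e-6, 1)
        children.append(child)
    pop = np.vstack([elite, children])

best = min(pop, key=error_rate)
print("combining weights:", np.round(best / best.sum(), 3),
      "  error rate:", round(error_rate(best), 4))
```

The second step, choosing the cost-minimizing classification threshold, can reuse the grid search sketched after the first abstract in this list.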

Development of a Storage Level and Capacity Monitoring and Forecasting Techniques in Yongdam Dam Basin Using High Resolution Satellite Image (고해상도 위성자료를 이용한 용담댐 유역 저수위/저수량 모니터링 및 예측 기술 개발)

  • Yoon, Sunkwon; Lee, Seongkyu; Park, Kyungwon; Jang, Sangmin; Rhee, Jinyung
    • Korean Journal of Remote Sensing, v.34 no.6_1, pp.1041-1053, 2018
  • In this study, a real-time storage level and capacity monitoring and forecasting system for the Yongdam Dam watershed was developed using high-resolution satellite imagery. Drought indices derived from satellite data, such as the Standardized Precipitation Index (SPI), were used for storage level monitoring under drought conditions, and storage volume was predicted with a statistical method based on Principal Component Analysis (PCA) within Singular Spectrum Analysis (SSA). The correlation coefficient between storage level and SPI(3) was high (CC=0.78), and the monitoring and predictability of storage level were diagnosed using the drought index calculated from satellite data. In the SSA-based principal component analysis, the correlations between SPI(3) and the Reconstructed Components (RCs) were high (CC=0.87 to 0.99), and the correlations of the RC data with the Normalized Water Surface Level (N-W.S.L.) were also high (CC=0.83 to 0.97). For the high-resolution imagery, we developed a water detection algorithm that applies an exponential method to the Multi-Spectral Instrument (MSI) sensor of the Sentinel-2 satellite to monitor changes in storage level; Sentinel-2 imagery of the Yongdam Dam watershed from 2016 to 2018 was used for water surface area detection. On this basis, we demonstrated the feasibility of a real-time drought monitoring system using high-resolution water surface area detection from Sentinel-2 imagery. The results of this study can be applied to estimating reservoir volume from various satellite observations, which can be used for monitoring and estimating hydrological droughts in ungauged areas.
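Water-surface extraction from Sentinel-2 MSI can be illustrated with a simple NDWI baseline; note that this is a stand-in for the paper's own detector, which applies an exponential method not reproduced here. The band choices (B03 green, B08 NIR), the zero threshold, and the synthetic arrays are all assumptions.

```python
# Sketch: a simple water-surface mask from Sentinel-2 MSI bands as a baseline
# for reservoir-area monitoring (NDWI thresholding). Arrays are synthetic
# stand-ins for band 3 (green) and band 8 (NIR) surface reflectance.
import numpy as np

rng = np.random.default_rng(0)
green = rng.uniform(0.02, 0.15, (500, 500))  # B03 surface reflectance
nir = rng.uniform(0.01, 0.40, (500, 500))    # B08 surface reflectance

ndwi = (green - nir) / (green + nir)  # McFeeters NDWI: water pixels > 0
water = ndwi > 0.0

pixel_area_m2 = 10 * 10  # the 10 m MSI bands
area_km2 = water.sum() * pixel_area_m2 / 1e6
print(f"detected water surface: {area_km2:.2f} km^2")
```

Tracking this detected area through a time series of scenes gives the water-surface record that the storage level/volume relationship is built on.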