• Title/Summary/Keyword: Temporal data modeling


Surface Deformation Measurement of the 2020 Mw 6.4 Petrinja, Croatia Earthquake Using Sentinel-1 SAR Data

  • Achmad, Arief Rizqiyanto; Lee, Chang-Wook
    • Korean Journal of Remote Sensing, v.37 no.1, pp.139-151, 2021
  • At the end of December 2020, an earthquake of about Mw 6.4 struck Sisak-Moslavina County, Croatia. The town of Petrinja was the most affected area, with a major power outage and many collapsed buildings. The damage also reached neighboring countries such as Bosnia and Herzegovina and Slovenia. In light of this devastating event, a deformation map of the earthquake can be generated from Sentinel-1 SAR remote sensing imagery. InSAR can provide such a deformation map, but it is affected by noise factors that obscure the exact deformation values needed for further research. In this study, 17 SAR scenes from the Sentinel-1 satellite were therefore used to generate multi-temporal interferometry with the Stanford Method for Persistent Scatterers (StaMPS). A mean deformation map compensated for error factors such as atmospheric, topographic, temporal, and baseline errors was generated. The Okada model was then applied to the mean deformation result to model the earthquake; the resulting deformation is dominated by right-lateral strike-slip motion of about 3 m. The Okada source is 11.63 km long, 2.45 km wide, and 5.46 km deep, with a dip angle of about 84.47° and a strike angle of about 142.88° from north. The results of this modeling can serve as learning material for understanding the seismic activity of the 2020 Petrinja, Croatia earthquake.
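
As a rough illustration of the multi-temporal averaging step described above (not the StaMPS implementation itself), the following sketch stacks unwrapped line-of-sight deformation maps and averages them per pixel; the array shapes and names are assumptions for the example.

```python
import numpy as np

def mean_deformation(unwrapped_stack):
    """Average a (n_scenes, rows, cols) stack of line-of-sight deformation
    maps, ignoring pixels masked as NaN (e.g., decorrelated areas)."""
    return np.nanmean(np.asarray(unwrapped_stack, dtype=float), axis=0)

# Example: 17 Sentinel-1 scenes on a toy 100 x 100 grid.
stack = np.random.randn(17, 100, 100)
print(mean_deformation(stack).shape)  # (100, 100)
```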

Adaptive Partitioning for Efficient Query Support

  • Yun, Hong-Won
    • Journal of Information and Communication Convergence Engineering, v.5 no.4, pp.369-373, 2007
  • RFID systems generate large volumes of data, which can lead to slower queries. To achieve better query performance, the data can be partitioned into active and non-active sets. In this paper, we propose two partitioning approaches for efficient query support: an average-period-plus-delta partition and an adaptive average-period partition. We also present the system architecture for managing active and non-active data, together with the logical database schema. The data manager checks the active partition and moves objects from the active store to an archive store according to the average period plus delta or the adaptive average period. Our experiments show the performance of the proposed partitioning methods.
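
A minimal sketch of the partitioning idea, with hypothetical class and method names: readings idle for longer than the average period plus a delta are moved from the active store to the archive store, so routine queries mostly touch the small active partition.

```python
import time

class PartitionedStore:
    def __init__(self, avg_period_s, delta_s):
        self.threshold = avg_period_s + delta_s  # average period plus delta
        self.active, self.archive = {}, {}

    def insert(self, tag_id, reading):
        self.active[tag_id] = (time.time(), reading)

    def repartition(self, now=None):
        """Move objects idle longer than the threshold to the archive store."""
        now = now or time.time()
        for tag_id, (ts, reading) in list(self.active.items()):
            if now - ts > self.threshold:
                self.archive[tag_id] = self.active.pop(tag_id)

    def query(self, tag_id):
        # Recent queries mostly hit the small active partition.
        return self.active.get(tag_id) or self.archive.get(tag_id)
```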

Discussion for the Effectiveness of Radar Data through Distributed Storm Runoff Modeling (분포형 홍수유출 모델링을 통한 레이더 강우자료의 효과분석)

  • Ahn, So Ra; Jang, Cheol Hee; Kim, Sang Ho; Han, Myoung Sun; Kim, Jin Hoon; Kim, Seong Joon
    • Journal of the Korean Society of Agricultural Engineers, v.55 no.6, pp.19-30, 2013
  • This study evaluates the use of dual-polarization radar data for storm runoff modeling in the Namgang Dam watershed (2,293 km²) using KIMSTORM (Grid-based KIneMatic wave STOrm Runoff Model). Bisl dual-polarization radar data for three typhoons (Khanun, Bolaven, Sanba) and one heavy-rain event in 2012 were obtained from the Han River Flood Control Office. Although the areal-average radar rainfall was overall lower than the ground data, the spatio-temporal pattern between the two datasets agreed well, with a coefficient of determination (R²) of 0.97 and a bias of 0.84. For the heavy-rain case, the radar data captured rain that passed between the ground stations. KIMSTORM was set up at 500 × 500 m resolution with a total of 21,372 cells (156 rows × 137 columns) for the watershed. Using 28 ground rainfall records, the model was calibrated against discharge data at 5 stations, giving an R² of 0.85, a Nash-Sutcliffe model efficiency (ME) of 0.78, and a volume conservation index (VCI) of 1.09. Calibration with radar rainfall gave an R² of 0.85, an ME of 0.79, and a VCI of 1.04; the VCI with radar data was thus improved by 5%.
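
The comparison statistics quoted above can be reproduced with a short calculation; the sketch below assumes bias is the ratio of radar to gauge totals (so 0.84 means radar underestimates), and the sample values are invented for illustration.

```python
import numpy as np

def r_squared(obs, sim):
    """Coefficient of determination as the squared Pearson correlation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.corrcoef(obs, sim)[0, 1] ** 2

def bias(obs, sim):
    """Ratio of simulated to observed totals; < 1 means underestimation."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return sim.sum() / obs.sum()

gauge = np.array([2.0, 5.5, 11.0, 7.2, 3.1])  # hypothetical areal means (mm)
radar = np.array([1.7, 4.9, 9.6, 6.3, 2.4])
print(round(r_squared(gauge, radar), 3), round(bias(gauge, radar), 3))
```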

Modeling of Hydrologic Time Series using Stochastic Neural Networks Approach (추계학적 신경망 접근법을 이용한 수문학적 시계열의 모형화)

  • Kim, Seong-Won; Kim, Jeong-Heon; Park, Gi-Beom
    • Proceedings of the Korea Water Resources Association Conference, 2010.05a, pp.1346-1349, 2010
  • The goal of this research is to apply neural network models to the disaggregation of pan evaporation (PE) data in the Republic of Korea. The neural network models are a generalized regression neural network model (GRNNM) and a multilayer perceptron neural network model (MLP-NNM). Disaggregation here means dividing the yearly PE data into monthly PE data. The performance of the neural network models is assessed for both training and testing, each carried out on historic, generated, and mixed data. From this research, we evaluate the impact of the GRNNM and MLP-NNM on the disaggregation of nonlinear time series data. We can furthermore construct credible monthly PE data from the disaggregation of the yearly PE data, and suggest a methodology for the irrigation and drainage network system.
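
A minimal sketch of the disaggregation task on synthetic data: a multilayer perceptron (standing in for the paper's MLP-NNM; the GRNNM would be analogous) maps a yearly PE total to its 12 monthly values. The network size and the synthetic seasonal shape are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
seasonal = np.sin(np.linspace(0, 2 * np.pi, 12)) + 1.5   # monthly PE shape
monthly = seasonal * rng.uniform(60, 100, size=(40, 1))  # 40 synthetic years
yearly = monthly.sum(axis=1, keepdims=True)              # yearly totals

# Train on 30 years, test on the remaining 10 (multi-output regression).
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(yearly[:30], monthly[:30])
pred = model.predict(yearly[30:])
print(np.abs(pred - monthly[30:]).mean())                # mean absolute error
```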


Modeling the 1997 High-Ozone Episode in the Greater Seoul Area with Densely-Distributed Meteorological Observations (상세한 기상관측 자료를 이용한 1997년 서울.수도권 고농도 오존 사례의 모델링)

  • 김진영; 김영성
    • Journal of Korean Society for Atmospheric Environment, v.17 no.1, pp.1-17, 2001
  • The high-ozone episode in the Greater Seoul Area during the period of July 27 to August 1, 1997 was modeled with the CIT (California Institute of Technology) three-dimensional photochemical model. Emission data were prepared by scaling the NIER (1994) data through an optimization method using VOC measurements from August 1997 and EKMA (Empirical Kinetic Modeling Approach). Two sets of meteorological data were prepared by the diagnostic routine, a part of the CIT model: one utilized only observations from the surface weather stations, while the other also utilized observations from the automatic weather stations, which are more densely distributed. The results showed that utilizing observations from the automatic weather stations could represent fine variations in the wind field, such as those caused by topography. A better wind field gave better peak ozone values and a more reasonable spatial distribution of ozone concentrations. Nevertheless, many differences remained between predictions and observations, particularly for primary pollutants such as NOx and CO, probably due to inaccuracies in the emission data that could not resolve temporal and spatial variations.
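
As a loose illustration of how denser station observations refine a diagnosed wind field (the CIT diagnostic routine itself is more elaborate), the following sketch interpolates scattered station winds onto grid points by inverse-distance weighting; all names and coordinates are illustrative.

```python
import numpy as np

def idw_wind(stn_xy, stn_uv, grid_xy, power=2.0):
    """stn_xy: (n, 2) station coordinates; stn_uv: (n, 2) wind components
    (u, v); grid_xy: (m, 2) grid coordinates. Returns (m, 2) winds."""
    d = np.linalg.norm(grid_xy[:, None, :] - stn_xy[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power  # inverse-distance weights
    return ((w[:, :, None] * stn_uv[None, :, :]).sum(axis=1)
            / w.sum(axis=1, keepdims=True))

stations = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
winds = np.array([[2.0, 1.0], [3.0, -1.0], [1.0, 2.0]])   # (u, v) per station
grid = np.array([[4.0, 3.0], [7.0, 5.0]])
print(idw_wind(stations, winds, grid))
```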


Detection and Correction of Noisy Pixels Embedded in NDVI Time Series Based on the Spatio-temporal Continuity (시공간적 연속성을 이용한 오염된 식생지수(GIMMS NDVI) 화소의 탐지 및 보정 기법 개발)

  • Park, Ju-Hee; Cho, A-Ra; Kang, Jeon-Ho; Suh, Myoung-Seok
    • Atmosphere, v.21 no.4, pp.337-347, 2011
  • In this paper, we developed a method for detecting and correcting noisy pixels embedded in time series of normalized difference vegetation index (NDVI) data, based on the spatio-temporal continuity of vegetation conditions. The method was applied to the 25-year (1982-2006) GIMMS (Global Inventory Modeling and Mapping Study) NDVI dataset over the Korean peninsula, which has a spatial resolution of 8 × 8 km² and a temporal frequency of 15 days; a land cover map over East Asia was also used. Noisy pixels are detected by a temporal continuity check against reference values and dynamic threshold values that depend on season and location. In general, the number of noisy pixels is larger during summer than in other seasons. The detected noisy pixels are then corrected by an iterative method until all noisy pixels are corrected: first, a noisy pixel is replaced by the weighted arithmetic mean of the two adjacent NDVI values when both are normal; the remaining noisy pixels are then corrected by the distance-weighted average of the NDVI of pixels with the same land cover. After correction, the NDVI values increased by 5% and their variances decreased by 50%. Compared to another correction method, this method gives better results, especially when noisy pixels occur consecutively two or more times and the temporal change rate of NDVI is very high. This means that the correction method developed in this study is superior for reconstructing the maximum NDVI and the NDVI during the starting and falling seasons of vegetation growth.
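
A minimal sketch of the two-step procedure described above, with a fixed threshold standing in for the paper's season- and location-dependent dynamic thresholds: flag values that drop sharply below both temporal neighbours, then replace them with the mean of the two adjacent normal values (the land-cover-based spatial fallback is omitted).

```python
import numpy as np

def detect_noisy(ndvi, threshold=0.1):
    """Flag values falling more than `threshold` below both neighbours."""
    noisy = np.zeros(len(ndvi), dtype=bool)
    for t in range(1, len(ndvi) - 1):
        if ndvi[t] < ndvi[t - 1] - threshold and ndvi[t] < ndvi[t + 1] - threshold:
            noisy[t] = True
    return noisy

def correct(ndvi, noisy):
    """Replace a noisy value by the mean of two adjacent normal values."""
    out = np.asarray(ndvi, float).copy()
    for t in np.where(noisy)[0]:
        if not noisy[t - 1] and not noisy[t + 1]:  # both neighbours normal
            out[t] = 0.5 * (out[t - 1] + out[t + 1])
    return out

series = np.array([0.42, 0.55, 0.18, 0.60, 0.63])  # cloud-hit third value
print(correct(series, detect_noisy(series)))
```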

An Automatic Pattern Recognition Algorithm for Identifying the Spatio-temporal Congestion Evolution Patterns in Freeway Historic Data (고속도로 이력데이터에 포함된 정체 시공간 전개 패턴 자동인식 알고리즘 개발)

  • Park, Eun Mi; Oh, Hyun Sun
    • Journal of Korean Society of Transportation, v.32 no.5, pp.522-530, 2014
  • Spatio-temporal congestion evolution patterns can be reproduced using the VDS (Vehicle Detection System) historic speed datasets held by TMCs (Traffic Management Centers). Such datasets provide a pool of spatio-temporally experienced traffic conditions. Traffic flow patterns are known to recur in space and time, and even non-recurrent congestion caused by incidents follows patterns that depend on the incident conditions. This implies that the information should be useful for traffic prediction and traffic management. Traffic flow predictions are generally performed with black-box approaches such as neural networks and genetic algorithms. Black-box approaches are not designed to explain their modeling and reasoning process, nor to estimate the benefits and risks of implementing their solutions, so TMCs are reluctant to employ them despite numerous valuable articles. This research proposes a more readily understandable and intuitively appealing data-driven approach and develops an algorithm that identifies congestion patterns for recurrent and non-recurrent congestion management and information provision.
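
A minimal sketch of the data-driven idea on synthetic data: mark congested cells in a space-time speed matrix from VDS history, then group adjacent congested cells into one spatio-temporal congestion pattern; the 50 km/h threshold and grid dimensions are assumptions.

```python
import numpy as np
from scipy.ndimage import label

speed = np.random.uniform(40, 110, size=(288, 50))  # 5-min x detector grid
speed[100:140, 20:30] = 25                          # a synthetic congestion
congested = speed < 50                              # km/h threshold (assumed)
patterns, n = label(congested)                      # connected regions
print(n, "congestion pattern(s) found")
```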

Assessment of Natural Attenuation Processes in the Groundwater Contaminated with Trichloroethylene (TCE) Using Multi-Species Reactive Transport Modeling (다성분 반응 이동 모델링을 이용한 트리클로로에틸렌(TCE)으로 오염된 지하수에서의 자연저감 평가)

  • Jeen, Sung-Wook; Jun, Seong-Chun; Kim, Rak-Hyeon; Hwang, Hyoun-Tae
    • Journal of Soil and Groundwater Environment, v.21 no.6, pp.101-113, 2016
  • To properly manage and remediate groundwater contaminated with chlorinated hydrocarbons such as trichloroethylene (TCE), it is necessary to assess the natural attenuation processes of contaminants in the aquifer, along with an investigation of the contamination history and aquifer characterization. This study evaluated the natural attenuation of TCE at an industrial site in Korea by delineating hydrogeochemical characteristics along the flow path of the contaminated groundwater, calculating reaction rate constants for TCE and its degradation products, and applying geochemical and reactive transport modeling. The monitoring data showed that TCE tended to be transformed to cis-1,2-dichloroethene (cis-1,2-DCE) and further to vinyl chloride (VC) via microbial reductive dechlorination, although the extent was not significant. According to the modeling results, the temporal and spatial distribution of the TCE plume indicates that biodegradation plays the dominant role in the attenuation processes. This study provides a useful method for assessing natural attenuation in aquifers contaminated with chlorinated hydrocarbons and can be applied to other sites with similar hydrological, microbiological, and geochemical settings.
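
The reductive dechlorination chain TCE → cis-1,2-DCE → VC is commonly modeled as sequential first-order decay; the sketch below integrates such a chain, with placeholder rate constants rather than the site-specific values estimated in the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_tce, k_dce, k_vc = 0.010, 0.006, 0.003  # 1/day, hypothetical values

def chain(t, y):
    """Sequential first-order decay: TCE -> cis-1,2-DCE -> VC -> (ethene)."""
    tce, dce, vc = y
    return [-k_tce * tce,
            k_tce * tce - k_dce * dce,
            k_dce * dce - k_vc * vc]

sol = solve_ivp(chain, (0, 1000), [1.0, 0.0, 0.0], t_eval=[0, 250, 500, 1000])
print(np.round(sol.y, 3))  # relative concentrations over time
```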

Discrete HMM Training Algorithm for Incomplete Time Series Data (불완전 시계열 데이터를 위한 이산 HMM 학습 알고리듬)

  • Sin, Bong-Kee
    • Journal of Korea Multimedia Society, v.19 no.1, pp.22-29, 2016
  • The Hidden Markov Model (HMM) is one of the most successful and popular tools for modeling real-world sequential data. Real-world signals come in a variety of shapes and variabilities, among which temporal and spectral variability are the prime targets of the HMM. A problem that is gaining increasing attention is characterizing missing observations in incomplete data sequences, i.e., sequences with holes or omitted measurements. The standard HMM algorithms were developed for complete data with a measurement at each regular point in time. This paper presents a modified algorithm for a discrete HMM that allows a substantial number of omissions in the input sequence. It is essentially a variant of Baum-Welch that explicitly considers isolated omissions or a number of omissions in succession. The algorithm has been tested on online handwriting samples expressed in direction codes. An extensive set of experiments shows that the HMMs so trained are highly flexible, with consistent and robust performance regardless of the amount of omissions.
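
A minimal sketch of the key modification described above, applied to the forward pass: a missing observation contributes no emission term (its emission probability is marginalized to 1), while transitions still advance. The two-state model is illustrative.

```python
import numpy as np

def forward_with_missing(obs, A, B, pi):
    """obs: sequence of symbol indices, with None marking an omission."""
    alpha = pi * (B[:, obs[0]] if obs[0] is not None else 1.0)
    for o in obs[1:]:
        alpha = alpha @ A          # transitions always advance
        if o is not None:          # emit only when actually observed
            alpha = alpha * B[:, o]
    return alpha.sum()             # sequence likelihood

A = np.array([[0.9, 0.1], [0.2, 0.8]])  # transition probabilities
B = np.array([[0.7, 0.3], [0.4, 0.6]])  # discrete emission probabilities
pi = np.array([0.5, 0.5])               # initial state distribution
print(forward_with_missing([0, None, 1, 1], A, B, pi))
```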

A Simple Syntax for Complex Semantics

  • Lee, Kiyong
    • Proceedings of the Korean Society for Language and Information Conference, 2002.02a, pp.2-27, 2002
  • As part of a long-range project that aims at establishing database-theoretic semantics as a model of computational semantics, this presentation focuses on the development of a syntactic component for processing strings of words or sentences to construct semantic data structures. For design and modeling purposes, the present treatment is restricted to the analysis of some problematic constructions of Korean involving semi-free word order, conjunction and temporal anchoring, and adnominal modification and antecedent binding. The present work relies heavily on Hausser's (1999, 2000) SLIM theory of language, which is based on surface compositionality, time-linearity, and two other conditions on natural language processing. Time-linear syntax for natural language has been shown to be conceptually simple and computationally efficient. The associated semantics is complex, however, because it must deal with situated language involving interacting multi-agents. Nevertheless, by processing input word strings in a time-linear mode, the syntax can incrementally construct the semantic structures needed for relevant queries and valid inferences. The fragment of Korean syntax will be implemented in Malaga, a C-type implementation language that was enriched for both programming and debugging purposes and made particularly suitable for implementing Left-Associative Grammar. This presentation shows how the system of syntactic rules with constraining subrules processes Korean sentences in a step-by-step, time-linear manner to incrementally construct semantic data structures that mainly specify relations with their argument, temporal, and binding structures.
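
As a toy illustration of time-linear, left-associative composition (not the paper's Malaga grammar), the parser below always combines the sentence start built so far with exactly the next word; the Korean word forms and the bracketing combine function are illustrative.

```python
def la_parse(words, combine):
    """Left-associative (time-linear) parse: fold the next word into the
    sentence start built so far, one step per word."""
    state = words[0]
    for w in words[1:]:
        state = combine(state, w)  # strictly left-associative step
    return state

# Toy combine: accumulate a bracketed structure incrementally.
result = la_parse(["Kim-i", "chaek-ul", "ilknunta"],
                  lambda s, w: f"({s} + {w})")
print(result)  # ((Kim-i + chaek-ul) + ilknunta)
```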
