• Title/Summary/Keyword: Number magnitude estimation

Search Results: 56

Spatio-temporal dependent errors of radar rainfall estimate for rainfall-runoff simulation

  • Ko, Dasang;Park, Taewoong;Lee, Taesam;Lee, Dongryul
    • Proceedings of the Korea Water Resources Association Conference / 2016.05a / pp.164-164 / 2016
  • Radar rainfall estimates have been widely used to approximate rainfall amounts and to predict flood risks. However, radar rainfall estimates contain a number of error sources, such as beam blockage and ground clutter, that hinder their application to hydrological flood forecasting. Moreover, it has been reported in the literature that these errors are inter-correlated spatially and temporally. Therefore, in the current study, we tested the influence of spatio-temporal errors in radar rainfall estimates. Spatio-temporal errors were simulated with a stochastic simulation model, the Multivariate Autoregressive (MAR) model. For runoff simulation, the Nam River basin in South Korea was modeled with the distributed rainfall-runoff model Vflo. The results indicated that spatio-temporally dependent errors caused much higher variations in peak discharge than spatially dependent errors alone. To further investigate the effect of the magnitude of temporal correlation among radar errors, different magnitudes of temporal correlation were employed in the rainfall-runoff simulation. The results indicated that stronger correlation caused higher variation in peak discharge. We conclude that measures to reduce temporal and spatial correlation must be taken in addition to correcting the biases in radar rainfall estimates. Acknowledgements: This research was supported by a grant from a Strategic Research Project (Development of Flood Warning and Snowfall Estimation Platform Using Hydrological Radars), funded by the Korea Institute of Construction Technology.
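
The error-field generation the abstract describes can be sketched as a first-order multivariate autoregressive process. The grid size, correlation parameters, and the zero-mean additive error form below are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

def simulate_mar_errors(n_steps, n_cells, rho_t=0.6, rho_s=0.5, sigma=0.3, seed=0):
    """Sketch of a first-order multivariate autoregressive (MAR(1)) process
    generating error fields with lag-1 temporal correlation rho_t and
    exponentially decaying spatial correlation rho_s between grid cells."""
    rng = np.random.default_rng(seed)
    # Spatial covariance: correlation decays with cell separation.
    dist = np.abs(np.subtract.outer(np.arange(n_cells), np.arange(n_cells)))
    chol = np.linalg.cholesky(sigma**2 * rho_s**dist)
    errors = np.zeros((n_steps, n_cells))
    for t in range(1, n_steps):
        innovation = chol @ rng.standard_normal(n_cells)
        # AR(1) recursion; the scaling keeps the stationary variance near sigma**2.
        errors[t] = rho_t * errors[t - 1] + np.sqrt(1 - rho_t**2) * innovation
    return errors

fields = simulate_mar_errors(n_steps=100, n_cells=10)
print(fields.shape)  # (100, 10)
```

Setting `rho_t` to zero recovers the spatially-dependent-only error case the study compares against.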

A simple model for ground surface settlement induced by braced excavation subjected to a significant groundwater drawdown

  • Zhang, Runhong;Zhang, Wengang;Goh, A.T.C.;Hou, Zhongjie;Wang, Wei
    • Geomechanics and Engineering / v.16 no.6 / pp.635-642 / 2018
  • Braced excavation systems are commonly required to ensure stability in the construction of basements for shopping malls, underground transportation and other habitation facilities. For excavations in deposits of soft clays or residual soils, stiff retaining wall systems such as diaphragm walls are commonly adopted to restrain ground movements and wall deflections in order to prevent damage to surrounding buildings and utilities. The ground surface settlement behind the excavation is closely associated with the magnitude of basal heave and the wall deflections, and is also greatly influenced by possible groundwater drawdown caused by potential wall leakage, flow from beneath the wall, flow from perched water, and flow along the wall interface or through poor panel connections of less satisfactory construction quality. This paper numerically investigates the influences of excavation geometry, system stiffness, soil properties and groundwater drawdown on ground surface settlement, and develops a simplified Logarithm Regression model for estimating the maximum ground surface settlement. The settlements estimated by this model compare favorably with a number of published and instrumented records.
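
A Logarithm Regression model of the kind described can be sketched by fitting ln(settlement) linearly to excavation parameters. The two predictors and the data values below are hypothetical, standing in for the paper's full set of geometry, stiffness, soil and drawdown variables.

```python
import numpy as np

# Hypothetical training records: excavation depth He (m), groundwater
# drawdown Hd (m), and observed maximum ground surface settlement (mm).
He = np.array([10.0, 14.0, 18.0, 22.0, 26.0])
Hd = np.array([2.0, 5.0, 4.0, 9.0, 7.0])
settlement = np.array([7.4, 13.7, 17.1, 38.9, 43.8])

# Logarithm regression: ln(settlement) is modeled as linear in the predictors.
X = np.column_stack([np.ones_like(He), He, Hd])
coef, *_ = np.linalg.lstsq(X, np.log(settlement), rcond=None)

def predict_settlement(he, hd):
    """Back-transform the log-linear fit to a settlement in mm."""
    return float(np.exp(coef @ np.array([1.0, he, hd])))

print(round(predict_settlement(20.0, 6.0), 1))
```

The log transform guarantees positive predicted settlements and lets each predictor act multiplicatively on the estimate.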

The Joint analysis of galaxy clustering and weak lensing from the Deep Lens Survey to constrain cosmology and baryonic feedback

  • Yoon, Mijin;Jee, M. James;Tyson, J. Tony
    • The Bulletin of The Korean Astronomical Society / v.44 no.1 / pp.79.2-79.2 / 2019
  • Based on three types of 2-point statistics (galaxy clustering, galaxy-galaxy lensing, and cosmic shear power spectra) from the Deep Lens Survey (DLS), we constrain cosmology and baryonic feedback. The DLS is a deep survey, a so-called precursor to LSST, reaching down to ~27th magnitude in BVRz' over 20 deg2. To measure the three power spectra, we choose two lens galaxy populations centered at z ~0.27 and 0.54 and two source galaxy populations centered at z ~0.64 and 1.1, with more than 1 million galaxies. We perform a number of consistency tests to confirm the reliability of the measurements. We calibrated the photo-z estimation of the lens galaxies and validated the result with galaxy cross-correlation measurements. The B-mode signals, indicative of potential systematics, are found to be consistent with zero. The two cosmological results independently obtained from the cosmic shear and the galaxy clustering + galaxy-galaxy lensing measurements agree well with each other. We also verify that the cosmological results from bright and faint sources are consistent. While some weak lensing surveys show a tension with Planck, the DLS constraint on S8 agrees well with the Planck result. Using the HMcode approach derived from the OWLS simulation, we constrain the strength of baryonic feedback. The DLS results hint at the possibility that the actual AGN feedback may be stronger than the one implemented in the current state-of-the-art simulations.

Estimation of the soil liquefaction potential through the Krill Herd algorithm

  • Yetis Bulent Sonmezer;Ersin Korkmaz
    • Geomechanics and Engineering / v.33 no.5 / pp.487-506 / 2023
  • Looking from the past to the present, earthquakes can be said to be the type of natural disaster with the most casualties. Soil liquefaction, which occurs under repeated loads such as earthquakes, plays a major role in these casualties. In this study, analytical equation models were developed to predict the probability of occurrence of soil liquefaction. The parameters effective in liquefaction were determined from 170 data sets taken from real field conditions of past earthquakes, using the WEKA decision tree. Linear, Exponential, Power, and Quadratic models were then developed from the identified earthquake and ground parameters using the Krill Herd algorithm. Among the models including the earthquake magnitude, fine grain ratio, effective stress, standard penetration test blow count, and maximum ground acceleration parameters, the Exponential model gave the most successful results in predicting sites with and without the occurrence of liquefaction. The proposed model enables researchers to predict the liquefaction potential of the soil in advance under different earthquake scenarios. Accordingly, countermeasures can be implemented in regions with high liquefaction potential, which can significantly reduce casualties in the event of a new earthquake.

A Study on the Estimation Method of Operational Delay Cost in Bus Accidents using Transportation Card Data (교통카드자료를 이용한 버스 사고 시 운행지연비용 산정 방법론에 관한 연구)

  • Seo, Ji-Hyeon;Lee, Sang-Soo;Nam, Doohee
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.17 no.5 / pp.29-38 / 2018
  • This study proposes a method for estimating the operational delay cost of bus accidents using transportation card data. The average operational delay time caused by bus accidents was surveyed at 12 bus companies through interviews. The operational delay cost was then estimated using actual traffic accident data and transportation card data. Results showed that the average loss time per bus accident was 45 minutes. In addition, a total occupancy of 659 was estimated for the investigated accidents using transportation card data, resulting in a total loss time of 494.25 hours. The estimated operational delay cost was 186.9 thousand won per accident, which was 6.37% of the social agency cost. The magnitude of this figure implies that operational delay cost could have a significant impact on traffic accident cost if included.
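
The loss-time arithmetic in the abstract can be checked directly: 659 passenger trips, each delayed by the surveyed average of 45 minutes, give the reported 494.25 hours.

```python
avg_delay_min = 45       # surveyed average operational delay per bus accident
total_occupancy = 659    # passengers on the affected buses, from card data

total_loss_hours = total_occupancy * avg_delay_min / 60
print(total_loss_hours)  # 494.25
```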

Analysis of error source in subjective evaluation results on Taekwondo Poomsae: Application of generalizability theory (태권도 품새 경기의 주관적 평가결과의 오차원 분석: 일반화가능도 이론 적용)

  • Cho, Eun Hyung
    • Journal of the Korean Data and Information Science Society / v.27 no.2 / pp.395-407 / 2016
  • This study applies generalizability (G) theory to estimate the reliability of scores assigned by raters across Taekwondo Poomsae rating categories. Taking the number of competition days and the number of raters as multiple error sources, we analyzed the error sources through the relative magnitudes of the error variances of the factors and their interactions, and conducted a D-study based on the G-study results to determine the optimal measurement conditions. The results were as follows. For the accuracy category, the variance component estimates showed that the rater facet contributed the largest error, followed by the subject-by-rater interaction and the subject facet; for the expression category, the interaction contributed the largest error, followed by the subject and rater facets. Finally, the generalizability coefficients estimated in the D-study showed that the optimal measurement condition was eight raters for the accuracy category, while stable reliability for the expression category was obtained with seven raters.
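
The D-study logic above follows the standard generalizability-coefficient formula for a subject-by-rater design; the variance components below are placeholders, not the study's actual estimates.

```python
def g_coefficient(var_subject, var_rel_error, n_raters):
    """Generalizability coefficient E(rho^2) for a subject-by-rater design:
    the relative error variance shrinks as more raters are averaged."""
    return var_subject / (var_subject + var_rel_error / n_raters)

# Hypothetical variance components: subject, and subject-by-rater
# interaction confounded with residual error.
for n in (4, 7, 8):
    print(n, round(g_coefficient(2.0, 3.0, n), 3))
```

Tabulating the coefficient over candidate rater counts is exactly how a D-study picks the smallest panel that reaches an acceptable reliability.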

A Study on Improving the National Highway Traffic Counts System : With Focus on Short Duration Counts and Continuous Counts (일반국도 교통량조사의 조사 유형별 개선 방안)

  • Lee, Sang Hyup;Ha, Jung Ah;Yoon, Taekwan
    • KSCE Journal of Civil and Environmental Engineering Research / v.32 no.3D / pp.205-212 / 2012
  • The national highway traffic counts system consists of short duration counts and continuous counts. Unlike continuous counts, short duration counts are collected over a period of only a few days, so the deviation of the collected data from AADT varies depending on when data collection takes place. Therefore, this study sought the best months and days of data collection for each highway classification in order to enhance the accuracy of AADT estimation. Continuous counts, the other type in the national traffic counts system, are collected over the full 365-day period using a permanent traffic counter. It is therefore necessary to keep the number of days on which the counter malfunctions to a minimum in order to maintain data accuracy. However, permanent traffic counters malfunction from time to time due to various causes and fail to collect data. Therefore, this study also examined whether the age of the counter, the ratio of heavy vehicle volume to total traffic volume, and similar factors could be direct causes of counter malfunction, based on the number of maintenance events over a given period.
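
The dependence of short-count accuracy on when the count is taken is usually handled with adjustment factors; a minimal sketch, assuming the common multiplicative-factor form (the factor values here are hypothetical, and in practice come from continuous-count stations of the same highway classification):

```python
def estimate_aadt(short_count_adt, monthly_factor, daily_factor):
    """Expand a short-duration average daily count to an AADT estimate
    by correcting for the month and day-of-week of the count."""
    return short_count_adt * monthly_factor * daily_factor

# Hypothetical factors for a weekday count taken in a busier-than-average month.
aadt = estimate_aadt(12500, 0.96, 1.02)
print(round(aadt))  # 12240
```

Choosing count days whose factors are close to 1.0 is precisely what minimizes the deviation from AADT that the study investigates.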

A deep and High-resolution Study of Ultra-diffuse Galaxies in Distant Massive Galaxy Clusters

  • Lee, Jeong Hwan;Kang, Jisu;Jang, In Sung;Lee, Myung Gyoon
    • The Bulletin of The Korean Astronomical Society / v.44 no.1 / pp.38.4-38.4 / 2019
  • Ultra-diffuse galaxies (UDGs) are intriguing in the sense that they are much larger than dwarf galaxies but have much lower surface brightness than normal galaxies. To date, UDGs have been found only in the local universe. Taking advantage of deep and high-resolution HST images, we search for UDGs in massive galaxy clusters in the distant universe. In this work, we present our search results for UDGs in three massive clusters of the Hubble Frontier Fields: Abell 2744 (z=0.308), Abell S1063 (z=0.348), and Abell 370 (z=0.375). These clusters are the most distant and massive among the host systems of known UDGs. The color-magnitude diagrams of these clusters show that UDGs are mainly located at the faint end of the red sequence. This means that most UDGs in these clusters consist of old stars. Interestingly, we found a few blue UDGs, which implies that they had recent star formation. The radial number densities of UDGs clearly decrease in the central region of the clusters, in contrast to those of bright galaxies, which keep rising. This implies that a large fraction of UDGs in the central region were tidally disrupted. These features are consistent with those of UDGs in nearby galaxy clusters. We estimate the total number of UDGs (N(UDG)) in each cluster. The abundance of UDGs shows a tight relation with the virial masses (M_200) of their host systems: M_200 ∝ N(UDG)^(1.01±0.05). This slope is found to be very close to one, indicating that the formation efficiency of UDGs does not significantly depend on the host environment. Furthermore, estimation of the dynamical masses of UDGs indicates that most UDGs have dwarf-like masses (M_200 < 10^11 M_Sun), but a few UDGs have L*-like masses (M_200 > 10^11 M_Sun). In summary, UDGs in distant massive clusters are found to be similar to those in the local universe.

Epicenter Estimation Using Real-Time Event Packet of Quanterra digitizer (Quanterra 기록계의 실시간 이벤트 패킷을 이용한 진앙 추정)

  • Lim, In-Seub;Sheen, Dong-Hoon;Shin, Jin-Soo;Jung, Soon-Key
    • Geophysics and Geophysical Exploration / v.12 no.4 / pp.316-327 / 2009
  • A standard for national seismological observatories was proposed in 1999. Since then, Quanterra digitizers have been installed and operated at almost all seismic stations belonging to the major seismic monitoring organizations. Quanterra digitizers produce and transmit real-time event packets and data packets. We investigated the characteristics of event packets and the arrival time of each channel's data packets at the data center, and developed packet selection criteria using the signal-to-noise ratio (SNR) and signal period derived from real-time event packets based on 100 samples-per-second (sps) velocity data. Epicenter estimation using the time information of the selected event packets was then performed and tested. A series of experiments showed that event packets were received approximately 3~4 seconds earlier than data packets, and that the number of event packets was only 0.3% of that of data packets. Only about 5% of all event packets were selected as being related to P waves of real earthquakes. Using the selected event packets, we could estimate an epicenter with a misfit of less than 10 km within 20 seconds for local earthquakes of magnitude 2.5 or greater.
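
The selection step the abstract describes amounts to screening packets by SNR and signal period before their times are passed to location; a minimal sketch, where the field names, station codes, and thresholds are illustrative assumptions rather than the paper's calibrated criteria:

```python
def select_packets(packets, snr_min=10.0, period_max=1.0):
    """Keep only event packets whose SNR is high and whose dominant signal
    period is short, as expected for a local P-wave onset."""
    return [p for p in packets if p["snr"] >= snr_min and p["period"] <= period_max]

packets = [
    {"sta": "STA1", "snr": 25.0, "period": 0.4, "t": 12.31},  # likely P arrival
    {"sta": "STA2", "snr": 3.0,  "period": 2.5, "t": 12.90},  # noise burst
    {"sta": "STA3", "snr": 18.0, "period": 0.6, "t": 13.05},  # likely P arrival
]
picks = select_packets(packets)
print([p["sta"] for p in picks])  # ['STA1', 'STA3']
```

The surviving packets' times (`t`) would then feed the epicenter grid search or inversion; discarding low-SNR, long-period packets is what keeps false triggers from corrupting the location.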

A Novel Test Structure for Process Control Monitor for Un-Cooled Bolometer Area Array Detector Technology

  • Saxena, R.S.;Bhan, R.K.;Jalwania, C.R.;Lomash, S.K.
    • JSTS: Journal of Semiconductor Technology and Science / v.6 no.4 / pp.299-312 / 2006
  • This paper presents the results of a novel test structure serving as a process control monitor for the uncooled IR detector technology of microbolometer arrays. The proposed test structure is based on a resistive network configuration. A theoretical model for the resistance of this network has been developed using the 'Compensation' and 'Superposition' network theorems. The theoretical results for the proposed resistive network have been verified by wired hardware testing as well as with an actual 16×16 networked bolometer array. The proposed structure uses a simple two-level metal process and is easy to integrate into a standard CMOS process line. It closely imitates the performance of an actual fabricated area array, and it uses only 32 pins instead of the 512 required by the conventional method for a 16×16 array. Further, it has been demonstrated that defective or faulty elements can be identified vividly using the extraction matrix, whose values are otherwise quite similar (within an error of 0.1%), which verifies the algorithm in the small-variation case (~1% variation). For example, an element intentionally damaged electrically was shown to have a difference magnitude much higher than the rest of the elements (1.45 a.u. compared to ~0.25 a.u. for the others), confirming that it is defective. Further, for devices with non-uniformity ≤ 10%, both the actual non-uniformity and the faults are predicted well. Finally, using our analysis, we have been able to grade (pass or fail) 60 actual devices based on quantitative estimation of non-uniformity ranging from < 5% to > 20%, and to identify the number of bad elements, ranging from 0 to more than 15, in these devices.
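
The fault-identification step can be illustrated with a simple outlier test on the extraction-matrix difference magnitudes. The threshold rule and the toy values below (one damaged element at 1.45 a.u. among elements near 0.25 a.u., echoing the paper's example) are illustrative, not the paper's actual extraction procedure.

```python
import numpy as np

def flag_defective(diff, k=2.0):
    """Flag elements whose difference magnitude stands out from the rest
    (mean + k*std threshold; the rule form is an assumption)."""
    diff = np.asarray(diff, dtype=float)
    threshold = diff.mean() + k * diff.std()
    return np.flatnonzero(diff > threshold)

# Toy difference magnitudes for six elements; index 3 is the damaged one.
diffs = [0.24, 0.26, 0.25, 1.45, 0.23, 0.27]
print(flag_defective(diffs))  # [3]
```

For a full 16×16 device the same test would run over the flattened 256-element difference matrix, separating genuinely faulty elements from ordinary process non-uniformity.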