• Title/Summary/Keyword: Magnitude Estimation

Damage Detection of Building Structures Using Ambient Vibration Measurement (자연진동을 이용한 건물의 건전도 평가)

  • Kim, Sang Yun;Kwon, Dae Hong;Yoo, Suk Hyeong;Noh, Sam Young;Shin, Sung Woo
    • KIEAE Journal
    • /
    • v.7 no.4
    • /
    • pp.147-152
    • /
    • 2007
  • Numerous non-destructive tests (NDT) for assessing the safety of real structures have been developed. System identification (SI) techniques using the dynamic responses and behaviors of structural systems have become an outstanding issue among researchers. However, conventional SI techniques have proven impractical for complex and tall buildings because accurate data on the magnitude or location of external loads are rarely available. In most SI approaches, information on both the input loading and the output responses must be known. In many cases, measuring the input information may consume most of the resources, and it is very difficult to measure it accurately during actual vibrations of practical importance, e.g., earthquakes, winds, microseismic tremors, and mechanical vibration. The desirability and application potential of SI for real structures could therefore be greatly improved by an algorithm that estimates structural parameters from response data alone, without input information. Thus, a technique for estimating the structural properties of a building from limited response measurements, without input measurement data, is essential in structural health monitoring. In this study, shaking table tests on a three-story plane-frame steel structure were performed. Output-only modal analysis of the measured data was carried out, and the dynamic properties were inversely analyzed using the least-squares method in the time domain. As a result, damage detection was performed at the member level, whereas conventional frequency-domain SI techniques perform it at the story level.
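The output-only, time-domain least-squares idea can be sketched in a toy setting. This is a minimal illustration, not the paper's member-level algorithm: a single-DOF free-vibration record (simulated here, with an assumed known mass and invented parameter values) from which damping and stiffness are recovered by least squares on the equation of motion.

```python
import numpy as np

# Assumed SDOF parameters (illustrative only).
m, c_true, k_true = 1.0, 0.4, 25.0

# Simulate a "measured" free-vibration response by simple time stepping.
dt, n = 0.001, 5000
x, v = np.zeros(n), np.zeros(n)
x[0] = 0.01  # initial displacement
for i in range(n - 1):
    a = -(c_true * v[i] + k_true * x[i]) / m
    v[i + 1] = v[i] + a * dt
    x[i + 1] = x[i] + v[i + 1] * dt

# Acceleration consistent with the equation of motion (stands in for sensor data).
a_meas = -(c_true * v + k_true * x) / m

# Time-domain least squares: m*a = -c*v - k*x  ->  solve [v x] @ [c, k] = -m*a
A = np.column_stack([v, x])
b = -m * a_meas
c_est, k_est = np.linalg.lstsq(A, b, rcond=None)[0]
print(c_est, k_est)
```

With noise-free synthetic data the fit recovers the parameters exactly; in practice the same regression is run on measured records, where noise and model error make the member-level extension non-trivial.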

Estimation of Coefficient of Consolidation Using Piezocone Dissipation Test in Normally Consolidated Clays (정규압밀점토에서의 피에조 콘 소산시험을 이용한 수평압밀계수의 산정)

  • 임형덕;이우진;김대규
    • Journal of the Korean Geotechnical Society
    • /
    • v.19 no.5
    • /
    • pp.145-154
    • /
    • 2003
  • In this study, the variation in excess pore pressure during dissipation is estimated using successive cavity expansion theory and a finite difference technique based on axisymmetric uncoupled linear consolidation theory, with separate consideration of the magnitude and initial distribution of $\Delta{u}_{oct}$, induced by changes in octahedral normal stress, and $\Delta{u}_{shear}$, induced by changes in octahedral shear stress. The coefficient of consolidation is estimated by a trial-and-error procedure until the predicted dissipation curve matches the measured curve at a typical degree of dissipation. The proposed method is applied to the results of miniature piezocone tests in the Louisiana State University calibration chamber system. Based on the interpretation and the comparison with experimental measurements and other solutions, the predicted dissipation curves match those measured during the dissipation tests well, and the values of the coefficient of consolidation estimated by the proposed method are closer to the range of laboratory measurements than those of other theories.
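The trial-and-error matching step can be sketched as a one-dimensional search. This is a toy illustration, not the paper's cavity-expansion solution: a hypothetical normalized dissipation curve U(T) = exp(-T) with time factor T = c_h·t/r², and bisection on c_h until the curve passes through the measured 50% dissipation time. The radius and t50 are assumed values.

```python
import numpy as np

r = 0.018          # piezocone radius [m] (assumed)
t50_meas = 120.0   # measured time to 50% dissipation [s] (assumed)

def predicted_u(c_h, t):
    """Toy normalized excess pore pressure curve, U(T) = exp(-T)."""
    return np.exp(-c_h * t / r**2)

# Bisection on c_h so that the predicted curve gives U = 0.5 at t50_meas.
lo, hi = 1e-9, 1e-3
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if predicted_u(mid, t50_meas) > 0.5:   # dissipating too slowly -> raise c_h
        lo = mid
    else:
        hi = mid
c_h = 0.5 * (lo + hi)
print(c_h)         # coefficient of consolidation [m^2/s]
```

The paper replaces the toy curve with a finite-difference solution of the uncoupled consolidation equations, but the matching logic is the same.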

MASSIVE STRUCTURES OF GALAXIES AT HIGH REDSHIFTS IN THE GREAT OBSERVATORIES ORIGINS DEEP SURVEY FIELDS

  • Kang, Eugene;Im, Myungshin
    • Journal of The Korean Astronomical Society
    • /
    • v.48 no.1
    • /
    • pp.21-55
    • /
    • 2015
  • If the Universe is dominated by cold dark matter and dark energy, as in the currently popular ${\Lambda}CDM$ cosmology, large scale structures are expected to form gradually, with galaxy clusters of mass $M{\geq}10^{14}M_{\odot}$ appearing at around 6 Gyr after the Big Bang (z ~ 1). Here, we report the discovery of 59 massive structures of galaxies with masses greater than a few times $10^{13}M_{\odot}$ at redshifts between z = 0.6 and 4.5 in the Great Observatories Origins Deep Survey fields. The massive structures are identified by running top-hat filters on the two-dimensional spatial distribution of magnitude-limited samples of galaxies, using a combination of spectroscopic and photometric redshifts. We analyze the Millennium simulation data in the same way as the observational data in order to test the ${\Lambda}CDM$ cosmology. We find a factor of a few more massive structures (M > $7{\times}10^{13}M_{\odot}$) observed at z > 2 than the simulation predicts, giving a probability of < 1/2500 that the observed data are consistent with the simulation. Our result suggests that massive structures emerged early, but the reason for the discrepancy with the simulation is unclear. It could be due to limitations of the simulation, such as missing key ingredients (strong non-Gaussianity or other baryonic physics), a difficulty in estimating halo masses from observations, or a fundamental problem of the ${\Lambda}CDM$ cosmology. On the other hand, the over-abundance of massive structures at high redshifts does not favor a heavy neutrino mass of ~ 0.3 eV or larger, as heavy neutrinos make the discrepancy between observation and simulation more pronounced by a factor of 3 or more.
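The top-hat structure-finding step can be sketched as follows. This is a toy illustration with invented coordinates, filter radius, and significance threshold, not the survey's catalogs or calibration: count galaxies inside a circular aperture scanned over the field and flag positions well above the field mean.

```python
import numpy as np

# Invented 2-D "galaxy" positions: a uniform field plus one compact overdensity.
rng = np.random.default_rng(0)
gals = rng.uniform(0, 100, size=(2000, 2))                   # field [arbitrary units]
gals = np.vstack([gals, rng.normal(50, 1.5, size=(80, 2))])  # one "cluster" at (50, 50)

def tophat_count(center, radius=3.0):
    """Number of galaxies inside a circular top-hat aperture."""
    d2 = np.sum((gals - center) ** 2, axis=1)
    return np.count_nonzero(d2 < radius ** 2)

# Scan a coarse grid of filter centers and flag strong excesses.
centers = [(x, y) for x in range(5, 100, 5) for y in range(5, 100, 5)]
counts = np.array([tophat_count(c) for c in centers])
mean, std = counts.mean(), counts.std()
peaks = [c for c, n in zip(centers, counts) if n > mean + 5 * std]
print(peaks)
```

The real analysis additionally slices the samples in redshift (spectroscopic plus photometric) and converts count excesses into mass estimates.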

A Preliminary Study of Seismic Risk in Pyongyang, North Korea (북한 평양의 지진위험도 분석 선행연구)

  • Kang, Su Young;Kim, Kwang-Hee
    • The Journal of the Petrological Society of Korea
    • /
    • v.25 no.4
    • /
    • pp.325-334
    • /
    • 2016
  • Both 1,900 years of historical literature and recent instrumental seismic records indicate that the Korean Peninsula has repeatedly experienced small and large earthquakes. This study used the historical and instrumental records of Korea to investigate the characteristics of earthquakes on the peninsula. Results of GIS spatial analyses indicate that Pyongyang, the capital of North Korea, is more vulnerable to earthquake hazard than other regions of the Korean Peninsula. Pyongyang is also exposed to high risks from other natural and social disasters because of its high population density and concentrated infrastructure. A scenario shake map, drawn up assuming a magnitude 6.7 earthquake such as the one experienced in the area in A.D. 502, indicates that 51.1% of the city is exposed to PGA of 0.24 g or higher. Recent statistics from Statistics Korea also indicate that North Korea is far more vulnerable to disasters than South Korea. The results of this preliminary study provide essential information for a comprehensive understanding of earthquake hazard estimation in Korea, including North Korea.

Quality Assessment of Images Projected Using Multiple Projectors

  • Kakli, Muhammad Umer;Qureshi, Hassaan Saadat;Khan, Muhammad Murtaza;Hafiz, Rehan;Cho, Yongju;Park, Unsang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.6
    • /
    • pp.2230-2250
    • /
    • 2015
  • Multiple projectors with partially overlapping regions can be used to project a seamless image on a large projection surface. With the advent of high-resolution photography, such systems are gaining popularity. Experts set up such projection systems by subjectively identifying the types of errors induced by the system in the projected images and rectifying them by optimizing (correcting) the parameters associated with the system. This requires substantial time and effort, thus making it difficult to set up such systems. Moreover, comparing the performance of different multi-projector display (MPD) systems becomes difficult because of the subjective nature of evaluation. In this work, we present a framework to quantitatively determine the quality of an MPD system and any image projected using such a system. We have divided the quality assessment into geometric and photometric qualities. For geometric quality assessment, we use Feature Similarity Index (FSIM) and distance-based Scale Invariant Feature Transform (SIFT). For photometric quality assessment, we propose to use a measure incorporating Spectral Angle Mapper (SAM), Intensity Magnitude Ratio (IMR) and Perceptual Color Difference (ΔE). We have tested the proposed framework and demonstrated that it provides an acceptable method for both quantitative evaluation of MPD systems and estimation of the perceptual quality of any image projected by them.
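Of the three photometric components, the Spectral Angle Mapper is the easiest to sketch: it measures the angle between corresponding per-pixel color vectors, so it responds to color casts but not to uniform intensity scaling. A minimal illustration with invented pixel values (not the paper's test images or its full combined measure):

```python
import numpy as np

def sam_angle(ref, proj):
    """Mean spectral angle (radians) between per-pixel RGB vectors."""
    ref = ref.reshape(-1, 3).astype(float)
    proj = proj.reshape(-1, 3).astype(float)
    dot = np.sum(ref * proj, axis=1)
    norms = np.linalg.norm(ref, axis=1) * np.linalg.norm(proj, axis=1)
    cos = np.clip(dot / norms, -1.0, 1.0)
    return float(np.mean(np.arccos(cos)))

ref = np.full((4, 4, 3), [200, 100, 50])        # 4x4 "reference" image
same = ref.copy()                               # identical projection
shifted = ref * np.array([1.0, 1.2, 0.8])       # color cast changes the angle
print(sam_angle(ref, same), sam_angle(ref, shifted))
```

Because SAM ignores uniform brightness scaling, it presumably has to be paired with an intensity term such as the Intensity Magnitude Ratio, as the proposed measure does.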

Influence of Sensor Noise on the Localization Error in Multichannel SQUID Gradiometer System (다채널 스퀴드 미분계에서 센서 잡음이 위치추정 오차에 미치는 영향)

  • 김기웅;이용호;권혁찬;김진목;정용석;강찬석;김인선;박용기;이순걸
    • Progress in Superconductivity
    • /
    • v.5 no.2
    • /
    • pp.98-104
    • /
    • 2004
  • We analyzed the noise-sensitivity profile of a specific SQUID sensor system for the localization of brain activity. The location of a neuromagnetic current source is estimated from the recordings of spatially distributed SQUID sensors. Depending on the specific arrangement of the sensors, each site in the source space has a different sensitivity, that is, a different lead-field vector. Conversely, channel noise on each sensor contributes a different amount of estimation error to each source site; e.g., a source site distant from the sensor system has a lead-field vector small in magnitude and hence low sensitivity. However, when we solve the inverse problem from the recorded sensor data, we use the inverse of the lead-field vector, which is rather large, resulting in overestimated noise power at that site. In particular, the spatial sensitivity profile of a gradiometer system measuring tangential fields is much more complex than that of a radial magnetometer system. This is one of the causes that make the solutions of inverse problems unstable in the presence of sensor noise. In this study, to improve localization accuracy, we calculated the noise-sensitivity profile of our 40-channel planar SQUID gradiometer system and applied it as a normalization weight factor to source localization using synthetic aperture magnetometry.
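The normalization idea can be sketched with a random toy lead-field matrix (invented dimensions and values, not the group's 40-channel planar gradiometer geometry): a site with a weak lead field is amplified by the inverse operator, so sensor noise is overestimated there, and weighting by the lead-field magnitude evens out the noise sensitivity across sites.

```python
import numpy as np

rng = np.random.default_rng(1)
n_chan, n_src = 8, 3
L = rng.normal(size=(n_chan, n_src))   # toy lead-field matrix (channels x sites)
L[:, 2] *= 0.1                         # a "distant" site: weak lead field

G = np.linalg.pinv(L)                  # linear inverse operator
noise_gain = np.linalg.norm(G, axis=1)     # noise amplification per source site
sens = np.linalg.norm(L, axis=0)           # per-site sensitivity ~ lead-field norm
weighted_gain = noise_gain * sens          # after normalization weighting
print(noise_gain, weighted_gain)
```

The weak site dominates the unweighted noise gain, while the sensitivity-weighted gains are far more uniform, which is the effect exploited in the weighted synthetic aperture magnetometry.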

Study on the Fire Safety Estimation for a Pilot LNG Storage Tank (PILOT LNG저장탱크의 화재안전성 평가에 관한 연구)

  • 고재선;김효
    • Fire Science and Engineering
    • /
    • v.18 no.3
    • /
    • pp.57-73
    • /
    • 2004
  • A quantitative safety analysis using a fault tree method was conducted for a fire breaking out over LNG spilled from a pilot LNG tank, with four scenario types posing potentially risky outcomes. When LNG release from the venting pipelines is considered as the first event, no specific Lower Flammable Limit (LFL) radius develops. The second case, LNG outflow from a rupture of the storage tank, which would be the severest, was analyzed, and the results revealed different dispersion areas for different leaking times even with the same amount of LNG released. As the third case, LNG leakage from the inlet/outlet pipelines was taken into consideration. The results showed no significant difference in LFL radii between the two spilling times of 10 and 60 minutes; hence, the factor most affecting the third scenario is the initial amount of LNG released. Finally, the extent of the LFL was calculated for damage to the LNG pipelines around the dike area. In addition, a consequence analysis was performed to obtain the heat radiation and flame magnitude for each case.
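For the heat-radiation part of a consequence analysis, a standard point-source model gives the flavor of the calculation. This is a generic textbook sketch, not the paper's consequence code; the radiative fraction, burning rate, and heat of combustion below are assumed illustrative values.

```python
import math

eta = 0.25     # radiative fraction of combustion energy (assumed)
m_dot = 10.0   # LNG burning rate [kg/s] (assumed)
Hc = 50e6      # heat of combustion, roughly that of methane [J/kg]

def flux(r):
    """Point-source radiant heat flux [W/m^2] at distance r [m]:
    q = eta * m_dot * Hc / (4 * pi * r^2)."""
    return eta * m_dot * Hc / (4 * math.pi * r ** 2)

print(flux(50.0))   # flux at 50 m from the fire
```

The inverse-square decay is the key behavior: halving the distance quadruples the received flux, which is what drives safe-separation distances around the dike.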

A Study on the Process of Estimating the Amount of Materials for Client's Decision-Making Support in Space Programming Stage of Pre-design BIM -focusing on Building Interior Finishing- (건축 기획 BIM의 공간 프로그래밍 단계에서 발주자 의사결정지원을 위한 물량예측 방법론에 관한 연구 -건축마감을 중심으로-)

  • Jun, Yeong-Jin;Kim, Ju-Hyung;Kim, Jae-Jun
    • Journal of The Korean Digital Architecture Interior Association
    • /
    • v.10 no.3
    • /
    • pp.19-28
    • /
    • 2010
  • Construction projects have recently grown in magnitude and complexity. The amount of information created and managed by the participants over the project phases is therefore enormous, which may cause difficulties in consistent, integrated data management. Because of this change in construction projects, more logical and systematic ways of handling integrated data management are needed. As a solution, BIM (Building Information Modeling), a new paradigm for the integrated management of information over the project life cycle, has been seriously considered. The Korean Public Procurement Service has also announced that projects over 50 billion Korean won must introduce BIM into procurement starting from 2012. However, studies on applying BIM, and on managing the data produced with it, in the pre-design and maintenance stages are lacking. In the pre-design stage, a schematic design model is made to support major decisions concerning the size, shape, and cost of the project. To decide the cost of the building at this stage using BIM, estimating the amount of building material used for construction must come first. In this study, pre-design BIM is explained to provide a better understanding of its process, with a focus on the space programming stage. Finally, the paper suggests a process for estimating the amount of material for building interior finishing, starting from the selection of the type of each space element, to support the client's decision-making in the space programming stage based on pre-design BIM.
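The quantity-prediction idea at the space programming level can be sketched as a simple per-space takeoff. The space list, dimensions, and finish types below are invented for illustration and are not the paper's data model or process:

```python
# Hypothetical space program: name, width/depth/height [m], wall finish type.
spaces = [
    {"name": "Lobby",  "w": 10.0, "d": 8.0, "h": 3.5, "wall": "stone"},
    {"name": "Office", "w": 6.0,  "d": 5.0, "h": 2.7, "wall": "paint"},
]

def finish_quantities(space):
    """Rough interior-finish quantities derived from the space program alone."""
    floor = space["w"] * space["d"]                      # floor/ceiling area
    walls = 2 * (space["w"] + space["d"]) * space["h"]   # gross wall area
    return {"floor_m2": floor, "ceiling_m2": floor,
            "wall_m2": walls, "wall_type": space["wall"]}

for s in spaces:
    print(s["name"], finish_quantities(s))
```

Multiplying such quantities by unit prices per finish type is what lets a client compare cost consequences of finish choices before any geometry is modeled.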

Shell Partition-based Constant Modulus Algorithm (Shell 분할 기반 CMA)

  • Lee, Gi-Hun;Park, Rae-Hong;Park, Jae-Hyuk;Lee, Byung-Uk
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.1
    • /
    • pp.133-143
    • /
    • 1996
  • The constant modulus algorithm (CMA), one of the most widely used blind equalization algorithms, equalizes channels using second-order statistics of the equalizer outputs. The performance of the CMA degrades for multi-level signals such as quadrature amplitude modulation (QAM) because the CMA maps all signal power onto a single modulus. In this paper, to improve the equalization performance of a QAM system, we propose a shell partitioning method based on error magnitude. We assume the probability distribution of the equalizer output to be Gaussian and obtain decision boundaries by maximum likelihood estimation, based on the fact that the distribution of the equalizer output power is noncentral $\chi^2$. The proposed CMA constructs a multi-moduli equalization system in which each shell separated by the decision boundaries employs a single modulus. Computer simulation results for 32-QAM and 64-QAM show the effectiveness of the proposed algorithm.
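For reference, the baseline single-modulus CMA that the shell-partitioned version generalizes can be sketched as below, on a toy QPSK signal and an invented two-tap channel (the paper's contribution, ML shell boundaries with per-shell moduli, is not reproduced here): the tap update is w ← w − μ·y·(|y|² − R₂)·x*.

```python
import numpy as np

n_taps, mu = 5, 1e-3
rng = np.random.default_rng(2)
qpsk = (rng.integers(0, 2, 4000) * 2 - 1) + 1j * (rng.integers(0, 2, 4000) * 2 - 1)
chan = np.array([1.0, 0.3 + 0.2j])              # toy dispersive channel
x = np.convolve(qpsk, chan)[: len(qpsk)]        # received signal

R2 = np.mean(np.abs(qpsk) ** 4) / np.mean(np.abs(qpsk) ** 2)  # Godard modulus
cm_before = np.mean((np.abs(x) ** 2 - R2) ** 2)  # CM cost with no equalizer

w = np.zeros(n_taps, dtype=complex)
w[0] = 1.0                                      # single-spike initialization
for n in range(n_taps, len(x)):
    xn = x[n - n_taps:n][::-1]                  # regressor, most recent first
    y = np.dot(w, xn)                           # equalizer output
    e = y * (np.abs(y) ** 2 - R2)               # CM error term
    w = w - mu * e * np.conj(xn)                # stochastic-gradient update

y_out = np.array([np.dot(w, x[n - n_taps:n][::-1]) for n in range(n_taps, len(x))])
cm_after = np.mean((np.abs(y_out) ** 2 - R2) ** 2)
print(cm_before, cm_after)
```

For QPSK every symbol lies on one modulus, so a single R₂ works; for 32/64-QAM the symbols span several radii, which is exactly the mismatch the shell-partitioned moduli address.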

A Novel Test Structure for Process Control Monitor for Un-Cooled Bolometer Area Array Detector Technology

  • Saxena, R.S.;Bhan, R.K.;Jalwania, C.R.;Lomash, S.K.
    • JSTS:Journal of Semiconductor Technology and Science
    • /
    • v.6 no.4
    • /
    • pp.299-312
    • /
    • 2006
  • This paper presents the results of a novel test structure serving as a process control monitor for uncooled IR detector technology based on microbolometer arrays. The proposed test structure is based on a resistive network configuration. A theoretical model for the resistance of this network has been developed using the 'Compensation' and 'Superposition' network theorems. The theoretical results for the proposed resistive network have been verified by wired hardware testing as well as with an actual 16x16 networked bolometer array. The proposed structure uses a simple two-level metal process and is easy to integrate into a standard CMOS process line. It closely imitates the performance of an actual fabricated area array while using only 32 pins instead of the 512 required by the conventional method for a $16{\times}16$ array. Further, it has been demonstrated that defective or faulty elements can be identified vividly using an extraction matrix whose values are otherwise quite similar (within an error of 0.1%), which verifies the algorithm in the small-variation case (${\sim}1%$ variation). For example, an element intentionally damaged electrically was shown to have a difference magnitude much higher than the rest of the elements (1.45 a.u. compared to ${\sim}0.25$ a.u. for the others), confirming that it is defective. Further, for devices with non-uniformity ${\leq}$ 10%, both the actual non-uniformity and the faults are predicted well. Finally, using this analysis, we have been able to grade (pass or fail) 60 actual devices based on a quantitative estimate of non-uniformity ranging from < 5% to > 20%, and to identify the number of bad elements, ranging from 0 to > 15, in these devices.
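The fault-flagging step alone (not the network-resistance extraction itself) can be sketched as an outlier test on the extracted difference matrix. The values below are invented, loosely echoing the 1.45 a.u. versus ~0.25 a.u. example in the abstract:

```python
import numpy as np

# Invented 16x16 extracted difference magnitudes: a tight "healthy" spread
# plus one element standing far above the rest.
rng = np.random.default_rng(3)
diff = rng.normal(0.25, 0.02, size=(16, 16))
diff[7, 3] = 1.45                             # an intentionally "damaged" element

med = np.median(diff)
mad = np.median(np.abs(diff - med))           # robust spread estimate
faults = np.argwhere(np.abs(diff - med) > 10 * mad)  # flag extreme outliers
print(faults)
```

A robust (median-based) threshold is used here so that the damaged element cannot inflate the spread estimate and hide itself, which matters when several elements in an array are bad.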