• Title/Summary/Keyword: estimation of distribution algorithms


Flame and Smoke Detection for Early Fire Recognition (조기 화재인식을 위한 화염 및 연기 검출)

  • Park, Jang-Sik;Kim, Hyun-Tae;Choi, Soo-Young;Kang, Chang-Soon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2007.10a / pp.427-430 / 2007
  • Fires cause many casualties and extensive property damage every year. In this paper, a flame and smoke detection algorithm based on image-processing techniques is proposed for early fire alarming. The first stage of the proposed algorithm checks for candidate flame regions using the unique color distribution that distinguishes flames from artificial lights. If a region is not a flame candidate, it is checked as a smoke candidate by measuring differences in brightness and chroma in the current frame. Checking for flame and smoke with simple brightness and hue alone occasionally produces false alarms, so motion information about the candidate flame and smoke regions is also used. Finally, activity information is used to confirm flames after motion detection, and an edge-detection method is adopted to confirm smoke. Simulations with real CCTV video signals show that the proposed algorithm is useful for early fire recognition.

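The staged decision described in the abstract can be sketched roughly as below. The color rule, difference thresholds, and motion test are simplified stand-ins; the paper does not publish its exact criteria, so all constants here are hypothetical.

```python
import numpy as np

# Hypothetical thresholds -- illustrative only, not the paper's values.
FLAME_R_MIN, SMOKE_DIFF_MIN, MOTION_MIN = 180, 15, 0.01

def classify_region(region, prev):
    """region, prev: HxWx3 uint8 RGB patches from consecutive frames."""
    r, g, b = region.astype(float).mean(axis=(0, 1))
    # Fraction of pixels that changed noticeably between frames (motion cue).
    moved = np.mean(np.abs(region.astype(int) - prev.astype(int)) > 20)
    # Stage 1: flame candidate -- reddish color ordering plus activity.
    if r > FLAME_R_MIN and r > g > b and moved > MOTION_MIN:
        return "flame"
    # Stage 2: smoke candidate -- frame-to-frame brightness drift plus activity.
    if abs(region.mean() - prev.mean()) > SMOKE_DIFF_MIN and moved > MOTION_MIN:
        return "smoke"
    return "none"

prev = np.full((8, 8, 3), 60, dtype=np.uint8)          # dull background patch
flame_patch = np.zeros((8, 8, 3), dtype=np.uint8)
flame_patch[..., 0], flame_patch[..., 1], flame_patch[..., 2] = 230, 140, 40
label = classify_region(flame_patch, prev)
```

A static patch identical to the previous frame falls through both stages and returns "none", which is the false-alarm suppression the motion check provides.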

A Study on ISAR Imaging Algorithm for Radar Target Recognition (표적 구분을 위한 ISAR 영상 기법에 대한 연구)

  • Park, Jong-Il;Kim, Kyung-Tae
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.19 no.3 / pp.294-303 / 2008
  • ISAR (Inverse Synthetic Aperture Radar) images represent the 2-D (two-dimensional) spatial distribution of the RCS (Radar Cross Section) of an object, and they can be applied to the problem of target identification. A traditional approach to ISAR imaging is to use a 2-D IFFT (Inverse Fast Fourier Transform). However, the 2-D IFFT results in low-resolution ISAR images, especially when the measured frequency bandwidth and angular region are limited. In order to improve the resolution capability of the Fourier transform, various high-resolution spectral estimation approaches have been applied to obtain ISAR images, such as AR (Auto-Regressive), MUSIC (Multiple Signal Classification), or Modified MUSIC algorithms. In this study, these high-resolution spectral estimators, as well as the 2-D IFFT approach, are combined with a recently developed ISAR image classification algorithm, and their performances are carefully analyzed and compared in the framework of radar target recognition.
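The conventional 2-D IFFT imaging step the abstract contrasts against can be sketched minimally: backscattered field samples over frequency (rows) and aspect angle (columns) are inverse-Fourier-transformed into a down-range/cross-range image. This is not the paper's MUSIC/AR estimators, and the grid sizes and scatterer position are invented for illustration.

```python
import numpy as np

nf, na = 64, 64                       # frequency and aspect-angle samples
kf = np.arange(nf)[:, None]           # normalized frequency index (rows)
ka = np.arange(na)[None, :]           # normalized aspect index (columns)

# Single ideal point scatterer at (down-range 10, cross-range 20), in bin units:
# its phase ramps linearly in both frequency and angle.
field = np.exp(-2j * np.pi * (kf * 10 / nf + ka * 20 / na))

# 2-D IFFT focuses the linear phase ramp into a single bright pixel.
image = np.abs(np.fft.ifft2(field))
peak = np.unravel_index(np.argmax(image), image.shape)
```

With limited bandwidth/angle (smaller `nf`, `na`), the same scatterer smears over neighboring bins, which is the resolution limitation that motivates the high-resolution estimators.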

A Study on the Development of Model for Estimating the Thickness of Clay Layer of Soft Ground in the Nakdong River Estuary (낙동강 조간대 연약지반의 지역별 점성토층 두께 추정 모델 개발에 관한 연구)

  • Ahn, Seongin;Ryu, Dong-Woo
    • Tunnel and Underground Space / v.32 no.6 / pp.586-597 / 2022
  • In this study, a model was developed for estimating the location-specific thickness of the upper clay layer, to be used in consolidation vulnerability evaluation in the Nakdong River estuary. To estimate the layer thickness, we developed four spatial estimation models: three machine learning models, using RF (Random Forest), SVR (Support Vector Regression), and GPR (Gaussian Process Regression), and one geostatistical model, using Ordinary Kriging. Among the 4,712 borehole records collected in the study area for model development, the 2,948 records with an upper clay layer were used, and the Pearson correlation coefficient and mean squared error were used to evaluate the performance of the developed models quantitatively. In addition, for qualitative evaluation, each model was used to estimate the upper clay layer thickness throughout the study area, and the resulting thickness distributions were compared with each other.
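As a rough illustration of the spatial-estimation setup, the sketch below interpolates synthetic borehole thicknesses with inverse-distance weighting, a simple stand-in for the paper's RF/SVR/GPR and Ordinary Kriging models, and scores it with the same two metrics (Pearson correlation and MSE). All locations and thickness values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "boreholes": (x, y) locations with a clay-layer thickness (m)
# that follows a simple east-west trend plus measurement noise.
xy = rng.uniform(0, 10, size=(200, 2))
thickness = 2.0 + 0.5 * xy[:, 0] + rng.normal(0, 0.1, 200)

def idw_predict(train_xy, train_z, query_xy, power=2.0):
    """Inverse-distance-weighted interpolation of z at the query locations."""
    d = np.linalg.norm(train_xy[None, :, :] - query_xy[:, None, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w * train_z).sum(axis=1) / w.sum(axis=1)

train, test = slice(0, 150), slice(150, 200)
pred = idw_predict(xy[train], thickness[train], xy[test])
mse = float(np.mean((pred - thickness[test]) ** 2))
r = float(np.corrcoef(pred, thickness[test])[0, 1])
```

The evaluation split mirrors the paper's quantitative step: hold out boreholes, predict their thickness from the rest, and report Pearson r and MSE.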

Uncertainty Calculation Algorithm for the Estimation of the Radiochronometry of Nuclear Material (핵물질 연대측정을 위한 불확도 추정 알고리즘 연구)

  • JaeChan Park;TaeHoon Jeon;JungHo Song;MinSu Ju;JinYoung Chung;KiNam Kwon;WooChul Choi;JaeHak Cheong
    • Journal of Radiation Industry / v.17 no.4 / pp.345-357 / 2023
  • Nuclear forensics is understood as a mandatory component of international nuclear material control and non-proliferation verification. Radiochronometry for nuclear forensics uses the decay-series characteristics of nuclear materials and the Bateman equation to estimate when nuclear materials were purified and produced. Radiochronometry values carry measurement uncertainty arising from the uncertainty factors in the estimation process, and these uncertainties should be calculated with appropriate evaluation methods that represent the accuracy and reliability of the result. The IAEA, the US, and the EU have researched radiochronometry and its measurement uncertainty; however, the uncertainty calculation method based directly on the Bateman equation is limited by underestimation of the decay constants and by the impossibility of estimating ages across more than one generation of a decay chain. This highlights the need for research using computational simulations, such as the Monte Carlo method, to overcome these limitations. In this study, we analyzed mathematical models and the LHS (Latin Hypercube Sampling) method to develop an uncertainty algorithm for nuclear material radiochronometry based on the Bateman equation and thereby enhance the reliability of radiochronometry. The LHS method, which obtains effective statistical results from a small number of samples, was applied to Monte Carlo algorithms that calculate uncertainty by computer simulation; this was implemented in the MATLAB computational software. The uncertainty calculation model based on mathematical models showed characteristics governed by the relationship between sensitivity coefficients and radioactive equilibrium. Random sampling in the computational simulation showed characteristics dependent on the sampling method, the number of sampling iterations, and the probability distributions of the uncertainty factors. For validation, we compared models from international organizations, the mathematical models, and the Monte Carlo method; the developed algorithm performed calculations at a level of accuracy equivalent to those of overseas institutions and of mathematical-model-based methods. To enhance usability, future research and validation should incorporate more complex decay chains and non-homogeneous conditions. The results of this study can serve as foundational technology in the nuclear forensics field, providing tools for identifying signature nuclides and aiding the research, development, comparison, and validation of related technologies.
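A minimal sketch of the LHS idea: stratified sampling of an uncertain input, propagated through an age equation. The one-parent/one-stable-daughter model t = ln(1 + D/P)/λ is a drastic simplification of the paper's Bateman-equation chains, and the ratio, its uncertainty interval, and λ are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def lhs_uniform(lo, hi, n):
    """1-D Latin Hypercube draw: one sample per equal-probability stratum,
    then shuffled so the strata are visited in random order."""
    u = (np.arange(n) + rng.uniform(0, 1, n)) / n   # stratified uniforms in [0,1)
    rng.shuffle(u)
    return lo + (hi - lo) * u

# Hypothetical inputs: daughter/parent atom ratio uniform in [0.009, 0.011],
# parent decay constant lam in 1/yr.  For a stable daughter absent at
# purification time, the age is t = ln(1 + D/P) / lam.
lam = 2.8e-6
ratio = lhs_uniform(0.009, 0.011, 1000)
ages = np.log1p(ratio) / lam

age_mean = float(ages.mean())
age_sd = float(ages.std(ddof=1))   # propagated age uncertainty
```

Because LHS places exactly one sample in each probability stratum, the mean and spread of the propagated ages stabilize with far fewer samples than plain Monte Carlo sampling would need.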

Quantitative Conductivity Estimation Error due to Statistical Noise in Complex $B_1{^+}$ Map (정량적 도전율측정의 오차와 $B_1{^+}$ map의 노이즈에 관한 분석)

  • Shin, Jaewook;Lee, Joonsung;Kim, Min-Oh;Choi, Narae;Seo, Jin Keun;Kim, Dong-Hyun
    • Investigative Magnetic Resonance Imaging / v.18 no.4 / pp.303-313 / 2014
  • Purpose: In-vivo conductivity reconstruction using the transmit field ($B_1{^+}$) information of MRI has been proposed. We assessed the accuracy of conductivity reconstruction in the presence of statistical noise in the complex $B_1{^+}$ map and provided a parametric model of the conductivity-to-noise ratio. Materials and Methods: The $B_1{^+}$ distribution was simulated for a cylindrical phantom model. By adding complex Gaussian noise to the simulated $B_1{^+}$ map, the quantitative conductivity estimation error was evaluated. The quantitative evaluation was repeated over several different parameters, such as the Larmor frequency, object radius, and SNR of the $B_1{^+}$ map, and a parametric model for the conductivity-to-noise ratio was developed from these parameters. Results: According to the simulation results, conductivity estimation is more sensitive to statistical noise in the $B_1{^+}$ phase than to noise in the $B_1{^+}$ magnitude. The conductivity estimate of the object of interest does not depend on the external object surrounding it. The conductivity-to-noise ratio is proportional to the signal-to-noise ratio of the $B_1{^+}$ map, the Larmor frequency, the conductivity value itself, and the number of averaged pixels. To estimate an accurate conductivity value for the targeted tissue, the SNR of the $B_1{^+}$ map and an adequate filtering size have to be taken into account in the conductivity reconstruction process. In addition, the simulation result was verified on a conventional 3T MRI scanner. Conclusion: Through all these relationships, the quantitative conductivity estimation error due to statistical noise in the $B_1{^+}$ map is modeled. Using this model, further issues regarding filtering and reconstruction algorithms can be investigated for MREPT.
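The proportionality stated in the results can be written down directly as a sketch. The constant k is a placeholder, not a value fitted in the paper, and the example numbers (SNR, frequencies, conductivity, pixel count) are illustrative.

```python
# Sketch of the reported parametric model: the conductivity-to-noise ratio
# scales with the B1+ map SNR, the Larmor frequency, the conductivity value
# itself, and the number of averaged pixels.  k is a placeholder constant.
def conductivity_to_noise_ratio(snr_b1, f_larmor_hz, sigma_s_per_m, n_pixels, k=1.0):
    return k * snr_b1 * f_larmor_hz * sigma_s_per_m * n_pixels

# Doubling the field strength (and hence the Larmor frequency, 1H at
# 1.5 T ~ 63.9 MHz, at 3 T ~ 127.7 MHz) doubles the CNR in this model.
cnr_3t = conductivity_to_noise_ratio(100, 127.7e6, 0.5, 25)
cnr_1p5t = conductivity_to_noise_ratio(100, 63.9e6, 0.5, 25)
```

The same form shows why averaging over more pixels (a larger filtering size) trades spatial resolution for a higher conductivity-to-noise ratio.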

Retrieval of Vertical Single-scattering albedo of Asian dust using Multi-wavelength Raman Lidar System (다파장 라만 라이다 시스템을 이용한 고도별 황사의 단산란 알베도 산출)

  • Noh, Youngmin;Lee, Chulkyu;Kim, Kwanchul;Shin, Sungkyun;Shin, Dongho;Choi, Sungchul
    • Korean Journal of Remote Sensing / v.29 no.4 / pp.415-421 / 2013
  • A new approach for retrieving the single-scattering albedo (SSA) of an Asian dust plume mixed with pollution particles, using a multi-wavelength Raman lidar system, is suggested in this study. The Asian dust plume was separated into dust and non-dust (i.e., spherical) particles by the particle depolarization ratio at 532 nm. Vertical profiles of the optical properties of the non-dust particles (extinction coefficients at 355 and 532 nm; backscatter coefficients at 355, 532, and 1064 nm) were used as input parameters for the inversion algorithm. Because the inversion algorithm provides the vertical distribution of the microphysical properties of the non-dust particles only, a method for estimating the SSA of the Asian dust in its mixed state was suggested: the SSA of the non-dust particles retrieved by the inversion algorithm was combined with an assumed dust SSA of 0.96 at 532 nm. The SSA of the Asian dust plume retrieved from the lidar data was compared with values retrieved by the Aerosol Robotic Network (AERONET) and showed good agreement.
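The combination step can be sketched as an extinction-weighted average: since SSA is scattering divided by extinction, the SSA of the mixture is exactly (SSA_dust·ext_dust + SSA_nondust·ext_nondust)/(ext_dust + ext_nondust) per altitude bin. The dust SSA of 0.96 matches the paper's assumption; the profile values below are made up for illustration.

```python
import numpy as np

SSA_DUST = 0.96  # assumed dust SSA at 532 nm, as in the abstract

# Illustrative per-altitude-bin extinction coefficients (km^-1), split into
# dust and non-dust parts by the 532 nm depolarization ratio, plus the
# non-dust SSA profile retrieved by the inversion algorithm.
ext_dust = np.array([0.10, 0.08, 0.05])
ext_nondust = np.array([0.05, 0.06, 0.02])
ssa_nondust = np.array([0.88, 0.90, 0.92])

# Extinction-weighted mixing gives the SSA of the mixed plume per bin.
ssa_mixed = (SSA_DUST * ext_dust + ssa_nondust * ext_nondust) / (
    ext_dust + ext_nondust
)
```

By construction, each mixed value lies between the non-dust SSA and the assumed dust SSA, closer to whichever component dominates the extinction in that bin.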

Real-Time Joint Animation Production and Expression System using Deep Learning Model and Kinect Camera (딥러닝 모델과 Kinect 카메라를 이용한 실시간 관절 애니메이션 제작 및 표출 시스템 구축에 관한 연구)

  • Kim, Sang-Joon;Lee, Yu-Jin;Park, Goo-man
    • Journal of Broadcast Engineering / v.26 no.3 / pp.269-282 / 2021
  • As the distribution of 3D content such as augmented reality and virtual reality increases, real-time computer animation technology is becoming more important. However, the computer animation process still consists mostly of manual work or marker-based motion capture, which requires experienced professionals a very long time to obtain realistic images. To solve these problems, animation production systems and algorithms based on deep learning models and sensors have recently emerged. In this paper, we therefore study four methods of implementing natural human movement in a deep learning model and Kinect camera-based animation production system, each chosen considering its environmental characteristics and accuracy. The first method uses a Kinect camera only; the second uses a Kinect camera with a calibration algorithm; the third uses a deep learning model only; and the fourth uses a deep learning model and the Kinect together. Experiments showed that the fourth method, using the deep learning model and the Kinect simultaneously, gave the best results.
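One simple way to combine the two sources in the fourth configuration is a confidence-weighted average of the joint positions; the paper does not publish its exact combination scheme, so the fusion rule, confidences, and joint layout below are all assumptions for illustration.

```python
import numpy as np

def fuse_joints(kinect_xyz, model_xyz, kinect_conf, model_conf):
    """Confidence-weighted average of per-joint 3D positions from a depth
    sensor (Kinect) and a pose-estimation model.  Shapes: (J, 3) and (J,)."""
    w_k = kinect_conf[:, None]
    w_m = model_conf[:, None]
    return (w_k * kinect_xyz + w_m * model_xyz) / (w_k + w_m)

# Two joints (metres): the sensor is confident about the first, the model
# about the second, so each source dominates where it is trusted more.
kinect = np.array([[0.0, 1.0, 2.0], [1.0, 1.0, 2.0]])
model = np.array([[0.2, 1.0, 2.0], [1.0, 1.2, 2.0]])
fused = fuse_joints(kinect, model, np.array([0.9, 0.3]), np.array([0.3, 0.9]))
```

Running this per frame yields a skeleton stream that can drive the joint animation in real time.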

Streamflow Estimation using Coupled Stochastic and Neural Networks Model in the Parallel Reservoir Groups (추계학적모형과 신경망모형을 연계한 병렬저수지군의 유입량산정)

  • Kim, Sung-Won
    • Journal of Korea Water Resources Association / v.36 no.2 / pp.195-209 / 2003
  • The Spatial-Stochastic Neural Networks Model (SSNNM) is used to estimate long-term streamflow in the parallel reservoir groups. SSNNM employs two kinds of backpropagation algorithms, based on LMBP and BFGS-QNBP respectively. SSNNM has three layers (input, hidden, and output), and the network configuration consists of 8-8-2 nodes. The input-layer nodes are the monthly average streamflow, precipitation, pan evaporation, and temperature collected at the Andong and Imha reservoirs, although some temporal differences exist among the time series. Because the observed series are too short for training, the training sets for the input layer were generated by a PARMA(1,1) stochastic model. The generated series were used to train SSNNM, and the model parameters (the optimal connection weights and biases) were estimated during the training procedure and then applied to the observed data sets for model validation. In this study, the new approach gives outstanding results in the comparison of statistical measures and hydrographs during validation. SSNNM will help to manage and control water distribution and provide basic data for developing a long-term coupled operation system for the parallel reservoir groups of the upper Nakdong River.
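The 8-8-2 configuration can be sketched structurally as below: eight inputs (the four monthly variables at each of the two reservoirs), one hidden layer of eight sigmoid nodes, and two outputs (Andong and Imha inflow). The weights here are random placeholders; the paper trains them with the LMBP and BFGS-QNBP backpropagation algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder parameters for an 8-8-2 network (trained in the paper, random here).
W1, b1 = rng.normal(0, 0.5, (8, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 2)), np.zeros(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    """x: (n_samples, 8) standardized monthly inputs -> (n_samples, 2) inflows."""
    hidden = sigmoid(x @ W1 + b1)   # 8 sigmoid hidden nodes
    return hidden @ W2 + b2         # 2 linear output nodes

batch = rng.normal(0, 1, (5, 8))    # e.g. five months of standardized inputs
out = forward(batch)
```

In training, batches generated by the PARMA(1,1) model would be fed through `forward`, and the weights adjusted to minimize the error against the target inflows.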

Request Distribution for Fairness with a Non-Periodic Load-Update Mechanism for Cyber Foraging Dynamic Applications in Web Server Cluster (웹 서버 클러스터에서 Cyber Foraging 응용을 위한 비주기적 부하 갱신을 통한 부하 분산 기법)

  • Lu, Xiaoyi;Fu, Zhen;Choi, Won-Il;Kang, Jung-Hun;Ok, Min-Hwan;Park, Myong-Soon
    • The KIPS Transactions:PartA / v.14A no.1 s.105 / pp.63-72 / 2007
  • This paper introduces a load-balancing algorithm focused on distributing web requests evenly across the servers of a web cluster. Load-balancing algorithms based on the conventional periodic load-information update mechanism are not suitable for dynamic-page applications, which are common in Cyber Foraging services, because of the problems caused by periodic synchronized load-information updates and the difficulty of estimating the workload of the scripts embedded in dynamic pages. The update-on-finish algorithm solves this problem with a non-periodic load-update mechanism: the web switch learns each server's real load only when the server reports it, and then distributes new requests according to the updated load-information table. However, this results in heavy communication overhead. Our proposed mechanism improves on the update-on-finish algorithm with a K-Percents-Finish reporting mechanism, which greatly reduces that overhead. Furthermore, we consider servers of differing capability with a threshold value Ti and propose a load-balancing algorithm for servers with various capabilities. Simulation results show that the proposed K-Percents-Finish reporting mechanism reduces communication overhead by at least 50% compared with the update-on-finish approach, while sustaining better load-balancing performance than the periodic mechanisms in related work.
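The report-traffic saving can be sketched by counting messages: update-on-finish sends one report per completed request, while K-Percents-Finish reports only after each K percent of the assigned batch is drained. The batch size and K values below are illustrative, not from the paper's simulations.

```python
def kpf_reports(assigned, k_percent):
    """Number of load reports a server sends while draining `assigned`
    requests, reporting once per k_percent of the batch completed."""
    step = max(1, assigned * k_percent // 100)
    return assigned // step

assigned = 200
uof_reports = assigned              # update-on-finish: one report per request
kpf_50 = kpf_reports(assigned, 50)  # report after each 50% of the batch
kpf_10 = kpf_reports(assigned, 10)  # report after each 10% of the batch
```

Smaller K gives the switch fresher load information at the cost of more reports, so K tunes the trade-off between balancing accuracy and communication overhead.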

Change Analysis of Aboveground Forest Carbon Stocks According to the Land Cover Change Using Multi-Temporal Landsat TM Images and Machine Learning Algorithms (다시기 Landsat TM 영상과 기계학습을 이용한 토지피복변화에 따른 산림탄소저장량 변화 분석)

  • LEE, Jung-Hee;IM, Jung-Ho;KIM, Kyoung-Min;HEO, Joon
    • Journal of the Korean Association of Geographic Information Studies / v.18 no.4 / pp.81-99 / 2015
  • The acceleration of global warming has required a better understanding of carbon cycles over local and regional areas such as the Korean peninsula. Since forests serve as a carbon sink, storing a large amount of terrestrial carbon, there is a demand to estimate forest carbon sequestration accurately. In Korea, the National Forest Inventory (NFI) has been used to estimate forest carbon stocks based on the growing stock per hectare measured at sampled locations. However, as such data are based on point (i.e., plot) measurements, it is difficult to identify the spatial distribution of forest carbon stocks. This study focuses on urban areas, which have a limited number of NFI samples and have shown rapid land cover change, to estimate grid-based forest carbon stocks based on UNFCCC Approach 3 and Tier 3. Land cover change and forest carbon stocks were estimated using Landsat 5 TM data acquired in 1991, 1992, 2010, and 2011, high-resolution airborne images, and the 3rd and 5th~6th NFI data. Machine learning techniques (random forest and support vector machines/regression) were used for land cover change classification and forest carbon stock estimation, with carbon stocks estimated from reflectance, band ratios, vegetation indices, and topographical indices. The results showed that 33.23 tonC/ha of carbon was sequestered on forest areas that remained unchanged between 1991 and 2010, and 36.83 tonC/ha on areas that changed from other land-use types to forest, while 7.35 tonC/ha of carbon was released on areas that changed from forest to other land-use types. This study provides a quantitative understanding of forest carbon stock change according to land cover change, and its results can contribute to effective forest management.
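The change accounting behind those per-transition numbers can be sketched by pairing two land-cover maps with two carbon-stock maps and averaging the per-pixel stock change within each transition class. The tiny 4x4 rasters and stock values below are invented for illustration, not the study's Landsat-derived estimates.

```python
import numpy as np

FOREST, OTHER = 1, 0

# Toy land-cover maps for the two dates (1 = forest, 0 = other land use).
lc_1991 = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [1, 0, 1, 1], [0, 0, 1, 1]])
lc_2010 = np.array([[1, 1, 1, 0], [1, 1, 1, 0], [0, 0, 1, 1], [0, 0, 1, 1]])

# Toy per-pixel carbon stocks (tonC/ha) tied to the land-cover class.
c_1991 = np.where(lc_1991 == FOREST, 50.0, 5.0)
c_2010 = np.where(lc_2010 == FOREST, 80.0, 5.0)

delta = c_2010 - c_1991
unchanged_forest = (lc_1991 == FOREST) & (lc_2010 == FOREST)
to_forest = (lc_1991 == OTHER) & (lc_2010 == FOREST)
from_forest = (lc_1991 == FOREST) & (lc_2010 == OTHER)

seq_unchanged = float(delta[unchanged_forest].mean())  # sequestration, forest->forest
seq_gained = float(delta[to_forest].mean())            # sequestration, other->forest
released = float(delta[from_forest].mean())            # release, forest->other
```

With real data, the carbon maps would come from the machine-learning regressions and the masks from the land-cover change classification, but the per-transition averaging step is the same.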