• Title/Summary/Keyword: Trimmed mean


On the Determination Method of Background Aerosol Concentration (에어로졸의 배경농도 산정기법에 관한 연구)

  • Heo, Junghwa;Kim, Sang-Woo;Yoon, Soon-Chang;Kim, Ji-Hyoung;Kim, Man-Hae;Kim, Yumi
    • Atmosphere
    • /
    • v.23 no.4
    • /
    • pp.501-511
    • /
    • 2013
  • In this study, we estimate the background concentration of black carbon (BC) mass concentration measured at Gosan Climate Observatory from January 2008 to December 2011 by applying six methods: (1) mean and median, (2) the trimmed mean method deployed in the Interagency Monitoring of Protected Visual Environments (IMPROVE) network program (hereafter, IMPROVE method), (3) concentration-frequency distribution analysis, (4) the Advanced Global Atmospheric Gases Experiment (AGAGE) method (hereafter, AGAGE method), (5) the Kaufman et al. (2001) method (hereafter, Kaufman method), and (6) airmass sector analysis. The background BC mass concentration is estimated to be about 400~900 ng $m^{-3}$, but the methods show large differences. The estimated background concentration is, in general, arranged in the order: mean > IMPROVE method > median > Kaufman method > concentration-frequency distribution analysis > AGAGE method. The background concentration estimated by airmass sector analysis is found to be about 550 ng $m^{-3}$, which is lower than those estimated by the other methods. When the same analytical period (i.e., 4-day and 6-day) is applied to both the AGAGE and Kaufman methods, the estimated background concentrations are quite similar. However, further research on statistical methods for estimating background concentrations of various gas-phase and particulate pollutants under different environments is needed.
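
The trimmed mean idea behind an IMPROVE-style background estimate can be sketched as follows; the concentration values, trim fraction, and function name here are illustrative assumptions, not the paper's implementation or data:

```python
import numpy as np

def trimmed_mean(values, trim_fraction=0.1):
    """Mean after discarding the lowest and highest trim_fraction of values."""
    x = np.sort(np.asarray(values, dtype=float))
    k = int(len(x) * trim_fraction)
    return x[k:len(x) - k].mean() if k > 0 else x.mean()

# Hypothetical BC mass concentrations (ng m^-3); two pollution spikes included.
bc = [420, 450, 480, 500, 510, 530, 560, 600, 2500, 3100]
print(np.mean(bc))             # plain mean, pulled up by the spikes
print(np.median(bc))
print(trimmed_mean(bc, 0.1))   # spikes trimmed before averaging
```

Trimming both tails before averaging is what makes the estimate less sensitive to episodic pollution events than the plain mean, which is consistent with the ordering (mean > IMPROVE method) reported above.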

Numerical and Experimental Investigation of the Heating Process of Glass Thermal Slumping

  • Zhao, Dachun;Liu, Peng;He, Lingping;Chen, Bo
    • Journal of the Optical Society of Korea
    • /
    • v.20 no.2
    • /
    • pp.314-320
    • /
    • 2016
  • The glass thermal forming process provides a high-volume, low-cost approach to producing aspherical reflectors for x-ray optics. Thin glass sheets are shaped into mirror segments by replicating the mold shape at high temperature. Heating parameters in the glass thermal slumping process are crucial to improving the surface quality of the formed glass. In this research, the heating process of a thermal slumping glass sheet on a concave parabolic mold was simulated with the finite-element method (FEM) to investigate the effects of heating rate and soaking temperature. Based on the optimized heating conditions, glass samples 0.5 mm thick were formed in a furnace with a steel concave parabolic mold. The figure errors of the formed glass were measured and discussed in detail. It was found that the formed glass was not fully slumped at the edges and should be trimmed to achieve a lower surface deviation. The root-mean-square (RMS) and peak-valley (PV) deviations between the formed glass and the mold along the axial direction were 2.3 μm and 4.7 μm, respectively.
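
The two figure-error metrics quoted above can be computed from a sampled deviation profile; this is a generic sketch with invented sample values, not the paper's measurement pipeline:

```python
import numpy as np

def rms_deviation(profile):
    """Root-mean-square of the deviations (e.g. formed glass minus mold)."""
    d = np.asarray(profile, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

def pv_deviation(profile):
    """Peak-valley: spread between the highest and lowest deviation."""
    d = np.asarray(profile, dtype=float)
    return float(d.max() - d.min())

# Hypothetical axial deviation samples in micrometers.
dev = [-1.0, -0.5, 0.0, 0.8, 1.5, 2.0]
print(rms_deviation(dev), pv_deviation(dev))
```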

MXTM-CFAR Processor and Its Performance Analysis (MXTM-CFAR 처리기와 그 성능분석)

  • 김재곤;김응태;송익호;김형명
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.17 no.7
    • /
    • pp.719-729
    • /
    • 1992
  • An improved MXTM (maximum trimmed mean)-CFAR (constant false alarm rate) processor is proposed to reduce false alarm rates in detecting radar targets, and its performance characteristics are analyzed and compared with those of other CFAR processors. The proposed MXTM-CFAR processor is obtained by combining the GO (greatest of)-CFAR processor, which reduces the excessive false alarm rate at clutter edges, with the TM-CFAR processor, which shows good performance in homogeneous and nonhomogeneous backgrounds. Performance analyses have been carried out by computing the detection probability, constant false alarm rate, and detection thresholds under homogeneous and multiple-target environments and at clutter edges. Analysis results show that the proposed CFAR processor maintains performance as good as that of the OS (order statistics) and TM-CFAR processors in homogeneous and multiple-target environments and can reduce the false alarm rate at clutter edges. Overall computing time has also been reduced.
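
The combination of trimmed-mean estimation with a greatest-of decision can be sketched roughly as below; the window layout, trim depths, and scale factor are illustrative assumptions, not the paper's exact processor:

```python
import numpy as np

def mxtm_cfar_threshold(window, t1=2, t2=2, scale=4.0):
    """MXTM-CFAR sketch: trimmed mean of each half window, GO (max) combine.

    window: reference cells around the cell under test (CUT excluded),
    split into leading and lagging halves; t1/t2 cells are trimmed from
    the low/high end of each sorted half. Parameter values are illustrative.
    """
    x = np.asarray(window, dtype=float)
    half = len(x) // 2
    lead, lag = np.sort(x[:half]), np.sort(x[half:])
    z_lead = lead[t1:len(lead) - t2].mean()
    z_lag = lag[t1:len(lag) - t2].mean()
    return scale * max(z_lead, z_lag)  # GO: greatest-of the two estimates

# Homogeneous noise with one interfering target (9.0) in the lagging half;
# trimming removes it from the noise-level estimate.
ref = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 1.3, 1.1,
       9.0, 1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 1.2]
print(mxtm_cfar_threshold(ref))
```

The trimming step is what preserves detection in multiple-target environments, while taking the maximum of the two half-window estimates is what keeps the false alarm rate down at clutter edges.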

L-Estimation for the Parameter of the AR(1) Model (AR(1) 모형의 모수에 대한 L-추정법)

  • Han Sang Moon;Jung Byoung Cheal
    • The Korean Journal of Applied Statistics
    • /
    • v.18 no.1
    • /
    • pp.43-56
    • /
    • 2005
  • In this study, a robust estimation method for the first-order autocorrelation coefficient in a time series following an AR(1) process with additive outliers (AO) is investigated. We propose an L-type trimmed least squares estimation method using the preliminary estimator (PE) suggested by Ruppert and Carroll (1980) for the multiple regression model. In addition, using Mallows' weight function to down-weight outliers on the X-axis, the bounded-influence PE (BIPE) estimator is obtained, and the mean squared error (MSE) performance of the various estimators of the autocorrelation coefficient is compared using Monte Carlo experiments. The results of the Monte Carlo study show that the BIPE(LAD) estimator, which uses the generalized LAD as the preliminary estimator, performs well relative to the other estimators.
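
A simplified version of trimmed least squares for the AR(1) coefficient might look like the following; this is only a sketch of the general idea (preliminary fit, trim the largest residuals, refit), not the BIPE estimator of the paper:

```python
import numpy as np

def trimmed_ls_ar1(x, trim_fraction=0.2):
    """Illustrative L-type trimmed LS for the AR(1) coefficient rho.

    Fits rho by least squares on (x[t-1], x[t]) pairs, then refits after
    discarding the pairs with the largest absolute residuals, so additive
    outliers carry less weight in the final estimate.
    """
    x = np.asarray(x, dtype=float)
    y, z = x[1:], x[:-1]
    rho0 = (z @ y) / (z @ z)                  # preliminary LS fit
    resid = np.abs(y - rho0 * z)
    keep = resid.argsort()[: int(len(y) * (1 - trim_fraction))]
    return (z[keep] @ y[keep]) / (z[keep] @ z[keep])

# Simulate an AR(1) series with rho = 0.6 and inject one additive outlier.
rng = np.random.default_rng(0)
n, rho = 500, 0.6
x = np.zeros(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.normal()
x[100] += 15.0
print(trimmed_ls_ar1(x))
```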

Implementation of Digital Filter for Additive White Gaussian Noise Removal (부가 백색 가우스 잡음 제거를 위한 디지털 필터 구현)

  • Cheon, Bong-Won;Kwon, Se-Ik;Kim, Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2017.10a
    • /
    • pp.473-476
    • /
    • 2017
  • As society has developed into the digital information age, many electronic communication devices have become widespread. However, there are various sources of noise during signal transmission between communication devices. Noise generated in a communication system includes white noise, which is distributed evenly across all frequency bands; this white noise causes system errors and lowers reliability. Therefore, in this paper, the existing Gaussian filter, median filter, alpha-trimmed mean filter, and min/max filter for removing white noise are described, and the characteristics and performance of each filter are compared.
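
An alpha-trimmed mean filter of the kind compared in the paper can be sketched in one dimension as follows; the window size and trim depth are illustrative choices:

```python
import numpy as np

def alpha_trimmed_mean_filter(signal, window=5, alpha=2):
    """Slide a window over the signal; at each position sort the window,
    drop alpha/2 samples from each end, and average the rest."""
    x = np.asarray(signal, dtype=float)
    pad = window // 2
    padded = np.pad(x, pad, mode="edge")      # repeat edge values at borders
    out = np.empty_like(x)
    for i in range(len(x)):
        w = np.sort(padded[i:i + window])
        out[i] = w[alpha // 2: window - alpha // 2].mean()
    return out

# A flat signal with one impulse and a step; the impulse is rejected
# because it always lands in the trimmed tail of the sorted window.
sig = [0, 0, 0, 50, 0, 0, 10, 10, 10]
print(alpha_trimmed_mean_filter(sig))
```

With alpha = 0 this reduces to the plain moving average (the Gaussian-like smoothers), and with alpha = window - 1 it becomes the median filter, which is why the alpha-trimmed mean sits between the two in the comparison.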

Salient Video Frames Sampling Method Using the Mean of Deep Features for Efficient Model Training (효율적인 모델 학습을 위한 심층 특징의 평균값을 활용한 의미 있는 비디오 프레임 추출 기법)

  • Yoon, Hyeok;Kim, Young-Gi;Han, Ji-Hyeong
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2021.06a
    • /
    • pp.318-321
    • /
    • 2021
  • Recently, with the development of information and communication technology, the number of Internet users and the volume of video data transmitted have been increasing. Deep learning techniques are widely used to manage and analyze this growing volume of video data. In general, when training a deep learning model on video data, frames are selected at uniform intervals or at random from the whole video because of limited computing resources. However, the video data used for training cannot always be assumed to be trimmed video that carries the same context along the time axis. If frames are sampled at uniform intervals or at random from untrimmed video with inconsistent context, frames unrelated to the video's category may be sampled, which does not help model training and optimization at all. To address this, we propose a method that extracts a deep feature from each video frame, computes the mean of these features, and then selects salient frames from untrimmed video based on the cosine similarity scores between the mean feature and each extracted deep feature. On the ActivityNet dataset, which is well known for consisting of untrimmed videos, we compare the proposed method with the two representative frame-sampling strategies (uniform intervals and random) and show that it effectively extracts salient frames corresponding to the video's category. The code used in our experiments is available at https://github.com/titania7777/VideoFrameSampler.
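
The sampling idea (mean deep feature plus cosine similarity) can be sketched as below, assuming per-frame feature vectors have already been extracted; the toy features and function name are invented for illustration:

```python
import numpy as np

def salient_frame_indices(features, k=4):
    """Rank frames by cosine similarity between each frame's deep feature
    and the mean feature over all frames; return the top-k frame indices."""
    f = np.asarray(features, dtype=float)
    mean = f.mean(axis=0)
    sims = (f @ mean) / (np.linalg.norm(f, axis=1) * np.linalg.norm(mean) + 1e-12)
    return np.argsort(sims)[::-1][:k]

# Toy features: frames 0-5 share one direction (the dominant context),
# frames 6-7 point elsewhere (off-context frames in an untrimmed video).
feats = np.vstack([np.tile([1.0, 0.1], (6, 1)), np.tile([0.0, 1.0], (2, 1))])
print(salient_frame_indices(feats, k=3))
```

Because the mean feature is dominated by the most common context, frames aligned with that context score highest, so uniformly or randomly sampled off-context frames are avoided.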

Sequential Motion Vector Error Concealment Algorithm for H.264 Video Coding (H.264 표준 동영상 부호화 방식을 위한 순차적 움직임 벡터 오류 은닉 기법)

  • Jeong Jong-woo;Hong Min-Cheol
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.10C
    • /
    • pp.1036-1043
    • /
    • 2005
  • In this paper, we propose a sequential motion vector recovery algorithm for the H.264 video coding standard. Motion vectors in H.264 cover relatively smaller areas than in other standards, since motion estimation in H.264 is performed with variable block sizes. Therefore, the correlation between the motion vectors of neighboring blocks increases as the block size used for motion estimation decreases. Within the framework of sequential recovery, we introduce a motion vector recovery method using an $\alpha$-trimmed mean filter. Experimental results show that the proposed algorithm is useful in real-time video delivery, with nearly comparable or better visual quality than previous approaches such as macroblock boundary matching and Lagrange interpolation.
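
Component-wise alpha-trimmed mean recovery of a lost motion vector from its neighbors might be sketched as follows; the neighbor set and trim depth are illustrative, and the paper embeds this step in a sequential recovery framework rather than using it in isolation:

```python
import numpy as np

def recover_motion_vector(neighbor_mvs, alpha=2):
    """Estimate a lost block's motion vector as the component-wise
    alpha-trimmed mean of its neighbors' motion vectors; trimming keeps
    a single inconsistent neighbor from skewing the estimate."""
    mv = np.asarray(neighbor_mvs, dtype=float)
    k = alpha // 2
    out = []
    for comp in mv.T:                      # x and y components separately
        s = np.sort(comp)
        out.append(s[k:len(s) - k].mean() if len(s) > 2 * k else s.mean())
    return tuple(out)

# Neighboring blocks' motion vectors; one neighbor, (14, -9), is an outlier.
neighbors = [(2, 1), (3, 1), (2, 2), (14, -9), (3, 2), (2, 1)]
print(recover_motion_vector(neighbors))
```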

A Comparative Study on Spatial Lattice Data Analysis - A Case Where Outlier Exists - (공간 격자데이터 분석에 대한 우위성 비교 연구 - 이상치가 존재하는 경우 -)

  • Kim, Su-Jung;Choi, Seung-Bae;Kang, Chang-Wan;Cho, Jang-Sik
    • Communications for Statistical Applications and Methods
    • /
    • v.17 no.2
    • /
    • pp.193-204
    • /
    • 2010
  • Recently, researchers in the various fields where spatial analysis is needed have become more interested in spatial statistics. For data with spatial correlation, methodologies accounting for that correlation are required, and methods for spatial data analysis have been developed accordingly. Lattice data, a type of spatial data, is analyzed in three steps: (1) definition of the spatial neighborhood, (2) definition of spatial weights, and (3) analysis using spatial models. The present paper shows, using the trimmed mean squared error statistic, that a spatial statistical analysis method is superior to a general statistical method in terms of estimation when analyzing spatial lattice data that include outliers. To demonstrate the validity and usefulness of the contents of this paper, we perform a small simulation study and present an empirical example with crime data from Busanjin-gu, Korea.
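
A trimmed mean squared error statistic of the general kind used for such comparisons can be sketched as follows; the trim fraction and error values are illustrative, not the paper's exact statistic:

```python
import numpy as np

def trimmed_mse(errors, trim_fraction=0.1):
    """Mean of squared errors after discarding the largest squared errors,
    so a few outlying cells do not dominate the model comparison."""
    sq = np.sort(np.asarray(errors, dtype=float) ** 2)
    keep = int(len(sq) * (1 - trim_fraction))
    return sq[:keep].mean()

# Hypothetical estimation errors over lattice cells; one cell is an outlier.
errors = [0.5, -0.3, 0.2, 0.4, -0.1, 0.3, -0.2, 0.1, -0.4, 12.0]
print(np.mean(np.square(errors)))   # ordinary MSE, dominated by the outlier
print(trimmed_mse(errors, 0.1))    # trimmed MSE, outlier excluded
```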

High Noise Density Median Filter Method for Denoising Cancer Images Using Image Processing Techniques

  • Priyadharsini.M, Suriya;Sathiaseelan, J.G.R
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.11
    • /
    • pp.308-318
    • /
    • 2022
  • Noise is a serious issue when sending images via electronic communication. Impulse noise, which is created by unsteady voltage, is one of the most common noises in digital communication and is introduced while pictures are collected during the acquisition process. Accurate diagnostic images can be obtained by removing these noises without affecting the edges and tiny features. The new average High Noise Density Median Filter (HNDMF) proposed in this paper operates in two steps for each pixel, deciding whether the test pixel is degraded by salt-and-pepper noise (SPN): in the first stage, a detector identifies corrupted pixels; in the second stage, each corrupted pixel is replaced by a noise-free processed pixel produced by the proposed new average filter for its window. The paper discusses a comparison of known image denoising methods and uses a new decision-based weighted median filter to remove impulse noise. Using Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), and Structural Similarity Index Method (SSIM) metrics, the paper examines the performance of the Gaussian Filter (GF), Adaptive Median Filter (AMF), and PHDNF. A detailed simulation is performed on the Mini-MIAS dataset to confirm the improvement of the presented model. The experimental values obtained show that the HNDMF model reaches better performance with the maximum picture quality. Results are calculated and provided for images affected by various amounts of salt-and-pepper noise as well as speckle noise. According to the quality metrics, the HNDMF method produces superior results to the existing filter methods, accurately detecting salt-and-pepper noise pixels and replacing their values with mean and median values in the images. The proposed method improves the median filter with a significant change.
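
The two-stage detect-then-replace structure described above can be sketched generically as follows; this is a plain decision-based median filter for illustration, not the proposed HNDMF:

```python
import numpy as np

def decision_based_median_filter(img, low=0, high=255):
    """Two-stage sketch of detect-then-replace impulse denoising: pixels at
    the extreme values (assumed salt-and-pepper corrupted) are replaced by
    the median of the noise-free pixels in their 3x3 window; all other
    pixels pass through unchanged, preserving edges and fine detail."""
    img = np.asarray(img, dtype=float)
    out = img.copy()
    pad = np.pad(img, 1, mode="edge")
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            if img[i, j] in (low, high):          # stage 1: detect
                w = pad[i:i + 3, j:j + 3].ravel()
                clean = w[(w != low) & (w != high)]
                if clean.size:                    # stage 2: replace
                    out[i, j] = np.median(clean)
    return out

noisy = np.array([[10, 12, 11],
                  [13, 255, 12],   # centre pixel hit by salt noise
                  [11, 10, 13]])
print(decision_based_median_filter(noisy))
```

Restricting the median to the noise-free neighbors is what lets decision-based variants keep working at high noise densities, where an ordinary median over the whole window would itself be computed mostly from corrupted pixels.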

Usefulness of Full-thickness Skin Graft from Anterolateral Chest wall in the Reconstruction of Facial Defects (안면부 재건에서 전외측 흉벽을 공여부로 하는 전층 피부이식술의 유용성)

  • Yoo, Won-Jae;Lim, So-Young;Pyon, Jai-Kyong;Mun, Goo-Hyun;Bang, Sa-Ik;Oh, Kap-Sung
    • Archives of Plastic Surgery
    • /
    • v.37 no.5
    • /
    • pp.589-594
    • /
    • 2010
  • Purpose: Full-thickness skin grafts are useful in the reconstruction of facial skin defects when primary closure is not feasible. Although the supraclavicular area has been considered the donor site of choice for large facial skin defects, many patients are reluctant to accept a neck scar, and some do not have enough skin to cover the defect because the same insult, such as a burn, also involved the neck. We present several cases of reconstruction of facial skin defects with freehand full-thickness skin grafts from the anterolateral chest wall, yielding aesthetically acceptable outcomes with less donor-site morbidity. Methods: A retrospective review was performed from March 2007 to September 2009. Fifteen patients were treated by this method; the mean age was 31.5 years. The etiology was congenital melanocytic nevus in 7 cases, capillary malformation in 5 cases, and burn scar contracture in 3 cases. The mean area of the lesion was measured as 67.3 cm² preoperatively. The lesion was removed beneath the subcutaneous fatty tissue layer. The graft was not trimmed to be thin, apart from the defatting procedure. For larger defects, two grafts were harvested separately from both anterolateral chest walls and combined by suture. Results: The mean follow-up period was 9.7 months. All grafts survived without any problem except for small necrotic areas in 4 cases, which healed spontaneously under conventional dressings within 6 weeks postoperatively. Color match was relatively excellent. There were 2 cases of immediate hyperpigmentation, but all of them disappeared within a few months. Conclusion: For large facial skin defects, the anterolateral chest wall may be a good alternative donor site for full-thickness skin grafts.