• Title/Summary/Keyword: inverse coefficient


Hardware Architecture and its Design of Real-Time Video Compression Processor for Motion JPEG2000 (Motion JPEG2000을 위한 실시간 비디오 압축 프로세서의 하드웨어 구조 및 설계)

  • 서영호;김동욱
    • The Transactions of the Korean Institute of Electrical Engineers D / v.53 no.1 / pp.1-9 / 2004
  • In this paper, we propose a hardware (H/W) structure that can compress and reconstruct an input image in real time, and we implement it on an FPGA platform using VHDL (VHSIC Hardware Description Language). All of the image processing elements needed for both compression and reconstruction were considered, and each was mapped onto hardware with a structure efficient for the FPGA. Since Motion JPEG2000 is the target application, we used the DWT (discrete wavelet transform), which transforms the data from the spatial domain to the frequency domain. The implemented H/W is separated into a data path part and a control part. The data path part consists of image processing blocks and data processing blocks. The image processing blocks consist of a DWT kernel for the DWT filtering, a quantizer/Huffman encoder, an inverse adder/buffer for adding the low-frequency coefficients to the high-frequency ones in the inverse DWT operation, and a Huffman decoder. There are also interface blocks for communicating with the external application environment and timing blocks for buffering between the internal blocks. The overall operations of the designed H/W are image compression and reconstruction, and it operates field by field, synchronized with the A/D converter. The implemented H/W used 54% (12,943) of the LABs (Logic Array Blocks) and 9% (28,352) of the ESBs (Embedded System Blocks) in an ALTERA APEX20KC EP20K600CB652-7 FPGA chip and operated stably at a 70 MHz clock frequency, verifying real-time operation, that is, processing 60 fields/sec (30 frames/sec).
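The pipeline above is register-transfer-level hardware described in VHDL; purely to illustrate the transform it is built around (not the authors' RTL, and using a simple Haar-style averaging/differencing wavelet rather than the JPEG2000 5/3 or 9/7 kernels), here is a minimal Python sketch of a one-level 2-D DWT and its inverse, where the reconstruction step adds the low-frequency and high-frequency parts back together as the inverse adder block does:

```python
import numpy as np

def haar_dwt2_level(img):
    """One level of a 2-D Haar-style DWT: returns (LL, LH, HL, HH) subbands."""
    a = img.astype(float)
    # Transform along rows: low = average of column pairs, high = difference.
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0
    # Transform along columns of each intermediate result.
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2_level(LL, LH, HL, HH):
    """Inverse of haar_dwt2_level: add the low- and high-frequency parts back."""
    rows, cols = LL.shape
    lo = np.empty((2 * rows, cols)); hi = np.empty((2 * rows, cols))
    lo[0::2, :], lo[1::2, :] = LL + LH, LL - LH
    hi[0::2, :], hi[1::2, :] = HL + HH, HL - HH
    img = np.empty((2 * rows, 2 * cols))
    img[:, 0::2], img[:, 1::2] = lo + hi, lo - hi
    return img

img = np.random.rand(8, 8)
assert np.allclose(haar_idwt2_level(*haar_dwt2_level(img)), img)
```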

Estimation of Fine-Scale Daily Temperature with 30 m-Resolution Using PRISM (PRISM을 이용한 30 m 해상도의 상세 일별 기온 추정)

  • Ahn, Joong-Bae;Hur, Jina;Lim, A-Young
    • Atmosphere / v.24 no.1 / pp.101-110 / 2014
  • This study estimates and evaluates daily January temperatures from 2003 to 2012 at 30 m resolution over South Korea using a modified Parameter-elevation Regression on Independent Slopes Model (K-PRISM). Several factors in K-PRISM are adjusted to the 30 m grid spacing and daily time scale. The performance of K-PRISM is validated in terms of bias, root mean square error (RMSE), and correlation coefficient (Corr), and is then compared with that of the inverse distance weighting (IDW) and hypsometric (HYPS) methods. In estimating the temperature over Jeju island, K-PRISM has the lowest bias (-0.85) and RMSE (1.22) and the highest Corr (0.79) among the three methods. It captures the daily variation of the observations but tends to underestimate, owing to a large discrepancy in mean altitude between the observation stations and the grid points of the 30 m topography. The temperature over South Korea derived from K-PRISM reproduces the detailed spatial pattern of the observed temperature but generally tends to underestimate, with a mean bias of -0.45. In terms of bias, the estimation skill of K-PRISM differs between grid points, implying that care should be taken in areas with poor skill. The results demonstrate that K-PRISM can reasonably estimate 30 m-resolution temperature over South Korea and reflect topographically diverse signals with detailed structural features.
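As a minimal sketch of the three validation metrics quoted above (not the K-PRISM code; `obs` and `est` are assumed to be arrays of station observations and collocated estimates):

```python
import numpy as np

def validation_metrics(obs, est):
    """Bias, RMSE, and Pearson correlation between estimates and observations."""
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    bias = np.mean(est - obs)
    rmse = np.sqrt(np.mean((est - obs) ** 2))
    corr = np.corrcoef(est, obs)[0, 1]
    return bias, rmse, corr
```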

The estimation of thermal diffusivity using NPE method (비선형 매개변수 추정법을 이용한 열확산계수의 측정)

  • 임동주;배신철
    • Transactions of the Korean Society of Mechanical Engineers / v.14 no.6 / pp.1679-1688 / 1990
  • The method of nonlinear parameter estimation (NPE), a statistical and inverse method, is used to estimate the thermal diffusivity of a porous insulation material. To apply the NPE method to measuring the thermal diffusivity, an algorithm suitable for programming on an IBM personal computer is established, and the statistical treatment of the experimental data and the theory of estimation are studied. Experimental data obtained by discrete measurement using a constant-heat-flux technique are used to find the boundary conditions, the initial conditions, and the thermal diffusivity, and the final values are compared with values obtained by other methods. The results are as follows: (1) The NPE method is used to establish the estimation of the thermal diffusivity, and comparison with the experimental output shows that the method can determine the thermal diffusivity without regard to the type of heat flux. (2) Because all of the temperatures obtained by discrete measurement at each time step are used to estimate the thermal diffusivity, even if some temperature-measurement error enters the estimation process, its influence on the final value is minimized in the NPE method. (3) The NPE method can reduce the experimental time, including data collection, to a few minutes and can use a smaller specimen than steady-state methods. If a tube-type furnace is used, the time needed to adjust the surrounding temperature can also be reduced.
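The paper's own program targeted an IBM PC; as a present-day, hedged illustration of the same idea (estimating thermal diffusivity by nonlinear least squares on discretely measured temperatures), the sketch below fits a textbook semi-infinite-solid, constant-surface-heat-flux solution used here only as a stand-in model. The sensor depth, flux-to-conductivity ratio, and noise level are assumptions, not values from the paper.

```python
import numpy as np
from scipy.special import erfc
from scipy.optimize import curve_fit

Q_OVER_K = 500.0   # assumed surface flux / conductivity ratio q/k [K/m]
X = 0.01           # assumed sensor depth [m]

def temp_rise(t, alpha):
    """Temperature rise at depth X in a semi-infinite solid under constant flux."""
    s = 2.0 * np.sqrt(alpha * t)
    return Q_OVER_K * (2.0 * np.sqrt(alpha * t / np.pi) * np.exp(-(X / s) ** 2)
                       - X * erfc(X / s))

# Synthetic "measurements" with noise, then estimate alpha by nonlinear least squares.
rng = np.random.default_rng(0)
t_meas = np.linspace(10.0, 600.0, 60)
alpha_true = 2.0e-7                                   # m^2/s
T_meas = temp_rise(t_meas, alpha_true) + rng.normal(0.0, 0.02, t_meas.size)
alpha_hat, _ = curve_fit(temp_rise, t_meas, T_meas, p0=[1.0e-7],
                         bounds=(1e-9, 1e-5))
print("estimated thermal diffusivity:", alpha_hat[0])   # ~2e-7
```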

Schur Algorithm for Sub-bottom Profiling (해저지층 탐사를 위한 Schur 알고리즘)

  • Bae, Jinho;Lee, Chong Hyun;Kim, Hoeyong;Cho, Jung-Hong
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.9 / pp.156-163 / 2013
  • In this paper, we propose an algorithm for estimating the media characteristics of sea water and multiple sub-bottom layers. The proposed algorithm, called the Schur algorithm, estimates reflection coefficients using a transmitted signal and the reflected signals obtained from multiple layers of various shapes and structures. It estimates the reflection coefficients efficiently because it finds the solution by converting the given inverse scattering problem into a matrix factorization. To verify the proposed algorithm, we generate a transmitted signal and the reflected signals from a lattice filter model of sea water and a sub-bottom of multi-level non-homogeneous layers, and find that the proposed algorithm can estimate the reflection coefficients efficiently.
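The paper casts the Schur algorithm as a matrix factorization; as a hedged illustration of the closely related layer-peeling (dynamic deconvolution) recursion on a lossless lattice model, the sketch below synthesizes a reflection response from known reflection coefficients and then recovers them. It is not the authors' implementation.

```python
import numpy as np

def series_div(a, b, n):
    """First n coefficients of the power series a(z)/b(z), with b[0] != 0."""
    q = np.zeros(n)
    for i in range(n):
        acc = a[i] if i < len(a) else 0.0
        for j in range(1, min(i, len(b) - 1) + 1):
            acc -= b[j] * q[i - j]
        q[i] = acc / b[0]
    return q

def forward_response(ks, n_samples):
    """Synthesize the reflection response of a layered medium (verification only)."""
    r = np.zeros(n_samples)
    for k in reversed(ks):
        shifted = np.concatenate(([0.0], r[:-1]))      # z^{-1} R(z)
        num = shifted.copy(); num[0] += k              # k + z^{-1} R
        den = k * shifted; den[0] += 1.0               # 1 + k z^{-1} R
        r = series_div(num, den, n_samples)
    return r

def schur_layer_peeling(r, n_layers):
    """Recover interface reflection coefficients from a sampled reflection response."""
    r = np.asarray(r, float)
    ks = []
    for _ in range(n_layers):
        k = r[0]                       # reflection coefficient of the current top interface
        ks.append(k)
        num = r.copy(); num[0] -= k    # R(z) - k
        den = -k * r; den[0] += 1.0    # 1 - k R(z)
        r = series_div(num, den, len(r))[1:]   # divide, then advance one sample
    return ks

ks_true = [0.3, -0.2, 0.1]
resp = forward_response(ks_true, 32)
print(schur_layer_peeling(resp, 3))    # ~[0.3, -0.2, 0.1]
```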

A Method for the Increasing Efficiency of the Watershed Based Image Segmentation using Haar Wavelet Transform (Haar 웨이블릿 변환을 사용한 Watershed 기반 영상 분할의 효율성 증대를 위한 기법)

  • 김종배;김항준
    • Journal of the Institute of Electronics Engineers of Korea SP / v.40 no.2 / pp.1-10 / 2003
  • This paper presents an efficient method for image segmentation based on a multiresolution application of a wavelet transform and the watershed segmentation algorithm. The procedure toward complete segmentation consists of four steps: pyramid representation, image segmentation, region merging, and region projection. First, pyramid representation creates multiresolution images using a wavelet transform. Second, image segmentation segments the lowest-resolution image of the pyramid using a watershed segmentation algorithm. Third, region merging merges the segmented regions using the third-order moment values of the wavelet coefficients. Finally, the labeled low-resolution segmentation is projected onto the full-resolution (original) image by the inverse wavelet transform. Experimental results show that the presented method can be applied to the segmentation of noisy or degraded images and reduces over-segmentation.
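A hedged sketch of this four-step flow using PyWavelets and scikit-image rather than the authors' code; the region-merging step is omitted and the final projection uses nearest-neighbor upsampling instead of the inverse wavelet transform described above:

```python
import pywt                                      # PyWavelets
from skimage import filters, segmentation, transform

def pyramid_watershed(image, levels=2):
    """Segment the coarsest level of a Haar wavelet pyramid with watershed,
    then project the labels back to full resolution (illustrative only)."""
    # 1) Pyramid representation: keep the approximation (LL) band per level.
    approx = image.astype(float)
    for _ in range(levels):
        approx, _ = pywt.dwt2(approx, 'haar')
    # 2) Watershed segmentation on the lowest-resolution image.
    gradient = filters.sobel(approx)
    labels = segmentation.watershed(gradient)
    # 3) Region merging would use wavelet-coefficient statistics here (omitted).
    # 4) Project the low-resolution label image back to the original size.
    full = transform.resize(labels, image.shape, order=0,
                            preserve_range=True, anti_aliasing=False)
    return full.astype(int)
```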

Image Retrieval Using Spatial Color Correlation and Local Texture Characteristics (칼라의 공간적 상관관계 및 국부 질감 특성을 이용한 영상검색)

  • Sung, Joong-Ki;Chun, Young-Deok;Kim, Nam-Chul
    • Journal of the Institute of Electronics Engineers of Korea SP / v.42 no.5 s.305 / pp.103-114 / 2005
  • This paper presents a content-based image retrieval (CBIR) method using a combination of color and texture features. As the color feature, a color autocorrelogram is chosen, extracted from the hue and saturation components of a color image. As the texture features, BDIP (block difference of inverse probabilities) and BVLC (block variation of local correlation coefficients) are chosen, extracted from the value component. During feature extraction, the color autocorrelogram and the BVLC are simplified in consideration of their computational complexity. After feature extraction, the vector components of these features are efficiently quantized in consideration of their storage space. Experiments on the Corel and VisTex DBs show that the proposed retrieval method yields a maximum precision gain of 9.5% over the method using only the color autocorrelogram and 4.0% over BDIP-BVLC. The proposed method also yields maximum precision gains of 12.6%, 14.6%, and 27.9% over methods using wavelet moments, CSD, and the color histogram, respectively.
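As a rough, hedged sketch of the two texture features (the exact normalizations and window handling in the paper may differ; the circular shift used here is a simplification of computing local correlations over overlapping neighborhoods):

```python
import numpy as np

def bdip(block, eps=1e-8):
    """Block Difference of Inverse Probabilities for one image block (sketch)."""
    n = block.size
    return (n - block.sum() / (block.max() + eps)) / n

def bvlc(block, shifts=((0, 1), (1, 0), (1, 1), (1, -1))):
    """Block Variation of Local Correlation coefficients: max - min of the
    correlations between the block and its shifted versions (sketch)."""
    corrs = []
    for dy, dx in shifts:
        shifted = np.roll(block, shift=(dy, dx), axis=(0, 1))
        c = np.corrcoef(block.ravel(), shifted.ravel())[0, 1]
        corrs.append(0.0 if np.isnan(c) else c)
    return max(corrs) - min(corrs)
```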

Application of a Statistical Interpolation Method to Correct Extreme Values in High-Resolution Gridded Climate Variables (고해상도 격자 기후자료 내 이상 기후변수 수정을 위한 통계적 보간법 적용)

  • Jeong, Yeo min;Eum, Hyung-Il
    • Journal of Climate Change Research / v.6 no.4 / pp.331-344 / 2015
  • A long-term gridded historical dataset at 3 km spatial resolution has been generated for practical regional applications such as hydrologic modelling. However, overly high or low values have been found at some grid points where complex topography or a sparse observational network exists. In this study, the Inverse Distance Weighting (IDW) method was applied to smooth the overly predicted values of the Improved GIS-based Regression Model (IGISRM), producing the IDW-IGISRM grid data at the same resolution for daily precipitation, maximum temperature, and minimum temperature from 2001 to 2010 over South Korea. We tested various effective distances in the IDW method to find the optimal distance that provides the highest performance. IDW-IGISRM was compared with IGISRM with regard to spatial patterns and quantitative performance metrics over 243 AWS observational points and four selected stations showing the largest biases. Regarding the spatial pattern, IDW-IGISRM reduced unreasonable, overly predicted values, i.e., it produced smoother spatial maps than IGISRM for all variables. In addition, all quantitative performance metrics were improved by IDW-IGISRM: the correlation coefficient (CC) and Index of Agreement (IOA) increased by up to 11.2% and 2.0%, respectively, while the Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) were reduced by up to 5.4% and 15.2%, respectively. At the four selected stations, the improvement was even more considerable. These results indicate that IDW-IGISRM can improve the predictive performance of IGISRM, consequently providing more reliable high-resolution gridded data for assessment, adaptation, and vulnerability studies of climate change impacts.
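A minimal inverse distance weighting sketch for a single query point (not the study's IDW-IGISRM code); the effective-distance cutoff mirrors the distance parameter the study tunes, but the function and argument names are illustrative:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, effective_distance=None):
    """Inverse distance weighting at one query point; known points beyond the
    effective distance (if given) are ignored."""
    d = np.hypot(*(np.asarray(xy_known, float) - np.asarray(xy_query, float)).T)
    vals = np.asarray(values, float)
    if effective_distance is not None:
        keep = d <= effective_distance
        d, vals = d[keep], vals[keep]
    if np.any(d == 0):                      # exact hit on a known point
        return float(vals[d == 0][0])
    w = 1.0 / d ** power
    return float(np.sum(w * vals) / np.sum(w))
```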

Interfacial shear strength test by a hemi-spherical microbond specimen of carbon fiber and epoxy resin (탄소섬유/에폭시의 반구형 미소접합 시험편에 대한 계면강도 평가)

  • Park, Joo-Eon;Gu, Ja-Uk;Kang, Soo-Keun;Choi, Nak-Sam
    • Composites Research / v.21 no.4 / pp.15-21 / 2008
  • The interfacial shear strength between epoxy and carbon fiber was analyzed using hemi-spherical microbond specimens adhered onto a single carbon fiber. Compared with a droplet and an inverse hemi-spherical specimen, the hemi-spherical microbond specimen showed a high regression coefficient and a small standard deviation in the measurement of interfacial strength. This appears to be caused by the reduced meniscus effects and the reduced stress concentration in the region contacting the pin-hole loading device. Finite element analysis showed that the shear stress distribution along the fiber/matrix interface in the hemi-spherical specimen was stable, without any stress mode change. The experimental data also differed according to the kind of loading device, such as the microvise tip and the pin-holed plate.

Application of Effective Regularization to Gradient-based Seismic Full Waveform Inversion using Selective Smoothing Coefficients (선택적 평활화 계수를 이용한 그래디언트기반 탄성파 완전파형역산의 효과적인 정규화 기법 적용)

  • Park, Yunhui;Pyun, Sukjoon
    • Geophysics and Geophysical Exploration / v.16 no.4 / pp.211-216 / 2013
  • In general, smoothing filters regularize functions by reducing differences between adjacent values. Smoothing filters can therefore regularize inverse solutions and produce a more accurate subsurface structure when applied to full waveform inversion. However, if we apply a smoothing filter with a constant coefficient to a subsurface image or velocity model, it blurs layer interfaces and fault structures because it ignores information about geologic structure and velocity variation. In this study, we develop a selective smoothing regularization technique that adapts the smoothing coefficients over the inversion iterations to overcome this weakness of constant-coefficient smoothing. First, we determine appropriate frequencies and analyze the corresponding wavenumber coverage. Then, we define the effective maximum wavenumber as the 99th percentile of the wavenumber spectrum in order to choose smoothing coefficients that effectively limit the wavenumber coverage. By adapting the chosen smoothing coefficients over the iterations, we can implement multi-scale full waveform inversion while inverting multi-frequency components simultaneously. A successful inversion example on a salt model with high-contrast velocity structures shows that our method effectively regularizes the inverse solution. We also verify that our scheme is applicable to field data through a numerical example with synthetic data containing random noise.
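A hedged sketch of the idea of deriving a smoothing length from the 99th-percentile (effective maximum) wavenumber of a gradient and applying it as a Gaussian filter; the mapping from wavenumber to filter width below is a rough heuristic of mine, not the paper's exact coefficient selection:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def effective_max_wavenumber(grad, dx, percentile=99.0):
    """Wavenumber below which `percentile` percent of the spectral energy of a
    2-D gradient (nz, nx) is contained, taken along the depth axis."""
    spec = np.abs(np.fft.rfft(grad, axis=0)) ** 2
    k = np.fft.rfftfreq(grad.shape[0], d=dx)          # cycles per meter
    energy = spec.sum(axis=1)
    cumulative = np.cumsum(energy) / energy.sum()
    return k[np.searchsorted(cumulative, percentile / 100.0)]

def selective_smooth(grad, dx, percentile=99.0):
    """Smooth an FWI gradient with a length chosen from the effective maximum
    wavenumber (a rough stand-in for the paper's selective coefficients)."""
    k_eff = effective_max_wavenumber(grad, dx, percentile)
    sigma = 1.0 / (2.0 * np.pi * max(k_eff, 1e-6)) / dx   # width in grid cells
    return gaussian_filter(grad, sigma=sigma)
```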

Evaluation of multi-objective PSO algorithm for SWAT auto-calibration (다목적 PSO 알고리즘을 활용한 SWAT의 자동보정 적용성 평가)

  • Jang, Won Jin;Lee, Yong Gwan;Kim, Seong Joon
    • Journal of Korea Water Resources Association / v.51 no.9 / pp.803-812 / 2018
  • The purpose of this study is to develop a Particle Swarm Optimization (PSO) automatic calibration algorithm with multiple objective functions in Python, and to evaluate its applicability by applying the algorithm to Soil and Water Assessment Tool (SWAT) watershed modeling. The study area is the watershed upstream of the Gongdo observation station in the Anseongcheon watershed ($364.8 km^2$), and daily observed streamflow data from 2000 to 2015 were used. The PSO algorithm calibrated SWAT streamflow using the coefficient of determination ($R^2$), root mean square error (RMSE), Nash-Sutcliffe efficiency ($NSE_Q$), and, in particular, $NSE_{INQ}$ (NSE of inverse Q) for lateral and base flow calibration. Automatic and manual calibration yielded an $R^2$ of 0.64 and 0.55, RMSE of 0.59 and 0.58, $NSE_Q$ of 0.78 and 0.75, and $NSE_{INQ}$ of 0.45 and 0.09, respectively. The PSO automatic calibration algorithm showed an improvement especially in the streamflow recession phase and remedied the limitations of manual calibration by including a new parameter (RCHRG_DP) and considering parameter ranges.
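A hedged sketch of two of the objective functions named above, including NSE computed on inverse discharge, which emphasizes recession and base flow; the small epsilon added before inversion is a common convention and an assumption here, not necessarily the study's choice:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def nse_inq(obs, sim, eps=None):
    """NSE of inverse discharge (1/Q); eps avoids division by zero and is
    taken here as one hundredth of the mean observed flow."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    if eps is None:
        eps = obs.mean() / 100.0
    return nse(1.0 / (obs + eps), 1.0 / (sim + eps))
```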