• Title/Summary/Keyword: linear standard model

ROI Study for Diffusion Tensor Image with Partial Volume Effect (부분용적효과를 고려한 확산텐서영상에 대한 관심영역 분석 연구)

  • Choi, Woohyuk;Yoon, Uicheul
    • Journal of Biomedical Engineering Research
    • /
    • v.37 no.2
    • /
    • pp.84-89
    • /
    • 2016
  • In this study, we proposed an improved region of interest (ROI) analysis method whose accuracy is increased by accounting for the partial volume effect (PVE). The PVE, which arises in volumetric images when more than one tissue type occurs in a voxel, can be used to reduce the amount of gray matter and cerebrospinal fluid within an ROI of a diffusion tensor image (DTI). To define the ROIs, each individual b0 image was spatially aligned to the JHU DTI-based atlas using linear and non-linear registration (http://cmrm.med.jhmi.edu/). Fractional anisotropy (FA) and mean diffusivity (MD) maps were estimated by fitting the diffusion tensor model to each image voxel, and their mean values were computed within each ROI subject to a PVE threshold. Participants consisted of 20 healthy controls, 27 Alzheimer's disease patients, and 27 normal-pressure hydrocephalus patients. The results showed that the mean FA and MD of each ROI increased and decreased, respectively, while the standard deviation decreased significantly when the PVE was applied. In conclusion, the proposed method suggests that accounting for the PVE is indispensable for improving the accuracy of DTI ROI studies.
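As an illustration only (not the authors' implementation), the sketch below shows how a PVE-thresholded ROI statistic might be computed with NumPy: voxels whose white-matter partial volume fraction falls below a threshold are excluded before averaging FA and MD within the atlas ROI. The array names and the 0.5 threshold are assumptions made for this example.

```python
import numpy as np

def roi_stats_with_pve(fa_map, md_map, roi_mask, wm_fraction, pve_threshold=0.5):
    """Mean and standard deviation of FA/MD inside an ROI, keeping only voxels
    whose white-matter partial volume fraction exceeds a threshold.

    fa_map, md_map : 3-D arrays of fractional anisotropy / mean diffusivity
    roi_mask       : boolean 3-D array defining the atlas ROI
    wm_fraction    : 3-D array of white-matter partial volume estimates (0..1)
    """
    keep = roi_mask & (wm_fraction > pve_threshold)  # drop GM/CSF-dominated voxels
    return {
        "FA_mean": fa_map[keep].mean(), "FA_std": fa_map[keep].std(),
        "MD_mean": md_map[keep].mean(), "MD_std": md_map[keep].std(),
        "n_voxels": int(keep.sum()),
    }

# toy example with synthetic volumes
rng = np.random.default_rng(0)
shape = (32, 32, 32)
fa = rng.uniform(0.0, 1.0, shape)
md = rng.uniform(0.3e-3, 1.2e-3, shape)
roi = np.zeros(shape, dtype=bool)
roi[10:20, 10:20, 10:20] = True
wm = rng.uniform(0.0, 1.0, shape)
print(roi_stats_with_pve(fa, md, roi, wm))
```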

Optimal Disassembly Sequencing with Sequence-Dependent Operation Times Based on the Directed Graph of Assembly States (작업시간이 순서 의존적인 경우 조립상태를 나타내는 유방향그래프를 이용한 최적 제품 분해순서 결정)

  • Kang, Jun-Gyu;Lee, Dong-Ho;Xirouchakis, Paul;Lambert, A.J.D.
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.28 no.3
    • /
    • pp.264-273
    • /
    • 2002
  • This paper focuses on disassembly sequencing, the problem of determining the optimal disassembly level and the corresponding disassembly sequence for a product at its end-of-life with the objective of maximizing the overall profit. In particular, sequence-dependent operation times, which frequently occur in practice due to tool changeover, part reorientation, etc., are considered in a parallel disassembly environment. To represent the problem, a modified directed graph of assembly states is suggested as an extension of the existing extended process graph. Based on the directed graph, the problem is transformed into a shortest path problem and formulated as a linear programming model that can be solved straightforwardly with standard techniques. A case study on a photocopier was performed and the results are reported.
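For orientation only, the fragment below sketches the transformation the abstract describes: once disassembly states and their transitions are encoded as a weighted directed graph, the optimal disassembly sequence follows from a standard shortest-path computation. The toy states and edge costs here are invented for the example (using networkx), not taken from the paper.

```python
import networkx as nx

# Toy directed graph of assembly states: nodes are states, edge weights are net
# costs (operation cost minus revenue recovered by that step). Sequence-dependent
# operation times are captured by encoding the relevant history in the state.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("A0", "A1", 3.0),   # remove cover first
    ("A0", "A2", 5.0),   # remove drum first (tool changeover, higher cost)
    ("A1", "A3", 1.5),   # remove drum after cover (same tool, cheaper)
    ("A2", "A3", 4.0),   # remove cover after drum
    ("A1", "END", 2.0),  # stop disassembly here and dispose of the remainder
    ("A3", "END", 0.5),  # full disassembly reached
])

path = nx.shortest_path(G, "A0", "END", weight="weight")
cost = nx.shortest_path_length(G, "A0", "END", weight="weight")
print("optimal disassembly sequence:", " -> ".join(path), "| net cost:", cost)
```

With nonnegative net costs this reduces to Dijkstra's algorithm; the equivalent linear programming formulation mentioned in the abstract minimizes the same path cost subject to flow-conservation constraints.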

Proposed New Evaluation Method of the Site Coefficients Considering the Effects of the Structure-Soil Interaction (구조물-지반 상호작용 영향을 고려한 새로운 지반계수 평가방법에 대한 제안)

  • Kim, Yong-Seok
    • Proceedings of the Earthquake Engineering Society of Korea Conference
    • /
    • 2006.03a
    • /
    • pp.327-336
    • /
    • 2006
  • Site coefficients in the IBC and KBC codes are limited in predicting rational seismic responses of a structure because they consider only soil amplification and neglect structure-soil interaction. In this study, upper and lower limits of the site coefficients are estimated through pseudo 3-D elastic seismic response analyses of structures built on linear or nonlinear soil layers, considering structure-soil interaction effects. Soil characteristics of site classes A, B, and C were assumed to be linear, while those of site classes D and E were assumed to be nonlinear; the Ramberg-Osgood model was used to evaluate the shear modulus and damping ratio of a soil layer as a function of its shear wave velocity. Seismic analyses were performed with 12 weak or moderate earthquake records, scaled to a peak acceleration of 0.1 g or 0.2 g and deconvoluted to bedrock records 30 m beneath the outcrop. Based on the elastic seismic response analyses, a new standard response spectrum and upper and lower limits of the site coefficients Fa and Fv in the short-period range and at a period of 1 second are suggested, including the structure-soil interaction effects.
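The Ramberg-Osgood model referred to above relates shear stress and shear strain nonlinearly. As a hedged illustration only, the sketch below evaluates one common form of the backbone curve together with a Masing-rule damping approximation; the parameter values (α, R, reference stress, G_max) are invented for the example and are not the ones used in the study.

```python
import numpy as np

def ramberg_osgood_strain(tau, g_max, tau_ref, alpha=1.0, r=3.0):
    """Shear strain for a given shear stress under one common Ramberg-Osgood form:
        gamma = tau / G_max * (1 + alpha * (|tau| / tau_ref) ** (r - 1))
    """
    return tau / g_max * (1.0 + alpha * (np.abs(tau) / tau_ref) ** (r - 1.0))

def secant_modulus_and_damping(tau, g_max, tau_ref, alpha=1.0, r=3.0):
    gamma = ramberg_osgood_strain(tau, g_max, tau_ref, alpha, r)
    g_sec = tau / gamma  # secant shear modulus at this stress level
    # damping approximation often paired with Ramberg-Osgood under Masing rules
    damping = (2.0 / np.pi) * (r - 1.0) / (r + 1.0) * (1.0 - g_sec / g_max)
    return g_sec, damping

# illustrative values only
g_max = 80e6     # Pa, small-strain shear modulus
tau_ref = 60e3   # Pa, reference shear stress
for tau in (10e3, 30e3, 60e3):
    g, d = secant_modulus_and_damping(tau, g_max, tau_ref)
    print(f"tau = {tau/1e3:4.0f} kPa  G/Gmax = {g/g_max:.3f}  damping = {d:.3f}")
```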

The clustering of critical points in the evolving cosmic web

  • Shim, Junsup;Codis, Sandrine;Pichon, Christophe;Pogosyan, Dmitri;Cadiou, Corentin
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.46 no.1
    • /
    • pp.47.2-47.2
    • /
    • 2021
  • Focusing on both small separations and baryonic acoustic oscillation scales, the cosmic evolution of the clustering properties of peak, void, wall, and filament-type critical points is measured using two-point correlation functions in ΛCDM dark matter simulations as a function of their relative rarity. A qualitative comparison to the corresponding theory for Gaussian random fields allows us to understand the following observed features: (i) the appearance of an exclusion zone at small separation, whose size depends both on rarity and signature (i.e. the number of negative eigenvalues) of the critical points involved; (ii) the amplification of the baryonic acoustic oscillation bump with rarity and its reversal for cross-correlations involving negatively biased critical points; (iii) the orientation-dependent small-separation divergence of the cross-correlations of peaks and filaments (respectively voids and walls) that reflects the relative loci of such points in the filament's (respectively wall's) eigenframe. The (cross-) correlations involving the most non-linear critical points (peaks, voids) display significant variation with redshift, while those involving less non-linear critical points seem mostly insensitive to redshift evolution, which should prove advantageous to model. The ratios of distances to the maxima of the peak-to-wall and peak-to-void over that of the peak-to-filament cross-correlation are ~√2 and ~√3, respectively, which could be interpreted as the cosmic crystal being on average close to a cubic lattice. The insensitivity to redshift evolution suggests that the absolute and relative clustering of critical points could become a topologically robust alternative to standard clustering techniques when analysing upcoming surveys such as Euclid or the Large Synoptic Survey Telescope (LSST).
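To make the measurement concrete, the sketch below implements a simple pair-count estimator of a two-point correlation function for a toy three-dimensional point set; it stands in for the far more elaborate critical-point analysis of the abstract, and the clustering of the toy points is invented.

```python
import numpy as np
from scipy.spatial import cKDTree

def two_point_correlation(points, box_size, bins, n_random=None, seed=0):
    """Simple DD/RR - 1 estimator of the two-point correlation function
    for points in a cubic box (illustrative only, no periodic boundaries)."""
    rng = np.random.default_rng(seed)
    n_random = n_random or 4 * len(points)
    randoms = rng.uniform(0.0, box_size, size=(n_random, 3))

    def binned_pair_fraction(xyz):
        tree = cKDTree(xyz)
        cumulative = tree.count_neighbors(tree, bins) - len(xyz)  # drop self-pairs
        return np.diff(cumulative).astype(float) / (len(xyz) * (len(xyz) - 1))

    return binned_pair_fraction(points) / binned_pair_fraction(randoms) - 1.0

# toy example: points clustered around random centres versus a uniform sample
rng = np.random.default_rng(1)
centres = rng.uniform(0.0, 100.0, size=(50, 3))
points = np.vstack([c + rng.normal(scale=2.0, size=(40, 3)) for c in centres]) % 100.0
bins = np.linspace(1.0, 20.0, 11)
print(np.round(two_point_correlation(points, box_size=100.0, bins=bins), 2))
```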

Model Evaluations Analysis of Nonpoint Source Pollution Reduction in a Green Infrastructure regarding Urban stormwater (도시 호우 유출에 관한 그린인프라의 비점오염원 저감 모델 평가 분석)

  • Jeon, Seol;Kim, Siyeon;Lee, Moonyoung;Um, Myoung-Jin;Jung, Kichul;Park, Daeryong
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2021.06a
    • /
    • pp.393-393
    • /
    • 2021
  • Urbanization has led to water quality deterioration caused by urban stormwater runoff. To address this problem and support more accurate design, this study analyzed, through correlation analysis, which structural and hydrological characteristics of green infrastructure (GI) are needed for design. Among GI types, ordinary least squares regression (OLS) and multiple linear regression (MLR) were applied to detention basins and retention ponds, using the influent, effluent, and nonpoint source concentrations of total suspended solids (TSS) and total phosphorus (TP), together with hydrological characteristics and the structural characteristics of the GI. The structural characteristics of a GI do not vary within a single BMP, but bias may arise from the number of storm-event data points. To address this, the models were also fitted to data sampled randomly within a fixed range and to data with outliers removed. The accuracy of these OLS and MLR models was evaluated with the percent bias (PBIAS), the Nash-Sutcliffe efficiency (NSE), and the RMSE-observations standard deviation ratio (RSR). The results show that stronger correlations are obtained when hydrological characteristics and the structural characteristics of the GI are included together with the influent nonpoint source concentrations. Detention basins showed better model performance than retention ponds, but the correlations by characteristic were clearer for retention ponds.
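The three performance measures named in the abstract (PBIAS, NSE, RSR) have standard definitions, so a small sketch of how they might be computed is given below; the observed and simulated concentrations are invented for illustration.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, values below 0 are worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):
    """Percent bias of the simulated values relative to the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * np.sum(obs - sim) / np.sum(obs)

def rsr(obs, sim):
    """RMSE divided by the standard deviation of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((obs - sim) ** 2)) / obs.std()

# toy effluent TSS concentrations (mg/L): observed vs. modeled
obs = [12.0, 30.5, 18.2, 44.1, 25.0, 9.8]
sim = [14.1, 27.9, 20.0, 40.2, 26.3, 11.0]
print(f"NSE = {nse(obs, sim):.3f}  PBIAS = {pbias(obs, sim):.1f}%  RSR = {rsr(obs, sim):.3f}")
```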

EPB-TBM performance prediction using statistical and neural intelligence methods

  • Ghodrat Barzegari;Esmaeil Sedghi;Ata Allah Nadiri
    • Geomechanics and Engineering
    • /
    • v.37 no.3
    • /
    • pp.197-211
    • /
    • 2024
  • This research studies the effect of geotechnical factors on EPB-TBM performance parameters. The modeling was performed using simple and multivariate linear regression methods, artificial neural networks (ANNs), and the Sugeno fuzzy logic (SFL) algorithm. In the ANN, 80% of the data were randomly allocated to training and 20% to network testing, while in the SFL algorithm 75% of the data were used for training and 25% for testing. The coefficients of determination (R2) obtained between the observed and estimated values in this model for thrust force and cutterhead torque were 0.19 and 0.52, respectively. The results showed that the SFL outperformed the other models in predicting the target parameters: with this method, the R2 values between observed and predicted thrust force and cutterhead torque were 0.73 and 0.63, respectively. The sensitivity analysis shows that the internal friction angle (φ) and standard penetration number (SPT) have the greatest impact on thrust force, while earth pressure and overburden thickness have the greatest effect on cutterhead torque.
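As a sketch of the simplest of the compared approaches, the snippet below fits a multivariate linear regression of thrust force on four geotechnical factors named in the abstract and reports the test-set R2. The dataset and coefficients are synthetic, so it only illustrates the workflow, not the paper's results.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# synthetic geotechnical factors -> TBM thrust force (illustration only)
rng = np.random.default_rng(42)
n = 200
X = np.column_stack([
    rng.uniform(20, 40, n),    # internal friction angle (deg)
    rng.uniform(5, 60, n),     # SPT blow count
    rng.uniform(50, 300, n),   # earth pressure (kPa)
    rng.uniform(5, 40, n),     # overburden thickness (m)
])
thrust = 300 + 25 * X[:, 0] + 8 * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 150, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, thrust, test_size=0.2, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
print("test R2:", round(r2_score(y_te, model.predict(X_te)), 3))
print("coefficients:", np.round(model.coef_, 2))
```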

Model Between Lead and ZPP Concentration of Workers Exposed to Lead (직업적으로 납에 노출된 근로자들의 혈액중 납과 ZPP농도와의 관계)

  • Park, Dong-Wook;Paik, Nam-Won;Choi, Byung-Soon;Kim, Tae-Gyun;Lee, Kwang-Yong;Oh, Se-Min;Ahn, Kyu-Dong
    • Journal of Korean Society of Occupational and Environmental Hygiene
    • /
    • v.6 no.1
    • /
    • pp.88-96
    • /
    • 1996
  • This study was conducted to establish a model between lead and ZPP concentrations in the blood of workers exposed to lead. Workers employed in the secondary smelting industry showed a blood lead level of $85.1{\mu}g/dl$, exceeding $60{\mu}g/dl$, the Criteria for Removal defined by the Occupational Safety and Health Act of Korea. The average blood lead level of workers in the battery manufacturing industry was $51.3{\mu}g/dl$, lying between $40{\mu}g/dl$ and $60{\mu}g/dl$, the Criteria for Requiring Medical Removal. Blood lead levels in the litharge and radiator manufacturing industries were below $40{\mu}g/dl$, the Criteria Requiring Temporary Medical Removal. Blood lead levels of workers differed significantly by industry (p<0.05). Fifty workers (21 %) showed blood lead levels above $60{\mu}g/dl$, the Criteria for Removal, and 66 (27.7 %) showed blood lead levels of $40-60{\mu}g/dl$, the Criteria for Requiring Medical Removal. Thus, approximately 50 percent of workers showed blood lead levels above $40{\mu}g/dl$, the Criteria Requiring Temporary Medical Removal, and should receive medical examination and consultation including biological monitoring. The average ZPP level of workers employed in the secondary smelting industry was $186.2{\mu}g/dl$, exceeding $150{\mu}g/dl$, the Criteria for Removal. Seventy-seven workers (32.3 %) showed ZPP levels of $100-150{\mu}g/dl$, the Criteria for Requiring Medical Removal. The most appropriate model for predicting ZPP in blood was a log-linear regression model: Log ZPP(${\mu}g/dl$) = -0.2340 + 1.2270 Log Pb-B(${\mu}g/dl$) (standard error of estimate: 0.089, $r^2=0.4456$, n=238, P=0.0001). Blood lead explained 44.56 % of the variance in log(ZPP in blood).
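Because the fitted coefficients are stated in the abstract, the regression can be evaluated directly. The sketch below applies it to a few illustrative blood lead values; note that the base of the logarithm is not given in the abstract, so base 10 is assumed here.

```python
import numpy as np

def predict_zpp(pb_blood_ug_dl):
    """Predicted blood ZPP (ug/dl) from blood lead (ug/dl) using the reported
    log-linear model: log(ZPP) = -0.2340 + 1.2270 * log(Pb-B).
    The logarithm base is not stated in the abstract; base 10 is assumed."""
    pb = np.asarray(pb_blood_ug_dl, dtype=float)
    return 10.0 ** (-0.2340 + 1.2270 * np.log10(pb))

# illustrative blood lead levels spanning the criteria cited in the abstract
for pb in (40.0, 60.0, 85.1):
    print(f"Pb-B = {pb:5.1f} ug/dl  ->  predicted ZPP = {predict_zpp(pb):6.1f} ug/dl")
```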

Method of a Multi-mode Low Rate Speech Coder Using a Transient Coding at the Rate of 2.4 kbit/s (전이구간 부호화를 이용한 2.4 kbit/s 다중모드 음성 부호화 방법)

  • Ahn Yeong-uk;Kim Jong-hak;Lee Insung;Kwon Oh-ju;Bae Mun-Kwan
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.2 s.302
    • /
    • pp.131-142
    • /
    • 2005
  • Low rate speech coders below 4 kbit/s are based on sinusoidal transform coding (STC) or multiband excitation (MBE). Since harmonic coders are not efficient at reconstructing the transient segments of speech signals, such as onsets, offsets, and non-periodic signals, they do not provide natural speech quality. This paper proposes an efficient transient model and a multi-mode low rate coder at 2.4 kbit/s that uses a harmonic model for voiced speech, a stochastic model for unvoiced speech, and a model using aperiodic pulse location tracking (APPT) for transient segments; the APPT itself utilizes the harmonic model. The proposed method selects a model depending on the characteristics of the LPC residual signal. In addition, it can efficiently combine excitation synthesized by CELP coding in the time domain with that of harmonic coding in the frequency domain. The proposed coder shows better speech quality than the 2.4 kbit/s version of the mixed excitation linear prediction (MELP) coder, the U.S. Federal Standard speech coder.
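As a rough illustration of multi-mode classification (and emphatically not the mode decision used in the paper, which works on LPC residual characteristics), the sketch below labels frames as voiced, unvoiced, or transient from frame-energy jumps and the zero-crossing rate; all thresholds are invented.

```python
import numpy as np

def classify_frames(signal, frame_len=160, energy_jump=4.0, zcr_voiced=0.12):
    """Label each frame 'voiced', 'unvoiced', or 'transient' using frame energy
    and zero-crossing rate (a generic heuristic, not the paper's criterion)."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    energy = np.mean(frames ** 2, axis=1) + 1e-12
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)

    labels = []
    for i in range(n_frames):
        prev = energy[i - 1] if i > 0 else energy[i]
        if max(energy[i] / prev, prev / energy[i]) > energy_jump:
            labels.append("transient")   # abrupt energy change (onset/offset)
        elif zcr[i] < zcr_voiced:
            labels.append("voiced")      # low zero-crossing rate, quasi-periodic
        else:
            labels.append("unvoiced")
    return labels

# toy 8 kHz signal: silence, then a 200 Hz tone, then noise
fs = 8000
t = np.arange(fs // 2) / fs
sig = np.concatenate([
    np.zeros(fs // 4),
    0.5 * np.sin(2 * np.pi * 200 * t),
    0.1 * np.random.default_rng(0).standard_normal(fs // 4),
])
print(classify_frames(sig))
```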

Improvement of generalization of linear model through data augmentation based on Central Limit Theorem (데이터 증가를 통한 선형 모델의 일반화 성능 개량 (중심극한정리를 기반으로))

  • Hwang, Doohwan
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.19-31
    • /
    • 2022
  • In machine learning, we usually divide the entire dataset into training data and test data, train the model using the training data, and use the test data to determine the accuracy and generalization performance of the model. For models with low generalization performance, prediction accuracy on new data drops significantly, and the model is said to be overfit. This study concerns a method of generating training data based on the central limit theorem, combining it with the existing training data to increase normality, and using this data to train models and improve generalization performance. To this end, data were generated using the sample mean and standard deviation of each feature, exploiting the central limit theorem, and new training data were constructed by combining them with the existing training data. To determine the degree of increase in normality, the Kolmogorov-Smirnov normality test was conducted, and it was confirmed that the new training data showed increased normality compared to the existing data. Generalization performance was measured through the difference in prediction accuracy between training data and test data. Measuring the improvement in generalization performance for K-Nearest Neighbors (KNN), Logistic Regression, and Linear Discriminant Analysis (LDA) confirmed that generalization performance improved for KNN, a non-parametric technique, and for LDA, which assumes normality in model building.
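One plausible reading of the procedure (the abstract does not spell out every detail) is to draw, per class and per feature, synthetic samples from a normal distribution with that feature's sample mean and standard deviation, append them to the training set, check normality with a Kolmogorov-Smirnov test, and compare the train-test accuracy gap. The sketch below does that on the iris data; the per-class treatment and the dataset are assumptions of this example.

```python
import numpy as np
from scipy import stats
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# generate synthetic rows per class from each feature's sample mean and std
rng = np.random.default_rng(0)
aug_X, aug_y = [], []
for cls in np.unique(y_tr):
    Xc = X_tr[y_tr == cls]
    aug_X.append(rng.normal(Xc.mean(axis=0), Xc.std(axis=0), size=Xc.shape))
    aug_y.append(np.full(len(Xc), cls))
X_aug = np.vstack([X_tr] + aug_X)
y_aug = np.concatenate([y_tr] + aug_y)

# Kolmogorov-Smirnov test of one feature against a fitted normal distribution
f = 0
p_orig = stats.kstest(X_tr[:, f], "norm", args=(X_tr[:, f].mean(), X_tr[:, f].std())).pvalue
p_aug = stats.kstest(X_aug[:, f], "norm", args=(X_aug[:, f].mean(), X_aug[:, f].std())).pvalue
print(f"KS p-value, feature 0: original {p_orig:.3f}, augmented {p_aug:.3f}")

# generalization gap measured as the difference between train and test accuracy
for name, Xf, yf in [("original ", X_tr, y_tr), ("augmented", X_aug, y_aug)]:
    knn = KNeighborsClassifier(n_neighbors=5).fit(Xf, yf)
    print(f"KNN {name} train-test accuracy gap:"
          f" {knn.score(Xf, yf) - knn.score(X_te, y_te):.3f}")
```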

Largest Coding Unit Level Rate Control Algorithm for Hierarchical Video Coding in HEVC

  • Yoon, Yeo-Jin;Kim, Hoon;Baek, Seung-Jin;Ko, Sung-Jea
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.1 no.3
    • /
    • pp.171-181
    • /
    • 2012
  • In the new video coding standard, high efficiency video coding (HEVC), the coding unit (CU) is adopted as the basic unit of the coded block structure. Therefore, the rate control (RC) methods of H.264/AVC, whose basic unit is a macroblock, cannot be applied directly to HEVC. This paper proposes a largest CU (LCU) level RC method for hierarchical video coding in HEVC. In the proposed method, the effective bit allocation is performed first based on the hierarchical structure, and the quantization parameters (QP) are then determined using a Cauchy density based rate-quantization (RQ) model. A novel method based on a linear rate model is introduced to estimate the parameters of the Cauchy density based RQ model precisely. The experimental results show that the proposed RC method not only controls the bitrate accurately, but also generates a constant number of bits per second with less degradation of decoded picture quality than fixed QP coding and the latest RC method for HEVC.
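The Cauchy-density-based RQ model is commonly written as a power law, R(Q) = a·Q^(-α), which becomes linear in its parameters after taking logarithms; that may be the kind of linear estimation the abstract alludes to, though the paper's exact formulation is not given here. The sketch below fits such a model to invented (Q, bits) samples and inverts it for a target bit budget.

```python
import numpy as np

# invented (quantization step, produced bits) samples, for illustration only
q_step = np.array([8.0, 12.0, 16.0, 24.0, 32.0, 48.0])
bits = np.array([92000.0, 61000.0, 45000.0, 30500.0, 22500.0, 15000.0])

# power-law RQ model R(Q) = a * Q**(-alpha); linear least squares in log space:
#   log R = log a - alpha * log Q
slope, intercept = np.polyfit(np.log(q_step), np.log(bits), 1)
alpha, a = -slope, np.exp(intercept)
print(f"estimated a = {a:.0f}, alpha = {alpha:.3f}")

# invert the model to choose a quantization step for a target bit budget
target_bits = 40000.0
q_target = (a / target_bits) ** (1.0 / alpha)
print(f"Q step for {target_bits:.0f} bits: {q_target:.2f}")
```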
